Whenever someone asks me this question, my response is pretty similar, followed by "it takes the fun out of programming for me". Programming for me isn't just building a product, but also solving new problems and writing good code. AI tools, for me at least, take the fun out of problem solving.
"But what about mundane tasks? Boilerplates?" I do enjoy finishing the switch case with all the cases, typing those makes them engraved into my brain. Plus, is asking Copilot really faster than just typing it? Doesn't it break your "flow"?
One last thing I don't like about AI agents in code editors is that they're very eager to complete my code. Whenever I stop to think about how I'm going to write or solve a problem, they're already suggesting code, which in turn distracts my brain.
I do sometimes use AI tools, but usually it's not once every hour, it's once per week.
One nice thing I can say about the AI integration in Zed specifically is that it stays out of the way unless summoned, with the default settings anyway.
When it comes to flow, I think AI can enhance it by saving trips to the browser. All those small questions for Stack Overflow can be answered inline and in context, often with a small bit of working code.
Curiosity can take many forms and many directions. Perhaps they'd be more curious about the domain they work in, for example, or the problem they're solving, not the code they use to express a solution - which might be quite basic, but still valuable, code.
Equally, what's mundane for the world's best programmers might be the more interesting aspect of the job for those whose day-to-day software engineering is generally mundane.
The reality is that most software engineers are building business-as-usual systems, not working on the latest cutting-edge technology.
100%, John, I agree. But I think all of this can still be true even if you take, say, 8 hours in 2 years to see whether you can use it or not.
What is the provenance of the output? Is any and all output properly licensed for your intended use? What's your liability? How do you feel about ripping someone's work off? If using an AI made a task trivial, it will also be trivial for your competitors, and they benefit from you already teaching it the answer you wanted. There is no competitive advantage. You're programming against a service whose instruction set is not well defined; in fact, it's actively changing. Do you care about reproducible builds (from prompts to binary)?

Does it bother you that the same people who rode the Ponzi-scheme train (cryptocurrencies) also ride this one? Did anyone find a use for blockchain other than crypto? Why didn't you go all in on NFTs? They're dead by now. How is AI different? I would rather have a refund button than suffer through a "conversation" with a chat box; failing that, at least let me talk to a person who understands, on request.

Let's say you are all in on AI... well, you don't have the billions of dollars to compete in the space, so you are forever going to be renting resources to stay cutting edge. If you have a run-time dependency on an AI service, you are very much locked in. Ask the mainframe customers how that feels. Currently, there is a de-cloud movement going on because of the cost savings you can realize. Those costs, by the way, will close businesses in the next recession. The AI providers can and will raise API costs to recoup those billions of dollars of investment.

Do you care about global warming? These AI systems take so much power that they're delaying the shutdown of coal plants and firing up nuclear power plants. For what... a video? They're technically impressive, but I just don't care for the fake photos or fake movies.

Let's say you use AI to auto-generate tests. It's wonderful. Did you learn something, pressing the auto-generate button, that informs changes to your code? What's the point of these tests? Why generate them? The next version will be better, so just do it on the fly? Do the tests catch errors or regressions (i.e., are they useful)?
Sounds like you spent more time than the person I mentioned in the post thinking about AI :)
I used Copilot for about a year before dropping it. There were two main reasons:
1. It was great at the simple stuff but mostly wrong for anything not “standard”. I found myself reading the bad suggestions and getting annoyed at them or feeling like it was slowing me down when I had a flow going. The bad suggestions outweighed the benefits from the simple stuff for me.
2. I felt like I was becoming worse at writing and understanding code. I could get to a working solution but sometimes would feel like I didn’t really understand what I wrote. With the LLM helping me out, I didn’t need to understand it and I didn’t like that. If I don’t understand the code base it makes it harder to add new features and track down bugs.
I never want a tool to make me feel like I am getting worse or annoy me more often than it helps.
One year! That's roughly one year longer than this person even considered looking at it.
Well, there's nothing to understand. Some people are not interested in AI, and that's that. Personally, I very much dislike the lack of regulation around AI. We never learn in our industry, but I believe the damage from this trend will be far more terrible.
The hype does get a bit much at times :)
I’d have the same reaction as hallway guy. I thought everyone understood why. I myself am the intelligence. My goal is to learn and improve, always. If I can’t make the things I can think of then either I’m not a very good artist or I need a better instrument. If I pay someone else to play my instrument for me, then I stop getting better at it and they start getting better at it.
When I find myself doing heavy coding work such that my typing is the bottleneck, it usually means I need better tools or a more expressive language. AI can eliminate this pain, allowing me to plow on with mediocre tools indefinitely, hooray! But is this really better for me?
I think the people selling AI are necessarily being a bit disingenuous, because a major reason they are investing in learning about it is that they believe the skill set is one of the most valuable to have right now. So the AI evangelists are telling us "you shouldn't care about self-improvement", but that is manifestly not how they behave themselves...
> I myself am the intelligence. My goal is to learn and improve, always.
And yet you don't want to spend a few hours in two years to see what's up with AI and what others are doing when they say they use it?
> If I can’t make the things I can think of then either I’m not a very good artist or I need a better instrument. If I pay someone else to play my instrument for me, then I stop getting better at it and they start getting better at it.
Do you use a spell checker, or do you remember how everything is spelled? Do you use a fuzzy finder, or do you remember every file path? Do you use autocomplete or do you type out every name?
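(For anyone unfamiliar with the term: a fuzzy finder matches what you type as a loose, in-order subsequence of file paths rather than an exact string. A minimal sketch of the matching idea, with invented paths, not any particular tool's algorithm:)

```typescript
// Sketch of fuzzy matching: does `query` occur in `path` as an
// in-order (not necessarily contiguous) subsequence? Real fuzzy
// finders (fzf, editor file pickers) also rank and highlight matches.
function fuzzyMatch(query: string, path: string): boolean {
  let i = 0;
  for (const ch of path.toLowerCase()) {
    if (i < query.length && ch === query[i].toLowerCase()) i++;
  }
  return i === query.length;
}

// Invented paths, purely for illustration.
const paths = ["src/editor/buffer.ts", "docs/readme.md", "src/ui/theme.ts"];
console.log(paths.filter((p) => fuzzyMatch("sbuf", p))); // ["src/editor/buffer.ts"]
```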
Those questions have definite "everybody does things the way I do them" vibes, so:
> Do you use a spell checker, or do you remember how everything is spelled?
I avoid spell checkers. Usually I know how a word is spelled or can work it out, and if I'm not sure then I fire one up to check or look in a dictionary. Even turning on the fancy misspelling highlighter is fraught with danger, because the American dictionary is always the default and computer code is full of non-words, abbreviations, and other correct things which distractingly skittle-ise my screen.
> Do you use a fuzzy finder, or do you remember every file path?
I have no idea what a "fuzzy finder" is. I remember where I put things, sometimes assisted by find(1) or grep(1) and similar tools.
> Do you use autocomplete or do you type out every name?
I type things out in full, in the same way that I don't mumble and include "y'know?" in every other sentence when I'm talking to people, using some sort of search if I can't remember a symbol precisely (see: spell checker).
Addendum: Notwithstanding pedantic frippery like i-before-e-except-when-a-victorian-era-dictionary-publisher-feels-like-it, not knowing how to spell a word usually means an entertaining half hour spent reading etymonline.com, and why would I give that up and let the machine have all the fun?
Yes, I think you're getting the idea. When the spell-checker tells me I made a mistake, instead of clicking the word I make a point to go back and retype it (usually). I do this because I value knowing how to spell.
Instead of a fuzzy finder I use a file tree, because I can use it to get around very quickly and it gives me a chance to think about whether the organization of my files is still optimal. I do use a very naive autocomplete for variable names, in order to help stave off carpal tunnel; a rough sketch of what I mean is below.
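(By "very naive" I mean roughly this: suggest identifiers that already appear in the buffer and start with the typed prefix. A sketch under those assumptions, with invented buffer contents:)

```typescript
// Naive autocomplete sketch: collect identifiers already in the
// buffer, keep those starting with the typed prefix. No ranking, no AI.
function complete(prefix: string, bufferText: string): string[] {
  const identifiers = new Set(bufferText.match(/[A-Za-z_]\w*/g) ?? []);
  return [...identifiers]
    .filter((id) => id.startsWith(prefix) && id !== prefix)
    .sort();
}

// Invented buffer contents, purely illustrative.
const buffer = "const userCount = users.length; function updateUser() {}";
console.log(complete("us", buffer)); // ["userCount", "users"]
```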
Using these techniques has allowed me to go head-to-head in product development with the entire Zed team as (mostly) a single individual, a battle which I think I am currently easily running away with: I'm innovating by leaps and bounds while Zed innovates in the tiniest pre-proven increments.
I allow myself to be guided by pain, but that means not being on painkillers.
The Zed team must be in agony over the Rust choice by now, I would think. The compile times, the intense formalism. Plus, you seem to be locked out of providing the Zed user experience in-browser, and a great deal of your organization's energy is now consumed by framework, OS, and hardware support, while very little energy (compared to the amount of talent, at least) now seems to reach the highest-level goal of innovating on the core experience of code editing.
It's weird how he told you
"No, he said. With a shrug, he added: I tried it once, it was completely wrong, so I stopped using it. Never used it for coding, he said."
and you took that to mean
"A couple of shrugs saying: I don’t care about all that AI stuff, I’m not interested, I just want to turn it off."
Work on the AI until it's not completely wrong, but completely right, and then come back to us.
You go on and on, in the comments even: Why was he not curious. Why. Why.
He maybe was, once. He tried the tool. It was completely wrong. Then he stopped wanting to waste time.
> No, he said. With a shrug, he added: I tried it once, it was completely wrong, so I stopped using it. Never used it for coding, he said.
>
> What’d you use, I asked. Claude? ChatGPT? Have you tried GPT4?
>
> Not sure, some website, he said with another shrug.
...
> What I don't get is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology
You answered your own question. He *was* curious, and then he tried it, and learned approximately how it works, and knew right away that it wasn't going to work out for him. Sometimes people are really good at knowing when something isn't for them really quickly.
Eh, I've never even wondered if I want AI in my IDE... as I don't typically want an IDE at all. I've been a professional software engineer since 2012, and I do most of my coding in nano, vi, or Notepad, occasionally emacs, but tbh that's too heavy and I was never really able to get into it. I do use Eclipse at work, but I still often find myself working in nano or Notepad++, or even just the browser if I'm only trying to analyze the code, because those are just so much faster... I basically just use Eclipse to compile and to automagic the ant/maven/spring dependency crap.
I can't even tolerate auto-complete when I'm coding, because it *always* makes things harder. I'm typing away and up pops a suggestion, and before I can even react it's throwing its suggestion into my code right in the middle of what I was typing, and even if the suggestion was correct, it just means I get to choose whether I'm deleting the copy it inserted or the copy I typed myself. And then I've lost my flow from trying to figure out what it screwed up and how to fix it. I've seen plenty of tools that promised to write the code for me, going all the way back to FrontPage and Dreamweaver in the late '90s, and every single one of them, without exception, has made the job harder. I gave up on such promises a long time ago. Now my priority is a tool that just gets the hell out of my way.
Something that I rarely see mentioned is the non-technical argument against using LLMs. At a time when we're direly under-provisioning our efforts to slow climate change and to meet the challenges of energy and resource scarcity, it feels ethically unconscionable to increase the scope of the challenge by using LLMs en masse.
There's one argument to be made against the distributed cost of the inference workloads that write people's emails for them, and another that takes issue with the ethics of the training costs behind each inference. I think an argument can be made here similar to the one about using the radiological data from Hiroshima, or the human experimentation performed during WW2 -- once these data exist, is it ethical to use them?
I'm not an absolutist, and there's nuance to be found here. But, in a world where I'm concerned about consigning plastic to landfill, and taking unnecessary international flights, it feels egregious to me that we aren't questioning whether we _should_ use LLMs and their siblings.
I use LLMs a lot for writing emails, translation, and help with writing blog posts, but for coding I find them horrible: either the problem is too niche and I get a bunch of hallucinations which look like the right solution but take ages to figure out they're not working, or the code style doesn't fit my needs. I find them useful for a language/environment where I'm not confident, but I think for seasoned developers they're quite useless.
I'm curious about tech; "AI" is just not it. Unless you are writing React JS widgets or simple Python scripts, it's really not that useful; you end up asking it "do not hallucinate" many times. I'm curious about RISC-V, Zig, Ladybird, C3, physics, math, etc... One thing is not being interested in AI, and another thing is not being interested in whatever half-baked products Microsoft/OpenAI want to shovel down your throat. I do understand you guys spent a lot of time on the AI integration in Zed. Funny enough, I follow Andreas's YouTube channel, and I haven't seen him using any AI in CLion.
> What I don't get is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology that's said to be fundamentally changing how we'll program in the future.
Because I've been embedded in the fundamentals that the latest round of AI expands on for 30+ years and it doesn't introduce any new ones except for maximising randomness and chaos, which I have spent those 30 years trying to abolish.
I've already seen Eliza.
Just a thought, but given that LLMs are so utterly environmentally destructive, that most of the major ones are being built and/or stewarded by companies with distinctly awful ethical track records, and that they're all built off the stolen work of other people, consent be damned: maybe, just maybe, people don't feel the need to get into the reasoning behind why they aren't interested in using them during a random hallway conversation 🤷 Especially in such politically volatile and divisive times, it may just not be worth the argument.
Obviously, I don't know if this is what happened here or not. But it's another interpretation.
(And yes, I have used both Claude and GPT3/4/4o for coding. They're sometimes helpful, often not. I struggle immensely with the ethics of using them at all, and in a public setting I probably wouldn't want to admit to having used them out of embarrassment either. Context is king.)
I have used an AI chatbot to generate some sample code for a task I want to perform, but I'm not going to just copypasta that into my IDE. I'm going to go over it and understand what the code is trying to accomplish before I write the actual code.
In the end, AI as a code generator is going to be kind of like those wizard-based tools that we saw back in the '90s: it'll be good for getting a project together quickly, but when something goes wrong with the code generated by the wizzy tool, you'll have to understand the underlying code to be able to remedy it. AI and wizzy tools are both abstractions, and all non-trivial abstractions leak, as Joel Spolsky pointed out.
> And I keep thinking about it and… I don’t get it.
Maybe that's because you don't understand how 99.99% of programmers work? You only see what's inside your bubble.