AI will change programming. I’m convinced of it now.
How can it not? We now have machines into which we throw two or three sentences of instructions and out come hundreds of lines of perfectly working JavaScript or Python code.
Claude generated a 75-line Python script to turn the RSS feed of this very newsletter into a Markdown file. Worked on the first try. Claude also generated this Python script to sync a Google Drive folder to a local one. Claude generated this entire Tailscale MCP server after 15 minutes of back and forth. Claude wrote 219 lines of Rust for me so I can use Claude itself to make assertions about other code. DeepSeek R1 then improved the prompt itself. To double-check myself, I just had Claude generate a small server in Go that serves the Markdown files in the current folder, rendered as HTML, including syntax highlighting. Worked perfectly after removing an unused variable.
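To give you an idea of the shape of that last program, here’s a minimal sketch of what such a server can look like. This is my illustration, not Claude’s actual output; it assumes the gomarkdown library and leaves out the syntax highlighting:

```go
// A minimal sketch of a server that serves the Markdown files in the
// current directory, rendered as HTML. Syntax highlighting omitted.
package main

import (
	"log"
	"net/http"
	"os"
	"path/filepath"
	"strings"

	"github.com/gomarkdown/markdown"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Map "/" and "/foo" to Markdown files in the current folder.
		name := strings.TrimPrefix(r.URL.Path, "/")
		if name == "" {
			name = "index"
		}
		if !strings.HasSuffix(name, ".md") {
			name += ".md"
		}
		// filepath.Base strips any directory components, preventing
		// path traversal out of the current folder.
		src, err := os.ReadFile(filepath.Base(name))
		if err != nil {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		w.Write(markdown.ToHTML(src, nil, nil))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

One handler and a file read: exactly the kind of small, disposable program that has now become very cheap to produce.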
Yes, I know it might not work for you in your codebase. It certainly doesn’t work for me in many contexts. But my point is this: even if it will never work for everyone everywhere, even if all progress were to stop today, the fact that AI has made the writing and rewriting of a certain type of code very cheap will change programming.
It will change programming like compilers changed it. Like the Internet changed it. Like version control, automated tests, StackOverflow, and formatters changed it. Like WordPress changed how we build websites for clients.
The question is how exactly.
Here’s what I’m wondering about:
Will we never see another new programming language reach the mainstream, because LLMs aren’t trained on code in that language and thus there won’t be enough people writing code by hand to create enough training data?
Will we see languages that are optimized for synthetic data generation, allowing millions of lines of code to be easily generated and verified? Will languages ship with their own training data sets?
Will we change how we modularize code and switch to writing many smaller programs because they’re easier for LLMs to digest than large codebases?
When will we store the prompt alongside the code it generated? Later, will we move from storing code to storing prompts, generating both code and commit messages on demand? What else will shift from write-time to read-time?
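On storing prompts: nothing like that is standardized today, but a first step could be as simple as a header comment recording the prompt at generation time. Here’s a sketch of that hypothetical convention in Go; the prompt and the program are invented for illustration:

```go
// PROMPT (hypothetical convention, recorded at generation time):
//
//	"Write a Go program that prints the titles of all Markdown
//	 files in the current directory."
//
// Model and date would be recorded here too. The idea: edit the
// prompt and regenerate, instead of patching the code below by hand.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Glob only fails on a malformed pattern, so the error is ignored.
	matches, _ := filepath.Glob("*.md")
	for _, m := range matches {
		data, err := os.ReadFile(m)
		if err != nil {
			continue
		}
		// Naive title extraction: the first line starting with "# ".
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasPrefix(line, "# ") {
				fmt.Println(strings.TrimPrefix(line, "# "))
				break
			}
		}
	}
}
```

Once the prompt lives next to the code, diffing prompts instead of diffing code is only a small step further.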
What will change once we start to optimize code and processes around code purely for the reader, because the writer’s a machine?
Will we write docstrings at the top of files that aren’t meant to be read by humans, but by LLMs when they ingest the file into their context window? Will we see better documentation of best practices and idioms in large companies, because `best_practices.txt` is easy to put in context?

Will popular libraries and frameworks become even more dominant because they’re in the training sets?
Will `CONTEXT.md` become the new `robots.txt`: a file telling LLMs how to understand our code, our repositories, our websites? (A sketch of what such a file could look like follows after the next question.)

Will there be a new kind of technical debt, where code is optimal to be understood by the current AI models, but becomes problematic when the models change?
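No `CONTEXT.md` convention exists today, at least no standardized one, but by analogy to `robots.txt` such a file might read something like this. Everything in it, paths included, is invented for illustration:

```
# CONTEXT.md (hypothetical, for illustration only)

This repository is a Go HTTP service. The entry point is cmd/server/main.go.

- Business logic lives in internal/; don't generate code that imports
  it from outside this module.
- All database access goes through internal/store; never write raw SQL
  anywhere else.
- Run `go test ./...` before proposing a change.
```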
Will compilers change to allow for faster verification of LLM-generated code? Will they change their error messages to contain more context so that it’s easier for LLMs to fix these errors?
Will we see a merging of language servers and LLMs?
Will code become more verbose because a `processUser(user, rules)` function call is harder to understand for an LLM than `validateAndStoreNewUserInsideTransaction(user, validationRules)`?

Will code be optimized for how it’s encoded into tokens?
Will techniques that trade code density for performance, such as loop unrolling, become popular because loops won’t have to be unrolled by hand? Will the default FizzBuzz solution contain an array of 100 values that were written out by an LLM?
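For instance, a written-out FizzBuzz could look like the sketch below. I truncated the table to one full 3×5 cycle to keep it short; the premise is that an LLM would happily emit all 100 entries:

```go
// FizzBuzz with the computation traded for a written-out table of values.
package main

import "fmt"

var fizzbuzz = [...]string{
	"1", "2", "Fizz", "4", "Buzz", "Fizz", "7", "8", "Fizz", "Buzz",
	"11", "Fizz", "13", "14", "FizzBuzz",
	// ...an LLM would continue this, entry by entry, up to "Buzz" at 100.
}

func main() {
	for _, line := range fizzbuzz {
		fmt.Println(line)
	}
}
```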
Big questions. I’d never thought about most of them before, and I certainly have no answers.