Joy & Curiosity #76
Interesting & joyful things from the previous week
This week I found myself writing code by hand again.
Not a lot, maybe ten, twenty lines in total, which is far less than what I had Amp produce, but still: actual typing out of code. Miracle I didn’t get any blisters.
At our Amp meetup in Singapore I mentioned this on stage and someone in the audience cheekily asked: “You just told us that these agents can now work well when you give them a longer leash and yet you wrote code by hand, how come?”
The answer can probably be boiled down to something that sounds very trite: to build software means to learn.
When you build a new piece of software, you learn what the software is actually supposed to do, how it should do it, and why your pre-building ideas now seem naive. (If you’re thinking “well, can’t we figure out all of that before we build” go ahead and type “waterfall software” into Google.)
Right now, at Amp, we’re building something new. We don’t yet know everything about this thing we’re building. We don’t know how it should behave in this case, or in that case, how the runtime behaves here, or over there.
Writing code by hand is one way (!) to answer these questions, because you truly bump into what you don’t know when you have to type something out. You find yourself reaching for an array, writing down that the type for clients is Client[], and then you wonder: wait a second, do we even need to allow multiple clients to be connected at the same time? Why? When? No, we actually don’t — it should be client: Client.
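That back-and-forth might look something like this in TypeScript (Client, SessionDraft, and Session are made-up names, purely to illustrate the moment of typing out a type and second-guessing it):

```typescript
// Hypothetical sketch: the act of writing the field forces the question.
interface Client {
  id: string;
  connectedAt: Date;
}

// First draft: an array, because "clients" sounded plural.
interface SessionDraft {
  clients: Client[]; // wait: do we ever actually have more than one?
}

// After thinking it through: exactly one client per session.
interface Session {
  client: Client;
}

const session: Session = {
  client: { id: "c-1", connectedAt: new Date() },
};

console.log(session.client.id);
```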
An agent is happy to pick an answer for you — without telling you. It will just write the code.
That might not be a problem. If you’re not building something new or if you don’t even need to learn how the software works (which is probably more often the case than you might think) or if you already have a good mental model, let the agent rip. In fact, I’d even say that in the majority of cases it’s not a problem, because most software development is not building something new.
But if you need to learn, so you can make better engineering tradeoffs and product decisions, it seems to me that one of the most practical ways to do that might still be to get your hands dirty. Let’s see how long that lasts.
Ladybird adopts Rust, with help from AI. Now that’s engineering: “Our first target was LibJS, Ladybird’s JavaScript engine. […] This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.” And also this: “If you look at the code, you’ll notice it has a strong ‘translated from C++’ vibe. That’s because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline.” That’s how you build software: step by step, and choosing tradeoffs carefully. And that, I’m rather sure, won’t go away.
Talking about ports: Cloudflare used “one engineer and an AI model” and “$1,100 in tokens” to create a drop-in Next.js replacement built on top of Vite. The sections on why this was a good fit for AI and the approach they took are very interesting. So is this point at the end: “It's not clear yet which abstractions are truly foundational and which ones were just crutches for human cognition. That line is going to shift a lot over the next few years. But vinext is a data point. We took an API contract, a build tool, and an AI model, and the AI wrote everything in between. No intermediate framework needed. We think this pattern will repeat across a lot of software. The layers we've built up over the years aren't all going to make it.” Let’s see whether frameworks like Next.js or vinext will still be useful in a few years. Oh and of course there’s drama between Cloudflare and Vercel so Vercel shot back.
Man, I had this link here, to Anthropic’s Statement from Dario Amodei on our discussions with the Department of War, saved so I can write about it in this edition, but good lord, there’s now fifteen other things to link to. Just type “Anthropic” or “OpenAI” into Google News. Or don’t, there’s a lot of noise and dust in the air and if you aren’t on the inside it seems hard to get an accurate impression of what happened (or is happening). What I did find very interesting, regardless of surrounding context, was this post by Palmer Luckey.
This really was as good as everyone said it is: The Very Hungry Caterpillar, an examination of Eric Carle’s famous book on the Looking At Picture Books substack. I highly recommend you read this. What a wonderful way to look at books, at design, at the world. It’s also funny.
This one too: How to Make a Living as an Artist. There are many things you can get out of this post if you’ve ever built and shipped something, regardless of whether that was a painting, some words, code, or something else.
Justin Duke’s scattered thoughts on LLM tools: “it seems like the logical endpoint is infinite and perfectly abstracted sandboxes with previewing, isolation, and very tight feedback loops. But right now the largest gap between where we and most other organizations are and that brilliant future is not on the AI side but on all the calls coming from inside the house that make it difficult to sandbox a mature application.” Question is: does “mature application” mean the same thing it did a year ago?
This Eileen Gu clip made the rounds recently and I find it incredibly fascinating. Over the last ten, fifteen years I made several attempts to get into meditation, read quite a lot about it, including some books, and now know that (1) I am not the thoughts that pop up in my head (2) my brain is a seemingly random thought-generator (3) you can influence what thoughts it generates by practicing (4) I am the thoughts I repeatedly think. The ability to modify what you think is incredible (as I wrote in admiration here) and I wish I could do it as effortlessly as Eileen Gu describes here.
Logan Kilpatrick: “The compute bottleneck is massively under appreciated. I would guess the gap between supply and demand is growing single digit % every day.” If you’ve never really dug into this topic, I recommend this podcast with Dylan Patel. He’s a smart guy and if I had listened to him all the way back in fall of 2024, when I first heard of him, I would’ve bought SK Hynix and Sandisk stock and made a lot of money.
Lovely and well-made: An interactive intro to quadtrees. Makes me want to build something with quadtrees. Notable: how it explains use cases for quadtrees, besides the very obvious one of, well, a map.
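For a taste of the idea, here’s a minimal point-quadtree sketch in TypeScript (my own illustration, not code from the linked article): insert splits a node into four children once it holds more than a handful of points, and query only descends into children that overlap the search rectangle.

```typescript
// Minimal point quadtree: illustrative sketch, capacity and names are my own.
type Point = { x: number; y: number };
type Box = { x: number; y: number; w: number; h: number }; // top-left + size

const CAPACITY = 4;

class Quadtree {
  points: Point[] = [];
  children: Quadtree[] | null = null;
  constructor(private bounds: Box) {}

  // Half-open intervals, so a point lands in exactly one child.
  contains(p: Point): boolean {
    const { x, y, w, h } = this.bounds;
    return p.x >= x && p.x < x + w && p.y >= y && p.y < y + h;
  }

  insert(p: Point): boolean {
    if (!this.contains(p)) return false;
    if (this.children === null && this.points.length < CAPACITY) {
      this.points.push(p);
      return true;
    }
    if (this.children === null) this.subdivide();
    return this.children!.some((c) => c.insert(p));
  }

  private subdivide(): void {
    const { x, y, w, h } = this.bounds;
    const hw = w / 2, hh = h / 2;
    this.children = [
      new Quadtree({ x, y, w: hw, h: hh }),
      new Quadtree({ x: x + hw, y, w: hw, h: hh }),
      new Quadtree({ x, y: y + hh, w: hw, h: hh }),
      new Quadtree({ x: x + hw, y: y + hh, w: hw, h: hh }),
    ];
    // Push existing points down into the children.
    for (const q of this.points) this.children.some((c) => c.insert(q));
    this.points = [];
  }

  // Collect all points inside a query rectangle.
  query(range: Box, out: Point[] = []): Point[] {
    const b = this.bounds;
    const overlaps =
      range.x < b.x + b.w && range.x + range.w > b.x &&
      range.y < b.y + b.h && range.y + range.h > b.y;
    if (!overlaps) return out;
    for (const p of this.points) {
      if (p.x >= range.x && p.x < range.x + range.w &&
          p.y >= range.y && p.y < range.y + range.h) out.push(p);
    }
    if (this.children) for (const c of this.children) c.query(range, out);
    return out;
  }
}
```

The win over a flat array is that a query skips whole subtrees that can’t contain matches, which is exactly why it shows up in collision detection and nearest-neighbor searches, not just maps.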
What Claude Code Actually Chooses. Interesting: “We pointed Claude Code at real repos 2,430 times and watched what it chose. No tool names in any prompt. Open-ended questions only. […] The big finding: Claude Code builds, not buys. Custom/DIY is the most common single label extracted, appearing in 12 of 20 categories (though it spans categories while individual tools are category-specific).” Make sure to click through to the full report to see how they came up with these numbers. And while it’s interesting, I’m also not sure whether it matters that much outside of an experiment.
The left is missing out on AI. I’m not sure whether I’d say “the left”, but when I read this I couldn’t help but say “oh boy” out loud when it reminded me that people still talk about “stochastic parrots” and “spicy autocomplete” and “these models can’t think”.
The Hardest Lessons For Startups To Learn, a vintage Paul Graham essay from 2006 that I somehow came across this week. I’m not sure whether I’ve read it before, but I must’ve because I nodded to everything he’s saying here. Or maybe it’s the last fifteen years, give or take, of working in startups. Really good.
Times are changing, there’s a lot of things to adapt, including interviewing: How We Hire Engineers When AI Writes Our Code. “I’ll hand you a small problem – one that we’ve solved ourselves – usually from a bare-bones Figma file or a short spec. This might be a simple flow or a lightweight feature that would ordinarily take a day or two to build and ship. But for this exercise, you’ll have just a few hours—and that’s not enough time to make a polished product. I want to see how you work within constraints. You’re encouraged to use AI to solve the problem. Whatever tools you would want to use as an employee, use them during the interview. We’ll give you a Claude, Codex, Cursor, or Gemini license if you need one. I want to see you balance LLM-generated code against your own judgment.
But make no mistake—even if you aren’t writing the code, you own the output.” I haven’t formally interviewed engineers in over a year but I think this is how I’d do it too.
Really, really, really good and thought-provoking: Nobody knows how the whole system works.
Phil Eaton started a company: “I quit my job at EnterpriseDB hacking on PostgreSQL products last month to start a company researching and writing about software infrastructure. […] This company, The Consensus, will talk about databases and programming languages and web servers and everything else that is important for experienced developers to understand and think about. It is independent of any software vendor and independent of any particular technology.”
“Cognitive debt, a term gaining traction recently, instead communicates the notion that the debt compounded from going fast lives in the brains of the developers and affects their lived experiences and abilities to ‘go fast’ or to make changes. Even if AI agents produce code that could be easy to understand, the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it.”
Ben Wallace: The happiest I’ve ever been. I’ve had quite a few conversations with programmer friends over the last year that ended with someone wondering: do I still enjoy this? Is this the programming I want to do? Some answer with yes, others with no. I understand both answers and the “code was never important” comments are not helpful to those who really, really enjoyed writing code. If you’re in sales, that might be because you love negotiation, or the product you’re selling, or making money, or, hey, because you love talking to people, love finding out what their problems are, love to visit them. If your job suddenly changed from that to never talking to a human again, I bet you’d find it hard to take solace in “it was never about the people, it was always about closing the deal.”
747s and Coding Agents. Thoughts on learning and getting better and what coding agents might take away from us. Very good.
Interesting: Building An Elite AI Engineering Culture In 2026. This isn’t a guide for how to achieve an “elite” culture, I’d say, but more an examination. Interesting to read through and compare. For example, these two points: “The most consequential organizational change in 2025–2026 is the dissolution of the design-engineering boundary at top companies” and “No design-to-dev handoff. No PM-to-engineering handoff. No QA as a separate gate. Everyone ships.” — that describes what we do at Amp pretty well. Tim and Brett, our “designers” at Amp, do design, but they also ship what they design and ship other code and debug distributed systems stuff. I don’t think I ever saw a classic “design Figma” at Amp. We also don’t have PMs. I’m probably the closest thing we have to a PM, but I have a very different title and am the #2 contributor in code (Quinn is #1). Last year, when we started Amp, we started working this way because it was natural with just two senior people in a repository (Quinn and myself). Sure, push to main, we’re all grown-ups. But then over the year, we added more and more people and kept this way of working and now I’m pretty certain that it’s because of AI that we work this way. I need to write more about that.
Murat Demirbas on the End of Productivity Theater. This is something I’ve also wondered about a lot over the past few years, even, say, pre-AI: “I remember the early 2010s as the golden age of productivity hacking. Lifehacker, 37signals, and their ilk were everywhere, and it felt like everyone was working on jury-rigging color-coded Moleskine task-trackers and web apps into the perfect Getting Things Done system. So recently I found myself wondering: what happened to all that excitement? Did I just outgrow the productivity movement, or did the movement itself lose steam?” His analysis seems spot-on.
Now this is a great thought experiment: “There’s a well-known phenomenon in the facial aesthetics literature whereby ‘average faces’ (that is, faces formed by superimposing many faces atop one another) tend to be more attractive than the average person. […] Recently, I have begun to wonder if LLM-writing faces a similar challenge.”