Joy & Curiosity #69
Interesting & joyful things from the previous week
Back to work this week and what a week it’s been! We had a massive launch, we shipped a lot of stuff, and I felt like I was entering a new stage of agentic programming, burning more tokens than ever before.
But the week was also full of surprises.
Apparently, thanks to Anthropic’s crackdown on other clients using the Claude Code subscription for things that aren’t Claude Code, a lot of people realized for the first time that $200 per month isn’t the real price of these tokens. Surprise: I assumed that everybody knew that $200 can’t buy you all the things that people have been doing with those subscriptions; that it’s heavily subsidized (or optimized, I guess that’s what Anthropic would say). Turns out that assumption was wrong. People are shocked. Yes, that’s why we’re working so hard to make Amp affordable by leaning on the Internet’s Best Business Model, independent of a model house, not even making a profit on individuals’ consumption, without burning VC money or compromising the quality of the product by doing routing tricks behind the curtain.
The other surprise: people were surprised about the crackdown. I had assumed that everybody knew that you aren’t allowed to reuse the Claude Code subscription. To get one of those $200/month all-you-can-burn API keys with special rate limits, you have to pretend to be the Claude Code OAuth client (also: see how many did that) and, I don’t know man, I was naive enough to think that engineers would understand that this isn’t how it was intended to be used, you know.
What I do know for a fact though: we’ve been told early on — in the middle of last year — that we can’t do that, we can’t reuse these Claude Code subscriptions in Amp, because they’re Claude Code only. And if we were told, then I’m pretty sure others were told too.
But now there’s a lot of shocked faces and pearls being clutched and Mr. Officer I didn’t know you need to validate the ticket, I didn’t see the sign, I swear.
Yes, we launched the next generation of Amp Free this week: up to $10 per day in credits, powered by ads, usable with Opus 4.5. Up to $300 per month in Opus 4.5 tokens. Go use it. $10 can get you a lot.
More spicy news this week: “Scoop: xAI staff had been using Anthropic’s models internally through Cursor—until Anthropic cut off the startup’s access this week.” Feels good to be model-house independent, I’ll tell you that.
ezyang on the gap between a Helpful Assistant and a Senior Engineer: “In principle, you could prompt the LLM agent to act like a Senior Engineer. In fact, why stop at Senior, let’s tell the LLM to be a Staff Engineer! Imagine that scaling continues: what would you expect the LLM to do when instructed to act in this way? Well, imagine a human L7 engineer who has just been hired by a big tech company to head up some big, new, multi-year initiative. Will they say, ‘Sure, I can help with that!’ and start busily coding away? Of course not: they will go out and start reviewing code, reading docs, talking to people, asking questions, shadowing oncalls, doing small starter tasks–they will start by going out and building context.” I agree, our analogies don’t fit anymore, because we haven’t had Frankenstein Engineers before.
Dan Shipper on Agent-Native Architectures. This was very interesting. It’s about building agents into end-user applications, but my current campaign slogan is that 2026 will be the year in which agents and codebases melt together, and this article made me wonder: what if you see your codebase as an application with which the agent has to interact? Which tools can you provide it?
From the same thought-universe: Rijnard on the Code-Only Agent. “The Code-Only agent produces something more precise than an answer in natural language. It produces a code witness of an answer. The answer is the output from running the code. The agent can interpret that output in natural language (or by writing code), but the “work” is codified in a very literal sense. The Code-Only agent doesn’t respond with something. It produces a code witness that outputs something.”
The intro from last week’s issue made it into The Pragmatic Engineer: when AI writes almost all code, what happens to software engineering? Next to it are quotes from DHH, Adam Wathan, Malte Ubl. This holiday season apparently really woke something up. Part of me thinks I need to find a non-arrogant way to say “see! I told you! I told you!” and the other part goes “what for?”
Kevin Kelly: How Will the Miracle Happen Today?
Adam Wathan in his morning walk episode: “I just had to lay off some of the most talented people I’ve ever worked with and it fucking sucks.” This episode really blew up and resulted in viral tweets and HackerNews threads and apparently corporate sponsorship by companies that want to help Tailwind. The question on everyone’s mind: is this part of a bigger trend? It’s very sad that these layoffs had to happen and I really loved how Adam gave a long, personal referral to all three of the people involved. Dan Hollick (dude what a URL), Philipp, and Jordan. I’ve worked with Philipp before — he’s an outstanding, top-1% engineer. And, funnily enough, I’ve interacted with Jordan on GitHub before, because he worked on the Tailwind LSP server and I was working on Zed, trying to get it to work for some user configuration.
In the wake of Adam’s podcast blowing up, a lot of people commented on Tailwind’s business model. A lot of noise, to be sure, but it also sparked some very interesting comments. This one, for example, is a very interesting lens with which to look at AI: “What I keep coming back to is this: AI commoditizes anything you can fully specify. Documentation, pre-built card components, a CSS library, Open Source plugins. Tailwind’s commercial offering was built on “specifications”. AI made those things trivial to generate. AI can ship a specification but it can’t run a business. So where does value live now? In what requires showing up, not just specifying. Not what you can specify once, but what requires showing up again and again. Value is shifting to operations: deployment, testing, rollbacks, observability. You can’t prompt 99.95% uptime on Black Friday. Neither can you prompt your way to keeping a site secure, updated, and running.” That first sentence — “AI commoditizes anything you can fully specify” — man, isn’t that something to think about.
Talking about trends: the number of questions on StackOverflow over time. Astonishing.
This week I learned that Martin Fowler is publishing Fragments. And in that issue he links to this post by Kent Beck that articulates something I haven’t been able to: “The descriptions of Spec-Driven development that I have seen emphasize writing the whole specification before implementation. This encodes the (to me bizarre) assumption that you aren’t going to learn anything during implementation that would change the specification. I’ve heard this story so many times told so many ways by well-meaning folks--if only we could get the specification “right”, the rest of this would be easy.” I think this is exactly what makes me skeptical of leaning too much into the “write all the PRDs and Plans and then just execute”-agentic-programming-workflows. Of course the devil’s in the “how do you plan?”-details, but Beck has a point: why would this time be different, why would the magic of “just write a really good, detailed plan and then execute” be different with AI? I don’t see a reason. On the contrary, I think the opposite stance — building software is learning about the software — is truer than ever: you need more feedback loops, more ways for the agent to hit reality, to learn, to course-correct.
Fly released Sprites: Code And Let Live. This is very, very interesting. I’m starting to think that with agents we might be entering a new neither-cattle-nor-pet era, a time of pet/cattle-hybrids. Admittedly, Simon Willison’s piece on Sprites helped me make more sense of it after I had a ton of questions (which I also sent to ChatGPT, like: “so are they saying agents should be always-on in these machines?”)
Brian Guthrie’s Move Faster Manifesto. This is great. This part, on it being a choice, is spot-on: “But the hardest part of moving fast isn’t execution; it’s deciding that it’s necessary, and then convincing people that it’s possible.”
I’ve become fascinated with TBPN and their rise this year, but I still didn’t know much about them or their backgrounds. This Vanity Fair piece filled some gaps — it isn’t just software that’s changing, is it, it’s also media.
And I really nodded along to this post by Jordi Hays, about AI needing a Steve Jobs: “Our AI leaders today seem to have forgotten to include humanity in the AI story. ‘If AI stays on the trajectory that we think it will, then amazing things will be possible. Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.’ - Sam Altman. I understand what Sam is saying here, and it’s not entirely fair to pick a random quote, but there’s no doubt that this type of phrasing is not what Steve would have done.”
Henrik Karlsson: “And you do the same thing with joy. If you learn to pay sustained attention to your happiness, the pleasant sensation will loop on itself until it explodes and pulls you into a series of almost hallucinogenic states, ending in cessation, where your consciousness lets go and you disappear for a while. This takes practice.” Made me wish I was better at directing my attention and thoughts.
If you squint really hard and make a face and tilt your head, this one is related to the Karlsson piece: “Willpower Doesn’t Work. This Does.” But, hey, even if it isn’t related, it’s another good reminder.
Max Leiter from Vercel on how they “made v0 an effective coding agent”. The LLM Suspense framework is neat but it made me wonder: which model generation will make it obsolete?
Jason Cohen on the value of focus and what that even means. This is great and something I’ll reshare in the future.
Nikita Prokopov saying it’s hard to justify the icons in macOS Tahoe. I can’t say with certainty — none of the machines I have are on Tahoe yet — but it looks like I agree with him. Strange feeling reading this, like finding out at the gate that the plane you’re about to board has a new type of airplane seat that has an average rating of 2 out of 5.