Yesterday morning, the day after Halloween, I went rucking. Quite the word, and it’s not a typo. It comes from rucksack and means to walk or hike with a heavy backpack or weights on your back. I like to use the weighted vest my wife got me for my birthday a few years ago. It weighs 30kg (about 66lbs). Its appearance has made other people stop and ask me things like: Are you police? Is that a bulletproof vest?
No comments like that yesterday. And I was ready for comments when I said hello to the older couple I passed, or when I talked to the guy who was lost with his bike and wanted directions. Maybe I detected a funny look — a barely noticeable shift in facial features?
Back home, I went to get a shower and right before, in the mirror, I saw what my fellow walkers and riders in the woods must’ve seen: I still had my Halloween vampire costume’s blood-dripping-from-edges-of-mouth makeup on.
“My tinkering habits picked up very late, and now I cannot go by without picking up new things in one form or another. It’s how I learn. I wish I did it sooner. It’s a major part of my learning process now, and I would never be the programmer person I am today.”
I agree with this, in that I too think what he said will be misinterpreted: “Karpathy is wrong. Write that post, build that slide deck!” And I certainly agree with this too: “if you’re an engineer, writing — or creating a slide deck, or teaching something — is one of the most powerful ways to build understanding. Why? Because explaining something forces clarity. You can’t hide behind hand-wavy intuition when your audience knows the topic. You have to go deep. You have to really understand it.”
Linked to in that post is this one here: How to Increase Your Luck Surface Area. I’ve known about this idea of luck surface area for a few years, but I can’t remember where I came across it for the first time and now I think it might have been this post. Very good.
Jeremy Howard talking to Chris Lattner, about software craftsmanship, and AI, and the former in times of the latter. To quote just one quote-worthy paragraph, let’s use this one here, with which I agree fully (especially that last sentence), even though I think it’s useful for more than what he alludes to: “It’s amazing for learning a codebase you’re not familiar with, so it’s great for discovery. The automation features of AI are super important. Getting us out of writing boilerplate, getting us out of memorizing APIs, getting us out of looking up that thing from Stack Overflow; I think this is really profound. This is a good use. The thing that I get concerned about is if you go so far as to not care about what you’re looking up on Stack Overflow and why it works that way and not learning from it.”
Agents are now breaking the fourth wall.
Nathan Lambert: Burning out, an “essay on overworking in AI.”
Cursor released their own model this week: Composer-1. Here’s Simon Willison’s take on it. Personally, I’ve played around with it and while the speed is impressive it also isn’t that smart and I wonder how that will shake out. We’ve done experiments with fast-but-not-as-smart models in Amp and when I went to measure the difference between a slow-but-smart and a fast-but-dumb model, the slow model turned out to be faster every time, when measuring prompt-to-result time, because the fast-but-not-as-smart model had to retry and sometimes did the wrong thing. But, I’m wondering, what if that comparison is wrong? What if a fast-but-not-as-smart model works better if not used like a smart model, what if it’s not suited to be an assistant that you pair with, what if…? Lots of things to try.
Yew Jin Lim: Note to my slightly older self.
A comment that deserves a better classification than “comment,” by dang, Hacker News moderator, on going “down an epic rabbit hole the other day—a rabbit labyrinth really—learning about what happened to the children of the Beats.”
Here’s a heavy triplet for you, on media, society, the Internet, all of it. Start here, by reading The Goon Squad, whose subtitle — “Loneliness, porn’s next frontier, and the dream of endless masturbation” — won’t prepare you. But don’t let the subject matter scare you away. The writing is good and, I tell myself, sometimes it’s good to stare into the abyss.
Once you’re done with that, go and read this excerpt of a David Foster Wallace interview from 1996. “And that as the Internet grows, and as our ability to be linked up, like—I mean, you and I coulda done this through e-mail, and I never woulda had to meet you, and that woulda been easier for me. Right? Like, at a certain point, we’re gonna have to build some machinery, inside our guts, to help us deal with this. Because the technology is just gonna get better and better and better and better. And it’s gonna get easier and easier, and more and more convenient, and more and more pleasurable, to be alone with images on a screen, given to us by people who do not love us but want our money. Which is all right. In low doses, right? But if that’s the basic main staple of your diet, you’re gonna die. In a meaningful way, you’re going to die.” Don’t forget to note the name of the subreddit.
And now, number three: Seth Godin saying that “attention is a luxury good.” I won’t quote it here, because it’s eight sentences and pulling one out is playing Jenga.
Joy Scrolling: We Traveled the Real California That ‘One Battle After Another’ Imagined.
Amazing experiment: “I spent a few hours last weekend testing whether AI can replace code by executing directly. Built a contact manager where every HTTP request goes to an LLM with three tools: database (SQLite), webResponse (HTML/JSON/JS), and updateMemory (feedback). No routes, no controllers, no business logic. The AI designs schemas on first request, generates UIs from paths alone, and evolves based on natural language feedback. It works—forms submit, data persists, APIs return JSON—but it’s catastrophically slow (30-60s per request), absurdly expensive (€0,05/request), and has zero UI consistency between requests. The capability exists; performance is the problem. When inference gets 10x faster, maybe the question shifts from ‘how do we generate better code?’ to ‘why generate code at all?’” Comments are divided, as expected.
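The architecture described in that quote can be sketched in a few lines. This is a minimal sketch of the idea, not the author’s actual code: the model call is stubbed out (a real version would send the request plus tool descriptions to an LLM API), and all names — `fake_model`, `handle`, the tool call shapes — are assumptions for illustration. The only “application code” left is the tool dispatch loop.

```python
import sqlite3

# Stub standing in for the real LLM call. It pretends the model decided to
# create a schema on first request, insert the submitted contact, and render
# a confirmation page -- the three tools from the quote: database,
# webResponse, updateMemory.
def fake_model(request, memory):
    return [
        {"tool": "database",
         "sql": "CREATE TABLE IF NOT EXISTS contacts (name TEXT, email TEXT)"},
        {"tool": "database",
         "sql": "INSERT INTO contacts VALUES (?, ?)",
         "params": [request["form"]["name"], request["form"]["email"]]},
        {"tool": "webResponse",
         "body": "<p>Saved " + request["form"]["name"] + "</p>"},
    ]

def handle(request, conn, memory, model=fake_model):
    """Every HTTP request goes straight to the model; there are no routes,
    no controllers -- just dispatch over the tool calls the model returns."""
    response = None
    for call in model(request, memory):
        if call["tool"] == "database":
            conn.execute(call["sql"], call.get("params", []))
        elif call["tool"] == "webResponse":
            response = call["body"]
        elif call["tool"] == "updateMemory":
            memory.append(call["note"])
    conn.commit()
    return response

conn = sqlite3.connect(":memory:")
memory = []
body = handle(
    {"path": "/contacts", "form": {"name": "Ada", "email": "ada@example.com"}},
    conn, memory)
print(body)  # → <p>Saved Ada</p>
rows = conn.execute("SELECT name, email FROM contacts").fetchall()
print(rows)  # → [('Ada', 'ada@example.com')]
```

Which also makes the cost structure obvious: the schema, the HTML, the business logic all live inside that one model call per request, so you pay inference latency and price for every page load.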
“‘Fake it until you make it’ is often dismissed as shallow, but it’s closer to Franklin’s truth. Faking it long enough is making it. The repetition of behavior, not the sincerity of belief, is what shapes character. You become the kind of person who does the things you repeatedly do.”
This generated a lot of thoughts and one longer-ish internal Slack message days after reading it: your data model is your destiny.
192 Weeks. Eric Zhang on working at Modal, going back to school, working at Modal again, and leaving Modal, all while moving to and from New York, and growing up.
Somehow I ended up reading the Wikipedia article on Adam Gopnik, a staff writer for The New Yorker, and got stuck right there at the top, in the second paragraph, due to this: “In 2020, his essay ‘The Driver’s Seat’ was cited as the most-assigned piece of contemporary nonfiction in the English-language syllabus.” Wait, what, really? And the essay’s from 2015! So is it good or is it it’s-assigned-reading-good? But then again, Bill Bryson was assigned reading for me at some point and it might have been what made me fall in love with the English language. So I read The Driver’s Seat. It’s great.
“Man, I love The New Yorker,” I thought and went to check up on Alice Gregory, who’s also a staff writer there. I then read her piece on the philosopher L. A. Paul. Fantastic writing and it contains this gem, which I’ll pull out next time someone says that something that’s longer than a blog post could’ve been a blog post: “Paul considers the repetition necessary and has compared it to examining a cut gemstone—holding it up to the light and turning it slowly to see every all-but-identical facet.” (See also: Matrix Theory of Mind)
If you do know what the words “tender offer” and “secondary” mean in the context of stock options, you might find this post by Zach Holman interesting: Taking Money off the Table.
This seems like it will have a huge impact: Bill Gates with “three tough truths about climate change”.
It’s a 32 second video, but I’m telling you: this is one of the best demos I’ve ever seen. It’s so good. So good! The chosen example code, the diagnostics, the TODO comment there, the way it shows how easy it is to migrate — incredible.
This seems like a very, very good idea: a heatmap diff viewer for code reviews.
Reminder: “Computer science is a terrible name for this business. First of all, it’s not a science.”
“one of my most productive work days ever when I decided I was sick of dealing with an intractable perf problem, took an uber to get ramen, had three beers with lunch, decided to walk the two or three miles back, and realized a redesign that would trivially parallelize 95% of it”