As I’m standing here and packing for a ten-day trip, first to Mexico and then to San Francisco, I can hear the three hearts beating in my chest again.
The first one, with its ba-dum ba-dum ba-dum spelling out its wishes in Morse code, tells me that I should be envious of those hand-luggage-only people. True travellers; agile, light, flexible; their whole life in the overhead compartment; they probably laugh at those idiots standing around the luggage carousel when they walk past. Actually, they probably are somewhere else already.
My second heart, its rhythm clashing, instead laughs at the hand-luggage-only people. “Just wait until they get chocolate ice cream all over their white t-shirts,” it assures me, “and then sit down into some jam-filled pastry someone dropped on a chair at breakfast, right before someone tells them they can’t wear those shoes to dinner, which comes right before the rain, for which they’re unprepared. What if they pee their pants, dude?” It tells me to take the bigger suitcase.
The miraculous third heart is wise. It’s calm. Its ba-dum whispers and says “it’s fine, they’re both right, let’s do what we always do.” What it means, of course, is that I should do both. Pack a lot (a lot) but then also get stressed out about not having enough clothes (watch out for those pastries…) and obsessively ration the clothes I brought — like wearing the same shorts for an amount of time that’s barely accepted by society while telling myself that no one will notice that stain anyway. It'll come right out if you just, see, brush over it like this.
Really, really, really good: Behind the Scenes of Bun Install. Clear writing based on clear thoughts, with explanations at just the right level of abstraction, consistently so: this is what technical writing should be. Oh, and, of course, it’s a lot of fun and makes me want to make things faster.
Also really, really, really good: Defeating Nondeterminism in LLM Inference. I’ve always been curious about why some LLMs can be non-deterministic even at temperature 0, and when, a few weeks back, I tweeted about LLMs being non-deterministic, people in the replies fought not only over whether they are non-deterministic, but also over what the source of that non-determinism could be. Floating point operations? The hardware, GPUs? Here’s the answer. It’s both, but not really: “In other words, the primary reason nearly all LLM inference endpoints are nondeterministic is that the load (and thus batch-size) nondeterministically varies!” They are ultimately non-deterministic because multiple requests are being sent through the model at the same time. Again: very good blog post! And also, note that this is Thinking Machines, the startup founded by Mira Murati, the ex-CTO of OpenAI; the startup that’s raised $2 billion; the startup that, so far, hasn’t published anything else. As far as I know, this post is the first thing they put out into the world. Well done.
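The post explains it far better than I can, but the underlying floating-point fact is easy to demonstrate. Here’s a tiny Python sketch (mine, not from the post) that sums the same numbers with two different groupings, which is a stand-in for the different reduction strategies a kernel might pick at different batch sizes:

```python
import random

# Floating-point addition is not associative, so the grouping
# (i.e. the reduction order) changes the result in the last bits.
random.seed(0)
values = [random.uniform(-1.0, 1.0) * 10 ** random.randint(-6, 6) for _ in range(10_000)]

def chunked_sum(xs, chunk_size):
    # Sum in chunks first, then sum the partial results. This mimics,
    # very loosely, how a reduction strategy can depend on batch size.
    partials = [sum(xs[i:i + chunk_size]) for i in range(0, len(xs), chunk_size)]
    return sum(partials)

print(chunked_sum(values, 16))    # "small batch" grouping
print(chunked_sum(values, 1024))  # "large batch" grouping, usually differs in the last digits
```

Same numbers, same machine, different grouping, slightly different result. Now imagine the grouping depends on how many other people’s requests happen to share your batch.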
You have to watch this and I’m sorry but not really and I’m not going to tell you what it is before you click, so just click here. I have to admit that I also cloned the code and tried to make it work for our Amp TUI. Admission #2: I didn’t know about the “classic” script that was referenced and that’s apparently by The Onion.
“I believe we have both the power and the responsibility to shape this technology’s future. That begins with a clear-eyed diagnosis of the present. One of the most useful diagnostic tools I've found for this comes from computer scientist Melanie Mitchell. In a seminal paper back in 2021, she identified what she claims are four foundational fallacies, four deeply embedded assumptions that explain to a large extent our collective confusion about AI, and what it can and cannot do.”
PostHog has a new homepage and it looks like an operating system in the browser. That in itself isn’t new, but this one’s very cute and it’s now the homepage of a company that’s raised a Series D this year and is valued at $920M. Let’s see how long it stays. But I guess even if they rip it out in 4 weeks, they’ve created some buzz. Good move.
Term.Everything allows you to run “every GUI app in the terminal!” The demos look very cool, the README is great, and the description of how it works is very inspiring. I want to play around with chafa now.
In February, Apple released the iPhone 16e, including the C1 modem, about which Mark Gurman wrote: “The C1 Apple modem is a monumental technical achievement. A several billion dollar effort that has been in the works for 7 years. In the end it gets two sentences in the press release and 15 seconds in the announcement video. Apple is clearly downplaying this intentionally.” I quoted Gurman here, back in February, and provided links to more comments that described what an achievement it is. Now, this week, Apple released the iPhone Air, which comes with “N1, a new Apple-designed wireless networking chip that enables Wi-Fi 7, Bluetooth 6, and Thread.” A HackerNews comment says: “Congrats to Apple for finally designing out Broadcom and vertically integrating the wireless chip.” I have to admit that I’m essentially clueless when it comes to global hardware manufacturing, but, man, I’m intrigued.
Speaking of HackerNews: I found this whole discussion interesting. The linked article states that we’re all being sucked into the hole of short-form video, but the top comment says: “Too simple of a narrative. At the same time, YouTube videos are getting longer, and people are watching more YouTube videos on TVs than on mobile devices. […] So I think we're seeing more of a bifurcation: in-depth longform videos are becoming 30, 40, 60, even 90 minutes long, whereas anything shorter than 10 minutes is being compressed to 30-60 seconds.” I’ve never installed TikTok on my phone, so I can’t comment on that, but I can say that my brain seems to be immune to YouTube Shorts; they just don’t do anything to me. I can watch one and stop. Twitter, on the other hand, well…
Another good comment: “Saying boilerplate shouldn’t exist is like saying we shouldn’t need nails or screws if we just designed furniture to be cut perfectly as one piece from the tree. The response is ‘I mean, sure, that’d be great, not sure how you’ll actually accomplish that though’.”
But back to Apple. Here’s some engineering porn for you: “Memory Integrity Enforcement (MIE) is the culmination of an unprecedented design and engineering effort, spanning half a decade, that combines the unique strengths of Apple silicon hardware with our advanced operating system security to provide industry-first, always-on memory safety protection across our devices.” But what is it? In their bombastic words, it’s “the industry’s first ever, comprehensive, always-on memory-safety protection covering key attack surfaces — including the kernel and over 70 userland processes — built on the Enhanced Memory Tagging Extension (EMTE) and supported by secure typed allocators and tag confidentiality protections.” But then you read it and think, god damn, that’s impressive, I’d be bombastic about this too.
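To get a rough picture of what the memory tagging part buys you, here’s a toy Python sketch, my own analogy and emphatically not Apple’s implementation or API: every allocation gets a small random tag, pointers carry the tag they were handed out with, and an access through a pointer whose tag no longer matches the memory’s tag traps instead of silently succeeding.

```python
import secrets

class TaggedHeap:
    """Toy model of memory tagging, not EMTE itself: one tag per byte here,
    while real hardware tags 16-byte granules and stores tags out of band."""

    def __init__(self, size):
        self.memory = bytearray(size)
        self.tags = [0] * size

    def alloc(self, addr, length):
        tag = secrets.randbelow(15) + 1            # non-zero 4-bit tag
        for i in range(addr, addr + length):
            self.tags[i] = tag
        return (addr, tag)                         # a "pointer": address plus tag bits

    def free(self, ptr, length):
        addr, _ = ptr
        for i in range(addr, addr + length):
            self.tags[i] = 0                       # retag on free; stale pointers now mismatch

    def load(self, ptr, offset):
        addr, tag = ptr
        if self.tags[addr + offset] != tag:
            raise MemoryError("tag check failed")  # where the hardware would trap
        return self.memory[addr + offset]

heap = TaggedHeap(64)
p = heap.alloc(0, 16)
heap.load(p, 8)           # tags match: fine

try:
    heap.load(p, 16)      # out of bounds: the neighboring byte carries a different tag
except MemoryError as e:
    print("out-of-bounds read:", e)

heap.free(p, 16)
try:
    heap.load(p, 8)       # use-after-free: the memory was retagged when freed
except MemoryError as e:
    print("use-after-free:", e)
```

The real thing layers a lot more on top of this (the typed allocators and tag confidentiality protections from the quote above), but “tag mismatch means trap” is the core trick.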
Things you can do with a debugger but not with print debugging. 2025 is the year in which I discovered debuggers for myself, and after feeling pretty snobby about it for the first few weeks, I’m now back down on earth and use both print debugging and the debugger.
“My heart goes out to the man who does his work when the ‘boss’ is away, as well as when he is home. And the man who, when given a letter for Garcia, quietly takes the missive, without asking any idiotic questions, and with no lurking intention of chucking it into the nearest sewer, or of doing aught else but deliver it, never gets ‘laid off,’ nor has to go on strike for higher wages. Civilization is one long anxious search for just such individuals. Anything such a man asks will be granted; his kind is so rare that no employer can afford to let him go. He is wanted in every city, town, and village - in every office, shop, store and factory. The world cries out for such; he is needed, and needed badly—the man who can Carry a message to Garcia.”
My iPhone 8 Refuses to Die: Now It’s a Solar-Powered Vision OCR Server. Sounds like a ton of fun and I have a very old iPad lying around here…
Some very popular NPM packages got compromised, but it seems like we all got lucky, since the attackers only wanted to steal crypto stuff. That phishing email looks impressively real though.
“The Babel fish is a small, bright yellow fish, which can be placed in someone's ear in order for them to be able to hear any language translated into their first language.” And now it’s here. Well, not here here, if you’re in the EU, but here. Isn’t that, scusa, fucking crazy?
Being good isn’t enough. I think this is directionally correct, but I’d make two changes: I think technical skills, in this industry, are the foundation on which everything else needs to rest. When they write that “the biggest gains come from combining disciplines. […] technical skill, product thinking, project execution, and people skills”, I’d argue that you shouldn’t think of a pie chart, but of a pyramid, and the base layer is very thick and has the “technical skill” label. The other change: I’d underline, three times, the part about agency. I agree that it’s “more powerful than smarts or credentials or luck”, but if, years ago, you’d told me that you had seen programmers from bumfuck nowhere outwork Stanford graduates, not with “smarts or credentials”, but with grit, discipline, humility, reliability, and attention to detail? I guess I still wouldn’t have believed you.
I remember when people got pissed off at Sublime Text for showing “How about you pay us some money for this software?” every few restarts. Now? “The ROI is obvious, but budget for $1000-1500/month for a senior engineer going all-in on AI development. It's also reasonable to expect engineers to get more efficient with AI spend as they get good with it, but give them time.” Wild times.
“The real risk is not taking a risk. The scaling maximalism of the last decade allowed us to avoid many hard choices — now, we have to think strategically. […] The tragedy is that most teams are still fighting the wrong battle. They’re running the ‘more GPUs’ playbook in a world where the real bottleneck is the data supply chain. If your team is asking for more compute but can’t explain their data roadmap, send them back to the drawing board.”