Joy & Curiosity #84
Interesting & joyful things from the previous week
No big intro today. No time. I have to tweak some orbs, there’s a big release coming.
Evan Phoenix: Agile in the Age of AI. There’s so much in there and it’s all really good. Highly recommended.
This is one of the most interesting analyses of What’s Going On With Software Right Now that I’ve read in recent weeks: “To be a little less vague, I suspect that we’re likely (not certain, but likely) to be entering into a period of unprecedented software degradation, and we’re going to be seeing an increasing frequency of outages like this across many high profile products. But IMO the cause is actually not just the-one-thing-that-everyone-is-always-talking-about, it’s a number of things that have all been bubbling away at just below critical levels for a long time.[…]” You know this joke about the fish and the water, right: old fish asks the young fish “morning! how’s the water?” and the young fish are confused and ask “what’s water?” It’s easy (and probably not that wrong) to point at AI and declare it the cause of every change we see, but I think it’s equally likely that only now that we’re out of the ZIRP era do we see what ZIRP has actually done to this industry.
Ghostty Is Leaving GitHub: “It’s not a fun place for me to be anymore. I want to be there but it doesn’t want me to be there. I want to get work done and it doesn’t want me to get work done. I want to ship software and it doesn’t want me to ship software. I want it to be better, but I also want to code. And I can’t code with GitHub anymore. I’m sorry. After 18 years, I’ve got to go. I’d love to come back one day, but this will have to be predicated on real results and improvements, not words and promises.” The times they are a-changing. Don’t forget to read Mitchell’s comment here. I don’t have the time right now to spell out how much GitHub means to me, but I can safely say that without GitHub I wouldn’t have the life I have today. And for many, many years I thought working at GitHub would be the best job in the world.
This chart made the rounds and kinda set the record straight: “I don’t work on reliability & scaling at GitHub, but the people who do aren’t bad at their jobs. They’re dealing with unprecedented scale from agents. It’s easy to shit on GitHub from the outside if you’re not in charge of 30X-ing capacity within a few months. Have some grace.”
I found Armin’s commentary on the whole GitHub situation to be very good: Before Github. This, for example: “GitHub is currently losing some of what made it feel inevitable. Maybe that’s just the life and death of large centralized platforms: they always disappoint eventually. Right now people are tired of the instability, the product churn, the Copilot AI noise, the unclear leadership, and the feeling that the platform is no longer primarily designed for the community that made it valuable. Obviously, GitHub also finds itself in the midst of the agentic coding revolution and that causes enormous pressure on the folks over there. But the site has no leadership! It’s a miracle that things are going as well as they are.” (Sidenote: I can’t be the only one who’s never used the word ‘forge’ before and now sees it everywhere as if there had been a big “this is the new word we’re going to use now” memo going around.)
Mat Duggan on the GitHub he’d build if he were “rich like a man who owns a submarine he’s never been inside. Rich like a man whose third wife has a skincare line. Tech-titan rich — the kind of money that buys you a compound in Wyoming and the confidence to wear the same gray t-shirt to congressional testimony.” Doesn’t look like what I’d envisioned but some of the points are very interesting, especially this one: “My local copy of the repo should be a representation of the entire repo, not just the code. I should be able to approve a PR from the same VCS I use to check in the code. I should be able to go through my issues by looking through local files.” It’s kinda funny that over the last decade git and GitHub haven’t really merged. It’s always been repository here and rest over there.
Highly, highly, highly recommend you read this piece by Kevin Kelly on Our Uncertain Uncertainties: “In other words, we have a sustained, extended period of uncertainty. Not just a few years, but a decade or more. As AI continues to progress, rather than resolving our perplexity, it expands it. So for the next 10-15 years we have perpetual, continuous, severe uncertainty. This is a burdensome weight because people hate uncertainty more than bad news. […] what should we do about it? The most effective response to this multi-layered persistent uncertainty is not to seek impossible stability, but to cultivate radical adaptability and radical optionality. Give up on having a reliable prediction of what happens next. Instead cultivate multiple scenarios of what could happen, and endeavor with each of them to maximize your options. Goals should be considered as disposable hypotheses, constantly ready to be discarded and replaced by better-fitting concepts later on.” As much as I don’t like to say it, I think it’s true. I think the last 30 years will look incredibly calm compared to the next 10. But hey, when the going gets tough, the tough get going, right? Or as the Hunter S. Thompson quote goes that I had pinned to my teenage bedroom wall: when the going gets weird, the weird turn pro.
My friend Tomás Senart is looking for a founding engineer to work with him on Perfloop. I worked with Tomás for many years at Sourcegraph, he’s a true hardcore programmer, incredibly high agency (probably came out of the womb with his sleeves rolled up), and has a great sense of humor. Also: I trust him blindly to order sushi for me whenever we go out. If you’re into AI and systems programming and performance optimizations: talk to him!
Had you asked me, when I started this newsletter, whether I’d ever link to something in the National Catholic Register, I probably would’ve laughed and said “What? What is that? What’s in there? Why would I link to it?” But now we’re here and I think this is one of the best things I’ve read on AI and education, or actually: education in general, in a long, long time: Repairing the Ruins: Why AI Can’t Replace Education. Listen to this: “Education worthy of the name has always understood this. Its end is not the delivery of content, however accurate. It is the formation of persons capable of judgment, attention and intellectual honesty. That formation requires a genuine encounter with difficulty — the friction of a hard text, the resistance of a problem that does not yield quickly, the discomfort of revising what one believed. It requires embodiment as much as intellect: reading slowly, speaking in one’s own voice, accepting the cost of standing behind one’s words. A person does not become capable of truth by managing information alone. Wisdom is formed in contact with reality, not in its simulation.” Amen.
Big oof: “Copy Fail is a straight-line logic flaw — it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017.”
The West Forgot How to Make Things. Now It’s Forgetting How to Code: “Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong. It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.”
This was delicious. Daniel Lemire “created something I call the SIMD Quad algorithm” which beats binary search due to parallelism. Essentially: divide your list into blocks of 16 elements, then divide the list of blocks into quarters, check which quarter must contain your target (three independent comparisons the CPU can run in parallel), repeat until you end up with a single block, then check all 16 elements at once. Slick!
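If I read the write-up right, the search loop looks roughly like this. This is only a scalar Python sketch of the idea, with names I made up; Lemire’s actual version does the three quarter-boundary comparisons and the final 16-element check with SIMD instructions instead of loops:

```python
def quad_search(vals, target, block=16):
    """Scalar sketch of the 'SIMD Quad' idea over a sorted list."""
    lo, hi = 0, (len(vals) + block - 1) // block  # block range [lo, hi)
    while hi - lo > 1:
        span = hi - lo
        # Boundaries splitting [lo, hi) into four quarters of blocks.
        cuts = [lo + (span * i) // 4 for i in range(5)]
        # Pick the last quarter whose first element is <= target.
        # These three comparisons are independent, so a SIMD version
        # can do them in one go instead of one dependent probe per step.
        new_lo, new_hi = lo, cuts[1]
        for i in range(1, 4):
            if vals[cuts[i] * block] <= target:
                new_lo, new_hi = cuts[i], cuts[i + 1]
        lo, hi = new_lo, new_hi
    # Single block left: a SIMD version compares all 16 at once.
    base = lo * block
    for i in range(base, min(base + block, len(vals))):
        if vals[i] == target:
            return i
    return -1
```

The point of the quarter split is exactly the parallelism: binary search does one comparison per step and each one depends on the last, while here each step issues multiple independent comparisons the hardware can overlap.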
Very interesting (but maybe a bit shallow) profile of Mistral in Forbes. This is brutal: “But Mistral has slipped ever further behind in leaderboards ranking AI performance. It’s so bad that Mistral’s best model would lose in a face-off against a version of Anthropic’s Claude that was released nine months earlier, per one popular benchmark. Worse, it’s also bested by a new crop of open-weight models from Chinese startup DeepSeek and tech giant Alibaba.” But there is a But: “But Mensch bets that a smaller, cheaper model made in Europe is better suited for governments and global companies than an American closed-source LLM with far more horsepower. Plus, it’s too risky for serious Western companies to depend on Chinese models, says Mistral investor Jeannette zu Fürstenberg of venture fund General Catalyst. The strategy has worked to the tune of $200 million in revenue in 2025. And Mensch says Mistral is on track to start making around $80 million monthly by December.” Very interesting. But (another one), as a European myself, I have to say that I can’t stand it anymore when European tech companies pitch their product with what essentially boils down to: “at least we’re not [US company].” Yeah, the product might be worse, yeah, it doesn’t work as well as the other thing, but hey, at least we’re not …, at least we don’t store your data in the US, at least … As a colleague of mine once said about a similar-sounding marketing campaign by Opera, the browser company, twenty years ago, which essentially said “at least we don’t track you”: that’s not what a winner would say.
3 constraints before I build anything. This was fascinating, because my first reaction was: yes, constraints #1 and #2 are right, but what does #3 even mean? But now, re-reading it, I think that even #2 can be argued. And, hey, #1 too, actually. It is interesting though that they all have some value and I’d definitely say it’s three things to consider before building anything, but [turning around and pointing at the choir behind me: now everybody!] it depends.
Why fat tailed costs emerge at scale: “I find that analysis of AI business models consistently underestimates the impact of unit economics. When people say AI startups face margin squeeze, they point to external competitors or monopolistic GPU pricing as contributing factors. But it seems that the internal resource variance would still exert pressure, even if there was only one LLM provider and chips were abundant.” We’ll probably never get it, but an in-depth blog post by one of the inference providers or model houses on exactly this would be very interesting.
Hell yes: ”I like art that feels like it was made by a free person. I like to see how a person chooses things. I like art before it gets noted and workshopped and homogenized. I like art that preserves the rough edges of the person. Polish can be taught, so it’s less interesting to me than that which can’t be. I like when I can sense how someone really talks, feels, and thinks. I mean consciously so, but also unconsciously so. Every choice communicates. Even the ‘errors.’ I embrace the errors.” That’s why I like to listen to live music a lot. As our admin on the Led Zeppelin bootleg forum in 2007 said: “They always bit off more than they could chew — and then chewed it.”
Very, very interesting: Inside macOS window internals: how SkyLight enables multi-cursor background agents.
Zed 1.0 is out! Congratulations!
CorridorKey: “When you film something against a green screen, the edges of your subject inevitably blend with the green background. This creates pixels that are a mix of your subject’s color and the green screen’s color. Traditional keyers struggle to untangle these colors, forcing you to spend hours building complex edge mattes or manually rotoscoping. [...] I built CorridorKey to solve this unmixing problem. You input a raw green screen frame, and the neural network completely separates the foreground object from the green screen. For every single pixel, even the highly transparent ones like motion blur or out-of-focus edges, the model predicts the true, un-multiplied straight color of the foreground element, alongside a clean, linear alpha channel. It doesn’t just guess what is opaque and what is transparent; it actively reconstructs the color of the foreground object as if the green screen was never there.” Crazy that this even works without a green screen.
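For context: the mixing described there is the standard alpha “over” operator, and “unmixing” means inverting it per pixel, which is underdetermined: one observed color, two unknowns (straight foreground color and alpha). That’s why classic keyers struggle and a learned model helps. A toy sketch of the forward direction in plain Python (names mine, nothing to do with CorridorKey’s actual code):

```python
GREEN = (0.0, 1.0, 0.0)

def over(fg, alpha, bg):
    """Per-channel 'over' with straight (un-premultiplied) colors:
    observed = alpha * fg + (1 - alpha) * bg."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

# A half-transparent red edge pixel (motion blur, say) as the camera
# records it against the screen -- green has leaked into the color:
observed = over((1.0, 0.0, 0.0), 0.5, GREEN)        # (0.5, 0.5, 0.0)

# A keyer that recovers the true straight color and alpha can drop
# the element onto any new background with no green contamination:
clean = over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0))  # same pixel over blue
```

Given only `observed`, infinitely many `(fg, alpha)` pairs reproduce it, which is exactly the ambiguity the model is trained to resolve.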
So, Henrick Johansson, this European VC parody account on Twitter that often hit a bit too close to home, is… real?! No, that can’t be, right? So my theory is: it started as a parody account, but then Comp AI took over the account, changed the avatar to this actor’s image, and now uses it to run ads for compliance while keeping the parody going. Anyone know more?
Staring at walls to improve focus and productivity. I don’t know, man. On one hand: whew, wow, wow. On the other: if it works? On the third: it’s basically meditating.
Beautiful: I just learned I only have months to live.