3 Comments
Justin:

Thorsten, great post as usual. I'd definitely be interested in learning more about how you work internally at Amp. How do you release with confidence? How do you detect when a rollback is needed, and how do you roll back? How do model evals happen? Etc.

Larkin:

I find the Palmer post much noisier and dustier than pretty much anything else I've read on the topic. He acts as if the main question is "should the Pentagon work with Anthropic?" when everyone serious in the discussion agrees the answer is "not if they don't want to." The key questions for me are: if Anthropic's position is "an untenable position that the United States cannot possibly accept," why did the Pentagon accept it in the first place? And why is the Pentagon designating Anthropic a security risk, damaging their business, instead of just walking away, as this would normally play out? The Pentagon has been happily doing business with this supposed security risk for a while, and will continue to do so for six months until they've been disentangled. That's weird.

And it should signal to you, as the founder of an AI org, that you should not do business with this administration, since there is a high risk of ending up worse than you started. Suppose they wanted to cut your pay in half after six months. Well, if you say "no" and they walk away, that sucks, but at least you got six months of contract. That happens with gov contracts all the time, tbh, and it's not fun. But if they walk away AND burn bridges with your other clients such that you lose two years' worth of revenue, that's a net loss.

Thorsten Ball:

At this point I'm honestly not sure how I can differentiate noise and dust from anything else. Take these two posts, for example: https://stratechery.com/2026/anthropic-and-alignment/ and https://www.astralcodexten.com/p/the-pentagon-threatens-anthropic.