In the name of Allah, the Most Gracious, the Most Merciful
25 February 2026

I don’t think this is going to be particularly controversial, but I think Anthropic abandoning some of their commitments to safety is a Bad Thing™.

To me, this was the distinguishing feature of their brand. This is what separated them from the OpenAIs and xAIs of this world.

To be clear, the commitment they have dropped is a promise not to release models without guaranteeing proper risk mitigations in advance.

“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Anthropic’s chief science officer Jared Kaplan told TIME in an exclusive interview. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

This sounds to me more like an excuse. Surely they don’t have to stop training the models; they just have to make sure they’re safe before they release them.

What this actually sounds like is “We can make more money if we care less about safety”. Is Anthropic really at risk of being overtaken by competitors? It doesn’t seem that way right now, with $9b in revenue at the end of 2025, compared to $1b in 2024.

This really feels like the beginning of enshittification of LLMs. OpenAI is adding ads, Anthropic is dropping safety commitments.