Does AI safety prevent AI safety?

In many ways, the AI safety conversation right now functions mostly to avoid doing something about AI safety.

In particular, there are existential but vague conversations about AI ending humanity in some dramatic way. These discussions seem to pull a lot of oxygen away from far more tangible, tactical conversations about how AI is negatively affecting people’s lives right now in obvious, gross, and unethical ways.

"Purpose" here is not a suggestion of conscious intent. Rather, there’s always a dynamic in society where certain narratives are more convenient for those with influence that others. The tactical, tangible AI harms are inconvenient because they would require someone to do something specific right now. The scifi, apocalyptic AI safety conversations are convenient because they absorb attention without requiring concrete action in the present.

When you read the newspaper, it’s worth asking yourself: What is real in my life right now? What’s real in the stories I hear face-to-face from people about their lives?

Here’s something that’s real: people can’t post images on social media without risking that someone will use AI to alter or exploit those images, often in degrading ways. This disproportionately affects young women, including those not yet at the age of majority. It’s unacceptable.

And yet, in the United States, we are not moving with much urgency to address these problems. There has even been discussion at the federal level about limiting states’ ability to regulate AI. There has been some movement in other countries, but broadly speaking, the mainstream AI safety conversation rarely focuses on these issues as “AI safety.”

That’s tragic because these are problems we could actually do something about.

If you scroll through social media, you’ll see that AI safety discourse is often about the end of the world. I don’t want the world to end. I understand the instinct to focus on catastrophic risks. But how, exactly, is the world supposed to end? When? Through what mechanism? The details are often thin.

When the risks are framed in such abstract and long-term ways, it becomes difficult for anything concrete to happen in the short term. Occasionally we hear that maybe a company won’t release its hot new model. But as we’ve recently seen, even companies that have made strong safety pledges feel market pressure to keep releasing new systems when competitors do.

The “end of the world” AI safety conversation often results in… nothing.

If we deprioritized these vague existential narratives, we might free up space for discussions where action is actually possible. These are discussions that could reduce real harm that people are suffering today under the status quo.

If you’ve never encountered this idea before, it might sound crazy: that AI safety discourse could function to prevent meaningful AI safety action. But this is part of a broader pattern that scholars have written about for decades. It’s often referred to as recuperation.

If you want to see recuperation in action, pay attention to classic rock songs in car commercials. Listen to the lyrics. Ask yourself: is this an anti-consumerist anthem now being used to sell me a car?

It’s a common pattern. When powerful interests face criticism, they adopt pieces of that criticism and use them in a diluted, aestheticized, or symbolic way. Paradoxically, this can neutralize the critique itself. The language remains, but the teeth are gone.

This is happening across much of today’s AI safety discourse.

That’s a real tragedy because the harms are not theoretical. They’re here. They’re personal. And they’re fixable.