We Told Everyone AI Would Take Their Jobs. Someone Believed Us.

The AI industry's rhetoric about job displacement is fuelling public anger and radicalisation. What business leaders need to do now.

Andrew Bird
Head of AI, Affinda

When someone throws a Molotov cocktail at Sam Altman’s house at 4:00 AM, the AI industry’s first instinct is to treat it as an aberration — a lone extremist, a tragic outlier. It’s not. It’s the entirely predictable result of a story the industry has been telling for years.

I run the sales team at an AI company. I spend my days helping enterprises automate document processing. I’m not building AGI, and I’m not trying to replace anyone’s job. But I’m part of this industry, which means I’m part of the problem — and I think we need to be honest about that.

The rhetoric problem

Here’s the uncomfortable truth that most people in AI don’t want to say out loud: the industry’s own messaging has been irresponsible.

Every quarter, another CEO takes the stage to announce that AI will automate millions of jobs. The framing is always triumphant — productivity gains, efficiency, transformation. But to the person watching from an apartment that consumes 47.7% of their household income, the message lands differently: your labour has no future value.

Research from the Journal of Conflict Resolution tells us it’s not static poverty that drives political violence — it’s projected economic decline. When people anticipate downward mobility, they become risk-seeking. They enter what behavioural economists call a “domain of loss.” And the AI industry has spent the last several years convincing a very large number of people that their economic trajectory points straight down.

The attack on Altman’s home didn’t come out of nowhere. Counter-terrorism think tanks were warning about anti-AI violence as far back as late 2024. Days before the Molotov cocktail, an Indianapolis city councilman had 13 rounds fired at his door with a note reading “No data centers.” Social media posts supporting the attacker garnered thousands of likes. This isn’t fringe — it’s a trend.

Why UBI is the wrong answer

The default response from Silicon Valley is some variant of universal basic income. It sounds generous. It’s actually toxic.

UBI from the people automating your job is the most condensed possible version of the power dynamic that generates resentment. Moral typecasting theory shows us why: when powerful actors position themselves as agents and the public as passive recipients, it strips people of dignity and agency. A UBI cheque doesn’t say “we value you.” It says “we agree your labour has no future, here’s a stipend.”

The Carnegie Endowment reviewed lab interventions designed to reduce partisan animosity. Even when the interventions succeeded in making people feel warmer towards one another, they had zero effect on attitudes towards political violence. You can’t buy your way out of a grievance that’s fundamentally about dignity.

What actually works

The research points to two things that genuinely reduce the risk of radicalisation.

Political efficacy. When people perceive that democratic channels actually work, they’re significantly less likely to support violence. This is where the AI industry’s lobbying record becomes directly relevant. Every time an AI company successfully kills a regulation, it confirms what radicalised individuals already believe: that democratic recourse is unavailable. Meaningful AI governance isn’t just good policy — it may be the single most effective de-escalation tool available.

Economic trajectory. Not handouts, but credible signals that people’s economic futures are improving. Job retraining programmes with actual placement rates. Housing affordability measures. Portable benefits that don’t vanish when your role is automated. The absence of a Marshall Plan for AI-era reskilling isn’t just a policy gap — it’s a radicalisation vector.

What this means for business leaders

If you’re running an AI team or an AI company, this isn’t someone else’s problem. Here’s what I think we need to do:

Stop the apocalypse marketing. Every time you tell a client that AI will “replace” their workforce, you’re feeding the narrative. You can sell transformation without selling displacement. Be precise about what your technology actually does.

Support regulation publicly. I know this is heresy in tech circles. But if the choice is between governance you helped shape and governance imposed on you after the next attack, the strategic calculus is obvious. Companies that lobby against all AI regulation are confirming the radicalisation thesis.

Invest in the transition, not just the technology. If your product automates a workflow, what happens to the people who used to do that work? “Not our problem” is no longer a viable answer — not morally, and increasingly not commercially.

Drop the UBI talking point. It makes you feel generous. It makes everyone else feel patronised. Find a different answer.

The uncomfortable conclusion

The people best positioned to defuse what’s building are the ones who have least wanted to. The AI industry has treated governance as something to be lobbied away, reskilling as someone else’s job, and public fear as a communications problem.

It’s none of those things. It’s a radicalisation pipeline — from real economic pain, through perceived inequality, to political violence. And we’re standing at the top of that pipeline, telling everyone the water’s about to get worse.

The Molotov cocktail was a warning. The question is whether we’re going to treat it like one.

