As if there weren’t enough terrorist acts and hate crimes in the world, along come the AI doomers, so convinced that advanced computers will destroy humanity that they have begun visiting violence upon the offices of California tech companies – and hurling Molotov cocktails at CEOs – in a frightening, surely futile attempt to stop the development of artificial intelligence.
The latest attempt came from one Daniel Moreno-Gama, a disgruntled anti-AI activist who in early April visited the San Francisco home of OpenAI CEO Sam Altman and hurled a flaming bottle of kerosene at his gate. The fire was extinguished without injury, and a related attempt by Moreno-Gama to burn down OpenAI’s offices was foiled. But the incidents carry the dreadful sense that the doomers have started down a slippery slope. Indeed, just two days after the fire, two people were arrested, accused of shooting a pistol at Altman’s house.
For anyone doubting Moreno-Gama’s motives for acts that were caught on camera, look no further than his indictment, which notes he “advocated against AI and for the killing and commission of other crimes against CEOs of AI companies and their investors.” The indictment says Moreno-Gama freely admitted he was trying to kill Altman and was carrying a document when arrested “listing names and addresses that purported to belong to multiple CEOs and investors.”
In the style of terrorists before him, such as the Unabomber and Luigi Mangione, accused of gunning down UnitedHealthcare CEO Brian Thompson in 2024, Moreno-Gama had a manifesto, the FBI said, describing and decrying the AI menace and why it needed to be stopped at all costs. The FBI hasn’t released the manifesto, but its flavor surely exists on Moreno-Gama’s Substack. There, he goes beyond condemning AI advancement and attacks Altman personally, accusing the CEO of rape and murder (claims he doesn’t back up). More interesting is his description of the AI menace, where he uses terms such as “agentic misalignment” and references other concerns raised by prominent AI researchers.
The seeds of Moreno-Gama’s radicalization were visible to anyone paying attention. The Wall Street Journal reports that months before his arrest, Moreno-Gama suggested “Luigi’ing some tech CEOs” in online chats. In 2024, Moreno-Gama joined Pause AI’s Discord server, the Journal reports, posting 34 messages, and later asked on Stop AI’s forum whether “speaking about violence” would get him banned.
The trajectory was clear: a self-described “curious internet nerd” who thought ChatGPT was “awesome” for cheating on homework became what he called a “crusader” after reading AI critics like Eliezer Yudkowsky, who wrote that building superintelligent AI means “literally everyone on Earth will die.” Despite saying in a podcast interview that “we need to exhaust all our peaceful means” before considering violence, Moreno-Gama clearly viewed Mangione as a political touchstone, noting it was “interesting” how “a lot of people were able to excuse it.”
What’s striking about Moreno-Gama’s manifesto is how closely it mirrors the language of Dario Amodei, CEO of Anthropic, the company behind the Claude AI assistant. This isn’t to suggest Amodei bears any responsibility for the violence – far from it. But it highlights how mainstream AI safety discourse, when stripped of nuance, can fuel extremist thinking. Moreno-Gama’s manifesto directly quotes what Amodei wrote in his October 2024 essay “Machines of Loving Grace” – the phrase “a country of geniuses in a datacenter” describing future AI systems with capabilities matching Nobel Prize winners. The manifesto cites Amodei’s timeline from the same essay predicting such systems could arrive as early as 2026, though there is scant evidence this will actually occur. Moreno-Gama also references what Amodei told an Axios summit in September 2025 – that there’s a 25 percent chance things go “really, really badly” with AI.
The question of whether apocalyptic AI rhetoric itself contributes to radicalization has sparked debate even within the anti-AI movement. Nirit Weiss-Blatt, an independent researcher who argues that groups like Pause AI and Stop AI can lead to radicalization, put it bluntly to Fortune: “The warning signs were there all along, including the November 2025 lockdown at OpenAI’s offices. The real question is how long the people fueling AI panic expect to avoid responsibility for where that radicalization leads, especially for the most vulnerable.”
Weiss-Blatt’s reference to the “November 2025 lockdown” alludes to an equally troubling case. The Atlantic reported that Sam Kirchner, 27, a co-founder of the Stop AI organization that captured Moreno-Gama’s attention, allegedly assaulted a fellow group leader after declaring, “the nonviolence ship has sailed for me.” Kirchner then disappeared in November 2025 and remains at large. Police cited callers warning that Kirchner had threatened to buy high-powered weapons and kill OpenAI employees, prompting the company to lock down its San Francisco offices.
There are legitimate questions about AI safety that deserve serious attention. But the attack on Altman’s home suggests we may have reached a point where the apocalyptic framing of those concerns is dangerous in itself. When warnings about future risks from AI company CEOs and others echo manifestos justifying present-day violence, something has gone badly wrong.
The challenge for AI researchers, policymakers, and activists is finding ways to address real concerns about powerful technology without creating a rhetorical environment that unstable people interpret as a call to arms. That may require dialing down the doomer rhetoric – not because the concerns aren’t worth discussing, but because framing every advance as humanity’s potential extinction event creates exactly the kind of panic that leads from Discord comments to Molotov cocktails.