AI Apocalypticism Shares Psychology with Religious End Times Thinking, Says Oxford Expert


The psychology driving concerns about existential AI risk bears striking similarities to religious apocalyptic thinking, with both sharing a fundamental human desire for total, world-orienting narratives, according to Oxford University's Michael Wooldridge, a veteran AI researcher and pioneer of agent-based AI.

This religious psychology of AI doom scenarios has come to shape how the field is funded and researched, writes End of Miles; Wooldridge suggests the mindset explains why some researchers prioritize low-probability extinction scenarios over more immediate concerns.

A Religious-Like Drive for Apocalyptic Narratives

Wooldridge, a Professor of Computer Science at Oxford who has worked through multiple AI hype cycles, expressed skepticism about the singularity narrative that dominates much of AI safety discussions today.

"When I talk to people in the existential risk world, the psychology kind of reminds me of the Christian apocalyptic that there's these people throughout Christian history that are like 'Now's the Time'," Wooldridge explained. "This happened most recently probably when we were going through the Millennium right 1999." Michael Wooldridge, Oxford University

The Oxford professor isn't arguing against all risk assessments, but rather pointing to a specific psychological pattern that he believes drives disproportionate attention to certain scenarios. He sees the same drive behind climate apocalypticism, while still acknowledging climate change is real.

"It's not to say that these things aren't true, right? It's not to say that the world isn't ending in Christianity, the climate isn't changing, or there is no existential risk. It's that the reason that people seem attracted to this narrative is almost a religious phenomena." Wooldridge

The Appeal to Something Primal

The existential AI risk narrative has gained significant traction in tech and policy circles, influencing funding priorities and research direction. Wooldridge points out that this narrative's psychological appeal helps explain its outsized influence despite what he considers its relatively low probability.

"I think that's right and I think it appeals to something almost primal in kind of human nature. At its most fundamental level, it's the idea that you create something, you have a child, and they turn on you. You know, that kind of the Ultimate Nightmare for parents." Wooldridge

This primal fear traces back to foundational myths across cultures, with Wooldridge drawing a direct connection to Mary Shelley's "Frankenstein," which follows exactly this narrative arc: humanity uses science to create life, only to have that creation turn against its creator.

Funding Follows the Apocalypse

The religious-like preoccupation with AI existential risk has real-world consequences in how the field develops. Wooldridge notes that the narrative has shaped not just public discourse but actual research priorities.

"If you look at not just the narrative but actually the funding and what the smartest people are devoting their time into thinking in not only companies but policy groups, X risk—existential risk—is the dominant share of the entire market, so to speak." Wooldridge

His critique suggests that the psychological appeal of these narratives leads to a disproportionate focus on extremely low-probability scenarios at the expense of more immediate concerns, such as AI-generated disinformation or automation's impact on labor markets.

A Different Type of Risk

While expressing skepticism about the singularity narrative, Wooldridge isn't dismissing AI risks entirely. Instead, he advocates for focusing on more immediate and concrete concerns, particularly around AI-generated content and its potential to undermine shared reality.

In place of existential risk concerns, Wooldridge worries about a world where "pretty much everything we read and see on social media and the internet is going to be AI-generated, and we're not going to know what's real and what isn't real." This fragmentation of our information ecosystem, he suggests, presents a far more imminent threat than superintelligent systems turning against humanity.

For the Oxford professor, the religious psychology that draws people to AI apocalypticism diverts attention from addressing these present dangers, illustrating how deeply human cognitive patterns shape even our most technical fields.
