Within A Decade, Almost Everything We Read Online Will Be AI-Generated, Warns Oxford AI Expert

"We are heading into a world where basically within a decade—two decades at the most—pretty much everything we read and see on social media and the internet is going to be AI-generated, and we're not going to know what's real and what isn't," warns Michael Wooldridge, Oxford University's Professor of Computer Science and a veteran AI researcher with decades of experience.
This scenario, where synthetic content becomes ubiquitous and indistinguishable from human-created information, poses far more immediate dangers than the existential threats that dominate AI safety conversations, writes End of Miles.
Beyond the Singularity Distraction
While much of the AI safety community focuses on scenarios where superintelligent machines might spiral out of control, the Oxford professor believes this emphasis misplaces our collective attention.
"There are many, many risks associated with that world where society just fragments because there is no common core of beliefs anymore," Wooldridge explains. "We're all obsessed with some particular issue, and social media and the internet is just driving us around that one particular issue because AI is programmed to pick up on the issues that you care about and to feed you stories emphasizing those." Michael Wooldridge, Oxford University
The AI specialist paints a picture of technology advancing rapidly toward a point at which the information ecosystem is thoroughly polluted with synthetic content and truth becomes increasingly difficult to discern.
The 2024 Election Warning
"Going into elections in the US and UK, I was really worried that what we were going to see was social media drowning in AI-generated fake news," notes the computer science expert. "We didn't see that, at least not on the scale that I feared it might occur, but nevertheless I wouldn't take my eye off that as a risk."
Although AI-generated disinformation did not materialize at scale during recent electoral cycles, the researcher's concerns remain undiminished: the underlying capabilities continue to advance rapidly, potentially outpacing our collective readiness.
"I think that's a very, very real risk—that autocratic states control media, just use AI to generate endless stories, fake news stories, that populist politicians do the same thing, and so on, and that we just drown in fake news till we no longer know how to tell what's real and what isn't and don't trust anything as a consequence." Wooldridge
Why This Matters Now
The researcher's warning comes at a critical moment, when investment in AI safety is heavily skewed toward preventing the low-probability, high-impact scenarios often described as "existential risks."
"If you look at not just the narrative but actually the funding and what the smartest people are devoting their time into thinking—in not only companies but policy groups—existential risk is the dominant share of the entire market," Wooldridge observes, highlighting how resources might be better allocated toward addressing more immediate challenges.
The concern about society fracturing under the weight of synthetic content reflects growing apprehension among AI experts that current regulatory approaches may be missing the most pressing threats. Rather than attempting to regulate neural networks themselves—which Wooldridge compares to "trying to introduce legislation to govern the use of mathematics"—he advocates for application-specific regulations in domains where AI could cause immediate harm.
As generative models continue to improve at producing convincing text, images, and eventually video, the boundary between authentic and artificial content becomes increasingly blurred. In this environment, the fundamental challenge may not be controlling AI's capabilities but preserving our collective ability to distinguish fact from fiction in a sea of synthetic content.