Nobel Physicist Hinton: 50% Chance of Superintelligent AI Within Two Decades

"There's a good chance — a 50% chance — we'll get AI smarter than us" within the next two decades, warns Geoffrey Hinton, the 2024 Nobel Prize winner in Physics and longtime AI pioneer whose foundational work helped spark the current AI revolution.
Hinton's prediction represents one of the most specific timelines offered by a leading AI researcher on the emergence of superintelligence, writes End of Miles.
A Narrowing Window
The 76-year-old researcher, sometimes called the "Godfather of AI" for his breakthrough work on neural networks, offered a surprisingly precise estimate during a Nobel Week interview in Stockholm last December.
"My guess is in between 5 and 20 years from now there's a good chance — a 50% chance — we'll get AI smarter than us. It may be much longer, it's just possible it's a bit shorter," Hinton said, before adding a telling update: "Actually that was my guess a year ago, so I guess my guess now is between 4 and 19 years."
Geoffrey Hinton, Nobel Prize interview, December 2024
The adjustment suggests the AI researcher is tracking his prediction against real-world developments, with each passing year narrowing the window to what he sees as a probable outcome.
Among the Leading Voices on AI Risk
Hinton's comments carry particular weight given his stature in the field. After decades pioneering neural networks at the University of Toronto and Google, he made headlines in 2023 when he resigned from Google to speak more freely about AI risks.
While some technologists dispute timelines for advanced AI development, the Nobel laureate emphasized there's broad agreement among specialists about the eventual outcome.
"Researchers differ on when that will happen, but among the leading researchers, there's very little disagreement on the fact that it will happen — unless of course we blow ourselves up."
Hinton
Unpredictable Territory
The AI scientist acknowledged humanity faces unprecedented uncertainty once machines surpass our intelligence.
"The question is what's going to happen when we've created beings that are more intelligent than us. We don't know what's going to happen — we've never been in that situation before. Anybody who says it's all going to be fine is crazy, and anybody who says they're inevitably going to take over, they're crazy too. We really don't know."
Hinton noted that history provides few examples of a less intelligent entity controlling a more intelligent one. The rare exception he cited is the parent-child relationship, in which a child influences a far more capable adult: "there's not much difference in intelligence, and evolution had to put a lot of work into making that happen," he said.
Control Problem Needs Urgent Attention
While stopping short of outright pessimism, the researcher emphasized the need for immediate action rather than complacency.
"Because we really don't know, it would make a lot of sense to do a lot of basic research now on whether we can stay in control of things that we create that are more intelligent than us."
The updated timeline fits a broader pattern of increasingly urgent warnings from AI insiders who once focused primarily on capabilities but now emphasize risks.
Hinton himself acknowledged a shift in his own thinking: "I wish I'd thought sooner about this existential threat. I always thought superintelligence was a long way off and we could worry about it later... The problem is it's close now."