AI Capabilities Growing 20x Annually, Could Compress Century of Progress Into Decade

Artificial intelligence capabilities are now increasing over twenty-fold annually, potentially compressing a century's worth of technological development into a single decade and far outpacing society's capacity to adapt, according to groundbreaking research by Oxford philosophers.
End of Miles reports that these findings represent a significant departure from mainstream AI discourse, which has typically focused on narrow aspects of AI risk rather than the broader implications of an intelligence explosion.
Unprecedented Acceleration in Cognitive Labor
"At the moment, the effective cognitive labour from AI models is increasing more than twenty times over, every year," write Fin Moorhouse and Will MacAskill in their paper "Preparing for the Intelligence Explosion," released March 11th. "Even if improvements slow to half their rate, AI systems could still overtake all human researchers in their contribution to technological progress, and then grow by another millionfold human researcher-equivalents within a decade."
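The arithmetic behind this claim can be checked directly. The sketch below is our illustration, not the paper's model, and it assumes "half their rate" means halving the growth exponent (so 20x per year becomes roughly 4.5x per year):

```python
# Illustrative check: even at half the growth rate (in exponent terms),
# a decade of compounding still exceeds a millionfold increase.
full_rate = 20.0                  # ~20x effective cognitive labor per year (article's figure)
half_rate = full_rate ** 0.5      # halving the exponent: sqrt(20) ≈ 4.47x per year
decade_growth = half_rate ** 10   # compounding over ten years equals 20**5

print(f"{decade_growth:,.0f}")    # ≈ 3,200,000 — comfortably over a millionfold
```

Under this reading, ten years at the halved rate compounds to 20^5, or about 3.2 million, consistent with the "millionfold within a decade" claim.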
"For business to go on as usual, then current trends — in pre-training efficiency, post-training enhancements, scaling training runs and inference compute — must effectively grind to a halt." — Fin Moorhouse and Will MacAskill, "Preparing for the Intelligence Explosion"
The Oxford researchers have quantified the acceleration in specific, measurable terms. AI cognitive labor is currently growing more than 600 times faster than the total human cognitive labor devoted to technological progress, with significant headroom for continued improvements.
Drivers of Explosive Growth
This acceleration emerges from multiple reinforcing factors, according to the philosophers. Training compute used in the largest AI models is scaling up by approximately 4.5x per year, while algorithmic efficiency in training is improving by roughly 3x per year. When combined with post-training enhancements providing an additional 3x efficiency improvement annually, the effective training compute is scaling more than 30x per year.
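Multiplying the three cited factors together shows where the "more than 30x" figure comes from. The figures are the article's; the arithmetic below is our illustration:

```python
# Combining the three annual growth factors cited for effective training compute.
compute_scaling = 4.5   # training compute scale-up per year
algo_efficiency = 3.0   # algorithmic training efficiency gain per year
post_training = 3.0     # post-training enhancement gain per year

effective = compute_scaling * algo_efficiency * post_training
print(effective)        # 40.5 — hence "more than 30x per year"
```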
"The best models could become AI researchers themselves, accelerating years of algorithmic efficiency gains into mere months," the paper warns. This software feedback loop could further accelerate capabilities growth beyond current projections.
"The intelligence explosion could yield, in the words of Anthropic CEO Dario Amodei, 'a country of geniuses in a data center,' driving a century's worth of technological progress in less than a decade." — Moorhouse and MacAskill
The Disorienting Pace of Change
To illustrate this compressed timeline, the researchers present a thought experiment: if the technological developments of 1925-2025 had been compressed into just one decade, the first atomic bomb would have been developed only three months after the Manhattan Project launched.
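The compression factor in the thought experiment works out as follows. The dates below (the Manhattan Project launching in 1942 and producing the first bomb in 1945) are historical context we have added for the calculation, not figures from the article:

```python
# A century (1925-2025) squeezed into a decade is a 10x speed-up.
compression = (2025 - 1925) / 10            # 10x compression factor
manhattan_years = 1945 - 1942               # launch to first bomb: about 3 years
compressed_months = manhattan_years * 12 / compression

print(compression, compressed_months)       # 10.0 3.6 — roughly "three months"
```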
The MacAskill-Moorhouse analysis identifies a critical asymmetry: while technological change will accelerate dramatically, human cognition remains bound to biological speeds and social institutions to their established procedures. "The quality and speed of high-stakes decision-making would not always keep pace with the rate of change," they note, highlighting that many institutions operate on rigid schedules that would become increasingly maladaptive.
Inevitable Trajectory or Avoidable Outcome?
The research team emphasizes that their projections are based on existing trend lines rather than speculative breakthroughs. The paper methodically assesses both moderate and rapid scenarios for AI development, concluding that even conservative estimates point to transformative change.
MacAskill, a prominent figure in the effective altruism movement, argues that the evidence points to AI-human cognitive parity occurring "within the next two decades" and quite possibly "well within the coming decade."
The authors cite several compelling indicators that AI capabilities are rapidly approaching human parity in research-relevant tasks. They note that on GPQA, a benchmark of PhD-level science questions, AI performance has progressed from "marginally better than random guessing" to "outperforming PhD-level experts" in just 18 months.
Implications Beyond Alignment
Unlike many analyses that focus exclusively on AI alignment risks, the Oxford philosophers argue for a broader preparation framework encompassing multiple challenges, including power concentration, destructive technologies, and economic transformation.
"We should have humility about our ability to identify what will be most important during an intelligence explosion," the researchers conclude, advocating for cross-cutting measures and adaptable institutions rather than narrow focus on predetermined challenges.