"We'll Never Catch Up": Eric Schmidt Reveals How AI Superintelligence Creates Existential Security Risks

A six-month gap in artificial intelligence capabilities between major powers could become an insurmountable lead that triggers preemptive military action, former Google CEO Eric Schmidt warned during a recent tech forum. The race toward superintelligence is evolving so rapidly that traditional diplomatic frameworks for managing technological competition are becoming obsolete.
End of Miles reports that Schmidt's stark assessment came during a wide-ranging conversation at SRI International's PARC Forum, where he outlined scenarios that suggest AI development could fundamentally destabilize international relations in ways nuclear weapons never did.
The exponential advantage
"Let's assume for purposes that the US gets its act together—highly unlikely—and we're actually doing this. We have all the data centers and we've just done this and China is six months behind," Schmidt explained. "Everyone here would say 'no problem, six months is not very much.' But in network effect businesses, when the slope of growth is this steep, you never catch up."
"When America gets to the point where something new that could completely destroy the country of China occurs, China would have a six-month latency or more—or vice versa." Eric Schmidt
The former tech executive said AI systems are improving by roughly a factor of ten per year, a curve that could steepen dramatically once AI scientists—AI systems doing AI research themselves—enter the loop, outpacing any human effort to match it. Under that kind of compounding, a significant lead, once established, becomes effectively impossible to close.
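Schmidt's arithmetic can be sketched as a toy model. The 10x-per-year growth rate is his figure; the six-month lag is his scenario; everything else here is illustrative, not a forecast:

```python
# Toy model of Schmidt's "you never catch up" arithmetic.
# Assumption (from the talk): capability improves ~10x per year.
# The normalization and time horizon are illustrative choices.

GROWTH_PER_YEAR = 10.0   # Schmidt's stated improvement rate
LAG_YEARS = 0.5          # the six-month lag in his scenario

def capability(years: float, growth: float = GROWTH_PER_YEAR) -> float:
    """Capability after `years`, normalized so capability(0) == 1."""
    return growth ** years

for years in (0.0, 1.0, 2.0, 3.0):
    leader = capability(years)
    follower = capability(years - LAG_YEARS)
    # The *ratio* between the two stays fixed at 10**0.5 (about 3.2x),
    # but the absolute gap widens tenfold every year -- the sense in
    # which a six-month lead compounds rather than closes.
    print(f"t={years:.0f}y  leader={leader:10.1f}  "
          f"follower={follower:10.1f}  gap={leader - follower:10.1f}")
```

Note that under pure exponential growth the follower never gains ground: closing the gap would require growing strictly faster than the leader, which is exactly what a steep, shared growth curve rules out.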
From nuclear standoffs to AI destabilization
Unlike nuclear weapons, where deterrence frameworks evolved through negotiated parity, Schmidt argues that superintelligence creates winner-take-all dynamics that eliminate traditional balancing mechanisms.
Drawing on his work with the late Henry Kissinger, Schmidt contrasted today's AI competition with Cold War nuclear diplomacy: "Henry used to tell me that when he negotiated with the Soviets, he would tell them how many missiles they had at the beginning. We had classified information about their classified information. You can't have that conversation in a network effect business."
"This means that when America gets to the point where something new that could completely destroy the country of China occurs, China would have a six-month latency or more or vice versa. So the first thing you conclude is that America should win the race for superintelligence. But the real question is how do you manage the global partnerships." Eric Schmidt
The escalation logic
Schmidt painted a troubling picture of how this technological imbalance would likely unfold: "Let's say that we're nearing the point of total intellectual dominance. What are China's options? Preparatory attack. A preliminary attack."
This instability stems from the compounding nature of AI advancement. Once a nation achieves superintelligence, which Schmidt defines as intelligence greater than the combined intelligence of everyone, its advantage becomes so overwhelming that adversaries may see preventive military action as their only remaining option.
"This is inherently destabilizing to world order," the tech leader emphasized.
A new framework for technological competition
The challenge, according to Schmidt, is developing new diplomatic frameworks when traditional arms control approaches can't work. "How do you get the other side to give up something while you're in a race? Turns out to be really hard," he noted.
Schmidt referenced ongoing "track two" discussions between the US and China about AI safety, which he and Kissinger helped initiate before the rise of companies like DeepSeek. However, he expressed skepticism about current diplomatic efforts, describing them as "hilarious," with "Americans all on Zoom, kind of normal Americans, kind of disheveled, and the Chinese all lined up in a row with their little ties, very organized, very precise."
The computer scientist and business leader recently co-authored a piece on superintelligence with Dan Hendrycks that elaborates on these concerns. His warnings reflect growing awareness among tech leaders about the profound security implications of advanced AI systems beyond their immediate applications.