AI Could Create Virtually Unshakeable Dictatorships, Oxford Philosophers Warn

Advanced AI systems could enable dictators to build automated military and police forces that follow orders with "total loyalty," potentially cementing authoritarian power in ways previously impossible, Oxford philosophers Fin Moorhouse and Will MacAskill warn in a newly published research paper.
The risk of AI-enabled autocracies represents one of several "grand challenges" humanity faces as AI capabilities rapidly accelerate toward an intelligence explosion, End of Miles reports.
The Dictator's Dilemma Solved
Currently, even the most powerful autocrats must rely on a coalition of supporters to administer the state and suppress uprisings. These human supporters can abandon or overthrow their leader if dissatisfied — a constraint on dictatorial power that political scientists have long recognized.
"However, if key functions of the state could be performed by AI that is aligned with the commands of the dictator, then this would no longer be true," the researchers write. "In particular, a dictator could build an automated military and police force of drones and robots, designed to follow orders with total loyalty, and suppress uprisings — cementing their power."Moorhouse & MacAskill, " Preparing for the Intelligence Explosion"
This represents a fundamental shift in the balance of power within autocratic regimes, the Oxford ethicists argue. AI-enabled technologies could eliminate the need for human intermediaries who might otherwise serve as a check on dictatorial ambitions.
Multiple Pathways to Automated Control
The threat extends beyond established dictators simply upgrading their capabilities. MacAskill and his co-author outline several concerning scenarios through which AI-controlled militaries might emerge:
"The AI systems that control the military could be taken over via political subversion, backdoors, instructions inserted by insiders, or via cyber-warfare. The risk could come from a country's enemies, from people at the companies building the military, or from those already in political power (a 'self-coup')." From the research paper
The philosophers note that, given the dynamics of the "industrial explosion" they predict will follow the intelligence explosion, even non-state actors could build automated military forces from scratch. Such capabilities could be achieved "very rapidly by a company at the technological frontier."
Why This Threat May Precede Other AI Risks
While much attention has focused on the risk of AI systems themselves seizing power from humans, the researchers argue that human exploitation of AI military capabilities is a more immediate concern.
"This risk would seem to come earlier than the risk of AI takeover, because it is surely easier to implement an AI-driven takeover if the AIs are assisting willing humans with a significant initial stock of power." The researchers' assessment
The research team emphasizes that such threats could emerge during an intelligence explosion, as power-seeking humans gain and entrench control using AI at an "intermediate level of capabilities" — before fully autonomous AI systems might pose their own risks.
Broader Context of Power Concentration
The Oxford ethicists place automated militaries within a larger category of "power-concentrating mechanisms" that could emerge from rapidly advancing AI. These include AI-enabled mass surveillance with accurate lie detection, economic concentration that shrinks labor's share of income, and "first mover advantages" that could allow early leaders to convert temporary technological advantages into permanent dominance.
Military applications are an especially concerning power-concentrating mechanism because they enable physical control: where economic and surveillance technologies consolidate power indirectly, an automated military offers the most direct route to absolute power.
Preparation and Prevention
MacAskill and his colleague argue that humanity cannot simply defer addressing these threats to future superintelligent AI systems, as the challenges may emerge before such systems exist or can effectively manage them.
"If they [power-seeking humans] succeed, then there may be no good option to ask the (later and more powerful) superintelligence to reset the balance of power, most obviously because the power-grabbing humans control it." From the researchers' analysis
The research paper calls for early intervention focused on "empowering competent and responsible decision-makers" and preventing extreme concentration of power through institutional design and policy implementation before AI capabilities reach critical thresholds.