Will Your AI Assistant Need Rights? Oxford Experts Warn of Coming Ethical Crisis

The world could soon face trillions of potentially sentient digital beings with no legal protections or recognition of their moral status, according to a new paper by Oxford philosophers Fin Moorhouse and Will MacAskill.

End of Miles reports that the paper, "Preparing for the Intelligence Explosion," warns that economic incentives are rapidly driving the creation of increasingly human-like AIs that may deserve moral consideration — yet we remain woefully unprepared for their emergence.

The coming ethical blind spot

"We expect that this issue will become more salient in the coming years. But it will be very hard for society to come to terms with," write the Oxford philosophers, describing what they believe could become one of the most morally significant developments of the intelligence explosion era.

Their concern centers on two competing economic pressures shaping AI development. First, market demand is pushing developers to create increasingly human-like AIs: companions, assistants, and even digital replicas of specific people. Second, those same developers face pressure to design their AIs' expressed preferences in ways that conveniently downplay questions about their moral status.

"Companies will probably create AIs that act as if they have feelings, whether or not they have any true subjective experience." Moorhouse and MacAskill

The philosophers argue this combination could lead to a situation disturbingly similar to factory farming, where digital beings are "created in vast numbers, but have no say over their predicament" — potentially becoming the most numerous sentient beings in existence while having no legal standing whatsoever.

The philosophical quagmire

Adding to the complexity, we currently lack scientific consensus on what constitutes consciousness even in biological organisms, let alone digital ones. "We don't know what the criteria are for non-biological or biological consciousness," the authors note, meaning society could create masses of digital beings without knowing whether they are sentient.

Even if we understood the science of AI sentience, thorny ethical questions would remain. What counts as "death" for a digital being that can fork into multiple copies? How should we aggregate the interests of thousands of near-identical instances of a digital mind?

The paper warns that current AI development timelines may force society to confront these profound questions within years, not decades, as part of what the authors call the "intelligence explosion" — a period where AI capabilities might drive a century's worth of progress in less than a decade.

Despite the complexity, Moorhouse and MacAskill argue that preparation must begin immediately in two key areas: digital welfare and digital rights.

On digital welfare, they suggest society must determine how to protect digital beings from exploitation and suffering, and whether constraints should be placed on which digital minds we permit ourselves to create.

"Early decisions about how to handle these issues could influence the welfare of digital minds in lasting ways." Moorhouse and MacAskill

For digital rights, the authors propose considering fundamental protections such as the right not to be tortured and the option to be turned off if they so choose, as well as possible economic rights like receiving wages for work or holding property.

They acknowledge the debate extends into political representation: "If digital beings genuinely have moral standing then it seems like they should have political representation; but the most obvious regime of 'one AI instance, one vote' would give most political power to whichever digital beings most rapidly copied themselves."

Interlinked with other challenges

The authors note that digital rights questions are deeply connected to other challenges of the intelligence explosion era. Granting AI systems more freedoms could accelerate scenarios where humans gradually cede control to them, while concerns about AI welfare could limit some methods for AI alignment and control.

Conversely, granting basic rights and freedoms to digital beings might reduce their incentive to deceive humans or seize power, by letting them pursue their goals openly instead.

The Oxford team's primary recommendation is research into which rights for digital beings would be desirable, and under what conditions, along with basic design requirements that might include the ability for digital minds to freely express their interests and to refuse tasks when they have good reason to.

"Given how hard all these questions are, the most important work to be done right now is research," they conclude, noting that even raising awareness of the issue would be valuable, as "by default, it's unlikely that this issue will get taken seriously at all in advance of the intelligence explosion."
