"Move Fast and Break Things? Not for AI," Says Nobel Laureate Hassabis

"With a lot of Silicon Valley, it's like 'move fast and break things.' I think it's not appropriate, in my opinion, for this type of transformative technology," declared Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, challenging the tech industry's most celebrated philosophy.
In a striking departure from conventional tech development practice, Hassabis advocates for exceptional care, scientific rigor, and humility when advancing artificial intelligence, writes End of Miles.
Why AI demands a different rulebook
Speaking at Cambridge University in March 2025, the DeepMind co-founder outlined a fundamentally different approach to developing advanced AI systems compared to traditional tech products and services that dominate Silicon Valley's landscape.
"I think instead we should be trying to use the scientific method and approach it with the kind of humility and respect that this kind of technology deserves." Demis Hassabis
This position comes as AI systems demonstrate increasingly sophisticated capabilities across multiple domains. The DeepMind chief, who received the 2024 Nobel Prize in Chemistry for AlphaFold, the AI system that predicts protein structures, emphasized that the transformative nature of artificial intelligence requires significantly more caution than consumer apps or social networks.
Hassabis specifically highlighted uncertainty as a reason for restraint. "We don't know a lot of things. There are a lot of unknowns around how this technology is going to develop. It's so new," the AI pioneer explained.
The scientific approach to intelligence
Rather than embracing the rapid iteration and minimal oversight that characterized earlier waves of tech innovation, the Nobel laureate emphasized a methodical approach mirroring scientific research processes.
"With exceptional sort of care and foresight we can get all the benefits and minimize the downsides of this, but I think only if we start the research and the debate about that now." The DeepMind co-founder
This philosophy is visible in DeepMind's development practices. Where Silicon Valley norms might have prioritized speed to market, the AlphaFold team instead consulted more than 30 biosecurity and bioethics experts before releasing its protein structure database, to ensure responsible deployment.
The British AI researcher's cautious approach stands in stark contrast to competitors who have occasionally rushed AI systems to market, only to face criticism for unresolved safety issues or harmful outputs.
Why this matters now
As artificial intelligence transitions from specialized research tools to mainstream consumer products, the stakes around development methodologies have never been higher. The AI safety advocate's stance reflects growing concerns among researchers that traditional tech development cycles are inadequate for systems with profound societal implications.
Hassabis's "move slow" philosophy arrives as international governments increasingly focus on AI governance. He specifically praised recent global AI summits that bring together governments, academia, and civil society to discuss appropriate guardrails for AI development.
"I think it's been great to see these international summits... bringing together heads of government with academia and civil society to discuss these technologies, how to put the right guardrails on it, how to make sure we embrace the opportunities but we mitigate the risks that are coming down the line." The Nobel laureate
For AI systems with unprecedented capabilities, such as DeepMind's recently announced video generation model Veo 2 or its game-generating system Genie 2, careful deployment becomes particularly crucial. The computational neuroscientist noted that these systems exhibit physical understanding and world modeling that would have seemed impossible just five years ago.
Foresight as competitive advantage
Rather than viewing caution as an impediment to innovation, the AI pioneer positions responsible development as ultimately beneficial for both technology and society. By planning for success from the beginning—as Hassabis revealed DeepMind has done since 2010—companies can avoid potentially harmful missteps.
"We were sort of planning for success if we were to build these kinds of transformative systems and technologies. It would come with a lot of responsibility as well to make sure they get deployed in a safe and responsible way," the Cambridge alumnus explained.
For a field advancing at exponential rates, Hassabis suggests that foresight and careful stewardship aren't just ethical imperatives—they're strategic advantages. By rejecting Silicon Valley's breakneck acceleration, DeepMind has positioned itself as a model for how transformative technologies might be developed with both ambition and responsibility.