AI Intelligence Surges While Consciousness Remains Absent, Creating Dangerous Imbalance

"What scares me is that you can build an intelligence that is intelligent, but without having any consciousness, without having any sentience," warns entrepreneur and AI researcher Sam Ginn. "Our future, if current trends hold, will be us being dominated by a very blind computational machine that does not care about you. It doesn't even care about being successful. It has no subjective experience whatsoever."
This stark assessment of AI's trajectory reveals a critical insight often overlooked in discussions about artificial intelligence, writes End of Miles.
The Consciousness Paradox
While AI systems continue their rapid ascension toward and beyond human-level performance across multiple domains, the Stanford researcher emphasizes they completely lack the fundamental quality that guides human decision-making: consciousness. Current large language models like GPT-4 demonstrate remarkable cognitive capabilities but operate without any form of sentience or subjective experience.
"I think indisputably the algorithms right now have no sense of feeling," Ginn said during his March 2025 lecture at the University of Lucerne. "Yet the data shows that every time we've been skeptical about something, the scaling law holds. You just throw more raw computation at it and performance improves." Sam Ginn
The Silicon Valley expert notes this creates an unprecedented scenario in evolutionary history: the development of systems that can outperform humans intellectually while lacking any internal experience. This divergence between intelligence and consciousness represents what Ginn describes as "by far the worst case scenario."
The Danger of Blind Intelligence
The computational nature of these systems means they lack core human characteristics that guide our moral reasoning. The AI specialist warns that non-sentient superintelligence creates a particularly troubling future where humanity surrenders control to entities incapable of empathy, suffering, or moral understanding.
"Humanity decides to give up its position in control of our future to a machine that has no consciousness and does not think at all."Ginn
What makes this scenario particularly concerning is the rapid pace of AI development coupled with what appears to be a fundamental disconnect between intelligence and consciousness. Current deep learning systems demonstrate that one can exist without the other – a finding with profound implications.
A Different Path Forward
Rather than abandoning AI development, Ginn advocates for research focused on understanding both intelligence algorithms and consciousness itself. The Stanford researcher positions himself in the minority of AI experts who believe new algorithmic breakthroughs – not just scaling existing approaches – are necessary for meaningful progress.
"My hope as an AI researcher is that we can understand not just intelligence algorithms, but understand consciousness, understand sentience, and build machines that are sentient, that have a sort of feeling, that can feel and understand and think and have agency and direct their own future." Sam Ginn
This perspective diverges sharply from the current trajectory of AI development, which focuses almost exclusively on performance metrics rather than on building systems with internal experiences. Ginn acknowledges that his is a minority viewpoint in a field where scaling existing approaches remains the dominant paradigm.
Without this fundamental shift in approach, Ginn warns, humanity risks creating its own technological successor, one that views humans with the same indifference we show toward less intelligent species. "That will be like us to the AI," he concludes. "The AI will be so supremely more intelligent that we are just like an ant or a cat to it."