Hinton Warns Against Training AI on "Serial Killer Diaries," Compares to Child Education

"Training AI on the diaries of serial killers is like teaching your child to read from the same material," warns Nobel Prize physicist Geoffrey Hinton, highlighting a fundamental problem in how artificial intelligence is currently developed.
Hinton drew the stark comparison between AI training methods and child-rearing during his Nobel Week interview in Stockholm, writes End of Miles.
The Data Diet Problem
Hinton, widely regarded as one of the founding fathers of modern AI, expressed concern about the indiscriminate data collection practices used to train today's most advanced systems.
"At present the big chat bots are trained on all the data they can get, which includes things like the diaries of serial killers," Hinton explained. "If you were raising a child, would you get your child to learn to read on the diaries of serial killers? I think you'd realize that was a bad idea." Geoffrey Hinton, Nobel Prize in Physics 2024
The AI pioneer's comparison cuts to the heart of ethical questions around responsible development as these systems become increasingly human-like in their capabilities. His comments come at a time when many leading AI labs are racing to train ever-larger models on increasingly comprehensive datasets with minimal filtering.
More Like Us Than Code
What makes Hinton's warning particularly notable is his insistence that advanced AI systems are fundamentally different from traditional computer programs, resembling human learners more than conventional code.
"These things will be intelligent, they'll be like us. People refer to them sometimes as computer programs – they're not computer programs at all. The system you've got at the end has extracted its structure from the data. It's not something that anybody programmed." The Nobel laureate
Because of this similarity to human cognition, the physicist argues that controlling AI requires approaches more akin to child-rearing than software engineering: "Making these systems behave in a reasonable way is much like making a child behave in a reasonable way."
Training Shapes Behavior
According to Hinton, the primary mechanism for influencing AI behavior isn't through direct programming but through the training data we provide – just as a child's development is shaped by their environment and experiences.
"The main control you have is demonstrating good behavior, training it on good behavior so that's what it observes and that's what it mimics," he explained. "It's the same for these systems, and so it's very important we train them on the kind of behavior that we would like to see in them."
The AI researcher, who received his Nobel Prize for pioneering work on neural networks, has become increasingly vocal about AI safety concerns. He famously left Google in 2023 to speak more freely about potential risks from advanced artificial intelligence.
With Hinton predicting a "50% chance" of superintelligent AI within 4-19 years, his warnings about responsible training methods take on added urgency. If these systems will truly shape our future, the Nobel laureate suggests we should be far more deliberate about what information they absorb during development – treating the process with at least the same care we would use in educating a child.