"We're Giving 5 People Power Over Millions": AI Expert's Concern About AI Guardrails

"If you have a million people using AI and five developers who believe something should be done in a certain way but a million people disagree, then you have five people imposing their will on a million people," warns AI researcher Vuk Rosić in a stark assessment of power dynamics in artificial intelligence development.
This imbalance between AI developers and users reflects a growing tension over how AI systems are governed, writes End of Miles.
The Democracy Problem in AI Safety
Rosić, a research scientist at Beam.AI specializing in large language models and deep learning, points to a fundamental problem with how AI safety mechanisms are currently implemented. His critique centers on "jailbreaks" — methods users employ to circumvent AI systems' built-in restrictions.
"This sometimes happens that a couple of AI researchers believe they are above millions of their users. This is why we have open source and why we must make sure that no one company or small group of people controls the power of AI." Vuk Rosić
The machine learning specialist argues that guardrails — the protective constraints programmed into AI systems — reflect the values and priorities of a tiny subset of technologists rather than broader societal consensus.
Open Source as a Counterbalance
Rather than relying on centralized control, the AI expert advocates open-source development as a more democratic alternative to the current approach.
"I believe that together with open source we can all keep each other in check and balance and we can together develop a better future for everybody." Rosić
Open-source AI development, where code and models are publicly accessible and can be modified by anyone, offers a path to more distributed governance. This approach contrasts sharply with the closed systems developed by major AI labs, where decisions about what content to restrict often happen without public input or oversight.
Balancing Safety with Democratic Control
Despite his critique, Rosić acknowledges certain AI systems should be highly secure against manipulation. He cites autonomous vehicles as an example where safety restrictions are critical.
"There should be models completely immune to jailbreaks because there are certain things like if you are in an autonomously driving car you don't want somebody to jailbreak into it and make you smash into a wall." Rosić
However, even for these safety-critical systems, he argues that determining which constraints are necessary should be a collective decision.
"It's important that for these unbreakable models, it should be determined by entire society and the world which things we should completely constrain," notes the researcher. "It shouldn't be determined by just a small group of people or one company."
As AI systems grow more powerful and ubiquitous, the question of who decides how they behave — and what values they encode — becomes increasingly consequential. Rosić's perspective highlights a growing recognition that AI governance extends beyond technical decisions into fundamentally political questions about representation and democratic control.