NVIDIA and Mistral CEOs: Mass Scrutiny Makes Open-Source AI Models Safer, Not Riskier

"It is impossible to control software. If you want to control it, then somebody else's will emerge and become the standard," declared NVIDIA CEO Jensen Huang, making a forceful case that open-source AI models improve—rather than compromise—security and safety.
As global regulators increasingly debate whether open-source AI models represent security risks requiring export controls, two influential tech leaders are pushing back with a counterintuitive argument, writes End of Miles.
Why openness improves scrutiny
During a recent discussion on sovereign AI and national infrastructure, Huang and Mistral AI CEO Arthur Mensch directly challenged emerging regulatory positions that favor restricting access to AI model weights. Instead, they articulated a security-through-transparency philosophy that mirrors established open-source software principles.
"Open source enables more transparency, more researchers, more people to scrutinize...the reason why every single company in the world, every cloud service provider is built on open source is because it is the safest technology of all." Jensen Huang, NVIDIA CEO
The chip executive emphasized that open-source development attracts intense scrutiny, creating a natural security advantage through mass inspection that closed systems lack. His argument directly contradicts claims that restricting access to powerful AI models would prevent misuse.
Mensch reinforced this perspective, pointing to practical security considerations that make open models particularly valuable for sensitive applications.
"You can evaluate a model much better if you have access to the weights than if you only have access to APIs. If you want to build certainty around the fact that your system is going to be 100% accurate, I don't think you should be using a closed-source model." Arthur Mensch, Mistral AI co-founder and CEO
Control attempts may backfire
The AI executives warned that attempts to restrict open-source AI development through export controls or licensing requirements would likely prove counterproductive on a global scale.
The French entrepreneur suggested that restrictive policies would merely shift AI leadership elsewhere. "If one state decides to lock things down, the only thing that is going to happen is that another state will take the leadership," Mensch explained. "Cutting yourself from the open flywheel is just too high of a cost for you to maintain competitivity."
This perspective carries growing weight as countries race to establish sovereign AI capabilities that reflect their national interests and values. Both CEOs repeatedly emphasized that AI represents not just computing infrastructure but cultural infrastructure as well.
Mission-critical applications need transparency
Huang specifically pointed to sectors where security requirements make open-source models particularly advantageous.
"The benefit of open source is particularly strong in the fringe, niche but mission-critical areas where data might be sensitive. Healthcare, life sciences, physical sciences, robotics, transportation, financial services, energy, defense—you pick your favorites. Anything that is mission-critical and requires thorough auditing." NVIDIA's founder
This issue takes on new urgency as policymakers across major AI-producing regions consider frameworks for AI governance. The technology executives suggested their arguments had begun gaining traction, with Mensch noting: "We're glad to see that at the AI Summit that occurred last week, this was very much on the agenda—this realization that we could accelerate together by being more open."
Both industry leaders framed the debate as existential for countries seeking to develop sovereign AI capabilities, positioning open-source models as essential infrastructure rather than a security liability in an increasingly AI-driven global economy.