In the final week of July, both the United States and China made open-source AI a matter of national strategy. The timing may have been coincidental. The convergence around a single principle was not: open-source AI has become essential infrastructure for technological sovereignty.
On July 23, the U.S. released its AI Action Plan, allocating significant federal resources to support open-source and open-weight models—systems that can be studied, reused, and improved by others. This marks a decisive shift from treating open models as experimental alternatives to recognizing them as critical national infrastructure.
Open-source AI matters because it gives institutions genuine control. Public schools can customize language models for local curricula without relying on corporate APIs that might change terms overnight. Hospitals can audit AI diagnostic tools to understand their decision-making processes. Local agencies can adapt models to serve their communities without vendor lock-in or privacy compromises that come with proprietary systems.
Three days later, China introduced its Global AI Governance Action Plan at the World AI Conference in Shanghai. Premier Li Qiang proposed a United Nations–aligned AI cooperation body and emphasized open-source development—particularly in collaboration with universities and governments in the Global South. The message: shared rules, shared models, shared access across borders.
The two approaches differ in framing—America emphasizes innovation and competitive advantage, while China focuses on multilateral coordination. But both point to the same strategic reality: open-source AI is no longer a risk to be managed but a foundation to be strengthened.
This reframes the safety debate. For years, critics argued that open models were inherently dangerous, that public access to AI systems would inevitably lead to misuse. The evidence so far points the other way. When Meta released Llama 2 as an open model, the research community quickly identified and addressed biases that had gone undetected in closed development. When researchers at Stanford found concerning behaviors in GPT-4, they couldn't verify the extent of the problem because the model remained opaque. Openness enables the kind of distributed security testing that no single organization can match.
The risks are real—bad actors will attempt to misuse any powerful technology. But the solution isn't secrecy; it's building robust defenses through transparent development. As cryptography has shown us, security through obscurity fails when the stakes are high. Security through community scrutiny scales.
Countries that fail to invest in open-source AI face a different kind of risk: technological dependence. Nations relying entirely on proprietary AI systems from foreign companies are essentially outsourcing critical infrastructure decisions. They cannot audit the systems making important decisions about their citizens. They cannot adapt the technology to their specific needs, languages, or values. They cannot guarantee continued access if geopolitical relationships shift.
Openness also builds resilience. When the COVID-19 pandemic struck, countries with robust open-source software ecosystems could rapidly adapt their digital infrastructure. Those dependent on proprietary systems faced delays, licensing negotiations, and vendor bottlenecks at critical moments.
It's encouraging to see this direction finally gain ground at the highest levels of government. Some of us have been making the case for open systems for decades. My own work in open source began in 1991, when the web itself was brand new and openness required constant justification. Linux's rise to become the backbone of modern computing, powering everything from smartphones to global Internet infrastructure, demonstrates what happens when an open system reaches strategic importance. The terrain has changed, but the principle remains: if we want technology to serve the public interest, we need to build it in public view.
The debate has fundamentally shifted. The question is no longer whether to support open-source AI, but how to do it effectively—and how to ensure the benefits reach beyond the few institutions with resources to build from scratch.
The path forward requires open-source AI that's not just accessible, but genuinely trustworthy and accountable to the communities it serves.