Beyond Stargate: Open code and the AI black box

The most valuable resource in the AI era isn’t silicon or electricity; it’s the collective intelligence baked into the world’s source code.
In early 2025, the US announced Project Stargate, a $500 billion infrastructure play designed to signal absolute dominance in AI. But just a week later, the narrative shifted. A Chinese model named DeepSeek—developed for a reported $6 million—demonstrated that openness could challenge even half-trillion-dollar bets.
At the Software Heritage 10th Anniversary Symposium held at UNESCO Headquarters in Paris, global experts gathered to argue that the “black box” of proprietary AI isn’t just a technical hurdle; it’s a geopolitical and civilizational risk. Moderator Aurélie Simard, Executive Director of the Center of Expertise for International Cooperation on AI at Inria, observed that software is frequently relegated to the background, yet it serves as the essential engine of the technology. As she succinctly put it, “software is what turns compute into real capabilities and access to concrete outcomes.”

© Inria / Photo B. Fourrier
By focusing on this “interface between decision and action,” the panel argued that transparency in code is the only way to translate high-level political intent into inclusive and ethical AI development. This perspective shifts the focus from who has the most hardware to who can ensure their technology is sovereign, resilient, and accessible through open source.
You can catch the 37-minute session on YouTube.

The $500 billion blind spot
Anne Bouverot, France’s Special Envoy for AI, noted that the disparity between Stargate and DeepSeek proves AI development isn’t reserved for the “very rich, very closed companies” in the US. By leveraging open source, the DeepSeek team was able to share information, knowledge, and findings to build a world-class model on a fraction of the budget.
This message resonated globally. During the AI Action Summit in Paris, over 60 countries—including India, Brazil, and China—signed a declaration backing principles for AI developed in an inclusive, sustainable, and open way. This move marks a pivot toward “technological resilience” and away from a future where AI is controlled by a tiny handful of private entities.
The paradox of openness
However, “openness” is often a loaded term. Anuchika Stanislaus, representing the France 2030 investment program, warned of “free rider” behaviors from tech giants. She pointed to companies like Meta that implement conditional licensing—limiting access once a company reaches a certain size—to maintain market dominance while reaping the reputational benefits of the “open source” label.
To counter this, France is treating AI as a strategic public investment rather than just a private productivity tool. A key initiative is the CodeCommons project hosted by Software Heritage. This project aims to address the AI data set gap by building smaller, superior, and transparent data sets from structured public code. By moving away from the opaque and redundant data sets used by many current models, the goal is to reduce the massive waste of compute and energy that defines the current era.
Sovereignty: Energy, language, and legacy
For nations outside the US-China duopoly, AI sovereignty is about survival. Cristina Shimoda, of the AI Program at Brazil’s Ministry of Science, Technology, and Innovation, highlighted the country’s “AI for the Good of All” strategy, which treats software heritage as a digital public good. Brazil is aiming to host a national mirror of the Software Heritage archive, not just for its own technological autonomy, but to ensure the global resilience of the world’s code.
Hakim Hacid of the UAE’s Technology Innovation Institute (TII) took the argument further, arguing that true independence requires more than just owning a model. “Sovereignty means each nation should be capable of controlling the AI they consume,” Hacid explained. “If you send your data to a model but don’t know what happens to it, you aren’t sovereign. But it goes beyond the code—if you don’t control the infrastructure and the energy required to run it, ‘Sovereign AI’ is just a dream.” Given the massive energy consumption of AI models, Hacid noted, energy has become the ultimate gatekeeper.
Beyond energy lies the “civilizational risk” of language dominance. Hacid warned that because AI is developed primarily in English and Chinese, thousands of other languages and cultures are at risk of disappearing from the digital landscape. By releasing models like Falcon with open weights, the UAE aims to provide a foundation for any nation to fine-tune AI for its own local dialects and cultural needs.
The road ahead
As the global community looks toward the future, the focus is shifting from “who has the most GPUs” to “who has the most inclusive code.” The session served as a reminder that in a world of $500 billion bets, the most valuable asset is the transparent, open-source code we share for free. As the panel concluded, software is the bridge between political intent and the concrete choices that will define our digital future.
Videos from the other sessions are on our YouTube channel.
#SWH10
