Is AI espionage preventable? Are open-source AI models a threat to national security? To discuss divergent industry viewpoints, we have a special guest post by Pablo Chavez, former ChinaTalk guest, fellow at CNAS, and former VP of Google Cloud’s Public Policy.
In late July, Sam Altman and Mark Zuckerberg each wrote independently about how the United States should deploy and govern AI power. Read together, these two pieces represent a high-stakes dialogue on the geopolitics of AI, portraying sometimes competing, sometimes complementary visions of American AI leadership.
Altman advocates for a more controlled and regulated approach, while Zuckerberg champions the power of open-source collaboration.
Altman writes:

[W]e face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power? There is no third option — and it’s time to decide which path to take. … That will … mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.
Zuckerberg writes:

The United States’ advantage is decentralized and open innovation. Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the US and its allies. … I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.
The two visions diverge for some reasonably straightforward business reasons. Altman’s OpenAI is, at its core, a developer of AI systems that it provides to enterprises and consumers for a fee. As a consequence, protecting models — the company’s main source of revenue — is a necessary step on the path to profitability. By contrast, Meta is mainly an AI deployer that wants to leverage the technology for its products, much as enterprises such as Microsoft integrate AI into theirs. It doesn’t want to be locked into a particular supplier or live in a world where AI model suppliers become so successful that they threaten its advertising and other core businesses (as OpenAI threatens to do to Google in search, for example).
Beyond their respective business interests, the two CEOs explore the larger strategic implications of the paths they propose, revealing some nuanced common ground along with some clear distinctions.
Two Competing Visions
At a high level, Altman is calling for a disciplined US-led industrial policy effort, embracing cooperation with like-minded democracies, and emphasizing the importance of coordinated action. His focus on security, infrastructure investment, and international norms leads to a controlled but generous release of AI to allies and partners — as well as co-development with these partner nations — while making significant efforts to keep frontier AI out of the hands of China and other autocratic rivals.
Conversely, Zuckerberg champions an organic, hands-off approach to democratizing AI. His call for an open-source AI ecosystem echoes a more inclusive and collaborative ethos, potentially fostering a dynamic AI landscape that empowers a broader range of actors. Like Altman, he’s concerned about China, but he sees openness as a means to stay ahead. He argues this is the only option to ensure AI technology remains broadly distributed; the alternative he describes is a world in which the technology becomes concentrated in the hands of a select few.
While Altman’s goal is to develop and deploy AI that aligns with and upholds democratic values, Zuckerberg’s emphasis is on democratizing the development and deployment of AI itself.
These are two very different goals with potentially divergent outcomes.
In Altman’s world, both the coalition of AI countries and the technology itself should have democratic characteristics. He argues AI ought to be deployed beyond allies in the service of growing the global democratic coalition.
By contrast, Zuckerberg writes about democratizing access to AI itself through open source. In this vision, AI is water, food, healthcare, and education. With it, all countries will do better. Without it, some countries will fall behind to the detriment of all of humanity.
Ultimately, Zuckerberg argues, an open AI ecosystem should lead to a more open, safer (and perhaps more democratic) world. He points to the history of open-source software as a model for testing and ensuring the safety and stability of code.
Zuckerberg also focuses on protecting what he believes is at the core of America’s technological advantage: decentralized and open innovation. He’s concerned not just with staying ahead in AI, but also with protecting the ecosystem that generates American technological advancement. Altman focuses on strategies for winning the AI race, rather than preserving the economic and political operating system that got the US to where it is today. The advantages of AI, such as workforce training and building infrastructure, are left as questions for the future.
Gray Areas and Common Ground
A deeper reading of their essays softens some of the contrast between the two visions.
Both Altman and Zuckerberg express concerns about the concentration of AI power. Altman worries about authoritarian regimes using AI to strengthen and broaden their control, while Zuckerberg is additionally wary of closed AI models controlled by a small number of companies.
Their disagreement over open vs. closed is grayer than a strict dichotomy.
Altman’s coalition does not exclude open-source collaboration. Instead, it seeks to create a strong, unified front, built around closed, controlled models to promote democratic values and prevent authoritarian dominance.
He views open models as an ancillary mechanism of soft power to encourage both partnerships and self-sufficiency among third-party countries.
For its part, Zuckerberg’s open-source AI future doesn’t necessarily contemplate open model development — just open release, without specifics about how open any particular model should be.
In addition, Zuckerberg emphasizes the advantage that larger, more sophisticated institutions will have in deploying AI at scale: such institutions have more compute, and therefore an incumbent advantage over smaller players. He suggests that these larger institutions will have an interest in safety and stability and argues that America’s leading AI companies should work with the US government and allies to maintain a first-mover advantage over bad actors.
None of this is incompatible with openness, but these details are evocative of a more closed ecosystem than the top line suggests.
Similarly, while differing in emphasis, their perspectives on AI safety are not mutually exclusive. Altman’s focus on establishing international norms and protocols complements Zuckerberg’s belief in the inherent safety of transparent, scrutinizable AI. Both recognize the need for a multilayered approach to AI safety, combining technical safeguards with broader ethical and governance frameworks. Indeed, Altman’s advocacy for a multistakeholder governance model is clearly inspired by ICANN, but it’s also evocative of open-source software development communities.
On Engagement with China
Altman believes the threat of authoritarianism should be addressed by withholding technology from China. At the same time, he calls for engagement with China to cooperate on reducing catastrophic risk — a pragmatic nod to a complex geopolitical reality. In a sense, his argument is that we have no choice but to work with them given their size and influence. Zuckerberg offers an alternative method for maintaining an edge over China: fostering America’s innovation ecosystem through openness.
Fundamentally, both see China as a threat, but Altman thinks the US and its allies can still protect AI infrastructure from Chinese cyber-intrusions. Conversely, Zuckerberg assumes that Chinese actors are already in the system, and thus, the goal should be to continue to move fast and stay one step ahead.
Neither essay discusses China’s AI ecosystem — including what appears to be a fairly robust and growing open-source AI community — or how China would respond to their respective visions.
Everything in Moderation?
The dialogue between Altman and Zuckerberg underscores the complex challenges and opportunities at stake in AI advancement.
The future of AI will likely be shaped by a combination of both approaches. Ultimately, the most successful path forward will require a delicate balance of innovation, accessibility, safety, and ethical considerations that involve a diverse set of governments, corporations, and civil society actors.
The question of who will control the future of AI remains open, but one thing is certain: the decisions we make today will have profound implications for generations to come. This must remain top of mind for American political leaders as we transition to a new administration and a new Congress in 2025.
This showdown deserved its own AI-generated track…
For more Pablo Chavez on open vs. closed AI, have a listen to the show we did together last month.
Good Job Alert
The Institute for Progress is looking for an associate editor. Your boss will be the creator of the excellent Statecraft newsletter and the think tank editor I’ve worked with who has impressed me the most. Consider applying!
Implicit in the idea that the model weights have national security implications is the idea that our current AI progress in deep learning-based multimodal models is on a path to... something, and that possessing those weights puts you closer to that than you would be without them. At present, it lets you skip the expensive training, but it doesn't get you a high-end compute cluster to run that model on, nor the scaffolding that makes it possible to interact usefully with the agent. Nor the know-how to make the next advance.
If, on the other hand, this is a paradigm that is going to yield diminishing returns, then open model weights are only a business threat, not a national security one. Perhaps the next stage of AI development requires a complete rethink of architecture, compute structure, training data curation, and so on, in which case possessing the model weights to the latest GPT model would be equivalent to possessing working vacuum tubes when your opponent is building microprocessors.
Honestly, I don't know. The precautionary principle would argue for protecting model weights, but that really depends on whether this current paradigm IS in fact the path to AGI, or something close enough to it that the distinction is without meaning.
Great article, thank you. Do you think Sam Altman really believes these ideas or is it all just a front for other interests? I think the only outcome of a ban is an emboldened China, focused on a technology that is far from bottled up and controlled by any US laws.
Is Altman just naive? Am I naive? Is China incapable of duplicating what the US has been doing?