Artificial Intelligence (AI) is rapidly transforming multiple facets of society, with advances arriving at a pace and scale that few anticipated. These developments compel policymakers, technologists, and strategists to address a widening spectrum of issues, from economic shifts driven by automation to strategic concerns about global competition. As with any transformative technology, AI presents both significant opportunities and formidable risks.
Among these challenges, the dual-use nature of AI—its capacity for both civilian and military applications—emerges as a critical factor. Unlike specialized technological tools, AI spans virtually every sector, including finance, healthcare, and defense. This broad applicability, coupled with its rapid evolution, creates a risk landscape that is expansive and difficult to predict. Strategic actors must contend with potential misuse, risks of geopolitical escalation, and the need for frameworks to govern systems whose capabilities may surpass human oversight.
To navigate these complexities, many have turned to analogies. AI has been compared to electricity for its general-purpose nature, to traditional software for its economic importance, or to the printing press for its cultural impact. While these comparisons provide useful entry points, they fail to capture AI's grave national security implications. A more productive analogy lies between AI and catastrophic dual-use nuclear, chemical, and biological technologies. Like them, AI will be integral to a nation's power while posing the potential for mass destruction. A brief examination of the historical parallels between AI and the nuclear age can highlight the gravity of our current situation.
In 1933, the leading scientist Ernest Rutherford dismissed the notion of harnessing atomic power as "moonshine." The very next day, Leo Szilard read Rutherford's remarks and sketched the idea of a nuclear chain reaction that ultimately birthed the nuclear age. Eventually, figures such as J. Robert Oppenheimer came to recognize the dual nature of their work. Today, AI is at a similar stage. Previously considered science fiction, AI has advanced to the point where machines can learn, adapt, and potentially exceed human intelligence in certain areas. AI experts including Geoffrey Hinton and Yoshua Bengio, pioneers in deep learning, have expressed existential concerns about the technologies they helped create.
As AI's capabilities become more evident, nations and corporations are investing heavily to gain a strategic advantage. The Manhattan Project, which consumed 0.4% of the U.S. GDP, was driven by the need to develop nuclear capabilities ahead of others. A similar urgency is now evident in the global effort to lead in AI, with investment in AI training doubling every year for nearly the past decade. Several "AI Manhattan Projects" aiming to eventually build superintelligence are already underway, financed by many of the most powerful corporations in the world.
However, the rapid advancement of AI technologies introduces significant uncertainties for international stability and security. The introduction of nuclear weapons altered international relations, granting influence to those who possessed them and leading to an arms race. The Cuban Missile Crisis highlighted how close the world came to nuclear war. Despite significant tensions between nuclear states, nuclear annihilation has so far been avoided, in part through the deterrence principle of Mutual Assured Destruction (MAD), under which any nuclear use would provoke an in-kind response. In the AI era, a parallel form of deterrence could emerge—what might be termed "Mutual Assured AI Malfunction" (MAIM)—where states' AI projects are constrained by mutual threats of sabotage.
The risks are not limited to state competition; advanced dual-use technologies can also be exploited by non-state actors. Just as the spread of nuclear capabilities raised concerns about misuse, the availability of AI systems presents new challenges. Malicious actors could use AI to develop weapons of mass destruction or conduct large-scale cyberattacks on critical infrastructure. The accessibility of unsecured or open-weight AI increases these risks, highlighting the need for careful policies and safeguards.
In the nuclear era, uranium became the linchpin of atomic power. States that secured it could enforce regulations, negotiate treaties, and limit the spread of destructive capabilities. In the realm of AI, computing resources—especially AI chips—have a similar strategic weight, fueling rivalries and shaping geopolitical calculations. This dynamic is evident in places such as Taiwan, central to AI chip production, where rising tensions could have extensive consequences. Nations have a shared interest in controlling access to AI chips to keep them out of the hands of rogue actors, echoing the logic once applied to uranium.
Despite these challenges, AI offers significant opportunities. Nuclear technology, while introducing the threat of mass destruction, also provided a new energy source that transformed societies. AI has the potential to drive advancements across various sectors, from medical breakthroughs to economic automation. Harnessing AI's benefits is essential for economic growth and continued progress in the modern world.
The challenges AI poses are far too broad, and far too serious, for piecemeal measures. What is needed is a comprehensive strategy, one that does not shy from the unsettling implications of advanced AI. As with Herman Kahn's famous analysis of nuclear strategy, superintelligence strategy requires "thinking about the unthinkable." In this paper, we propose such a strategy and grapple with these fundamental questions along the way: What should be done about lethal autonomous weapons? Catastrophic malicious use? Powerful open-weight AIs? AI-powered mass surveillance? How can society maintain a shared grasp of reality? What should be done about AI rights? How can humans maintain their status in a world of mass automation?
We argue that the most effective framework for addressing AI’s challenges is to view it through a national security lens. Drawing on lessons from previous dual-use technologies while tailoring them to the distinct demands of AI can help safeguard against catastrophic misuse, maintain geopolitical stability, and ensure that the broader Western world remains at the forefront.