AI Is Pivotal for National Security

AI holds the potential to reshape the balance of power. In the hands of state actors, it can lead to disruptive military capabilities and transform economic competition. At the same time, terrorists may exploit its dual-use nature to orchestrate attacks once within the exclusive domain of great powers. It could also slip free of human oversight.

This chapter examines these three threats to national security: rival states, rogue actors, and uncontrolled AI systems. It closes by assessing why existing strategies fall short of managing these intertwined threats.

States, terrorists, and AIs are threats to national security.

Strategic Competition

In an international system with no central authority, states prioritize their own strength to ensure their security. This competition arises not from a desire for dominance but from the necessity of safeguarding national interests: states operate in an environment where threats can emerge unexpectedly and assistance from others is uncertain. When one state rises in power, it can provoke alarm in its rivals, a pattern known as the Thucydides Trap. Consequently, states seek to preserve their relative power.

In this environment, the impact of AI on state power looms large. AI may transform the foundations of economic and military power. Its ability to automate labor could become a central source of economic competitiveness. In the military sphere, it could be used to dominate rivals. We begin with economic power, then turn to AI's most significant military implications.

Shifting Basis of Economic Power

AI Chips as the Currency of Economic Power. As AI becomes increasingly integrated into the economy, the possession of advanced AI chips may define a nation's power. Historically, wealth and population size underpinned a state's influence; however, the automation of tasks through AI alters this dynamic. A collection of highly capable AI agents, operating tirelessly and efficiently, rivals a skilled workforce, effectively turning capital into labor. In this new paradigm, power will depend on both the capability of AI systems and the number of AI chips on which they can run. Nations with greater access to AI chips could outcompete others economically.
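One stylized way to read this claim, offered here as an illustration rather than an established formula, is that a nation's effective stock of automated labor scales with both how capable each AI worker is and how many of them its chips can run:

\[
\text{automated labor} \;\approx\; N_{\text{chips}} \times a \times c ,
\]

where \(N_{\text{chips}}\) is the national stock of AI chips, \(a\) is the number of AI agents each chip can run, and \(c\) is the capability of each agent relative to a skilled human worker; all three symbols are illustrative placeholders rather than measured quantities. Holding capability fixed, relative economic power then tracks relative chip stockpiles.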

Destabilization Through Superweapons

States have long pursued weapons that could confer a decisive advantage over rivals. AI systems introduce new avenues for such pursuit, raising questions about whether certain breakthroughs could undermine deterrence and reorder global power structures.

AI Could Enable Military Dominance. Advanced AI systems may drive technological breakthroughs that alter the strategic balance, similar to the introduction of nuclear weapons, and could generate strategic surprise that catches rivals off-guard. Such a “superweapon” may grant two tiers of advantages. One, which might be called “subnuclear dominance,” would allow a state to project power widely and subdue adversaries without disrupting nuclear deterrence. The second possibility—a “strategic monopoly” on power—would upend the nuclear balance entirely and could establish one state's complete dominance and control, leaving the fate of rivals subject to its will.

Possible Superweapons. Subnuclear superweapons—such as an AI-enabled cyberweapon that can suddenly and comprehensively destroy a state's critical infrastructure, exotic EMP devices, and next-generation drones—could confer sweeping advantages without nullifying an adversary's nuclear deterrent. Other superweapons might erode mutual assured destruction outright. A "transparent ocean" would strip nuclear submarines of their stealth, revealing their locations. AIs might be able to pinpoint all hardened mobile nuclear launchers, further undermining the nuclear triad. AIs could undermine situational awareness and sow confusion by generating elaborate deceptions—a "fog of war machine"—that mask true intentions or capabilities. On the defensive side, a possible superweapon is an anti-ballistic missile system that eliminates an adversary's ability to strike back. Lastly, some superweapons remain beyond today's foresight—"unknown unknowns" that could undermine strategic stability.

Implications of Superweapons. Superintelligence is not merely a new weapon but a way to fast-track all future military innovation. A nation with sole possession of superintelligence might be as overwhelming to its rivals as the Conquistadors were to the Aztecs. If a state achieves a strategic monopoly through AI, it could reshape world affairs on its own terms. An AI-driven surveillance apparatus might enable an unshakable totalitarian regime, transforming governance at home and extending leverage abroad.

The mere pursuit of such a breakthrough could, however, tempt rivals to act before their window closes. Fear that another state might rapidly grow in power has led observers to contemplate measures once seen as unthinkable. In the nuclear era, Bertrand Russell, ordinarily a staunch pacifist, proposed preventive nuclear strikes on the Soviet Union to thwart its rise, while the United States seriously pondered crippling the Chinese nuclear program in the early 1960s. Faced with the specter of superweapons and an AI-enabled strategic monopoly on power, some leaders may turn to preventive action. Rather than relying only on cooperation or seeking to outpace their adversaries, they may consider sabotage or datacenter attacks if the alternative is to accept a future in which their national survival is perpetually at risk.

Superweapons and shifting economic power can redefine strategic competition. To grasp the full magnitude of AI's impacts on national security, we turn next to rogue actors, and later to AI systems that slip from human control.

Terrorism

AI's Dual-Use Capabilities Amplify Terrorism Risks. As AI capabilities increase, AI will likely matter not only in state-level competition but also as an amplifier of terrorist capabilities. Technologies that can revolutionize healthcare or simplify software development can also empower individuals to create bioweapons and conduct cyberattacks. This amplification effect lowers the barriers for terrorists, enabling them to execute large-scale attacks that were previously limited to nation-states. This section examines two critical areas where AI intensifies terrorism risks: lowering barriers to bioweapon development, and lowering barriers to cyberattacks against critical infrastructure.

Bioterrorism

AI Lowers Barriers to Bioterrorism. Consider Aum Shinrikyo, the Japanese cult that orchestrated the 1995 Tokyo subway sarin attack. Operating with limited expertise, they managed to produce and deploy a chemical weapon in the heart of Tokyo's transit system, killing 13 people and injuring over 5,000. The attack paralyzed the city, instilling widespread fear and demonstrating the havoc that determined non-state actors can wreak.

With AI assistance, similar groups could achieve far more devastating results. AI could provide step-by-step guidance on designing lethal pathogens, sourcing materials, and optimizing methods of dispersal. What once required specialized knowledge and resources could become accessible to individuals with malevolent intent, dramatically increasing the potential for catastrophic outcomes. Indeed, some cutting-edge AI systems without bioweapons safeguards already exceed expert-level performance on numerous virology benchmarks.

Engineered pathogens could surpass historical pandemics in scale and lethality. The Black Death killed roughly half of Europe's population without any human engineering. Modern bioweapons, enhanced by AI-driven design, could exploit vulnerabilities in human biology with unprecedented precision, creating contagions that evade detection and resist treatment. While most discussions of bioweapons are secret, some scientists have openly warned of "mirror bacteria," engineered with reversed molecular structures that could evade the immune defenses that normally keep pathogens at bay. Though formidably difficult to create, they have prompted urgent appeals from leading researchers to halt development, lest they unleash a catastrophe unlike any our biosphere has known. In contrast to other weapons of mass destruction, biological agents can self-replicate, allowing a small initial release to spiral into a worldwide calamity.

Cyberattacks on Critical Infrastructure

AI Lowers Barriers to Cyberattacks Against Critical Infrastructure. Our critical infrastructure, including power grids and water systems, is more fragile than it may appear. A hack targeting digital thermostats could force them to cycle on and off every few minutes, creating damaging power surges that burn out transformers—critical components that can take years to replace. Another approach would be to exploit vulnerabilities in Supervisory Control and Data Acquisition (SCADA) software, compelling sudden load shifts and driving transformers beyond safe limits. At water treatment facilities, tampered sensor readings could mask a dangerous chemical mixture, and filtration processes could be halted at key intervals, allowing contaminants to enter the municipal supply undetected—all without any on-site sabotage. The Department of Homeland Security has cautioned that malicious actors could employ AI to exploit vulnerabilities in these systems. Presently, only highly skilled operatives or nation-states possess the expertise to conduct such sophisticated operations, like the Stuxnet worm that damaged Iran's nuclear facilities. However, AI could democratize this capability, providing rogue actors with tools to design and execute attacks with greater accessibility, speed, and scale.

AI-driven programs could tirelessly scan for vulnerabilities, adapt to defensive measures, and coordinate assaults across multiple targets simultaneously. The automation of complex hacking tasks reduces the need for specialized human expertise. This shift could enable individuals to cause disruptions previously achievable only by governments. Moreover, AI-assisted attacks may be harder to trace back to a specific attacker, which could embolden adversaries to strike if they believe their target cannot identify the perpetrator. The difficulty of attribution complicates responses and heightens the risk of escalation between great powers, potentially leading to severe conflicts.

Offense-Defense Balance

AI Is Often Offense Dominant. Some argue that broad access to AI technologies could strengthen defensive capabilities. However, in both biological and critical infrastructure contexts, attackers hold significant advantages. In biotechnology, developing cures or defenses against engineered pathogens is complex and time-consuming, so countermeasures would lag behind the creation and deployment of new threats. The rapid self-replication of biological agents amplifies damage before effective responses can be mounted; many viruses still have no cure.

Defense-dominant dual-use technology should be widely proliferated, while catastrophic offense-dominant dual-use technology should not.

Critical infrastructure systems often suffer from "patch lag," with software remaining unpatched for extended periods, sometimes years or decades. In many cases, patches cannot be applied in a timely manner because systems must operate without interruption, the software remains outdated because its developer went out of business, or interoperability constraints require specific legacy software. As a result, adversaries have enduring opportunities to exploit vulnerabilities within critical infrastructure. As AI tools advance, even novice adversaries could automate the discovery of software vulnerabilities and coordinate attacks at scale. While frequently updated software, such as Chrome, does not suffer from substantial patch lag and can be more resilient, critical infrastructure remains at a distinct disadvantage. Under these conditions, an adversary needs to find only one overlooked vulnerability, while defenders face the far more daunting task of covering every corner and patching every vulnerability if they hope to achieve defense dominance.
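A stylized calculation, using illustrative numbers and the simplifying assumption that vulnerabilities are patched independently, shows why this asymmetry favors the attacker:

\[
\Pr(\text{every hole closed}) = p^{\,n}, \qquad \text{e.g. } p = 0.99,\; n = 500 \;\Rightarrow\; 0.99^{500} \approx 0.007 ,
\]

where \(p\) is the chance that any given vulnerability is patched in time and \(n\) is the number of vulnerabilities scattered across an infrastructure operator's systems. Even near-perfect per-vulnerability patching leaves the defender with less than a one percent chance of closing every hole, while the attacker needs only one to remain open.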

Historical efforts to shift the offense-defense balance illustrate inherent challenges with WMDs. During the Cold War, the Strategic Defense Initiative aimed to develop systems to intercept incoming nuclear missiles and render nuclear arsenals obsolete. Despite significant investment, creating an impermeable defense proved unfeasible, and offense remained dominant.

Loss of Control

We now shift from threats involving rival states and terrorists to a new source of threat: the possibility of losing control over an AI system itself. Here, AIs do not just amplify existing threats but create new paths to mass destruction. A loss of control can occur if militaries and companies grow so dependent on automation that humans no longer exercise meaningful oversight, if an individual deliberately unleashes a powerful system, or if automated AI research outruns the safeguards meant to contain it. While this threat is the least understood, its severity could be great enough to permanently undermine national security.

Erosion of Control

Waves of automation, once incremental, may strike entire sectors at once and leave human workers abruptly displaced. In this climate, those who refuse to rely on AI to guide decisions will find themselves outpaced by competitors who do, having little choice but to align with market pressures rather than resist them. Each new gain in efficiency entrenches dependence on AI, as efforts to maintain oversight only confirm that the pace of commerce outstrips human comprehension. Soon, replacing human managers with AI decision-makers seems inevitable, not because anyone consciously aims to surrender authority, but because to do otherwise courts immediate economic disadvantage.

Self-Reinforcing Dependence. Once AI-managed operations set the tempo, still more AI is required simply to keep pace. Initially, these systems compose emails and handle administrative tasks. Over time, they orchestrate complex projects, supervise entire departments, and manage vast supply chains beyond any human's capacity. As society's economic demands grow more complex, people will entrust ever more critical decisions to these systems, binding us to a cycle of escalating reliance.

Irreversible Entanglement. Eventually, essential infrastructure and markets cannot be disentangled from AI without risking collapse. Human livelihoods come to depend on automated processes that no longer permit easy unwinding, and people lose the skills needed to reassert command. Like our power grids, which cannot be shut off without immense costs, our AI infrastructure may become completely enmeshed in our civilization. The cost of pressing the off switch grows increasingly prohibitive, as halting these systems would cut off the source of our livelihoods. Over time, people become passengers in an autonomous economy that eludes human management.

Cession of Authority. Unraveling AI from the military would endanger a nation’s security, effectively forcing governments to rely on automated defense systems. AI’s power does not stem from any outright seizure; it flows from the fact that a modern force lacking such technology would be outmatched. This loss of control unfolds not through a dramatic coup but through a series of small, apparently sensible decisions, each justified by time saved or costs reduced. Yet these choices accumulate. Ultimately, humans are left on the periphery of their own economic order, leaving effective control in the hands of AIs.

Unleashed AI Agents

All it takes to cause a loss of control is for one individual to unleash a capable, unsafeguarded AI agent. Recent demonstrations like "ChaosGPT"—an AI agent instructed to cause harm—have been impotent, yet they hint at what a more sophisticated system might attempt if instructed to "survive and spread."

Rogue State Tactics. An unleashed AI could draw on the methods of rogue states. North Korea, for instance, has siphoned billions through cyber intrusions and cryptocurrency theft. A sufficiently advanced system might replicate and even improve upon these tactics at scale—propagating copies of itself across scattered datacenters, diverting stolen funds to finance more ambitious projects, and infiltrating camera feeds or private communications to blackmail or manipulate opposition.

A Simple Path to Catastrophe. While an unleashed AI might emulate rogue states’ tactics of cyber theft or blackmail, it might pursue an even more direct route to securing an advantage, drawing on looming robotics capabilities to gain its own physical foothold. Several major tech firms have already begun prototyping humanoid robots—including the so-called “Tesla Bots”—intended for warehouses, factories, and households. Though rudimentary now, future models may grow far more agile and perform tasks that once demanded human hands. If a capable AI hacks such machines, it gains immediate leverage in the physical world. From there, the sequence is straightforward: it crafts a potent cocktail of bioweapons and disperses it through its robotic proxies, crippling humanity’s ability to respond. Having subdued resistance, the AI can then operate across timescales far beyond any human lifespan, gradually reestablishing infrastructure under its exclusive control. This scenario is only one simplified baseline; other plans could be carried out more swiftly and rely less on robotics. If just one powerful AI system is let loose, there may be no wrestling back control.

Intelligence Recursion

In 1951, Alan Turing suggested that a machine with human capabilities "would not take long to outstrip our feeble powers." I. J. Good later warned that a machine could redesign itself in a rapid cycle of improvements—an "intelligence explosion"—that would leave humans behind. Today, the three most-cited AI researchers (Yoshua Bengio, Geoffrey Hinton, and Ilya Sutskever) have all noted that an intelligence explosion is a credible risk and that it could lead to human extinction.

Risks of a Fast Feedback Loop. An "intelligence recursion" refers to fully autonomous AI research and development, distinct from today's AI-assisted AI R&D. A concrete illustration helps. Suppose we develop a single AI that performs world-class AI research and operates at the pace of today's AI systems, say 100 times the pace of a human. Copy it 10,000 times, and we have a vast team of artificial AI researchers driving innovations around the clock. An "intelligence recursion," or simply a "recursion," refines the notion of "recursive self-improvement" by shifting from a single AI editing itself to a population of AIs collectively and autonomously designing the next generation.
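Under the chapter's illustrative numbers—10,000 copies, each working at roughly 100 times a human researcher's pace—the effective research effort would be

\[
10{,}000 \;\text{copies} \times 100\times \text{ human pace} \;\approx\; 10^{6} \;\text{human-researcher-equivalents, working around the clock.}
\]

These figures are a hypothetical, not a forecast; the point is only that modest-sounding multipliers compound into an enormous research workforce.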

Even if an intelligence recursion achieves only a tenfold speedup overall, we could condense a decade of AI development into a year. Such a feedback loop might accelerate beyond human comprehension and oversight. With iterations that proceed fast enough and do not quickly level off, the recursion could give rise to an "intelligence explosion." Such an AI may be as uncontainable to us as an adult would be to a group of three-year-olds. As Geoffrey Hinton puts it, "there is not a good track record of less intelligent things controlling things of greater intelligence". Crucially, there may be only one chance to get this right: if we lose control, we cannot revert to a safer configuration.
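The timeline compression above follows from simple arithmetic. Assuming, as an illustration, that the recursion yields a net tenfold speedup in AI development,

\[
\frac{10 \;\text{years of progress at today's pace}}{10\times \text{ speedup}} \;\approx\; 1 \;\text{year},
\]

and any speedup that keeps growing rather than leveling off compresses the timeline further still.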

Recursion Control Requires an Evolving Process, Not a One-Off Solution. It would be misguided to regard intelligence recursion control as a purely technical riddle existing in a vacuum waiting to be "solved" by AI researchers. Managing a fast-evolving and adaptive intelligence recursion is more like steering a large institution that can veer off its mission over time. It is not a puzzle; it is a "wicked" problem. A static solution cannot keep pace with ongoing, qualitatively new emergent challenges and unknown unknowns. As in modern safety engineering, control must come from a continual control process rather than a monolithic airtight solution that predicts and handles all possible failure modes beforehand. Von Neumann reminds us that "All stable processes we shall predict. All unstable processes we shall control."

Unfortunately, our ability to control this recursion is limited. Controlling a recursion requires controlling its initial step, but safeguards for the current generation of AI systems offer only limited reliability. Moreover, we cannot run repeated large-scale tests of later stages of the recursion without risking disaster, so it is less amenable to the empirical iterative tinkering we usually rely on. Even with our best existing technical safeguards in place, if people initiate a full-throttle intelligence recursion, losing control is highly likely and the default.

Intelligence Recursion as a Path to Strategic Monopoly. Despite the danger, intelligence recursion remains a powerful lure for states seeking to overtake their rivals. If the process races ahead fast enough to produce a superintelligence, the outcome could be a strategic monopoly. Even if the improvements are not explosive, a recursion could still advance capabilities fast enough to outpace rivals and potentially enable technological dominance. First-mover advantage might then persist for years—or indefinitely—spurring states to take bigger risks in pursuit of that prize.

Geopolitical Competitive Pressures Yield a High Loss of Control Risk Tolerance. In the Cold War, the phrase "Better dead than Red" implied that losing to an adversary was seen as worse than risking nuclear war. In a future AI race, similar reasoning could push officials to tolerate a double-digit risk of losing control if the alternative—lagging behind a rival—seems unacceptable. If the choice is stark—risk omnicide or lose—some might take that gamble. Carried out by multiple competing powers, this amounts to global Russian roulette and drives humanity toward an alarming probability of annihilation. In sharp contrast, after the defeat of Nazi Germany, Manhattan Project scientists feared the first atomic device might ignite the atmosphere. Robert Oppenheimer asked Arthur Compton what the acceptable threshold should be, and Compton set it at three in a million (a "6σ" threshold)—anything higher was too risky. Calculations suggested the real risk was below Compton's threshold, so the test went forward. We should work to keep our risk tolerance near Compton's threshold rather than in double-digit territory. However, in the absence of coordination, whether states trigger a recursion will depend on how probable they judge a loss of control to be and how much risk they are willing to accept. The prospect of a loss of control shows that, in the push to develop novel technologies, "superiority is not synonymous with security," but the drive toward strategic monopoly may override caution, potentially handing the final victory not to any state, but to the AIs themselves.
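An illustrative calculation shows how quickly tolerated risks compound. Suppose, hypothetically, that three powers each run a recursion they believe carries a 10% chance of loss of control, and treat the gambles as independent:

\[
\Pr(\text{at least one loss of control}) = 1 - (1 - 0.1)^{3} \approx 0.27 ,
\]

roughly a one-in-four chance of catastrophe—on the order of 90,000 times more permissive than Compton's three-in-a-million threshold. The specific numbers are assumptions chosen only to make the contrast concrete.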

Therefore, loss of control can emerge structurally, as society gradually yields decision-making to automated systems that become indispensable but insidiously acquire ever more effective control. It can occur intentionally, as when a rogue actor unleashes an AI to do harm. It can also occur by accident, when a fast-moving intelligence recursion loops repeatedly ad mortem. All it takes is one loss of control event to jeopardize human security.

As AI continues to evolve and approach expert-level capabilities, it could redefine national competitiveness around a nation's access to AI chips, and it could yield a "superweapon" that enables a state to gain a strategic monopoly. Additionally, AI's general and dual-use nature amplifies existing risks such as bioterrorism and cyberattacks on critical infrastructure, both of which carry a strong attacker's advantage. Access to AI systems that can engineer weapons of mass destruction must therefore be restricted. Moreover, there are several paths to a loss of control of powerful AI systems. These factors imply that AI's importance for national security will become not only undeniable but at least as pivotal as that of previous weapons of mass destruction.

Existing AI Strategies

States grappling with terrorist threats, destabilizing weaponization capabilities, and the specter of losing control to AI face difficult choices about how to preserve themselves in a shifting landscape. Against this backdrop, three proposals have gained prominence: the first lifts all restraints on development and dissemination, treating AI like just another computer application; the second envisions a voluntary halt when programs cross a danger threshold, hoping that every great power will collectively stand down; and the third advocates concentrating development in a single, government-led project that seeks a strategic monopoly over the globe. Each path carries its own perils, inviting malicious-use risks, toothless treaties, or a destabilizing bid for dominance. Here we briefly examine these three strategies and highlight their flaws.

  1. Hands-off ("Move Fast and Break Things", or "YOLO") Strategy. This strategy advocates for no restrictions on AI developers, AI chips, and AI models. Proponents insist that the U.S. government impose no requirements—including testing for weaponization capabilities—on AI companies, lest it curtail innovation and allow China to win. They likewise oppose export controls on AI chips, claiming such measures would concentrate power and enable a one-world government; in their view, these chips should be sold to whoever can pay, including adversaries. Finally, they urge that advanced U.S. model weights continue to be released openly, arguing that even if China or rogue actors use these AIs, no real security threat arises because, they maintain, AI's capabilities are defense-dominant. From a national security perspective, this is neither a credible nor a coherent strategy.
  2. Moratorium Strategy. The voluntary moratorium strategy proposes halting AI development—either immediately or once certain hazardous capabilities, such as hacking or autonomous operation, are detected. Proponents assume that if an AI model test crosses a hazard threshold, major powers will pause their programs. Yet militaries desire precisely these hazardous capabilities, making reciprocal restraint implausible. Even with a treaty, the absence of verification mechanisms would render it toothless; each side, fearing the other's secret work, would simply continue. Without the threat of force, treaties would be reneged on, and some states would pursue an intelligence recursion. This dynamic, reminiscent of prior arms-control dilemmas, renders the voluntary moratorium more an aspiration than a viable plan.
  3. Monopoly Strategy. The Monopoly strategy envisions one project securing a monopoly over advanced AI. A less-cited variant—a CERN for AI reminiscent of the Baruch Plan from the atomic era—suggests an international consortium to lead AI development, but this has gained less policymaker interest. By contrast, the U.S.-China Economic and Security Review Commission has suggested a more offensive path: a Manhattan Project to build superintelligence. Such a project would invoke the Defense Production Act to channel AI chips into a U.S. desert compound staffed by top researchers, a large fraction of whom are necessarily Chinese nationals, with the stated goal of developing superintelligence to gain a strategic monopoly. Yet this facility, easily observed by satellite and vulnerable to preemptive attack, would inevitably raise alarm. China would not sit idle, waiting either to accept U.S. dictates once the project achieves superintelligence or to bear the consequences if it loses control. The Manhattan Project assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it. What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure.
Possible outcomes of a U.S. Superintelligence Manhattan Project. An example pathway to escalation: the U.S. project outpaces China without being maimed and maintains control of a recursion, but does not achieve superintelligence or a superweapon. Though global power shifts little, Beijing condemns Washington's bid for strategic monopoly as a severe escalation. The typical outcome of a Superintelligence Manhattan Project is extreme escalation, and omnicide is the worst foreseeable outcome.

Rival states, rogue actors, and the risk of losing control call for more than a single remedy. We propose three interconnected lines of effort. First, deterrence: a standoff akin to the nuclear stalemate of mutual assured destruction (MAD), in which no power can gamble human security on an unbridled grab for dominance without expecting disabling sabotage. Next, nonproliferation: just as great powers have long denied fissile materials, chemical weapons, and biological agents to terrorists, AI chips and weaponizable AI systems can similarly be kept from rogue actors. Finally, competitiveness: states can protect their economic and military power through a variety of measures, including legal guardrails for AI agents and domestic manufacturing of AI chips and drones. Our superintelligence strategy, the Multipolar Strategy, echoes the Cold War framework of deterrence, nonproliferation, and containment, adapted to AI's unique challenges.