Competitiveness

Survival achieved through deterrence or nonproliferation can forestall catastrophe, but it does not by itself secure a state's future. If a state aspires to shape events rather than merely endure them, it must strive to remain competitive. In this chapter, we turn to the crucial goal of competitiveness: integrating AI into the military, strengthening economic resilience through guaranteed access to AI chips, crafting legal structures to govern AI agents effectively, and maintaining political stability amid explosive economic growth.

Rapid and prudent adoption of AI in economic and military spheres will become critical for a nation's strength. Competitiveness spans four domains:
- Military: manufacture drones and carefully integrate AI into military command and control.
- Economy: guarantee access to AI chips through domestic manufacturing and export controls.
- Law: extend legal requirements to AI agents to facilitate commerce.
- Politics: maintain political stability in the face of mass automation.

Military Strength

Even if a state pioneers a breakthrough, it can fall behind if it fails to integrate that capability into actual operations. Britain introduced the first tanks during World War I but was soon eclipsed by Germany's systematic adoption of tanks in World War II. Similarly, even if superintelligence provides the technical roadmap to, for example, a comprehensive second-strike missile defense, the speed at which it can be built may still depend on a nation's preexisting industrial capacity. We turn next to three short-term imperatives for AI diffusion in the military: securing a reliable drone supply chain, carefully weaving AI into command and control, and integrating AI into cyber offense.

Guarantee Drone Supply Chains and Reduce Misunderstanding. Although general-purpose AI can pose larger-scale dangers, drones occupy a more conventional yet increasingly pivotal role on modern battlefields. Drones are cheap, agile, lethal, and decentralized, attributes that make them indispensable for states determined to keep pace with military trends. Yet many states remain heavily reliant on Chinese manufacturers for key drone and robotic components, leaving them vulnerable if those parts are withheld or disrupted at a decisive juncture.

Even if a state secures its supply of drones, the sheer volume and autonomy of drones can drive a conflict into unintended terrain if they approach disputed lines or misread ambiguous signals. To defuse potential clashes, states could build on practices that proved valuable in earlier eras, such as maintaining open crisis hotlines, arranging routine exchanges, and other confidence-building measures. Even then, large-scale production of drones and robots will become a near-inevitable step for any state seeking to defend its position in future conflicts.

Diffuse AI Into Command and Control and Cyber Offense. Modern battlefields demand rapid decisions drawn from torrents of data across land, sea, air, and cyber domains. AI systems can sift through these streams faster than human officers, enticing commanders to rely on automated judgments. Similarly, AI hacking systems that outpace humans in speed and cost could greatly expand a military's capacity to perform cyberattacks.

Incorporating AI into command and control and cyber offense would significantly enhance military capabilities, yet this dynamic risks reducing "human in the loop" to a reflexive click of "accept, accept, accept," with meaningful oversight overshadowed by the speed of events. Demanding human approval of every individual lower-level engagement may matter less than ensuring explicit human approval for more severe or escalatory attacks. Human oversight of key military decisions nonetheless remains crucial: a human backstop can reduce the risk of a "flash war", akin to the 2010 flash crash, in which a minor AI mistake spirals into destructive reprisals before any human can intervene.
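To make this tiered-oversight idea concrete, here is a minimal Python sketch. The `Severity` tiers, the `ProposedAction` record, and the gating thresholds are illustrative assumptions, not fielded doctrine.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    ROUTINE = 1      # e.g., reconnaissance or jamming (hypothetical tiers)
    KINETIC = 2      # lower-level engagements
    ESCALATORY = 3   # strikes likely to widen the conflict

@dataclass
class ProposedAction:
    description: str
    severity: Severity
    crosses_disputed_line: bool = False

def requires_human_approval(action: ProposedAction) -> bool:
    """Reserve explicit human sign-off for severe or escalatory actions,
    so oversight is meaningful rather than a reflexive 'accept' click."""
    return action.severity is Severity.ESCALATORY or action.crosses_disputed_line

def execute(action: ProposedAction, human_approves) -> bool:
    """Run routine actions under automated judgment; block on a human otherwise."""
    if requires_human_approval(action):
        return human_approves(action)
    return True
```

The design choice is to gate by consequence rather than by count: lower-level engagements proceed at machine speed, while anything escalatory waits for a human, which is the backstop against a flash war.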

Economic Security

Economic security is a cornerstone of national security, and AI is set to become crucial for economic security. By bolstering domestic AI chip production and attracting skilled AI scientists from abroad, nations can enhance their resilience and solidify their positions.

Manufacture AI Chips

A Chinese invasion of Taiwan would remove the West’s access to new AI chips.

The world's reliance on Taiwan for high-end AI chips constitutes a strategic vulnerability: this sole-source dependence makes Taiwan a critical chokepoint that could undermine a nation's competitiveness in AI. Many analysts place a double-digit probability on China invading Taiwan in the next decade. Beyond triggering conflict and economic upheaval across the world, an invasion could severely disrupt the supply of AI chips. The West currently holds a decisive AI chip advantage, but an invasion could reverse it, enabling China to pull ahead in AI capabilities and potentially become a unipolar force.

China has been investing extensively in its domestic chip manufacturing, allocating resources equivalent to a U.S. CHIPS Act annually. This commitment positions China to endure geopolitical shocks and potentially outpace other nations in AI development. In contrast, many countries remain dependent on Taiwan for their AI chip supply, exposing themselves to risks associated with geopolitical tensions. As a result, a Chinese invasion of Taiwan would damage the West's ability to develop and use AI much more than it would damage China's.

The most important resource for building new technology should not be exclusively produced in one of the world's most volatile regions. To address this vulnerability, nations should invest in building advanced AI chip fabrication facilities within their own territories. Constructing such facilities domestically entails higher costs, but government subsidies can bridge this gap. By incentivizing domestic production, countries can secure their AI supply chains, reduce dependence on external sources, and improve their bargaining power. Moreover, when AI agents generate clear economic value, a nation's economic power may hinge on the number of its AI chips, a supply that domestic manufacturing can expand.

This strategic move mirrors historical efforts to control critical technologies. During the Manhattan Project, significant investment was made not only in the development of nuclear weapons at Los Alamos but also in uranium enrichment at Oak Ridge. Similarly, ensuring access to AI chips requires substantial investment in both innovation and manufacturing infrastructure.

By strengthening domestic capabilities in AI chip production, nations can enhance their competitive position and build resilience against a foreseeable and devastating disruption. Chips, however, are not the only scarce input to AI competitiveness; so is talent.

Facilitate Immigration for AI Scientists

Just as the United States once harnessed the talents of immigrant scientists during the Manhattan Project, so too does American leadership in AI partly rely on attracting exceptional AI scientists from abroad. In a recent survey, 60% of non-citizen AI PhDs working in the United States reported significant immigration difficulties, and many indicated that these challenges made them more likely to leave. As global competition for AI talent intensifies, implementing reliable immigration pathways tailored specifically to AI scientists will help ensure that the United States remains at the forefront of AI development. By refining visa and residency policies for these researchers—distinct from broader immigration reforms and southern border policy—the United States can maintain its edge in an area increasingly vital to both national security and economic strength.

Aligning Individual AI Agents

Ethical dilemmas have long been subjects of intense debate, with no definitive resolutions in sight. Yet, in the absence of universal answers to what individuals ought to do, societies have crafted legal systems to punish unacceptable behaviors and promote safety and prosperity. While no legal system is perfect, many share foundational principles that generally function effectively. For instance, laws prohibit murder, governments enforce contracts, and those who cause harm are often required to provide restitution. Legal frameworks are not intended to ensure optimal behavior from all; such an endeavor would unduly constrain individual liberties and prove ineffective. However, they deter some of the worst actions, leaving society to use informal institutions and economic incentives to encourage beneficial conduct.

We contend that the same principles will apply to AI. Determining a universal solution for AI behavior in all scenarios is intractable. Yet, we need not postpone legislating AI behavior until every question about AI values is resolved. Rather, we can start formulating principles to govern AI conduct and prevent harmful actions, without unduly limiting the functions of various AIs.

While there are some laws that do not straightforwardly cover AI—such as laws that rely on human intent and mental states—we can adapt legal concepts to establish constraints for AI agents so that they follow the spirit of the law. In particular, though much law hinges on the mindset or intention behind an act (mens rea), we can ensure that AI does not carry out the acts (actus reus) the law is meant to prohibit. Further, by treating AI as assistants to human principals, we can impose constraints that mirror those already applied to human behavior, ensuring that AI agents contribute positively to society without causing undue harm.

Constraints on AI Behavior

We propose some basic constraints on AI behavior similar to human legal obligations, including obligations to the public like preventing harm and not lying, and special obligations to the AI's human principal.

Duty of Care to the Public (Reasonable Care). AI agents should exercise a level of caution commensurate with that of a reasonable person in similar circumstances to prevent harm. This is the legal concept of reasonable care. This involves avoiding actions that could foreseeably cause harm in a legal sense—such as violating tort or criminal laws—rather than merely offending sensibilities or engaging in controversial discourse. The application of reasonable care is context-dependent; for instance, providing detailed information about weapons materials might be appropriate for a verified professional but not for an unvetted individual.

Duty Not to Lie. AI agents should refrain from making statements they know to be false. Such a duty would be overly restrictive for humans, who are permitted to lie to one another in most situations, but the chilling effects on free speech are less relevant for AIs. Moreover, since AIs can be tested, it is more feasible to determine whether an AI overtly lied than it is with humans. AIs should therefore be held to a standard closer to the prohibitions against perjury and fraud, extended even to casual or professional settings. This duty is separate from nuances like puffery or strategic omission: AI agents should avoid overt lies, opting instead to withhold responses when necessary.

Duty of Care to the Principal (Fiduciary Duties). In their role as assistants, AI agents owe special obligations to their principals. They should act with loyalty, prioritizing the principal's interests without engaging in self-dealing or serving conflicting interests simultaneously. Additionally, they should keep the principal reasonably informed, providing pertinent information without key omissions to enable informed consent.
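These three duties can be phrased as machine-checkable constraints. The sketch below is a minimal illustration, assuming hypothetical flags (`known_false`, `foreseeable_harm`, and so on) that upstream classifiers would populate; it shows the shape of a duty-checking gate, not a production system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AgentAction:
    statement: str                  # what the agent says or does
    known_false: bool = False       # set by a hypothetical truthfulness check
    foreseeable_harm: bool = False  # set by a hypothetical risk classifier
    self_dealing: bool = False      # conflicts with the principal's interests
    withheld_key_facts: bool = False

Duty = Callable[[AgentAction], bool]  # returns True when the duty is satisfied

def reasonable_care(a: AgentAction) -> bool:    # duty of care to the public
    return not a.foreseeable_harm

def no_overt_lies(a: AgentAction) -> bool:      # duty not to lie
    return not a.known_false

def fiduciary_loyalty(a: AgentAction) -> bool:  # duty of care to the principal
    return not (a.self_dealing or a.withheld_key_facts)

DUTIES: List[Duty] = [reasonable_care, no_overt_lies, fiduciary_loyalty]

def permitted(action: AgentAction) -> bool:
    """An action passes only if every duty is satisfied; otherwise the agent
    should withhold or revise its response."""
    return all(duty(action) for duty in DUTIES)
```

Note that the gate is purely prohibitive, mirroring the chapter's legal framing: it blocks the acts the law is meant to prevent while leaving everything else to customization and market preferences.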

Custom Goals and Market-Driven Variations

Within these legally inspired constraints, there is ample room for diversity in how AI agents are designed and operate. The free market and consumer preferences can shape the specific goals and propensities of AI agents. Some may prioritize speed and efficiency, delivering quick results with minimal embellishment, while others might focus on providing thorough, well-crafted responses. Personality traits—such as humor, formality, or reservedness—can also be tailored to suit different user preferences and contexts. For example, a customer support AI might be programmed to avoid discussions on irrelevant topics, maintaining focus on service-related issues. This customization allows AI agents to meet the varied needs of users across different industries and personal preferences.

Grounding AI behavior in legal principles offers a pragmatic framework for governing AI agents, ensuring they act in ways that prevent harm and uphold societal standards. Within these boundaries, customization and market dynamics can shape the specific characteristics of AI agents, allowing for diversity and innovation. This approach avoids the pitfalls of imposing narrow or arbitrary ethical standards. By leveraging established legal concepts and societal processes, we can shape AI actions in a manner that respects individual freedoms and fosters a pluralistic environment where AI contributes positively to society without undue restrictions.

Aligning Collectives of AI Agents

The emerging presence of AI agents online will soon result in a complex ecosystem where these entities execute tasks for users, conduct financial transactions, handle sensitive information, and enter into contracts—often engaging with a multitude of other agents and systems. This proliferation poses significant challenges: when interacting with an AI agent, one may remain unaware of where it operates, the entities behind its deployment, or its history of conduct. Should an agent cause harm, pursuing legal remedy becomes an arduous endeavor.

Establishing Trust Mechanisms and Institutions. To navigate these challenges, we must develop mechanisms that reinforce trust and accountability within the network of AI agents:
- Insurance services can underwrite the risks inherent in agent interactions, providing a safeguard against potential losses. By transferring liability from users and developers to insurers, these services enable broader participation in the agent ecosystem while mitigating financial risks.
- Action firewall services can oversee and regulate agent activities, acting as intermediaries that filter and monitor actions initiated by AI agents and ensuring that agents adhere to legal and ethical standards, thereby fostering trust among the parties that interact with them.
- Human oversight services can facilitate human review and approval of AI agent decisions.
- Reputation systems can chronicle and disseminate information regarding agent behavior, enabling parties to assess the reliability of agents they engage with. By maintaining records of past interactions and outcomes, these systems help identify agents that consistently act in good faith (see the sketch after this list).
- Mediation and collateral arrangements offer further security: disputes are resolved through impartial entities, and agents furnish guarantees against misconduct. By requiring agents to post collateral, parties gain assurance of compensation in case of breach, while mediation services facilitate fair resolutions.
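As one illustration, a reputation system can reduce to a ledger of good-faith and bad-faith interactions plus a smoothed trust score. This is a minimal sketch; `ReputationLedger` and the agent ID are invented names, and a real system would need authenticated, tamper-resistant records.

```python
from collections import defaultdict

class ReputationLedger:
    """Chronicle agent interactions so counterparties can assess reliability."""

    def __init__(self):
        self.records = defaultdict(lambda: {"good": 0, "bad": 0})

    def record(self, agent_id: str, acted_in_good_faith: bool) -> None:
        key = "good" if acted_in_good_faith else "bad"
        self.records[agent_id][key] += 1

    def trust_score(self, agent_id: str) -> float:
        r = self.records[agent_id]
        total = r["good"] + r["bad"]
        # Laplace smoothing: an unknown agent starts at 0.5 rather than 1.0
        return (r["good"] + 1) / (total + 2)

ledger = ReputationLedger()
ledger.record("agent-42", acted_in_good_faith=True)
ledger.record("agent-42", acted_in_good_faith=False)
print(ledger.trust_score("agent-42"))  # 0.5
```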

Linking AI Agents to Human Legal Entities via IDs. A foundational measure involves assigning unique identifiers to AI agents, anchoring them to human-backed legal entities. This linkage ensures that agents do not operate in anonymity and that lines of accountability are distinctly drawn. It should become customary that AI agents abstain from providing services or exchanging resources with other agents not connected to legitimate legal entities with human oversight—entities not solely backed by AI agents themselves.
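A minimal sketch of such an identifier check follows; the `REGISTRY` mapping and entity names are invented stand-ins for whatever institution would actually maintain the link between agent IDs and human-backed legal entities.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class LegalEntity:
    name: str
    human_overseen: bool  # backed by humans, not solely by other AI agents

# Hypothetical registry mapping agent IDs to registered legal entities.
REGISTRY: Dict[str, LegalEntity] = {
    "agent-7f3a": LegalEntity("Acme Assistants LLC", human_overseen=True),
}

def may_transact_with(agent_id: str) -> bool:
    """Refuse service to agents not anchored to a human-backed legal entity,
    so lines of accountability stay distinctly drawn."""
    entity: Optional[LegalEntity] = REGISTRY.get(agent_id)
    return entity is not None and entity.human_overseen

print(may_transact_with("agent-7f3a"))   # True
print(may_transact_with("agent-anon"))   # False: unregistered, so refuse
```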

Deferring Consideration of AI Rights. It is imperative to clarify that this approach does not entail granting rights or direct accountability to AI agents themselves. Bestowing rights upon AI agents presents ambiguous benefits at best, with clear and significant downsides. An AI agent can be replicated and deployed in mere seconds, whereas cultivating a human being to maturity demands decades. Granting rights to AI could lead to a rapid proliferation of autonomous entities that outpace human control. Moreover, heightened intelligence in AI does not inherently equate to moral discernment; history records many intelligent individuals who act without ethical consideration. Since we are unlikely to attain definitive certainty about AI consciousness in the near term, it is prudent to postpone the consideration of AI rights. By deferring this decision, we avoid the risks associated with prematurely granting rights to entities whose moral status remains uncertain.

By instituting legal frameworks and cultivating institutional mechanisms, we can avert the emergence of a chaotic AI ecosystem. Tethering AI agents to human-backed legal entities and implementing systems that enhance accountability and trust positions us to adeptly manage the complexities introduced by the widespread deployment of AI agents.

Comparison of Low, Medium, and High Control approaches. We consistently recommend the medium control option.
- Risk Management. Low control: accelerate, with no restrictions. Medium control: deterrence with MAIM, nonproliferation, and competitiveness. High control: pause AI.
- Distribution of Cutting-Edge AI Weights Among States. Low: everyone, including rogue states (open weight). Medium: a multipolar regime of responsible states. High: a unipolar regime with a strategic monopoly (an AI Manhattan Project).
- Government Control over Domestic AI. Low: no involvement. Medium: light-touch legislation (e.g., mandatory testing, clarified liability). High: nationalization.
- Information Security. Low: standard corporate security. Medium: secure against well-financed terrorist groups. High: secure against the top-priority programs of the most capable nation-states.
- AI Autonomy. Low: liberate. Medium: avoid giving rights for the foreseeable future. High: avoid ever giving rights.
- AI Behavior Restrictions. Low: AI constrained only by existing law. Medium: AIs constrained by the spirit of the law (exercising reasonable care and fiduciary duties). High: sanctimonious AI (refuses if something might be harmful or cause offense to somebody).
- Historical WMD Proposals. Low: the Biological Weapons Convention. Medium: the IAEA and OPCW. High: the Baruch Plan.

Political Stability

As AI integrates more deeply into communities, it poses challenges that, if unaddressed, could undermine political stability. Confronting censorship and misinformation as well as the disruptions wrought by rapid automation is imperative to maintaining national competitiveness.

Censorship and Inaccurate Information

Erosion of the Information Ecosystem. The fabric of our society is woven from the threads of shared information. When incorrect or misleading content proliferates, it distorts public perception and leads to flawed collective decisions. Concurrently, heavy-handed censorship can erode trust in institutions and provoke backlash. AI has the dual capacity to generate vast amounts of misinformation and to enable unprecedented levels of surveillance and content suppression.

AI as a Tool for Clarity. Amid this challenge, AI can be harnessed to enhance our information ecosystem. AIs can be trained to predict future events more accurately from present information. By tuning AI systems to prioritize accuracy and assert probabilistic judgments, even when they contradict popular opinion, we can address the "Galileo problem" of unpopular truths being suppressed. As AI accelerates the pace of change and ushers in rapid transformation, increasingly accurate factual judgment will be imperative to prevent societal derailment.

AI Forecasts May Be Cheaper and More Accurate than Human Ones. Current AI systems already approach the accuracy of the best humans in some kinds of forecasting, such as geopolitical forecasting, and they could soon substantially surpass them. Testing forecasting AIs across a wide variety of domains would quickly build public evidence of an AI's prediction track record. Forecasting AIs created by different organizations may independently arrive at similar forecasts, and this convergence can help clarify consensus reality and increase trust.
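Track records of this kind are straightforward to score. The sketch below uses the standard Brier score on a handful of hypothetical resolved questions; the forecast numbers are invented purely for illustration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.
    Lower is better; an uninformed 50% forecaster scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical resolved questions: 1 means the event occurred, 0 that it did not.
outcomes     = [1, 0, 0, 1, 1]
ai_forecasts = [0.9, 0.2, 0.1, 0.7, 0.8]
human_crowd  = [0.7, 0.4, 0.3, 0.6, 0.6]

print(brier_score(ai_forecasts, outcomes))  # ≈ 0.038 (better calibrated)
print(brier_score(human_crowd, outcomes))   # ≈ 0.132
```

Running many such scored questions across domains, and comparing independently built forecasting AIs on the same questions, is what would generate the public evidence and convergence described above.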

Forecasting with AI During Crises. In times of crisis, such as geopolitical tensions or public health emergencies, AI-powered forecasting can provide nonpartisan insights that improve decision-making processes. By posing crucial questions such as "When will superintelligence be created?," "Will China invade Taiwan this decade?," and "Is this strategy likely to increase the chance of World War III?," we can obtain probabilistic assessments that can aid policy and high-level decision-making.

Empowering Decision-Makers with AI Insights. Equipping leaders at all levels with AI-generated forecasts can improve governance. By offering well-reasoned predictions and outlining the likely consequences of various actions, AI can support informed choices in fast-moving scenarios. Conditional forecasts that illustrate how catastrophe probabilities decrease with specific interventions can mitigate fatalism and encourage proactive measures. While AI contributes to the challenges of misinformation and censorship, it also offers powerful tools to strengthen our information ecosystem and navigate the uncertainties ahead.

Overview of various national security threats and proposed strategic responses.
- Shifting Basis of Power: Competitiveness (domestic AI chip manufacturing).
- Destabilizing Superweapons: Deterrence (MAIM).
- Terrorism: Nonproliferation.
- Unleashed AI Agents: Nonproliferation.
- Erosion of Control: Competitiveness (forecasts and fiduciary duties).
- Loss of Recursion Control: Deterrence (MAIM).

Automation

Automation and Political Stability. As AI systems accelerate the automation of human tasks at an unprecedented scale and pace, societies face the daunting challenge of responding to swift changes in employment. Historical precedents for major transformations in the workforce, such as the Industrial Revolution, unfolded over decades and allowed populations and institutions to adapt. Yet even this more gradual process seems likely to have caused a significant level of disruption and transient unemployment. By contrast, AI-driven automation could occur far more rapidly, with advanced systems soon rivaling or surpassing human performance across a wide range of vocations. Traditional solutions like vocational retraining may prove inadequate if AI capabilities outpace the speed at which large portions of the workforce can be effectively reskilled. Current social safety nets, designed to address episodic or sector-specific unemployment, appear ill-equipped to manage widespread job displacement impacting multiple industries simultaneously.

Uncertain Winners and Losers. As AI displaces large segments of the workforce, the resulting economic outcomes will hinge on how many tasks AIs can soon replace, how well AIs perform them, and the importance of economic bottlenecks. If bottlenecks—such as legal requirements for building factories—are strong, people with the remaining scarce abilities may capture most of the economic gains. But if AIs are highly general-purpose and eliminate bottlenecks, owners of datacenter compute could capture most of the gains instead.
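A toy model makes the bottleneck logic explicit. The sketch below contrasts two stylized production functions, Leontief (fixed proportions) and perfect substitutes; both are illustrative assumptions, not a claim about the actual economy.

```python
def output_with_bottleneck(ai_tasks: float, human_gated_tasks: float) -> float:
    """Leontief production: output is capped by the scarcest input, so whoever
    controls the bottleneck (e.g., legally gated tasks) captures the gains."""
    return min(ai_tasks, human_gated_tasks)

def output_without_bottleneck(ai_tasks: float, human_gated_tasks: float) -> float:
    """Perfect substitutes: AI can replace the gated input entirely, so gains
    flow to whoever owns the abundant input, such as datacenter compute."""
    return ai_tasks + human_gated_tasks

# Scaling AI capacity tenfold changes nothing while a bottleneck binds...
print(output_with_bottleneck(100.0, 10.0), output_with_bottleneck(1000.0, 10.0))        # 10.0 10.0
# ...but translates directly into output once the bottleneck is eliminated.
print(output_without_bottleneck(100.0, 10.0), output_without_bottleneck(1000.0, 10.0))  # 110.0 1010.0
```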

Wealth and Power Distribution. To share some of the benefits of automation, policymakers can weigh options such as a targeted value-added tax on AI services, complemented by rebates, though the exact structure of such a tax will need to be determined in the future. Yet distributing wealth alone can prove fleeting if governments later withhold that wealth. A more durable, long-term approach would also distribute power: states could equip each individual with a unique key tied to a portion of compute, which that citizen alone can activate or lease to others. This arrangement would give citizens leverage in the economy akin to the power laborers currently have to withhold their work, tempering the concentration of wealth and authority that might otherwise arise from the coming automation waves.
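A minimal sketch of such a compute key follows, with invented names and a deliberately simplified lease mechanism; a real scheme would need cryptographic binding of the key to the citizen and to the underlying hardware.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComputeKey:
    """A citizen's unique key tied to a slice of compute, which only the
    holder can activate or lease out, mirroring a laborer's power to
    withhold work."""
    citizen_id: str
    compute_share_flops: float
    leased_to: Optional[str] = None

    def lease(self, lessee: str) -> bool:
        """Grant use of the compute slice; only succeeds if not already leased."""
        if self.leased_to is None:
            self.leased_to = lessee
            return True
        return False

    def withhold(self) -> None:
        """Revoke the lease: the citizen's bargaining lever in the economy."""
        self.leased_to = None

key = ComputeKey(citizen_id="citizen-001", compute_share_flops=1e15)
key.lease("datacenter-coop")  # citizen chooses to put their share to work
key.withhold()                # or pulls it back, exercising economic leverage
```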

AI competitiveness requires an expansion of drone manufacturing, a legal framework that keeps AI agents tethered to human accountability, and an unblinking recognition of how automation can roil the labor market. Most importantly, the dependence on Taiwan for advanced AI chips presents a critical vulnerability. A blockade or invasion may spell the end of the West's advantage in AI. To mitigate this foreseeable risk, Western countries should develop guaranteed supply chains for AI chips. Though this requires considerable investment, it is potentially necessary for national competitiveness.