Artificial intelligence is shaping up to be one of the most impactful technologies of the 21st century — with the potential to transform economies, societies, and security. But what does its rapid development mean for the most enduring and dangerous element in international politics — nuclear deterrence?
In this analysis, Sam Winter-Levy and Nikita Lalvani of the Carnegie Endowment for International Peace examine whether artificial intelligence could undermine the central tenet of nuclear stability: a state’s ability to strike back with devastating force. Could an algorithm-assisted "splendid first strike" render an adversary’s nuclear arsenal useless? And if so, is the world prepared for the consequences?
The rapid development of artificial intelligence in recent years has led many analysts to speculate that it will upend international politics and the military balance of power.
Some have gone so far as to argue, in the words of technologists Dan Hendrycks, Eric Schmidt, and Alexandr Wang, that advanced artificial intelligence systems could "establish complete dominance and control of a state, leaving the fate of rivals at its will".
Artificial intelligence is undoubtedly a transformative technology that will strengthen the economic, political, and military foundations of state power. But the winner of the AI race will not necessarily enjoy undisputed dominance over its main competitors. Nuclear weapons, the most consequential invention of the last century, remain a major obstacle to the radical transformation that AI might otherwise bring about.
As long as nuclear deterrents remain in place, the economic and military advantages accrued by AI will not allow states to fully impose their political preferences on each other. Consider that the U.S. economy is almost 15 times larger than Russia’s and almost 1,000 times larger than North Korea’s, yet Washington struggles to get Moscow or Pyongyang to do what it wants, largely because of their nuclear arsenals.
Some analysts suggest that advances in AI could challenge this dynamic. To undermine nuclear deterrence, AI would have to destroy its central pillar: a state’s ability to respond to a nuclear attack with a devastating nuclear strike, known as a second-strike capability. AI technology could plausibly make it easier for a state to destroy an adversary’s entire nuclear arsenal in a single "splendid first strike" by pinpointing the location of nuclear submarines and mobile launchers.
It could also prevent an adversary from launching a retaliatory strike by disabling command and control networks. And it could strengthen missile defenses so that an adversary can no longer credibly threaten retaliation. If AI could thus help a state escape the prospect of mutually assured destruction, the technology would make that state unrivaled in its ability to threaten and coerce its adversaries—a result consistent with increasingly popular visions of AI-assisted dominance.
But undermining the nuclear balance of power will not be easy. Emerging technologies still face very real limitations in the nuclear realm. Even the most sophisticated AI-powered guidance and sensor systems may struggle to locate a mobile nuclear missile hidden under a bridge, to isolate the signals of a nuclear submarine from the background noise of the ocean, or to orchestrate the simultaneous destruction of hundreds of targets on land, in the air, and at sea—with zero margin for error. And competitors will respond to their adversaries’ use of new technologies with countermoves to protect their own systems, as they have done at every turn since the dawn of the atomic age.
Even if it does not challenge nuclear deterrence, artificial intelligence could encourage mistrust and dangerous actions among nuclear states.
Many of the steps governments could take to protect and enhance their second-strike capabilities risk unsettling their rivals, potentially spurring costly and dangerous arms races.
It is also possible that AI systems could cross a critical threshold and begin improving their capabilities extremely rapidly. If this happens, the advantages they confer on the country that possesses them could become more pronounced and harder for adversaries to counter. Policymakers should watch for such a scenario and facilitate regular communication between AI and nuclear experts.
At the same time, they should take steps to reduce the likelihood of incidents and escalation, including assessing nuclear systems for AI-related vulnerabilities and maintaining channels of communication between nuclear powers. Such steps will help ensure that nuclear stability—not just nuclear deterrence—is preserved in the age of artificial intelligence.
First Strike
Nuclear deterrence depends largely on whether states possess the ability to strike back after a nuclear attack: as long as each of two nuclear powers maintains a credible second-strike capability that can inflict unacceptable damage on its adversary, a first strike by either is suicidal.
This understanding has maintained a relatively stable equilibrium for decades. But second-strike capabilities are not invulnerable. States can eliminate delivery platforms, such as mobile missile launchers and nuclear submarines, as long as they can find them.
The difficulty of finding and disabling these platforms is one of the main obstacles to delivering a successful first strike. The sheer size of China, Russia, the United States, and the Atlantic and Pacific Oceans—the most important arenas of nuclear competition today—makes such a strike difficult to execute.
The emergence of powerful artificial intelligence systems, however, could solve this problem. Militaries equipped with technologies capable of processing and analyzing vast amounts of data could better target their adversaries’ nuclear assets. Consider mobile ground-launched missiles, one of the platforms that underpin Russian and Chinese second-strike capabilities. These missiles, carried on vehicles that can hide under camouflage nets, bridges, or tunnels and move from one hidden location to another, are arguably the most difficult element of Russian and Chinese nuclear forces to eliminate. (Silo-based ballistic missiles, by contrast, are much more vulnerable to attack.)
The improved speed and scale of AI-powered intelligence processing could make it easier to conduct operations against these vehicles. AI systems can sift and fuse data from satellites, reconnaissance aircraft, electronic intelligence intercepts, stealth drones, ground-based sensors, and human intelligence to more effectively detect and track mobile nuclear forces.
At sea, the potential fusion of AI with sensor technology could make the oceans "transparent", allowing governments to track ballistic missile submarines in real time. This is particularly worrisome for the United States, which keeps a much higher percentage of its warheads on submarines than Russia or China does. Artificial intelligence could make it easier to track submarines by automating pattern recognition across vast ocean areas and over long periods of time. It could also help a country hack the systems its adversaries use to track their own weapons.
But even with the help of artificial intelligence, countries can never be fully confident that a splendid first strike would knock out an adversary’s ability to strike back. On land, for example, China and Russia could respond to improvements in American tracking systems with countermeasures of their own. They could invest in anti-satellite weapons and jamming capabilities.
They could adopt old-fashioned, low-tech solutions, such as covering roads with nets or building decoys to multiply the number of targets an attacker would have to hit. They could instruct their launchers to emit fewer signals, making them harder for the United States to track. They could modify the launchers to move faster, enlarging the area that American strikes would have to cover. They could even use their own artificial intelligence systems to insert false information into channels monitored by the American intelligence community.
Even at sea, AI is unlikely to make the oceans completely transparent. Any system will struggle to identify, track, and continuously monitor multiple targets over long distances and against the backdrop of ocean noise, especially as submarines become quieter and oceans become noisier. Submarines remain extremely difficult to detect when submerged at depth and operating at low speeds, owing to the way sound travels underwater, changing ocean conditions, and the inherent noise of the marine environment.
False alarms are common at sea; reliable contact is rare. As on land, major powers can tip the scales in their favor through a variety of countermeasures: they can jam signals, manipulate sensor data, use underwater sensors and unmanned vehicles to detect enemy assets, and operate their own submarines in protected bastions near their shores. Detection will thus remain a matter of probability even with the introduction of artificial intelligence, and states are unlikely to risk a splendid first strike on anything less than a sure bet.
Command and Control
In addition to making it easier to detect and destroy an adversary's nuclear weapons, artificial intelligence could plausibly compromise the nuclear command and control systems that would be needed to launch a counterstrike. Command and control systems are responsible for detecting attacks, reporting them to the appropriate authority, and relaying orders for a retaliatory strike to nuclear forces. These systems must be able to identify a wide range of missiles; assess damage on the ground; send short messages thousands of miles, including deep underwater; and protect the leaders responsible for nuclear decision-making.
States seeking to disable a command and control system must proceed with caution, since overt moves to knock out an adversary's system could be seen as a prelude to a nuclear attack, potentially justifying a preemptive response. Like most elements of a splendid first strike, disabling a command and control system requires, in the words of scholar James Acton, the destruction of "as much of each component as possible with as little warning as possible".
Many parts of the nuclear command and control system are already vulnerable to attack and will become increasingly vulnerable with new technological breakthroughs. Nuclear and conventional strikes can now destroy fixed equipment such as radars and high-frequency transmitters. With improvements in artificial intelligence, meanwhile, it could become easier to track mobile command posts on the ground, as well as more precisely target those in the air.
Satellites that provide early warning of impending nuclear attacks could become increasingly vulnerable to AI-enhanced anti-satellite weapons. And sophisticated AI-enhanced cyber operations could allow states to penetrate adversary command and control networks, disable early warning systems, and disrupt the transmission of orders—a risk that could grow as states modernize their systems, moving from analog and wired capabilities to digital ones.
But eliminating an entire command and control system in one fell swoop will never be easy, even with advanced technology. Countries have gone to great lengths to make their systems resilient, incorporating redundancy and sophisticated countermeasures.
On land, some command bunkers in China, Russia, and the United States are buried at least 700 meters underground in solid rock, deep enough to survive even a direct hit from a large nuclear weapon. In space, every major nuclear power has hundreds or even thousands of satellites in orbit, making it difficult to destroy a country’s entire early warning and communications architecture. In the air, the curvature of the Earth limits the range at which surveillance radar can track airborne command posts. And in cyberspace, a disarming strike would require constant access to an adversary’s systems, risking detection and possible retaliation.
Countries will also go to great lengths to prevent such intrusions, most likely by deploying their own AI-powered defenses and operating multiple secure networks. Given these challenges, it seems unlikely that artificial intelligence will enable the wholesale elimination of command and control systems.
Protect, Protect!
The final fear is that AI could weaken nuclear deterrence by improving missile defenses, thereby minimizing the chance of a successful second strike and making first strikes more tempting. Building a strong defense against nuclear missile attacks has always been an extremely difficult task. Russia and China have few such capabilities, and American defense systems are error-prone, thinly deployed, and unable to fully blunt an attack by a major nuclear power.
These systems must do the nearly impossible: detect a launch, track hundreds of missiles traveling through space at 20 times the speed of sound, estimate their future trajectories, and destroy them with interceptors—all in less than 30 minutes, the approximate flight time of most ground-based missiles traveling between the United States and Russia or China. National security officials liken the process to trying to hit a bullet with another bullet. And the systems need to do all this economically and at scale, so that an attacker cannot overwhelm them with much cheaper missiles.
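A rough back-of-the-envelope check shows how those figures hang together (the 9,000-kilometer flight distance below is an illustrative assumption, not a figure from the text):

\[
v \approx 20 \times 343\ \text{m/s} \approx 6.9\ \text{km/s}, \qquad t = \frac{d}{v} \approx \frac{9{,}000\ \text{km}}{6.9\ \text{km/s}} \approx 1{,}300\ \text{s} \approx 22\ \text{minutes},
\]

comfortably under the 30-minute window, and a reminder of how little time defenders have to detect, track, and intercept.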
Artificial intelligence could improve some of these capabilities. Machine learning algorithms, for example, could quickly analyze data from multiple sensors to distinguish real warheads from sophisticated decoys and speed up decision-making after an adversary fires its missiles. Software advances could make it easier to predict a missile’s trajectory. And through advances in materials science, artificial intelligence could help produce lighter, more agile interceptors that are cheaper to build and more maneuverable in flight.
But none of these developments will happen overnight: developing defense architectures takes years, and U.S. adversaries will not stand idly by and watch. Attackers retain significant advantages: they can launch missiles from unexpected directions, overcome defenses with coordinated salvos using large numbers of decoys, or directly target key defense systems.
In the very unlikely event that missile defenses become so strong as to be insurmountable, states could resort to more creative delivery methods, such as smuggling and pre-positioning small nuclear devices in enemy territory. An AI-enhanced missile defense system would depend on machine learning algorithms that are trained on large, reliable data sets about decoy measures and missile systems—data that U.S. adversaries have every incentive to hide.
Indeed, U.S. adversaries could try to confuse AI algorithms by manipulating missile tests. In short, even AI-enhanced defense systems face physical and economic limitations that sophisticated nuclear forces can exploit.
Take Nothing for Granted
Even in the face of AI-driven technological change, nuclear deterrence is likely to remain robust. That doesn’t mean, however, that AI poses no risk to global nuclear stability. Even if predictions of AI’s power overestimate the technology’s actual capabilities, states may perceive greater threats and take potentially destabilizing actions.
If a state believes that an AI-empowered competitor can more easily detect its missiles, for example, it may decide to build more warheads, deploy more of its weapons on harder-to-detect mobile vehicles, direct those vehicles to move faster, and delegate more launch authority to vehicle crews to reduce the need for long-range communications that could be intercepted.
If states believe their command and control systems are vulnerable to AI-based attacks, they may want to reduce the number of steps required to launch a retaliatory strike. All of these actions increase the likelihood of escalation and accidental launches, especially for states with fewer resources and less experience in managing nuclear weapons, such as India and Pakistan.
In addition to this destabilizing uncertainty, it remains possible that states could develop significantly more powerful AI systems that could threaten nuclear deterrence in ways that cannot yet be predicted, especially if—as some experts suggest is possible—AI research and development becomes fully automated, spurring a sudden increase in capabilities. National security experts should not dismiss this possibility and should continually monitor for evidence of rapid increases in AI capabilities among competitors.
But while artificial intelligence is a powerful asset, it is not magic, and states that seek to use AI to disarm their adversaries will face real physical, practical, and institutional constraints. A state empowered by transformative AI systems would still have to solve a huge number of extremely difficult problems before it could even consider decapitating a nuclear competitor: it would have to integrate advanced AI into sprawling military bureaucracies, no easy task, and figure out how to test those systems in advance—a difficult undertaking, given the need both to avoid triggering a preemptive response and to rehearse thousands of steps with little room for error.
However intelligent they may be, powerful AI systems will not be able to evade the laws of physics. And no tool or capability guarantees that one state can impose its preferences on another: the United States, after all, enjoyed overwhelming military and economic superiority over the Taliban in Afghanistan, yet suffered an unequivocal defeat after two decades of war.
None of this should encourage complacency. Even if nuclear deterrence continues, AI will still transform national security in many ways, including by helping states develop new autonomous weapons and by strengthening offensive and defensive cyber capabilities. These developments will have significant implications, even if they do not give any state the ability to impose its will on its nuclear-armed adversaries.
AI could also democratize access to dangerous capabilities by lowering the barriers for nonstate actors and individuals to build and deploy devastating biological, chemical, and radiological weapons. These are just some of the issues that policymakers will need to address as AI continues to improve.
In the nuclear arena, U.S. officials should not wait to see what AI will achieve. They should ensure that policy processes include AI experts alongside nuclear experts, fostering a dialogue that has been lacking in the national security community. They should conduct rigorous reviews of nuclear systems to check for vulnerabilities that could be exploited by advanced artificial intelligence, especially in cyberspace, and expand the ranks of AI experts in the U.S. government.
They should carefully calibrate any claims about the need for a race to AI or the importance of being the first to develop a decisive wonder weapon, so as not to exacerbate a risky and costly nuclear competition. And through arms control dialogues, they should work to strengthen the significant ethical, political, and legal constraints on a state’s ability to launch a splendid first strike or take other escalatory steps. The system of nuclear deterrence that has held since the end of World War II cannot be taken for granted.
As political rivalries heat up and the world’s great powers race for supremacy in artificial intelligence, it is more important than ever to maintain channels of communication and pathways to reduce the risk of unintended escalation and disaster.