The White House has released a sweeping executive order on artificial intelligence (AI) that is notable in a number of ways. Most significantly, it establishes an early-stage means of regulating the controversial technology, which promises to have a vast impact on our lives.
The new approach is being closely coordinated with the EU and was initially announced by the US administration in July. US President Joe Biden further highlighted the proposal during a September meeting with the President’s Council of Advisors on Science and Technology in San Francisco.
The executive order pushes for a high degree of public-private cooperation and was timed to come out just days before Silicon Valley leaders gather with international government officials in the UK to look at both the dangers and benefits of AI.
Illustration: Mountain People
It also requires detailed assessments — think of drug testing by the US Food and Drug Administration — before specific AI models can be used by the government. The new regulations seek to bolster the cybersecurity aspects of AI and make it easier for brainy technologists — H-1B program candidates — to immigrate to the US.
Most of the key actors in the AI space seem to be on board with the thrust of the new regulations, and companies as varied as chipmaker Nvidia and OpenAI have already made voluntary agreements to regulate the technology along the lines of the executive order. Google is also fully involved, as is Adobe, which makes Photoshop, a key area of concern because of the potential for AI manipulation. The US National Institute of Standards and Technology is to lead the government side in creating a framework for risk assessment and mitigation.
All of these are sensible steps in the right direction, but something is missing, at least at an unclassified level. There are no indications of similar efforts in the sphere of military activity. What should the US and its allies be considering in terms of regulating AI in that sphere, and how can we convince adversaries to be involved?
First, we need to consider the potentially significant military aspects of AI. Like other pivotal moments in military history, such as the introduction of the longbow, the invention of gunpowder, the creation of rifled barrels, the arrival of airplanes and submarines, the development of long-range sensors, cyberwarfare or the advent of nuclear weapons, AI would rearrange the battlefield in significant ways.
For example, AI would allow decisionmakers to instantly survey all of military history and select the best path to victory. Imagine an admiral who could simultaneously be afforded the advice of every successful predecessor, from Lord Nelson at Trafalgar to Admiral Spruance at Midway to Sir Sandy Woodward in the Falklands.
Conversely, AI would also be able to accurately predict logistical and technological failure points. What if the Russians had been able to use AI to correct their glaring faults in logistics and vulnerability to drones in the early days of the war in Ukraine?
Spoofing intelligence collection might become possible, with manipulated images spread instantly throughout social networks and driven directly into the sensors of satellites and radars. AI can also speed up the invention and distribution of new forms of offensive military cyberattacks, overcoming current levels of protection. It could, for example, convince an enemy sensor system that a massive battle fleet was approaching its shores — while the main attack was actually occurring from space.
All of this and much more are coming at an accelerated pace. Look at the timelines: It took a couple of centuries for gunpowder to fulfill its lethal potential. Military aviation went from Kitty Hawk to massive aerial bombing campaigns in less than 40 years. AI is likely to have a dramatic military impact within a decade or even sooner.
As we did with nuclear weapons, we are going to need to develop military guardrails around AI — the equivalent of arms control agreements in the nuclear sphere. In creating these, decisionmakers must consider, among other things, potential prohibitions on lethal decisionmaking by AI (keeping a human in the loop); restrictions on using AI to attack nuclear command and control systems; a Geneva Conventions-like set of rules prohibiting manipulation of or harm to civilian populations using AI-generated images or actions; and limits on the size and scale of AI-driven “swarm” attacks by small, deadly combinations of uncrewed sensors and missiles.
Using the 1972 Cold War “Incidents at Sea” protocols as a model might make sense. The Soviet Union and the US agreed to limit closure distances between ships and aircraft; refrain from simulated attacks or manipulation of fire control radars; exchange honest information about operations under certain circumstances; and take measures to limit damage to civilian vessels and aircraft in the vicinity. The parallels are obviously far from exact, but the idea — having a conversation about reducing the risk of disastrous military miscalculation — makes sense.
A conversation within NATO could be a good beginning, setting a sensible course for the 31 allied nations in terms of military developments in AI. Then comes the hard part — broadening the conversation to include, at a minimum, China and Russia, both of which are attempting to outrace the US in every aspect of AI.
The Biden administration is on the right path with the new executive order. Certainly, we need to get the technology sector on board in terms of considering the risks and benefits of AI, but it is also high time to get the Pentagon cracking on the military version of such regulations, and not just with our friends in the West.
James Stavridis is a Bloomberg Opinion columnist, a retired US Navy admiral, former supreme allied commander of NATO, and dean emeritus of the Fletcher School of Law and Diplomacy at Tufts University. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.