The White House has released a sweeping executive order on artificial intelligence (AI), which is notable in a number of ways. Most significantly, it establishes an early-stage means of regulating the controversial technology, which promises to have a vast impact on our lives.
The new approach is being closely coordinated with the EU and was initially announced by the US administration in July. US President Joe Biden further highlighted the proposal during a September meeting with the President’s Council of Advisors on Science and Technology in San Francisco.
The executive order pushes a high degree of public-private cooperation and was timed to come out just days before Silicon Valley leaders gather with international government officials in the UK to look at both the dangers and benefits of AI.
It also requires detailed assessments — think of drug testing by the US Food and Drug Administration — before specific AI models can be used by the government. The new regulations seek to bolster the cybersecurity aspects of AI and make it easier for brainy technologists — H-1B program candidates — to immigrate to the US.
Most of the key actors in the AI space seem to be on board with the thrust of the new regulations, and companies as varied as chipmaker Nvidia and OpenAI have already made voluntary agreements to regulate the technology along the lines of the executive order. Google is also fully involved, as is Adobe, the maker of Photoshop, which is a key area of concern because of the potential for AI-driven image manipulation. The US National Institute of Standards and Technology is to lead the government side in creating a framework for risk assessment and mitigation.
All of these are sensible steps in the right direction, but something is missing, at least at the unclassified level. There are no indications of similar efforts in the sphere of military activity. What should the US and its allies be considering in terms of regulating AI in that sphere, and how can we convince adversaries to take part?
First, we need to consider the potentially significant military aspects of AI. Like other pivotal moments in military history, such as the introduction of the longbow, the invention of gunpowder, the creation of rifled barrels, the arrival of airplanes and submarines, the development of long-range sensors, the rise of cyberwarfare or the advent of nuclear weapons, AI would rearrange the battlefield in significant ways.
For example, AI would allow decisionmakers to instantly survey all of military history and select the best path to victory. Imagine an admiral who could simultaneously be afforded the advice of every successful predecessor, from Lord Nelson at Trafalgar to Admiral Spruance at Midway to Sir Sandy Woodward in the Falklands.
Conversely, AI would also be able to accurately predict logistical and technological failure points. What if the Russians had been able to use AI to correct their glaring faults in logistics and vulnerability to drones in the early days of the Ukraine war?
It might become possible to spoof intelligence collection through manipulated images, spread instantly across social networks and driven directly into the sensors of satellites and radars. AI could also speed up the invention and distribution of new forms of offensive military cyberattack, overcoming current levels of protection. It could, for example, convince an enemy sensor system that a massive battle fleet was approaching its shores while the main attack was actually coming from space.
All of this and much more is coming at an accelerated pace. Look at the timelines: It took a couple of centuries for gunpowder to fulfill its lethal potential. Military aviation went from Kitty Hawk to massive aerial bombing campaigns in less than 40 years. AI is likely to have a dramatic military impact within a decade, or even sooner.
As we did with nuclear weapons, we are going to need to develop military guardrails around AI, akin to arms control agreements in the nuclear sphere. In creating these, decisionmakers must consider, among other things, potential prohibitions on lethal decisionmaking by AI (keeping a human in the loop); restrictions on using AI to attack nuclear command and control systems; a Geneva Conventions-like set of rules prohibiting the manipulation or harming of civilian populations using AI-generated images or actions; and limits on the size and scale of AI-driven “swarm” attacks by small, deadly combinations of uncrewed sensors and missiles.
Using the 1972 Cold War “Incidents at Sea” protocols as a model might make sense. The Soviet Union and the US agreed to limit closure distances between ships and aircraft; refrain from simulated attacks or manipulation of fire control radars; exchange honest information about operations under certain circumstances; and take measures to limit damage to civilian vessels and aircraft in the vicinity. The parallels are obviously far from exact, but the idea — having a conversation about reducing the risk of disastrous military miscalculation — makes sense.
Starting a conversation within NATO could be a good beginning, setting a sensible course for the 32 allied nations in terms of military developments in AI. Then comes the hard part: broadening the conversation to include, at a minimum, China and Russia, both of which are attempting to outrace the US in every aspect of AI.
The Biden administration is on the right path with the new executive order. Certainly, we need to get the technology sector on board in considering the risks and benefits of AI, but it is also high time to get the Pentagon cracking on a military version of such regulations, and that effort cannot be limited to our friends in the West.
James Stavridis is a Bloomberg Opinion columnist, a retired US Navy admiral, former supreme allied commander of NATO, and dean emeritus of the Fletcher School of Law and Diplomacy at Tufts University. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.