The US innovates, Europe regulates. Just as the world is starting to come to grips with OpenAI, whose boss, Sam Altman, has leapfrogged the competition and pleaded for global rules, the EU has responded with the Artificial Intelligence Act, its own bid for artificial-intelligence (AI) superpower status by being the first to set minimum standards. It was passed on Wednesday by the European Parliament.
Yet we are a long way from the deceptively simple world of Isaac Asimov's robot stories, in which sentient machines delivered the benefits of powerful "positronic brains" under just three rules: do not harm humans, obey humans and defend your own existence. AI is clearly too important not to regulate thoroughly, but the EU has its work cut out to reduce the act's complexity while promoting innovation.
The act has some good ideas focused on transparency and trust: Chatbots will have to declare whether they were trained on copyrighted material, deepfakes will have to be labeled as such, and a raft of newly added obligations for the kinds of models used in generative AI will require providers to catalogue their datasets and take responsibility for how they are used.
Lifting the lid on opaque machines that process huge swathes of human output is the right idea, and a step toward treating data with more dignity. As Dragos Tudorache, co-rapporteur of the law, said recently, the purpose is to promote "trust and confidence" in a technology that has attracted huge amounts of investment and excitement, yet has also produced some very dark failures.
Self-regulation is not an option — neither is “running into the woods” and doing nothing out of fear that AI could wipe out humanity one day, he said.
However, the act also carries a lot of complexity, and runs the paradoxical risk of setting the bar too high to promote innovation, yet not high enough to avoid unpredictable outcomes. Its main approach is to categorize AI applications into buckets of risk, from minimal (spam filters, video games) to high (workplace recruitment) to unacceptable (real-time facial recognition).
That makes sense from a product-safety point of view, with providers of AI systems expected to meet rules and requirements before putting their products on the market. Yet the high-risk category is a broad one, and the downstream chain of responsibility in an application like ChatGPT shows how the technology can blur product-safety frameworks. When a lawyer unwittingly relies on AI to craft a motion full of made-up case law, are they using the product as intended or misusing it?
It is also not clear exactly how the act would interact with existing data-privacy laws such as the EU's General Data Protection Regulation (GDPR), which Italy used as justification for a temporary block on ChatGPT.
Moreover, while more transparency on copyright-protected training data makes sense, it could conflict with past copyright exceptions granted for data mining back when AI was viewed less nervously by creative industries.
All this means there is a real possibility that the actual outcome of the act could be to entrench the EU's dependency on big US tech firms, from Microsoft Corp to Nvidia Corp. European companies are champing at the bit to tap the potential productivity benefits of AI, but the large incumbent providers are likely to be best-positioned to absorb the combination of estimated upfront compliance costs of at least US$3 billion and non-compliance fines of up to 7 percent of global revenue.
Adobe Inc has already offered to legally compensate businesses if they are sued for copyright infringement over any images its Firefly tool creates, Fast Company said. Some firms might take the calculated risk of avoiding the EU entirely: Alphabet Inc has yet to make its chatbot Bard available there.
The EU has a lot of fine-tuning to do as final negotiations begin on the act, which might not come into force until 2026. Countries such as France that are nervous about losing more innovation ground to the US are likely to push for more exemptions for smaller businesses.
Bloomberg Intelligence analyst Tamlin Bason sees a possible "middle ground" on restrictions. Any such compromise should be accompanied by initiatives to foster innovation, such as ecosystems linking universities, start-ups and investors.
There should also be more global coordination at a time when angst around AI is widespread. The G7's new Hiroshima AI Process looks like a useful forum to discuss issues such as intellectual property rights.
Perhaps one bit of good news is that AI is not about to destroy all jobs held by human compliance officers and lawyers. Technology consultant Barry Scannell said companies would be looking at hiring AI officers and drafting AI impact assessments, similar to what happened in the aftermath of the GDPR. Reining in the robots requires more human brainpower — perhaps one twist you would not get in an Asimov story.
Lionel Laurent is a Bloomberg Opinion columnist covering digital currencies, the EU and France. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.