Beijing’s vigorous push for chatbots with core socialist values is the latest roadblock in its effort to catch up to the US in the race for artificial intelligence (AI) supremacy. It is also a timely reminder for the world that a chatbot cannot hold its own political beliefs, just as it cannot be expected to make human decisions.
It is easy for finger-wagging Western observers to seize on recent reporting that China is forcing companies to put their AI models through intensive political tests as further evidence that the government’s censorship regime will kneecap AI development. The arduous process does add a painstaking layer of work for tech firms, and restricting the freedom to experiment can impede innovation. The difficulty of creating AI models infused with specific values would likely hurt China’s efforts to build chatbots as sophisticated as those in the US in the short term. However, it also exposes a broader misunderstanding of the realities of AI, despite a global arms race and a mountain of industry hype propelling its growth.
Since the launch of OpenAI’s ChatGPT in late 2022 set off a global generative AI frenzy, there has been a tendency, from the US to China, to anthropomorphize this emerging technology. However, treating AI models like humans, and expecting them to act that way, is a dangerous path to forge for a technology still in its infancy. China’s misguided approach should serve as a wake-up call.
Illustration: Tania Chou
Beijing’s AI ambitions are already under severe threat from all-out US efforts to bar access to advanced semiconductors and chipmaking equipment. However, Chinese Internet regulators are also trying to impose political restrictions on the outputs from homegrown AI models, ensuring their responses do not go against Chinese Communist Party ideals or speak ill of leaders like Chinese President Xi Jinping (習近平). Companies are restricting certain phrases in the training data, which can limit overall performance and the ability to spit out accurate responses.
Moreover, Chinese AI developers are already at a disadvantage. There is far more English-language text online than Chinese text that can be used as training data, not even counting what is cut off by the Great Firewall. The black-box nature of large language models (LLMs) also makes censoring outputs inherently challenging. Some Chinese AI companies are now building a separate layer onto their chatbots to replace problematic responses in real time.
However, it would be unwise to dismiss all this as something that will simply restrict China’s tech prowess in the long run.
Beijing wants to be the global AI leader by 2030, and it is throwing the entire might of the state and private sector behind this effort. The government reiterated its commitment to developing the high-tech industry during last week’s Third Plenum, and in racing to create AI their own way, Chinese developers are also forced to approach LLMs in novel ways. Their research could potentially sharpen AI tools for the harder tasks that such systems have traditionally struggled with.
Tech companies in the US have spent years trying to control the outputs from AI models and ensure they do not hallucinate or spew offensive responses — or, in the case of Elon Musk, ensure responses are not too “woke.” Many tech giants are still figuring out how to implement and control these types of guardrails.
Earlier this year, Alphabet Inc’s Google paused its AI image generator after it created historically inaccurate depictions of people of color in place of white people. An early Microsoft AI chatbot dubbed “Tay” was infamously shut down in 2016 after it was exploited on Twitter and started spitting out racist and hateful comments. As AI models are trained on gargantuan amounts of text scraped from the Internet, their responses risk perpetuating the racism, sexism and myriad other dark features baked into discourse there.
Companies like OpenAI have since made great strides in reducing inaccuracies, limiting biases and improving the overall outputs from chatbots — but these tools are still just machines trained on the work of humans. They can be re-engineered and tinkered with, or programmed not to use racial slurs or talk politics, but it is impossible for them to grasp morals or their own political ideologies.
China’s push to ensure chatbots toe the party line may be more extreme than the restrictions US companies are self-imposing on their AI tools. However, these efforts from different sides of the globe reveal a profound misunderstanding of how we should collectively approach AI.
The world is pouring vast sums of money and immense amounts of energy into creating conversational chatbots.
Instead of trying to assign human values to bots and use more resources to make them sound more human, we should start asking how they can be used to help humans.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.