Beijing’s rigorous push for chatbots with core socialist values is the latest roadblock in its effort to catch up to the US in a race for artificial intelligence (AI) supremacy. It is also a timely reminder for the world that a chatbot cannot have its own political beliefs, the same way it cannot be expected to make human decisions.
It is easy for finger-wagging Western observers to seize on recent reports that China is forcing companies to put their AI models through intensive political testing as further evidence that the government’s censorship regime will kneecap its AI development. The arduous process adds a painstaking layer of work for tech firms, and restricting the freedom to experiment can impede innovation. The difficulty of creating AI models infused with specific values will likely hurt China’s short-term efforts to create chatbots as sophisticated as those in the US. However, it also exposes a broader misunderstanding of the realities of AI, despite a global arms race and a mountain of industry hype propelling its growth.
Since the launch of OpenAI’s ChatGPT in late 2022 set off a global generative AI frenzy, there has been a tendency, from the US to China, to anthropomorphize this emerging technology. However, treating AI models like humans, and expecting them to act that way, is a dangerous path for a technology still in its infancy. China’s misguided approach should serve as a wake-up call.
Illustration: Tania Chou
Beijing’s AI ambitions are already under severe threat from all-out US efforts to bar access to advanced semiconductors and chipmaking equipment. At the same time, Chinese Internet regulators are trying to impose political restrictions on the outputs of homegrown AI models, ensuring their responses do not contradict Chinese Communist Party ideals or speak ill of leaders such as Chinese President Xi Jinping (習近平). Companies are stripping certain phrases from their training data, which can limit overall performance and the models’ ability to produce accurate responses.
Moreover, Chinese AI developers are already at a disadvantage. There is far more English-language text online than Chinese that can be used as training data, even before counting what is cut off by the Great Firewall. The black-box nature of large language models (LLMs) also makes censoring outputs inherently challenging. Some Chinese AI companies are now building a separate layer onto their chatbots to replace problematic responses in real time.
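Conceptually, such a layer sits between the model and the user, checking each draft reply before it is released. A minimal sketch in Python of this kind of post-generation filter, where the blocklist terms and fallback message are hypothetical placeholders, not drawn from any real system:

```python
# Illustrative sketch of a post-generation filter layer: the model's
# draft reply is screened against a blocklist before it reaches the user.
BLOCKLIST = {"forbidden phrase", "sensitive topic"}   # hypothetical terms
FALLBACK = "Let's talk about something else."         # canned replacement

def filter_response(draft: str) -> str:
    """Return the draft unless it contains a blocked phrase."""
    lowered = draft.lower()
    if any(term in lowered for term in BLOCKLIST):
        return FALLBACK
    return draft
```

Real systems are far more elaborate, often using a second classifier model rather than keyword matching, but the basic shape is the same: intercept, inspect, and swap out the response in real time.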
However, it would be unwise to dismiss all this as something that will simply hobble China’s technological prowess in the long run.
Beijing wants to be the global AI leader by 2030, and is throwing the entire might of the state and private sector behind this effort. The government reiterated its commitment to develop the high-tech industry during last week’s Third Plenum, and in racing to create AI their own way, Chinese developers are also forced to approach LLMs in novel ways. Their research could potentially sharpen AI tools for harder tasks that the technology has traditionally struggled with.
Tech companies in the US have spent years trying to control the outputs from AI models and ensure they do not hallucinate or spew offensive responses — or, in the case of Elon Musk, ensure responses are not too “woke.” Many tech giants are still figuring out how to implement and control these types of guardrails.
Earlier this year, Alphabet Inc’s Google paused its AI image generator after it created historically inaccurate depictions of people of color in place of white people. An early Microsoft AI chatbot dubbed “Tay” was infamously shut down in 2016 after it was exploited on Twitter and started spitting out racist and hateful comments. As AI models are trained on gargantuan amounts of text scraped from the Internet, their responses risk perpetuating the racism, sexism and myriad other dark features baked into discourse there.
Companies like OpenAI have since made great strides in reducing inaccuracies, limiting biases and improving the overall outputs from chatbots — but these tools are still just machines trained on the work of humans. They can be re-engineered and tinkered with, or programmed not to use racial slurs or talk politics, but it is impossible for them to grasp morals or their own political ideologies.
China’s push to ensure chatbots toe the party line may be more extreme than the restrictions US companies are self-imposing on their AI tools. However, these efforts from different sides of the globe reveal a profound misunderstanding of how we should collectively approach AI.
The world is pouring vast sums of money and immense amounts of energy into creating conversational chatbots.
Instead of trying to assign human values to bots and use more resources to make them sound more human, we should start asking how they can be used to help humans.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.