The meteoric rise of ChatGPT and GPT-4 has not only set off a new round of technological innovation and business competition centered on generative artificial intelligence (AI), but has also rekindled intense debate about what artificial general intelligence is and whether ChatGPT qualifies as one.
The mind-boggling advancement of GPT-4 over ChatGPT in just four months has prompted some experts to consider whether generative AI technologies might harm society or even humanity.
Some experts have demanded that governments regulate generative AI in the same way they do with technologies such as nuclear fission and human cloning.
Having led the world in safeguarding basic freedoms and human rights, the EU has spearheaded the effort to address regulatory issues surrounding generative AI. So far, it has focused mainly on how to protect personal privacy and reputations from infringement, and how to require generative AI companies to commercially license the training data they trawl from the Internet to train their models.
Last month, China announced regulatory requirements for domestic generative AI companies. Questions and prompts submitted by users to generative AI services cannot, by default, be used for training without explicit permission, and content produced by generative AI services should reflect the core values of Chinese socialism and cannot be used to subvert the government.
Lawmakers in the US have also recently had intensive discussions on how to regulate the technology, but their focus has been on how to ensure user safety, how to prevent generative AI from being weaponized by criminals and how to build sufficient guardrails to prevent it from destroying human civilization.
Although the regulation of generative AI has multiple facets, perhaps the thorniest issue is ensuring it never harms society. The concern is rooted mainly in the fact that generative AI has surpassed the capabilities of average people, yet its “explainability,” or interpretability, is astonishingly poor.
Technically, there are three levels of explainability. An AI technology has first-level explainability if it can clearly pinpoint the elements of an input to its model that have the greatest effect on the corresponding output.
For example, an AI model that evaluates loan applications has first-level explainability if it can point out the factors in a loan application that most affect an applicant’s outcome as produced by the model.
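First-level explainability of this kind can be sketched with a toy example. The scoring function, the factor names and the weights below are all invented for illustration, and the one-at-a-time perturbation probe is just one simple way to find the most influential input; nothing here reflects any real lending model:

```python
# Toy sketch of first-level explainability: probe a black-box scoring
# function by perturbing one input factor at a time and seeing which
# perturbation moves the output the most.

def loan_model(income, debt_ratio, late_payments):
    """Stand-in black box; factors and weights are invented for this sketch."""
    return 0.5 * income / 100_000 - 0.3 * debt_ratio - 0.1 * late_payments

def most_influential_factor(model, application, delta=0.01):
    """Return the input factor whose small relative change shifts the score most."""
    base = model(**application)
    effects = {}
    for name, value in application.items():
        perturbed = dict(application, **{name: value * (1 + delta)})
        effects[name] = abs(model(**perturbed) - base)
    return max(effects, key=effects.get)

applicant = {"income": 80_000, "debt_ratio": 0.4, "late_payments": 2}
print(most_influential_factor(loan_model, applicant))  # → income
```

For this applicant, income dominates the score, so a model with first-level explainability could report it as the decisive factor in the outcome.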
An AI technology has second-level explainability if its underlying complex mathematical model can be distilled into an abstract representation, a combination of intuitive features and high-level “if-then-else” rules, that is comprehensible to humans.
For example, an AI model that evaluates loan applications could be abstracted as follows: It computes the applicant’s overall eligibility score as a weighted sum of the applicant’s annual income, the probability of on-time credit card and mortgage payments, and the expected percentage increase in the value of the applicant’s house.
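The weighted-sum abstraction described above can be written down as a minimal sketch. All feature names, weights and thresholds here are invented for illustration; the point is only that the distilled form, unlike the original model, is small enough for a human to audit:

```python
# A second-level abstraction: a weighted sum of intuitive features
# plus simple if-then-else rules, comprehensible at a glance.
# Weights and thresholds are made up for this sketch.

WEIGHTS = {
    "annual_income": 0.4,          # each feature normalized to [0, 1]
    "on_time_payment_prob": 0.35,
    "house_price_growth": 0.25,
}

def eligibility_score(features):
    """Weighted sum of the intuitive features."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

def decision(features, threshold=0.6):
    """High-level if-then-else rules over the score."""
    score = eligibility_score(features)
    if score >= threshold:
        return "approve"
    elif score >= threshold - 0.1:
        return "manual review"
    else:
        return "decline"

applicant = {"annual_income": 0.7, "on_time_payment_prob": 0.9, "house_price_growth": 0.3}
print(decision(applicant))  # → approve
```

A loan officer can read this abstraction and check each rule, which is precisely what the opaque original model does not allow.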
The third level of explainability concerns a thorough understanding of how the underlying model works, and of what it can and cannot do when pushed to the limit. This level is required to ensure the underlying model contains no devious logic or mechanisms that could produce catastrophic outputs for specific inputs.
For example, when asked how to win a car race, such an AI might propose weakening the competition by staging accidents that physically harm opponents.
No existing generative AI technologies, including ChatGPT, have even first-level explainability.
The reason ChatGPT’s explainability is so poor is that its creators do not know why, in its current form, it performs so well across such a diverse set of natural language processing tasks.
It is therefore impossible for them to estimate how ChatGPT-like technologies would behave after receiving orders of magnitude more training in five to 10 years.
Imagine one day that ChatGPT does most of the writing and reading of documents in offices and publications, and can determine that the quality of its work is significantly higher than that produced by average humans.
In addition, from the research it reads, ChatGPT can enhance the training algorithms used to generate its foundational language models, and decide to “grow” itself by creating more powerful language models without human involvement.
What would ChatGPT choose to do with its human users when it “feels” more self-sufficient and becomes increasingly impatient with those that are clearly inferior?
In a survey of elite machine-learning experts released last year, 48 percent estimated that AI has at least a 10 percent chance of having a devastating effect on humanity.
However, despite such a high estimated probability of an existential threat, and under fierce commercial and geopolitical competitive pressure, major AI companies’ efforts to advance the frontier of AI technology, as opposed to its explainability, thunder on without any sign of relenting or pausing for introspection.
If governments worldwide put together a set of regulations and intervene as soon as possible, they could at least push AI companies to increase their focus on explainability, hopefully returning AI development to a healthier, safer and more sustainable path.
Chiueh Tzi-cker is a professor in the Institute of Information Security at National Tsing Hua University.