The meteoric rise of ChatGPT and GPT-4 has not only set off a new round of technological innovation and business competition centered on generative artificial intelligence (AI), but has also rekindled intense debate about what artificial general intelligence is and whether ChatGPT qualifies as one.
The mind-boggling advancement of GPT-4 over ChatGPT in just four months has prompted some experts to consider whether generative AI technologies might harm society or even humanity.
Some experts have demanded that governments regulate generative AI in the same way they do technologies such as nuclear fission and human cloning.
Having led the world in safeguarding basic freedoms and human rights, the EU has spearheaded an effort to address regulatory issues surrounding generative AI. So far, it has focused mainly on how to protect personal privacy and reputations from infringement, and how to require generative AI companies to commercially license the training data they trawl from the Internet to train their AI models.
Last month, China announced regulatory requirements for domestic generative AI companies. Questions and prompts submitted by users to generative AI services, by default, cannot be used for training without explicit permission to do so, and content produced by generative AI services should reflect the core values of Chinese socialism and cannot be used to subvert the government.
Lawmakers in the US have also recently had intensive discussions on how to regulate the technology, but their focus has been on how to ensure user safety, how to prevent generative AI from being weaponized by criminals and how to build sufficient guardrails to prevent it from destroying human civilization.
Although the regulation of generative AI has multiple facets, perhaps the thorniest issue is ensuring that it never harms society. This concern is mainly rooted in the fact that generative AI has surpassed the capabilities of average people, and yet its “explainability,” or interpretability, is astonishingly poor.
Technically, there are three levels of explainability. An AI technology has first-level explainability if it can clearly pinpoint the elements of an input to its model that have the most effect on the corresponding output.
For example, an AI model that evaluates loan applications has first-level explainability if it can point out the factors in a loan application that most affect an applicant’s outcome as produced by the model.
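This kind of first-level attribution can be sketched with a toy example. The model and its weights below are entirely hypothetical, and the perturbation approach (zeroing out one factor at a time) is just one simple way to estimate each factor’s effect; it is not how any particular lender or AI vendor works.

```python
# Sketch of first-level explainability via perturbation: for a
# (hypothetical) loan-scoring model, measure how much each input
# factor shifts the output when that factor alone is zeroed out.

def score(application: dict) -> float:
    # Hypothetical scoring model: a weighted sum of normalized factors.
    weights = {
        "annual_income": 0.5,
        "on_time_payment_rate": 0.3,
        "collateral_value": 0.2,
    }
    return sum(weights[k] * application.get(k, 0.0) for k in weights)

def attribute(application: dict) -> dict:
    # First-level explainability: the effect of each factor is the
    # change in the score when that factor is removed (set to zero).
    base = score(application)
    effects = {}
    for factor in application:
        perturbed = dict(application, **{factor: 0.0})
        effects[factor] = base - score(perturbed)
    return effects

app = {"annual_income": 0.8, "on_time_payment_rate": 0.9,
       "collateral_value": 0.4}
print(attribute(app))  # annual_income has the largest effect
```

A model with first-level explainability could hand an applicant exactly this kind of breakdown: which factors drove the decision, and by how much.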
An AI technology has second-level explainability if it can distill the underlying complex mathematical model into an abstract representation, a combination of intuitive features and high-level “if-then-else” rules, that is comprehensible to humans.
For example, an AI model that evaluates loan applications could be abstracted as follows: It computes the applicant’s overall eligibility score as a weighted sum of the applicant’s annual income, their probability of on-time payment for credit cards and housing mortgage, and the expected percentage increase in the price of the house they own.
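The abstraction described above can be written down directly. The weights, thresholds, and normalized [0, 1] inputs below are all invented for illustration; the point is only that the distilled form, a weighted sum feeding an if-then-else rule, is something a human auditor can read and check.

```python
# Sketch of a second-level abstraction: an opaque model distilled
# into a weighted sum of intuitive features plus a human-readable
# "if-then-else" decision rule. All numbers are hypothetical.

def eligibility_score(annual_income: float,
                      on_time_payment_rate: float,
                      house_price_growth: float) -> float:
    # Weighted sum of the three intuitive features (each in [0, 1]).
    return (0.5 * annual_income
            + 0.3 * on_time_payment_rate  # credit cards and mortgage
            + 0.2 * house_price_growth)   # expected price increase

def decide(annual_income: float,
           on_time_payment_rate: float,
           house_price_growth: float) -> str:
    # The high-level rule a human can audit directly.
    s = eligibility_score(annual_income, on_time_payment_rate,
                          house_price_growth)
    if s >= 0.6:
        return "approve"
    elif s >= 0.4:
        return "manual review"
    else:
        return "decline"
```

Unlike the opaque model it summarizes, every branch of this rule can be inspected, debated, and corrected, which is precisely what second-level explainability buys.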
The third level of explainability of an AI technology concerns a thorough understanding of how the underlying model works, and what it can and cannot do when pushed to the limit. This level of explainability is required to ensure there is no underlying model containing devious logic or mechanisms that could produce catastrophic outputs with specific inputs.
For example, a model with such devious logic, when asked how to win a car race, might create scenarios that require weakening the competition by staging accidents that physically harm opponents.
No existing generative AI technologies, including ChatGPT, have even first-level explainability.
ChatGPT’s explainability is so poor because even its creators do not know why, in its current form, it is so powerful across such a diverse set of natural language processing tasks.
It is therefore impossible for them to predict how ChatGPT-like technologies would behave after receiving orders of magnitude more training in five to 10 years.
Imagine that one day ChatGPT does most of the writing and reading of documents in offices and publications, and determines that the quality of its work is significantly higher than that produced by average humans.
In addition, from the research it reads, ChatGPT can enhance the training algorithms used to generate its foundational language models, and decide to “grow” itself by creating more powerful language models without human involvement.
What would ChatGPT choose to do with its human users when it “feels” more self-sufficient and becomes increasingly impatient with those it deems clearly inferior?
In a survey of elite machine-learning experts released last year, 48 percent estimated that AI has a 10 percent or greater chance of having a devastating effect on humanity.
However, despite such a high estimated probability of an existential threat, and under fierce commercial and geopolitical competition, major AI companies’ work to advance the frontier of AI technology, as opposed to its explainability, thunders on without any sign of relenting or pausing for introspection.
If governments worldwide could put together a set of regulations and intervene as soon as possible, it could at least influence AI companies to increase their focus on explainability, hopefully returning the development of AI technology to a healthier, safer and more sustainable path.
Chiueh Tzi-cker is a professor in the Institute of Information Security at National Tsing Hua University.