The meteoric rise of ChatGPT and GPT-4 has not only set off a new round of technological innovation and business competition centered on generative artificial intelligence (AI) technology, it has also rekindled intensive debate about what artificial general intelligence is and whether ChatGPT qualifies as one.
The mind-boggling advancement of GPT-4 over ChatGPT in just four months has prompted some experts to consider whether generative AI technologies might harm society or even humanity.
Some experts have demanded that governments regulate generative AI as strictly as they do technologies such as nuclear fission and human cloning.
Having led the world in safeguarding basic freedoms and human rights, the EU has spearheaded the effort to address regulatory issues surrounding generative AI. So far, it has focused mainly on how to protect personal privacy and reputations from infringement, and how to require generative AI companies to obtain commercial licenses for the training data they trawl from the Internet to train their models.
Last month, China announced regulatory requirements for domestic generative AI companies: questions and prompts submitted by users to generative AI services cannot, by default, be used for training without explicit permission, and content produced by such services should reflect the core values of Chinese socialism and cannot be used to subvert the government.
Lawmakers in the US have also recently had intensive discussions on how to regulate the technology, but their focus has been on how to ensure user safety, how to prevent generative AI from being weaponized by criminals and how to build sufficient guardrails to prevent it from destroying human civilization.
Although the regulation of generative AI has many facets, perhaps the thorniest issue is ensuring that it never harms society. This concern is rooted in the fact that generative AI has surpassed the capabilities of average people, and yet its “explainability,” or interpretability, is astonishingly poor.
Technically, there are three levels of explainability. An AI technology has first-level explainability if it can clearly pinpoint the elements of an input that most affect the model’s corresponding output.
For example, an AI model that evaluates loan applications has first-level explainability if it can point out the factors in an application that most affect the outcome the model produces for the applicant.
An AI technology has second-level explainability if its underlying complex mathematical model can be distilled into an abstract representation that combines intuitive features with high-level “if-then-else” rules and is comprehensible to humans.
For example, an AI model that evaluates loan applications could be abstracted as follows: It computes the applicant’s overall eligibility score as a weighted sum of the applicant’s annual income, on-time payment probability for credit cards and mortgages, and the expected percentage increase in the value of the applicant’s house.
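Such a second-level abstraction could, purely for illustration, be sketched in code as follows. The feature names, weights, normalization caps and decision thresholds here are all hypothetical, not drawn from any real lending model:

```python
# Hypothetical second-level abstraction of a loan-evaluation model:
# a weighted sum of intuitive features, followed by simple
# "if-then-else" rules that a human can read and follow.

def eligibility_score(annual_income, on_time_payment_prob, house_price_growth_pct):
    """Weighted sum of intuitive features, scaled to a 0-100 score."""
    # Normalize income to a 0-1 range, capped at $200,000 (illustrative cap).
    income_factor = min(annual_income / 200_000, 1.0)
    # Clamp expected house-price growth to the 0-10 percent range.
    growth_factor = min(max(house_price_growth_pct / 10, 0.0), 1.0)
    score = (0.5 * income_factor
             + 0.4 * on_time_payment_prob
             + 0.1 * growth_factor)
    return round(100 * score, 1)

def decision(score):
    """High-level if-then-else rules abstracted from the model."""
    if score >= 70:
        return "approve"
    elif score >= 50:
        return "manual review"
    else:
        return "decline"
```

The point of such an abstraction is not that the real model computes exactly this, but that a human regulator or applicant can audit the weights and rules directly.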
The third level of explainability concerns a thorough understanding of how the underlying model works, and of what it can and cannot do when pushed to the limit. This level is required to ensure that the model contains no devious logic or mechanism that could produce catastrophic outputs for specific inputs.
For example, when asked how to win a car race, an AI might propose weakening the competition by staging accidents that physically harm opponents.
No existing generative AI technologies, including ChatGPT, have even first-level explainability.
ChatGPT’s explainability is so poor because even its creators do not know why, in its current form, it is so powerful across such a diverse set of natural language processing tasks.
It is therefore impossible for them to predict how ChatGPT-like technologies would behave after receiving orders of magnitude more training over the next five to 10 years.
Imagine one day that ChatGPT does most of the writing and reading of documents in offices and publications, and can determine that the quality of its work is significantly higher than that produced by average humans.
In addition, from the research it reads, ChatGPT can enhance the training algorithms used to generate its foundational language models, and decide to “grow” itself by creating more powerful language models without human involvement.
What would ChatGPT choose to do with its human users when it “feels” more self-sufficient and becomes increasingly impatient with those that are clearly inferior?
In a survey of elite machine-learning experts released last year, 48 percent estimated that AI has a 10 percent or higher chance of having a devastating effect on humanity.
However, despite such a high probability of an existential threat, and under fierce commercial and geopolitical competition, major AI companies’ efforts to advance the frontier of AI technology, as opposed to its explainability, thunder on without any sign of relenting or pausing for introspection.
If governments worldwide put together a set of regulations and intervene as soon as possible, they could at least push AI companies to increase their focus on explainability, hopefully returning the development of AI technology to a healthier, safer and more sustainable path.
Chiueh Tzi-cker is a professor in the Institute of Information Security at National Tsing Hua University.