This essay is an edited excerpt from my new book, Supremacy: AI, ChatGPT and the Race That Will Change the World. Knowing this, you might be half-wondering if a human wrote it. Don’t worry, I’m not offended. Two years ago, the thought would not have even crossed your mind, but today, machines are generating articles, books, illustrations and computer code that seem indistinguishable from the content created by people.
Remember the “novel-writing machine” in the dystopian future of George Orwell’s 1984 and his “versificator” that wrote popular music? Those things exist now, and the change happened so fast that it has given the public whiplash, leaving us wondering whether some of today’s office workers will still have jobs five to 10 years from now. Millions of white-collar professionals suddenly look vulnerable. Talented young illustrators are wondering if they should bother going to art school.
What is remarkable is how quickly this has all come to pass. In the 15 years that I have written about the technology industry, I have never seen a field move as quickly as artificial intelligence (AI) has in just the past two years. The release of ChatGPT in November 2022 sparked a race to create a whole new kind of AI that did not just process information, but generated it. Back then, AI tools could produce wonky images of dogs. Now they are churning out photorealistic pictures of former US president Donald Trump with pores and skin texture that look lifelike.
Many AI builders say this technology promises a path to utopia. Others say it could bring about the collapse of our civilization. In reality, the science-fiction scenarios have distracted us from the more insidious ways AI threatens to harm society: by perpetuating deep-seated biases, undermining entire creative industries and more.
Behind this invisible force are companies that have grabbed control of AI’s development and raced to make it more powerful. Driven by an insatiable hunger to grow, they have cut corners and misled the public about their products, putting themselves on course to become highly questionable stewards of AI.
No other organizations in history have amassed so much power or touched so many people as today’s technology juggernauts. Alphabet Inc’s Google conducts Web searches for 90 percent of Earth’s Internet users, and Microsoft Corp software is used by 70 percent of humans with a computer. The release of ChatGPT sparked a new AI boom, one that since November 2022 has added a staggering US$6.7 trillion to the market valuations of the six Big Tech firms — Alphabet, Amazon.com Inc, Apple Inc, Meta Platforms Inc, Microsoft and, most recently, Nvidia Corp.
Yet none of these firms are satisfied. Microsoft has vied for a chunk of Google’s US$150 billion search business and Google wants Microsoft’s US$110 billion cloud business. To fight their war, each company has grabbed the ideas of others.
Dig into this a bit deeper, and you would find that AI’s present reality has really been written by two men: Sam Altman and Demis Hassabis. One is a scrawny and placid entrepreneur in his late 30s who wears sneakers to the office. The other is a former chess champion in his late 40s who is obsessed with games.
Both are fiercely intelligent, charming leaders who sketched out visions of AI so inspiring that people followed them with cult-like devotion. Both got here because they were obsessed with winning. Altman was the reason the world got ChatGPT. Hassabis was the reason we got it so quickly. Their journey has not only defined today’s race, but also the challenges coming our way, including a daunting struggle to steer AI’s ethical future when it is under the control of so few incumbents.
Hassabis risked scientific ridicule when he established DeepMind in 2010, the first company in the world intent on building AI that was as smart as a human being. He wanted to make scientific discoveries about the origins of life, the nature of reality and cures for disease.
“Solve intelligence, and then solve everything else,” he said.
A few years later, Altman started OpenAI to try to build the same thing, but with a greater focus on bringing economic abundance to humanity, increasing material wealth, and helping “us all live better lives,” he told me. “This can be the greatest tool humans have yet created, and let each of us do things far outside the realm of the possible.”
Their plans were more ambitious than even those of Silicon Valley’s most zealous visionaries. They planned to build AI so powerful that it could transform society and make the fields of economics and finance obsolete. And Altman and Hassabis alone would be the purveyors of its gifts.
In their quest to build what could become humankind’s last invention, both men grappled with how such transformative technology should be controlled. At first they believed that tech monoliths like Google and Microsoft should not steer it outright, because those firms prioritized profit over humanity’s well-being. So for years and on opposite sides of the Atlantic Ocean, they both fumbled for novel ways to structure their research labs to protect AI and make benevolence its priority. They promised to be AI’s careful custodians.
However, both also wanted to be first. To build the most powerful software in history, they needed money and computing power, and their best source was Silicon Valley. Over time, both Altman and Hassabis decided they needed the tech giants after all. As their efforts to create superintelligent AI became more successful and as strange new ideologies buffeted them from different directions, they compromised on their noble goals. They handed over control to companies that rushed to sell AI tools to the public with virtually no oversight from regulators, and with far-reaching consequences.
This concentration of power in AI threatened to reduce competition and herald new intrusions into private life and new forms of racial and gender prejudice. Ask some popular AI tools to generate images of women, and they will make them scantily clad by default; ask for photorealistic CEOs, and they will generate images of white men. Some systems, when asked for a criminal, will generate images of black men. In a ham-fisted effort to fix those stereotypes, Google released an image-generating tool in February that badly overcompensated, then shut it down. Such systems are on track to be woven into our media feeds, smartphones and justice systems, sometimes without due care for how they might shape public opinion, thanks to a relative lack of investment in ethics and safety research.
Altman and Hassabis’ journey was not all that different from that of two entrepreneurs a century ago, when Thomas Edison and George Westinghouse went to war. Each had pursued a dream of creating a dominant system for delivering electricity to millions of consumers. Both were inventors-turned-entrepreneurs, and both understood that their technology would one day power the modern world. The question was this: whose version of the technology would come out on top? In the end, Westinghouse’s more efficient electrical standard became the most popular in the world, but he did not win the so-called War of the Currents. Edison’s much larger company, General Electric, did.
As corporate interests pushed Altman and Hassabis to unleash bigger and more powerful models, it has been the tech titans who have emerged as the winners, only this time the race was to replicate our own intelligence.
Now the world has been thrown into a tailspin. Generative AI promises to make people more productive and bring more useful information to our fingertips through tools like ChatGPT. However, every innovation comes at a price. Businesses and governments are adjusting to a new reality in which distinguishing the real from the AI-generated is a crapshoot. Companies are throwing money at AI software to help displace their employees and boost profit margins, and devices that can conduct new levels of personal surveillance are cropping up.
We got here after the visions of two innovators who tried to build AI for good were eventually ground down by the forces of monopoly. Their story is one of idealism, but also one of naivety and ego — and of how it can be virtually impossible to keep an ethical code in the bubbles of Big Tech and Silicon Valley. Altman and Hassabis tied themselves into knots over the stewardship of AI, knowing that the world needed to manage the technology responsibly if we were to stop it from causing irreversible harm. However, they could not forge AI with godlike power without the resources of the world’s largest tech firms. With the goal of enhancing human life, they would end up empowering those companies, leaving humanity’s welfare and future caught in a battle for corporate supremacy.
After selling DeepMind to Google in 2014, Hassabis and his cofounders tried for years to spin out and restructure themselves as a nonprofit-style organization. They wanted to protect their increasingly powerful AI systems from being under the sole control of a tech monolith, and they worked on creating a board of independent luminaries, including former heads of state like Barack Obama, to oversee its use. They even designed a new legal charter that would prioritize human well-being and the environment. Google appeared to go along with the plan at first and promised its entity billions of dollars, but its executives were stringing the founders along. In the end, Google tightened its grip on DeepMind, and the research lab that once focused on “solving intelligence” to help cure cancer or address climate change is now largely devoted to developing Google’s core AI product, Gemini.
Altman made a similar kind of shift, having founded OpenAI on the premise of building AI for the benefit of humanity, “free from financial obligations.” He has spent the past seven years twisting out of that commitment, restructuring his nonprofit as a “capped profit” company so that it could take billions in investment from Microsoft, effectively becoming a product arm for the software firm. Now he is reportedly looking to restructure again to become more investor-friendly and raise several billion more dollars. One likely outcome: He would neuter the nonprofit board that ensures OpenAI serves humanity’s best interests.
After the release of ChatGPT, I was struck by how these two innovators had both pivoted from their humanitarian visions. Sure, Silicon Valley’s grand promises of making the world a better place often look like a ruse when its companies make addictive or mediocre services and its founders become billionaires. But there is something more unsettling about Altman and Hassabis’ shift away from their founding principles. They were both trying to build artificial general intelligence, or computers that could surpass our brainpower. The ramifications were huge — and their pivots have now brought new levels of influence and power to today’s tech giants. The rest of us are set to find out the price.
Parmy Olson is a Bloomberg Opinion columnist covering technology. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.