An executive from a large technology firm on Thursday won a Nobel prize. The top prize for chemistry went to the head of Alphabet Inc’s artificial intelligence (AI) efforts, Demis Hassabis, along with two other key scientists, for a years-long project that used AI to predict the structure of proteins. The day before, Geoffrey Hinton, a former executive at Google who has been called a godfather of AI, won the Nobel Prize in Physics along with physicist John Hopfield, for work on machine learning.
It seems the Nobel Foundation is eager to mark AI advancements — and the notion that key scientific problems can be solved computationally — as worthy of its coveted prizes. That would be a reputational boon for firms like Google and executives like Hassabis. However, there is a risk, too, that such recognition obscures concerns about both the technology itself and the increasing concentration of AI power in a handful of companies.
Hassabis himself has long craved this accolade, having told staff for years that he wanted DeepMind, the AI lab he cofounded and sold to Google in 2014, to win between three and five Nobel prizes over the next decade.
At a news conference on Wednesday, he called the award “an unbelievable honor of a lifetime” and said he had been hoping to win it this time around.
Indeed, he initially shaped DeepMind as a research lab with utopian objectives, where many of its leading scientists worked on building AI systems to help cure diseases like cancer or solve global warming.
However, that humanitarian agenda faded to the background after the sale to Google and especially after the release of OpenAI’s ChatGPT, which sparked a race among tech giants to deploy chatbot-style technology to businesses and consumers.
DeepMind has since become more product-focused (information about its healthcare and climate efforts disappeared from its homepage, for example), although it has continued with health-related efforts like AlphaFold. Out of DeepMind’s roughly 1,500-strong workforce, a team of just two dozen people was running the protein-folding project when it reached a critical milestone in 2020, according to a video documentary about the effort.
The Nobel will surely give Hassabis a credibility boost at Alphabet, where he has been leading the company’s fraught efforts to keep up with OpenAI. Google’s flagship AI model Gemini has grappled with controversies over its frequent mistakes and the possibility it would choke off traffic to the rest of the Web. Now perhaps a smoother path has been paved for Hassabis if he wants to become Alphabet’s next chief executive.
The former chess champion is a consummate strategist and rivals Sam Altman as the world’s most successful builder of AI technology, having pushed the boundaries of fields like deep learning, reinforcement learning and games-based models such as AlphaGo, which beat world champion go players eight years ago. Hassabis was already talking about taking on protein folding during those matches.
The glow benefits Google, too. Recent challenges from antitrust regulators over monopolistic behavior have not helped its reputation as a company founded on the principle of “don’t be evil.” Now with two Nobel prizes linked to work done by its scientists, the tech giant can more easily frame itself as providing services that are ultimately good for society, as its lawyers have been arguing, and perhaps generate goodwill more broadly with the public and regulators.
However, we should not forget the tension between the high-minded goals professed by Big Tech and what their businesses are really focused on. Google, which derives close to 80 percent of its revenue from advertising, is now putting ads into its new AI search tool. For businesses, that adds a new layer of complexity to online advertising, while consumers face the prospect of wading through AI-generated information that Google is trying to monetize, and which could one day become more biased toward advertisers.
Remember also that Google’s prioritization of human well-being was called into question less than three years ago when it fired two leading AI ethics experts who had warned about the risks that its AI models could entrench bias, spread misinformation and consume vast amounts of energy, issues that have not gone away. A study in Nature last month, for instance, showed that AI tools like ChatGPT were making racist decisions about people based on their dialect.
The Nobel Prize is designed to recognize people who have made outstanding contributions to science, humanism and peace, so the foundation behind it has taken a bold stance in validating the work of AI and of one company in particular. The award to Hassabis — like the Nobel Peace Prize given to Barack Obama one year after he was elected as US president — feels a little premature. It is still unclear what kind of broad, real-world impact DeepMind’s protein-folding project will have on the medical field and drug discovery.
Let us hope the prize motivates well-endowed technology firms to invest much more in using AI for public service efforts like protein folding and in AI ethics research — and does not muddy the debate over the very real risks that AI poses to the world, too.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of We Are Anonymous.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.