An executive from a large technology firm on Wednesday won a Nobel prize. The top prize for chemistry went to the head of Alphabet Inc’s artificial intelligence (AI) efforts, Demis Hassabis, along with two other key scientists, for a years-long project that used AI to predict the structure of proteins. The day before, Geoffrey Hinton, a former executive at Google who has been called a godfather of AI, won the Nobel Prize in Physics along with physicist John Hopfield, for work on machine learning.
It seems the Nobel Foundation is eager to mark AI advancements — and the notion that key scientific problems can be solved computationally — as worthy of its coveted prizes. That would be a reputational boon for firms like Google and executives like Hassabis. However, there is a risk, too, that such recognition obscures concerns about both the technology itself and the increasing concentration of AI power in a handful of companies.
Hassabis himself has long craved this accolade, having told staff for years that he wanted DeepMind, the AI lab he cofounded and sold to Google in 2014, to win between three and five Nobel prizes over the next decade.
At a news conference on Wednesday, he called the award “an unbelievable honor of a lifetime” and said he had been hoping to win it this time around.
Indeed, he initially shaped DeepMind as a research lab with utopian objectives, where many of its leading scientists worked on building AI systems to help cure diseases like cancer or solve global warming.
However, that humanitarian agenda faded to the background after the sale to Google and especially after the release of OpenAI’s ChatGPT, which sparked a race among tech giants to deploy chatbot-style technology to businesses and consumers.
DeepMind has since become more product-focused (information about its healthcare and climate efforts disappeared from its homepage, for example), although it has continued with health-related efforts like AlphaFold. Out of DeepMind’s roughly 1,500-strong workforce, a team of just two dozen people was running the protein-folding project when it reached a critical milestone in 2020, according to a video documentary about the effort.
The Nobel will surely give Hassabis a credibility boost at Alphabet, where he has been leading the company’s fraught efforts to keep up with OpenAI. Google’s flagship AI model Gemini has grappled with controversies over its frequent mistakes and the possibility it would choke off traffic to the rest of the Web. Now perhaps a smoother path has been paved for Hassabis if he wants to become Alphabet’s next chief executive.
The former chess champion is a consummate strategist and rivals Sam Altman as the world’s most successful builder of AI technology, having pushed the boundaries of fields like deep learning, reinforcement learning and games-based models such as AlphaGo, which beat world champion go players eight years ago. Hassabis was already talking about taking on protein folding during those matches.
The glow benefits Google, too. Recent challenges from antitrust regulators over monopolistic behavior have not helped its reputation as a company founded on the principle of “don’t be evil.” Now with two Nobel prizes linked to work done by its scientists, the tech giant can more easily frame itself as providing services that are ultimately good for society, as its lawyers have been arguing, and perhaps generate goodwill more broadly with the public and regulators.
However, we should not forget the tension between the high-minded goals professed by Big Tech and what their businesses are really focused on. Google, which derives close to 80 percent of its revenue from advertising, is now putting ads into its new AI search tool. For businesses, that adds a new layer of complexity to online advertising, while consumers face the prospect of wading through AI-generated information that Google is trying to monetize, and which could one day become more biased toward advertisers.
Remember also that Google’s prioritization of human well-being was called into question less than three years ago when it fired two leading AI ethics experts who had warned about the risks that its AI models could entrench bias, spread misinformation and consume enormous amounts of energy, issues that have not gone away. A study in Nature last month, for instance, showed that AI tools like ChatGPT were making racist decisions about people based on their dialect.
The Nobel Prize is designed to recognize people who have made outstanding contributions to science, humanism and peace, so the foundation behind it has taken a bold stance in validating the work of AI and of one company in particular. The award to Hassabis — like the Nobel Peace Prize given to Barack Obama one year after he was elected as US president — feels a little premature. It is still unclear what kind of broad, real-world impact DeepMind’s protein-folding project will have on the medical field and drug discovery.
Let us hope the prize motivates well-endowed technology firms to invest much more in using AI for public service efforts like protein folding and in AI ethics research — and does not muddy the debate over the very real risks that AI poses to the world, too.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of We Are Anonymous.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.