Ever since poem-churning ChatGPT burst onto the scene six months ago, AI expert Gary Marcus has urged caution over artificial intelligence’s (AI) ultra-fast development and adoption.
However, against AI’s apocalyptic doomsayers, the New York University emeritus professor said in an interview that the technology’s existential threats could be “overblown.”
“I’m not personally that concerned about extinction risk, at least for now, because the scenarios are not that concrete,” Marcus said in San Francisco.
“A more general problem that I am worried about ... is that we’re building AI systems that we don’t have very good control over and I think that poses a lot of risks, [but] maybe not literally existential,” he said.
Long before the advent of ChatGPT, Marcus designed his first AI program in high school — software to translate Latin into English — and after years of studying child psychology, he founded Geometric Intelligence, a machine learning company later acquired by Uber Technologies Inc.
In March, alarmed that ChatGPT creator OpenAI was releasing its latest and more powerful AI model with Microsoft Corp, Marcus signed an open letter with more than 1,000 people including Tesla Inc chief executive officer Elon Musk calling for a global pause in AI development.
However, last week he did not sign the more succinct statement by business leaders and specialists — including OpenAI founder Sam Altman — that caused a stir.
Global leaders should be working to reduce “the risk of extinction” from AI technology, the signatories said.
The one-line statement said that tackling the risks from AI should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Signatories included those who are building systems with a view to achieving “general” AI, a technology that would have cognitive abilities on par with those of humans.
“If you really think there’s existential risk, why are you working on this at all? That’s a pretty fair question to ask,” Marcus said.
Instead of putting the focus on more far-fetched scenarios in which no one survives, society should be paying attention to where real dangers lie, Marcus said.
“People might try to manipulate the markets by using AI to cause all kinds of mayhem and then we might, for example, blame the Russians and say: ‘Look what they’ve done to our country,’ when the Russians actually weren’t involved,” he said.
“You [could] have this escalation that winds up in nuclear war or something like that. So I think there are scenarios where it was pretty serious. Extinction? I don’t know,” he said.
In the short term, Marcus is worried about democracy.
Generative AI software produces increasingly convincing fake photographs, and could soon produce videos, at little cost.
“Elections are going to be won by people who are better at spreading disinformation, and those people may change the rules and make it really difficult to have democracy proceed,” he said. “Democracy is premised on having reasonable information and making good decisions. If nobody knows what to believe, then how do you even proceed with democracy?”
The author of the book Rebooting AI does not think hope should be abandoned, as there is still “a lot of upside.”
There is a chance that AI not yet invented could “help with science, with medicine, with elder care,” Marcus said.
“But in the short term, I feel like we’re just not ready. There’s going to be some harm along the way and we really need to up our game, we have to figure out serious regulation,” he said.
At a US Senate hearing last month, seated beside Altman, Marcus argued for the creation of a national or international agency responsible for AI governance.
The idea is backed by Altman, who has returned from a European tour in which he urged political leaders to find the “right balance” between safety and innovation.
However, Marcus warned against leaving that power in the hands of corporations.
“The last several months have been a real reminder that the big companies calling the shots here are not necessarily interested in the rest of us,” he said.