“Her.” That was the single tweet that OpenAI chief executive officer Sam Altman posted as his lieutenants demoed a new ChatGPT with the same alluring vocal flourishes Scarlett Johansson used in the movie about a man who falls in love with his artificial intelligence (AI).
The most impressive thing about the new GPT-4o — the “o” stands for omni — is that it can discuss what it “sees” through your phone camera in real time, a skill that Google faked in a demo for its AI model in December last year. More startling was that it did not just sound human, but strangely seductive.
“Hey there,” the new version of ChatGPT said in a coy woman’s voice to a young man in the company’s main video demonstration. “I see you’re rocking an OpenAI hoodie. Nice choice.”
In a live demo at OpenAI’s headquarters in San Francisco, the AI system surprised the audience when it suddenly said: “Wow, that’s quite the outfit you’ve got on,” to someone it was helping with an algebra problem.
Bloomberg News, which was at the event, referred to its tone as “flirtatious.”
In another video demo, the AI, once again with a female voice, laughed coquettishly as an OpenAI staffer pretended to ask it for advice on what to wear for an interview.
“Oh, Rocky,” it said giggling after he put on a silly hat. “That’s quite a statement piece.”
If OpenAI’s mostly male engineers are trying to build the perfect girlfriend, they seem to be on the right track.
However, if the company is trying to build a more accurate and reliable AI model, it still has a way to go. GPT-4o is only slightly ahead on major AI benchmarks, and early tests show it continues to make mistakes on key tasks.
The company has instead focused on leaping ahead with user experience, making GPT-4o more of a consumer play than one for enterprise customers. Its new model can infer emotions and respond to audio as quickly as a human would in conversation. That could fulfill a long-time goal in tech of “ambient computing,” which eschews staring into a tiny screen and typing with your thumbs in favor of simply talking and showing things to a computer.
There is plenty of potential in that, from live tutoring to having a clever digital assistant analyze a person’s computer screen as they work. However, OpenAI’s efforts to make its AI so engaging are disconcerting.
What are the social and psychological consequences of regularly speaking to a flirty, fun and ultimately agreeable artificial voice on the phone, and then encountering a very different dynamic with men and women in real life? What happens when emotionally vulnerable people develop an unhealthy attachment to GPT-4o?
OpenAI did not respond to these questions at the time of writing, or explain why it had given GPT-4o so much more personality. If its objective was to make its product more engaging with consumers — as it has already tried to do with developers — that could open a can of worms, threatening insidious effects on our collective mental health. Remember, those priorities are what led Facebook to design algorithms that promoted the most outrageous posts on its site to keep people scrolling, helping to sow greater political division.
However, one can see why Altman might be pushing to make his chatbot stickier. User growth for ChatGPT has been stagnating, as competing bots such as Anthropic’s Claude and Google’s Gemini race for market share. It is likely why he is also making GPT-4, OpenAI’s most advanced model on the market, free for all.
OpenAI did not describe GPT-4o as a “personal assistant,” but that seems to be what the company and its rivals are chasing. Google is expected to announce a similar tool on Tuesday next week.
Elon Musk’s AI company, xAI, is also working on an app that would act as a personal assistant, according to a funding pitch deck from last month seen by Bloomberg Opinion.
The US$20-a-month app, which would also have a free tier, aims to show an AI-generated feed of suggested news articles and reminders to, for instance, buy flowers at a nearby store for a friend’s birthday, or to buy concert tickets for a favorite band that is on tour, one slide showed. The deck added that by integrating with personal data from X, the app could create a “supercharged social experience.”
Musk on Tuesday wrote on X that a “major upgrade to Grok” was on its way.
Meta is also exploring AI-assisted earphones with cameras, and its Ray-Ban smart glasses already include an AI assistant.
As the tech giants converge on digital assistants, they might see personality as the new AI battleground. However, racing to make chatbots sexier could have bizarre side effects. Pointing to Her was perhaps a fitting metaphor for Sam Altman: The movie does not end well for humans.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of We Are Anonymous. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.