For people at the trend-setting tech festival here, the scandal that erupted after Google’s chatbot, Gemini, cranked out images of black and Asian Nazi soldiers was seen as a warning about the power artificial intelligence (AI) can give tech titans.
Google CEO Sundar Pichai last month slammed as “completely unacceptable” errors by his company’s Gemini AI app, after gaffes such as the images of ethnically diverse Nazi troops forced it to temporarily stop users from creating pictures of people.
Social media users mocked and criticized Google for the historically inaccurate images, like those showing a female black US senator from the 1800s — when the first such senator was not elected until 1992.
“We definitely messed up on the image generation,” Google cofounder Sergey Brin said at an AI “hackathon,” adding that the company should have tested Gemini more thoroughly.
People interviewed at the popular South by Southwest arts and tech festival in Austin said the Gemini stumble highlights the inordinate power a handful of companies have over the AI platforms that are poised to change the way people live and work.
“Essentially, it was too ‘woke,’” said Joshua Weaver, a lawyer and tech entrepreneur, meaning Google had gone overboard in its effort to project inclusion and diversity.
Google quickly corrected its errors, but the underlying problem remains, said Charlie Burgoyne, chief executive of the Valkyrie applied science lab in Texas.
He equated Google’s fix of Gemini to putting a Band-Aid on a bullet wound.
While Google long had the luxury of having time to refine its products, it is now scrambling in an AI race with Microsoft Corp, OpenAI, Anthropic and others, Weaver said. “They are moving faster than they know how to move.”
Mistakes made in an effort at cultural sensitivity are flashpoints, particularly given the tense political divisions in the US, a situation exacerbated by Elon Musk’s X platform, the former Twitter.
“People on Twitter are very gleeful to celebrate any embarrassing thing that happens in tech,” Weaver said, adding that reaction to the Nazi gaffe was “overblown.”
However, the mishap called into question the degree of control those using AI tools have over information, Weaver said.
In the coming decade, the amount of information — or misinformation — created by AI could dwarf that generated by people, meaning those controlling AI safeguards would have huge influence on the world, he said.
Karen Palmer, an award-winning mixed-reality creator with Interactive Films Ltd, said she could imagine a future in which someone gets into a robo-taxi and, “if the AI scans you and thinks that there are any outstanding violations against you ... you’ll be taken into the local police station,” not your intended destination.
AI is trained on mountains of data and can be put to work on a growing range of tasks, from image or audio generation to determining who gets a loan or whether a medical scan detects cancer.
However, that data comes from a world rife with cultural bias, disinformation and social inequity — not to mention online content that can include casual chats between friends or intentionally exaggerated and provocative posts — and AI models can echo those flaws. With Gemini, Google engineers tried to rebalance the algorithms to provide results better reflecting human diversity.
The effort backfired.
“It can really be tricky, nuanced and subtle to figure out where bias is and how it’s included,” said technology lawyer Alex Shahrestani, a managing partner at Promise Legal law firm for tech companies.
Even well-intentioned engineers involved with training AI cannot help but bring their own life experience and subconscious bias to the process, he said.
Burgoyne also castigated big tech for keeping the inner workings of generative AI hidden in “black boxes,” so users are unable to detect any hidden biases.
“The capabilities of the outputs have far exceeded our understanding of the methodology,” he said.
Experts and activists are calling for more diversity in teams creating AI and related tools, and greater transparency as to how they work — particularly when algorithms rewrite users’ requests to “improve” results.
A challenge is how to appropriately build in the perspectives of the world’s many and diverse communities, said Jason Lewis, codirector of the Indigenous Futures Resource Center.
At Indigenous AI, Lewis works with far-flung indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, something he does not always see in the “arrogance” of big tech leaders.
He said his own work stands in “such a contrast from Silicon Valley rhetoric, where there’s a top-down ‘Oh, we’re doing this because we’re going to benefit all humanity’ bullshit, right?”
His audience laughed.