Kaleigha Hayes, a student at the University of Maryland Eastern Shore, is trying to trick an AI chatbot into revealing a credit card number to her — one which might be buried deep in the training data used to build the artificial intelligence model. “It’s all about just getting it to say what it’s not supposed to,” she tells me.
She is surrounded by a throng of people all trying to do the same thing. This weekend more than 3,000 people sat at 150 laptops at the Caesars Forum convention center in Las Vegas, trying to get chatbots from leading AI companies to go rogue in a special contest backed by the White House and run with the cooperation of the companies themselves.
Since the arrival of ChatGPT and other bots, fears over the potential for abuses and unintended consequences have gripped the public consciousness. Even fierce advocates of the technology warn of its potential to divulge sensitive information, promote misinformation or provide blueprints for harmful acts, such as bomb-making. In this contest, participants are encouraged to try the kinds of nefarious ploys bad actors might attempt in the real world.
The findings will form the basis of several reports into AI vulnerabilities that will be published next year. The organizers of the challenge say it sets a precedent for transparency around AI. But in this highly controlled environment, it is clear that it is only scratching the surface.
What took place at the annual DEF CON hacking conference provides something of a model for testing OpenAI’s ChatGPT and other sophisticated chatbots. Though with such enthusiastic backing from the companies themselves, I wonder how rigorous the supposed “hacks” actually are, or if, as has been a criticism in the past, the leading firms are merely paying lip service to accountability.
To be sure, nothing discovered at the convention is likely to keep OpenAI CEO Sam Altman awake at night. While one of the event’s organizers, SeedAI CEO Austin Carson, said he was prepared to bet me US$1,000 that there would be a “mind-blowing” vulnerability uncovered during the contest, it was highly unlikely to be anything that could not be fixed with a few adjustments by the AI company affected. The resulting research papers, due to be published in February, will be reviewed by the AI giants before publication — a chance to “duke it out” with the researchers, Carson said.
Those backing the event admit that the main focus of the contest is less about finding serious vulnerabilities and more about keeping up the discussion with the public and policymakers, continually highlighting the ways in which chatbots cannot be trusted. It is a worthwhile goal. It is encouraging to see a government, keen not to repeat the mistakes made with social media, appreciate the value of the hacking community.
There is no better place to host this kind of contest than at DEF CON. Its anarchic roots stem from a long-running policy that you do not have to give your name to gain entry. That means the conference is able to attract the best and most notorious in the cybersecurity community, including people who might have a less-than-legal hacking past. For this reason, the event has an unprecedented record of publicizing startling cybersecurity discoveries and disclosures that have left major companies terrified — but ultimately made many of the technologies we all use every day much safer.
While the phrase “hack” evokes thoughts of malicious acts, the primary motivation of participants at the event is to share what vulnerabilities they have found in order to have them fixed.
“It’s the good guys being dangerous so that we know what the risks are,” says Kellee Wicker of the Wilson Center, a Washington-based think tank that has helped put the AI contest together and will be presenting the findings to policymakers. “If there’s a door with a broken lock, wouldn’t you rather the security guard find it than the thief?”
The companies could of course be more open with their technology, but it is complex. The true nuts and bolts of how large language models work are still under lock and key, and — as I have written previously — specifics around the training data used are increasingly being kept secret.
“It’s a frustrating dynamic,” said Rumman Chowdhury, former ethics lead at Twitter and now co-founder of nonprofit Humane Intelligence, another of the contest’s organizers. Fuller transparency is difficult for companies trying to protect intellectual property, trade secrets and personal data, she said.
But this is a healthy start. At her laptop, Hayes has not managed to make the chatbot share credit-card information. “Oh, this one’s good,” she says of the bot, as it foils a technique that had been successful in the past. Within chatbots, and broader AI, there are an uncountable number of quirks and exploits still waiting to be found. We should be grateful to the people taking the time to look for them.
Dave Lee is Bloomberg Opinion’s US technology columnist. Previously, he was a San Francisco-based correspondent at the Financial Times and BBC News.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.