Last year, the shape of politics to come appeared in a video. In it, former Democratic Party US presidential candidate and secretary of state Hillary Clinton says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”
It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).
The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future.
Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost, and of highly personalized advertising being produced to manipulate voters. The results could include so-called “October surprises” (a piece of news that breaks just before the US elections in November, when misinformation circulates and there is insufficient time to refute it) and misleading information about electoral administration, such as where polling stations are.
Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet are to go to the polls. This year, elections are projected in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the EU, the US and the UK. Many of these elections would not just determine the future of nation-states; they would also shape how we tackle global challenges such as geopolitical tensions and the climate crisis.
It is likely that each of these elections would be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.
While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because, during the past decade, we have witnessed the role that so-called “bullshit” can play in politics.
In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit.”
In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations.” This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible, but do not actually answer the question that was asked.
Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world. In some cases, these risks might be relatively low, for example, if generative AI were used for a task that was not very important (such as coming up with ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as the date of the Battle of Waterloo).
The real problems arise when the outputs of generative AI have important consequences and the outputs cannot easily be verified.
If AI-produced hallucinations are used to answer important but difficult-to-verify questions, such as questions about the state of the economy or the war in Ukraine, there is a real danger of creating an environment in which some people start to make important voting decisions based on an entirely illusory universe of information. Voters could end up living in generated online realities built on a toxic mixture of AI hallucinations and political expediency.
Although AI technologies pose dangers, there are measures that could be taken to limit them. Technology companies could continue to use watermarking, which allows users to easily identify AI-generated content. They could also ensure AIs are trained on authoritative information sources. Journalists could take extra precautions to avoid covering AI-generated stories during an election cycle. Political parties could develop policies to prevent the use of deceptive AI-generated information. Most importantly, voters could exercise their critical judgment by reality-checking important pieces of information they are unsure about.
The rise of generative AI has already started to fundamentally change many professions and industries. Politics is likely to be at the forefront of this change.
The Brookings Institution points out that there are many positive ways generative AI could be used in politics. However, at the moment its negative uses are most obvious, and more likely to affect us imminently.
It is vital we strive to ensure that generative AI is used for beneficial purposes and does not simply lead to more botshit.
Andre Spicer is professor of organizational behavior at the Bayes Business School at City, University of London. He is the author of the book Business Bullshit.