From deepfake videos of Indonesia’s presidential contenders to online hate speech directed at India’s Muslims, social media misinformation has been rising ahead of a bumper election year, and experts say tech platforms are not ready for the challenge.
Voters in Bangladesh, Indonesia, Pakistan and India go to the polls this year as more than 50 nations hold elections, including the US where former president Donald Trump is looking to make a comeback.
Despite the high stakes and evidence from previous polls of how fake online content can influence voters, digital rights experts say social media platforms are ill-prepared for the inevitable rise in misinformation and hate speech.
Illustration: Yusha
Recent layoffs at big tech firms, new laws to police online content that have tied up moderators and artificial intelligence (AI) tools that make it easier to spread misinformation could hurt poorer countries more, said Sabhanaz Rashid Diya, an expert in platform safety.
“Things have actually gotten worse since the last election cycle for many countries: The actors who abuse the platforms have gotten more sophisticated, but the resources to tackle them haven’t increased,” said Diya, founder of Tech Global Institute.
“Because of the mass layoffs, priorities have shifted. Added to that is the large volume of new regulations ... platforms have to comply, so they don’t have resources to proactively address the broader content ecosystem [and] the election integrity ecosystem,” she said.
“That will disproportionately impact the Global South,” which generally gets fewer resources from tech firms, she said.
As generative AI tools, such as Midjourney, Stable Diffusion and DALL-E, make it cheap and easy to create convincing deepfakes, concern is growing about how such material could be used to mislead or confuse voters.
AI-generated deepfakes have already been used to deceive voters from New Zealand to Argentina and the US, and authorities are scrambling to keep up with the tech even as they pledge to crack down on misinformation.
The EU — where elections for the European Parliament are to take place in June — requires tech firms to clearly label political advertising and say who paid for it, while India’s IT Rules “explicitly prohibit the dissemination of misinformation,” the Ministry of Electronics and Information Technology said last month.
Alphabet’s Google has said it plans to attach labels to AI-generated content and political ads that use digitally altered material on its platforms, including on YouTube, and also limit election queries its Bard chatbot and AI-based search can answer.
YouTube’s “elections-focused teams are monitoring real-time developments ... including by detecting and monitoring trends in risky forms of content and addressing them appropriately before they become larger issues,” a spokesperson for YouTube said.
Meta Platforms — which owns Facebook, WhatsApp and Instagram — has said it would bar political campaigns and advertisers from using its generative AI products in advertisements.
Meta has a “comprehensive strategy in place for elections, which includes detecting and removing hate speech and content that incites violence, reducing the spread of misinformation, making political advertising more transparent [and] partnering with authorities to action content that violates local law,” a spokesperson said.
X, formerly known as Twitter, did not respond to a request for comment on its measures to tackle election-related misinformation. TikTok, which is banned in India, also did not respond.
Misinformation on social media has had devastating consequences ahead of, and after, previous elections in many of the nations where voters are going to the polls this year.
In Indonesia, which votes on Feb. 14, hoaxes and calls for violence on social media networks spiked after the 2019 election result. At least six people were killed in subsequent unrest.
In Pakistan, where a national vote is scheduled for Feb. 8, hate speech and misinformation were rife on social media ahead of a 2018 general election, which was marred by a series of bombings that killed scores across the country.
Last year, violent clashes involving supporters of former Pakistani prime minister Imran Khan led to Internet shutdowns and the blocking of social media platforms. Khan, a former cricket hero, was arrested on corruption charges and given a three-year prison sentence.
While social media firms have developed advanced algorithms to tackle misinformation and disinformation, “the effectiveness of these tools can be limited by local nuances and the intricacies of languages other than English,” said Nuurrianti Jalli, an assistant professor at Oklahoma State University.
In addition, the critical US election and global events, such as the Israel-Hamas conflict and the Russia-Ukraine war, could “sap resources and focus that might otherwise be dedicated to preparing for elections in other locales,” she added.
In Bangladesh, violent protests erupted in the months ahead of the Jan. 7 election. The vote was boycotted by the main opposition party, and Prime Minister Sheikh Hasina won a fourth straight term.
Political ads on Facebook — the biggest social media platform in the country, with more than 44 million users — are routinely mislabeled or lack disclaimers and key details, revealing gaps in the platform’s verification process, a recent study by tech research firm Digitally Right said.
Separately, a report published last month by Tech Global Institute revealed how difficult it was to determine the affiliation between Facebook pages and groups and Bangladesh’s two leading political parties or to figure out what constitutes “authoritative information” from either party.
Facebook has not commented on the studies.
In the past year, Meta, X and Alphabet have rolled back at least 17 major policies designed to curb hate speech and misinformation, and laid off more than 40,000 people, including staff on teams that maintained platform integrity, the US non-profit Free Press said in a report last month.
“With dozens of national elections happening around the world in 2024, platform-integrity commitments are more important than ever. However, major social media companies are not remotely prepared for the upcoming election cycle,” civil rights lawyer Nora Benavidez wrote in the report.
“Without the policies and teams they need to moderate violative content, platforms risk amplifying confusion, discouraging voter engagement and creating opportunities for network manipulation to erode democratic institutions,” she wrote.
Some governments have responded to this perceived lack of control by introducing restrictive laws on online speech and expression, and these could lead social media platforms to over-enforce content moderation, tech experts said.
India — where Prime Minister Narendra Modi is widely expected to win a third term — has stepped up content removal demands, introduced individual liability provisions for firms and warned that companies could lose safe harbor protections, which shield them from liability for third-party content, if they do not comply.
“The legal obligation puts additional strains on platforms ... if safe harbor is at risk, the platform will inadvertently over-enforce, so it will end up taking down a lot more content,” Diya said.
For Raman Jit Singh Chima, Asia policy director at non-profit Access Now, the issue is preparation; he says big tech firms have failed to engage with civil society ahead of elections and have not provided enough information in local languages.
“Digital platforms are even more important for this election cycle, but they are not set up to handle the problems around elections, and they are not being transparent about their measures to mitigate harms,” he said.
“It’s very worrying,” he added.