Hours after the Israel-Hamas conflict erupted on Oct. 7, Bharat Nayak, a fact-checker in the east Indian state of Jharkhand, noticed a surge of disinformation and hate speech directed at Muslims on his dashboard of WhatsApp messages.
The viral messages from hundreds of public WhatsApp groups in India contained graphic images and videos, including many from Syria and Afghanistan falsely labeled as being from Israel, with captions in Hindi that called Muslims evil.
“They are using the crisis to spread misinformation against Muslims, saying they will attack Hindus in a similar way, and to falsely accuse opposition parties and others of supporting Hamas, and calling for their elimination,” Nayak said. “The content is very graphic, the messaging is extreme, and it gets forwarded many times, as there is no content moderation on WhatsApp.”
The conflict, which has killed more than 1,400 people in Israel and more than 8,000 in the Gaza Strip, has triggered a surge in disinformation and hate speech against Muslims and Jews across social media platforms from India to China to the US.
Meta and X, formerly known as Twitter, said they have removed tens of thousands of posts, but the volume of disinformation and hate speech underlines the failure of social media platforms to boost content moderation, particularly in languages other than English, digital rights experts say.
“We’ve tirelessly drawn their attention to these issues over the years, but social media platforms continue to fall short when it comes to combating hate speech, incitement and disinformation,” said Mona Shtaya, a nonresident fellow at the non-profit Tahrir Institute for Middle East Policy.
“The recent layoffs in trust and safety teams across platforms underscore this deficiency,” she said. “Additionally, their resource allocation — based on market size, rather than assessed risks — exacerbates the challenges faced by marginalized communities including Palestinians and others.”
In a blog post, Meta — which owns Facebook, Instagram and WhatsApp — wrote that it had “quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers,” and that it is working with third-party fact-checkers in the region “to debunk false claims.”
X did not respond to a request for comment.
Failures of content moderation are not limited to the decades-long Israel-Palestine conflict.
UN human rights investigators said in 2018 that the use of Facebook had played a key role in spreading hate speech that fueled violence against the ethnic Rohingya community in Myanmar in 2017.
Rohingya refugees in 2021 sued Meta for US$150 billion over allegations that the company’s failures to police content, and its platform’s design contributed to the real-world violence.
Meta has acknowledged being “too slow” to act in Myanmar.
Last year, a lawsuit against Meta filed in Kenya accused the platform of allowing violent and hateful posts from Ethiopia on Facebook, and its recommendation systems of amplifying violent posts that inflamed the Ethiopian civil war.
The company has faced similar accusations related to violence in Sri Lanka, India, Indonesia and Cambodia.
The surge in disinformation during the Israel-Hamas conflict underscores that “platforms do not have the right systems in place,” said Sabhanaz Rashid Diya, a former head of policy at Meta for Bangladesh and founding board director of the Tech Global Institute think tank.
Diya said that “the historical under-investment in specific parts of the world and specific languages is now being tested in this crisis.”
“Some of the challenges we’re seeing around the information ecosystem are consequences of not building capacity; these are consequences of automated systems, staffing issues; not having sufficient fact-checkers in these markets; not having policies that are contextualized for local regions,” Diya said.
The Arab Center for Social Media Advancement, or 7amleh, has documented more than 500,000 instances in Hebrew of hate speech and incitement to violence against Palestinians and their supporters.
Anti-Semitic comments on YouTube videos have also increased more than 50-fold in absolute volume, the Institute for Strategic Dialogue in London said in a report this week.
State-affiliated accounts of Iran, Russia and China are also spreading disinformation and hate speech on Facebook and X, it said, adding that it could contribute to “popularization and deepening mistrust towards democratic institutions and the media.”
Reports of anti-Semitic and Islamophobic incidents have surged worldwide, including assaults, vandalism and the fatal stabbing of a six-year-old Palestinian boy in the US.
These incidents are a result of hate speech online, said Marc Owen Jones, an associate professor who researches disinformation in the Middle East at Hamad bin Khalifa University in Qatar.
“Much of the disinformation is violent, graphic and highly emotive — designed to provoke polarization and turn people against each other,” Jones said.
It is “driving a sense of righteousness and tribalism that contributes to violence, as we’ve seen as far away as Dagestan and Illinois. The upshot is dire,” Jones said.
Yet despite heated conversations around the need for better content moderation, trust and safety is “resource-intensive, meaning that tackling the issue is a challenge for any platform,” said Yu-lan Scholliers, head of product at Checkstep, a UK-based content moderation services firm.
With easy access to artificial intelligence, “it’s now much easier to generate real-looking but fake content — requiring more advanced detection mechanisms,” said Scholliers, who previously worked in Meta’s product data science team.
However, even if platforms invest heavily in their trust and safety teams, the main challenge “is and will be adversarial behavior — users always find more and more creative ways to avoid detection,” she said. “It is a whack-a-mole that can never be fully solved.”
With additional reporting by Avi Asher-Schapiro