Artificial intelligence (AI) is revolutionizing the gathering, processing and dissemination of information. The outcome of this revolution will depend on our technological choices. To ensure that AI supports the right to information, we at Reporters Without Borders think that ethics must govern technological innovation in the news and information media.
AI is radically transforming the world of journalism. How can we ensure information integrity when most Web content will be AI-generated? How do we maintain editorial independence when opaque language models, driven by private-sector interests or arbitrary criteria, are used by newsrooms? How can we prevent the fragmentation of the information ecosystem into numerous streams fueled by chatbots?
Predicting the full extent of AI’s impacts on the media is a challenging task. Yet, one thing is clear: Innovation per se does not automatically lead to progress. It must be accompanied by sensible regulation and ethical guardrails to truly benefit humanity.
History offers numerous examples, such as the ban on human cloning, nuclear non-proliferation treaties and drug safety controls, where technological development has been responsibly curtailed, regulated or directed in the name of ethics. Likewise, in journalism, innovation should be governed by clear ethical rules. This is crucial to protect the right to information, which underpins our fundamental freedoms of opinion and expression.
In the summer of last year, Reporters Without Borders convened an international commission to draft what became the first global ethical reference for media in the AI era. The commission brought together 32 prominent figures from 20 countries, specialists in journalism or AI. It was chaired by none other than Maria Ressa, winner of the 2021 Nobel Peace Prize, who embodies both the struggle for press freedom and the commitment to addressing technological upheavals (she denounced the “invisible atomic bomb” of digital technology from the podium in Oslo).
The goal was clear: Establish a set of fundamental ethical principles to protect information integrity in the AI era, as these technologies transform the media industry. After five months of meetings, 700 comments and an international consultation, the discussions revealed consensus and differences. Aligning the views of non-governmental organizations defending journalism, media organizations, investigative journalism consortia and a major journalists’ federation was challenging — but an unprecedented alliance gathered around this digital table.
In response to the upheavals caused by AI in the information arena, the charter that was published in Paris in November last year outlines 10 essential principles to ensure information integrity and preserve journalism’s social function. It is crucial that the international community cooperates to ensure that AI systems uphold human rights and democracy, but this does not absolve journalism of particular ethical and professional responsibilities in using these technologies.
Of the charter’s core principles, we will mention just four.
First, ethics must guide technological choices in the media. The pace of adopting one of history’s most transformative technologies should not be dictated by the pressure of economic competition. Polls suggest that an overwhelming majority of citizens would prefer a slower, safer deployment of AI. Let us listen to them.
Second, human judgement must remain central in editorial decisions. Generative AI systems are more than mere tools; they acquire a form of agency and interfere with our intentions. Though lacking will, AI is full of certainties, reflecting its data and training process. Each automated decision is a missed opportunity for human judgement. We aspire to augmented journalism, not diminished human judgement.
Third, the media must help society to confidently discern authentic from synthetic content. Generative AI, more than any past technology, is capable of crafting the illusion of facts and the artifice of evidence. The media have a special responsibility to help society confidently discern fact from fiction. Trust is built, not decreed. Source verification, evidence authentication, content traceability and editorial responsibility are crucial in the AI era. To avoid contributing to general confusion, the media must maintain a clear distinction between authentic material (captured in the real world) and synthetic material (material generated or significantly altered by AI).
Finally, in their negotiations with technology companies, media outlets and rights holders should prioritize journalism’s societal mission, placing public interest above private profit. Chatbots are likely to become a primary method for accessing news in the near future. It is therefore imperative to ensure that their owners provide fair compensation to content creators and rights holders.
Additionally, solid guarantees must be demanded concerning the quality, pluralism and reliability of the information disseminated. This becomes even more crucial as media entities start to form their initial partnerships with AI providers and engage in legal battles with tech companies over copyright infringement.
The media stand at a crossroads. Used ethically and discerningly, AI offers unprecedented opportunities to enrich our understanding of a complex world. As deepfakes potentially amplify disinformation and erode public trust in all audiovisual content, and language models promise increased productivity at the expense of information integrity, this charter affirms an approach where human discernment and journalistic ethics are the pillars of journalism’s social trust function.
In a noisy world, there are only two ways to gain attention: extort it or earn it. Social media, aided by recommendation algorithms, chose the former, with known consequences in terms of misinformation and the polarization of opinion. In a field where anything goes, quality journalism has no chance unless it abandons its defining traits: the pursuit of factual truth, nuance and impartiality.
The media must therefore earn our attention by focusing their practice on trust, authenticity and human experience.
We encourage media and information professionals to embrace the principles of the Paris Charter on AI and Journalism.
Charlie Beckett, professor at the London School of Economics (LSE) and director of the LSE Journalism and AI Project.
Christophe Deloire, secretary-general at Reporters Without Borders and chair of the Forum on Information and Democracy.
Gary Marcus, founder and CEO of the Center for the Advancement of Trustworthy AI and professor emeritus at New York University.
Maria Ressa, 2021 Nobel Peace Prize laureate, journalist and cofounder of Rappler media, chair of the Committee of the Paris Charter on AI and Journalism.
Stuart Russell, distinguished professor of computer science at the University of California, Berkeley and founder of the Center for Human-Compatible AI.
Anya Schiffrin, senior lecturer in the discipline of international and public affairs, Columbia University School of International and Public Affairs.