Artificial intelligence (AI) is revolutionizing the gathering, processing and dissemination of information. The outcome of this revolution will depend on our technological choices. To ensure that AI supports the right to information, we at Reporters Without Borders think that ethics must govern technological innovation in the news and information media.
AI is radically transforming the world of journalism. How can we ensure information integrity when most Web content will be AI-generated? How do we maintain editorial independence when opaque language models, driven by private-sector interests or arbitrary criteria, are used by newsrooms? How can we prevent the fragmentation of the information ecosystem into numerous streams fueled by chatbots?
Predicting the full extent of AI’s impacts on the media is a challenging task. Yet one thing is clear: Innovation does not automatically lead to progress. It must be accompanied by sensible regulation and ethical guardrails to truly benefit humanity.
History offers numerous examples, such as the ban on human cloning, nuclear non-proliferation treaties and drug safety controls, where technological development has been responsibly curtailed, regulated or directed in the name of ethics. Likewise, in journalism, innovation should be governed by clear ethical rules. This is crucial to protect the right to information, which underpins our fundamental freedoms of opinion and expression.
In the summer of last year, Reporters Without Borders convened an international commission to draft what became the first global ethical reference for media in the AI era. The commission brought together 32 prominent figures from 20 countries, specialists in journalism or AI. It was chaired by none other than Maria Ressa, winner of the 2021 Nobel Peace Prize, who embodies both the challenges of press freedom and the commitment to addressing technological upheavals (she denounced the “invisible atomic bomb” of digital technology from the podium in Oslo).
The goal was clear: Establish a set of fundamental ethical principles to protect information integrity in the AI era, as these technologies transform the media industry. After five months of meetings, 700 comments and an international consultation, the discussions revealed consensus and differences. Aligning the views of journalism defense non-governmental organizations, media organizations, investigative journalism consortia and a major journalists’ federation was challenging — but an unprecedented alliance gathered around this digital table.
In response to the upheavals caused by AI in the information arena, the charter that was published in Paris in November last year outlines 10 essential principles to ensure information integrity and preserve journalism’s social function. It is crucial that the international community cooperates to ensure that AI systems uphold human rights and democracy, but this does not absolve journalism of particular ethical and professional responsibilities in using these technologies.
Of the charter’s core principles, we will mention just four.
First, ethics must guide technological choices in the media. The pace of adopting one of history’s most transformative technologies should not be dictated by the pressure of economic competition. Polls suggest that an overwhelming majority of citizens would prefer a slower, safer deployment of AI. Let us listen to them.
Second, human judgement must remain central in editorial decisions. Generative AI systems are more than mere tools; they acquire a form of agency and interfere with our intentions. Though lacking will, AI is full of certainties, reflecting its data and training process. Each automated decision is a missed opportunity for human judgement. We aspire to augmented journalism, not diminished human judgement.
Third, the media must help society confidently discern authentic content from synthetic content. Generative AI, more than any past technology, is capable of crafting the illusion of facts and the artifice of evidence, and the media bear a special responsibility to help the public separate fact from fiction. Trust is built, not decreed. Source verification, evidence authentication, content traceability and editorial responsibility are crucial in the AI era. To avoid contributing to general confusion, the media must maintain a clear distinction between authentic material (captured in the real world) and synthetic material (generated or significantly altered by AI).
Finally, in their negotiations with technology companies, media outlets and rights holders should prioritize journalism’s societal mission, placing public interest above private profit. Chatbots are likely to become a primary method for accessing news in the near future. It is therefore imperative to ensure that their owners provide fair compensation to content creators and rights holders.
Additionally, solid guarantees must be demanded concerning the quality, pluralism and reliability of the information disseminated. This becomes even more crucial as media entities start to form their initial partnerships with AI providers and engage in legal battles with tech companies over copyright infringement.
The media stand at a crossroads. Used ethically and discerningly, AI offers unprecedented opportunities to enrich our understanding of a complex world. As deepfakes threaten to amplify disinformation and erode public trust in all audiovisual content, and language models promise increased productivity at the expense of information integrity, this charter affirms an approach in which human discernment and journalistic ethics remain the pillars of journalism’s social function: earning and keeping public trust.
In a noisy world, there are only two ways to gain attention: extort it or earn it. Social media, aided by recommendation algorithms, chose the former, with known consequences in terms of misinformation and the polarization of opinion. In a field where anything goes, quality journalism has no chance unless it abandons its defining traits: the pursuit of factual truth, nuance and impartiality.
The media must therefore earn our attention by focusing their practice on trust, authenticity and human experience.
We encourage media and information professionals to embrace the principles of the Paris Charter on AI and Journalism.
Charlie Beckett, professor at the London School of Economics (LSE) and director of the LSE Journalism and AI Project.
Christophe Deloire, secretary-general at Reporters Without Borders and chair of the Forum on Information and Democracy.
Gary Marcus, founder and CEO of the Center for the Advancement of Trustworthy AI and professor emeritus at New York University.
Maria Ressa, 2021 Nobel Peace Prize laureate, journalist and cofounder of Rappler media, chair of the Committee of the Paris Charter on AI and Journalism.
Stuart Russell, distinguished professor of computer science at the University of California, Berkeley and founder of the Center for Human-Compatible AI.
Anya Schiffrin, senior lecturer in discipline of international and public affairs, Columbia University School of International and Public Affairs.