
Over the past two decades, technologies have
emerged that both amplify and stymie freedom of
expression. In just the last four years, rapid technological advancements have accelerated this trend.
Yet these tools do not operate in a vacuum: their
design, deployment, and governance are shaped by
the economic and political interests of the companies
that control them. As a result, these advances are simultaneously expanding and constraining public discourse.
The use of these technologies is reshaping journalism
and open debate, pioneering new forms of expression.
The use of generative artificial intelligence (AI) has
amplified content creation, but its ability to produce
synthetic or altered material, coupled with rapid digital dissemination, has disrupted information integrity
at scale. Paradoxically, despite the risks, AI-powered
platforms are increasingly used by the public and
in some cases are seen as more trustworthy than
traditional news outlets.
AI sits today at a volatile intersection for our information ecosystems and media landscape: on the one hand, it is championed as a groundbreaking tool poised to revolutionize journalistic work; on the other, it
casts an unsettling shadow as a profound challenge
to freedom of expression and the very livelihood of
journalists.
The ‘Forty Acres and a Mule’ initiative in the United
States marked one of the most ambitious attempts
at reparations for slavery, granting land to formerly
enslaved Black individuals, who then established
communities.248 This promising start was cut short
in 1865 after President Lincoln’s assassination,
when President Andrew Johnson reversed the policy,
returning the land to former enslavers.
Decades later, a dedicated team of researchers and
journalists used artificial intelligence (AI) to unearth
this obscure history. Technology helped them identify
1,250 Black men and women who had received and
then lost land. Subsequent genealogical research
connected them with living descendants, informing
them about the specific land their ancestors had been
granted and then lost. Documented across a series
of articles and podcasts,249 this groundbreaking work
was a finalist for the 2025 Pulitzer Prize, standing out
because of its explicit disclosure of AI methodology.

Like so many aspects of society, journalism is
being fundamentally transformed by AI tools. This
technology excels at unearthing crucial patterns
and insights from vast datasets, even extracting
information from historical and handwritten documents. It has been used to analyze satellite data
and drone footage to detect mass graves and
document the activities of armed groups.
Moreover, the use of AI has helped pioneer new forms
of expression, enabling everything from political
critique and satire to exposing systemic brutality
and violence, while also safeguarding vulnerable
individuals through applications such as Amnesty
International’s AI-generated imagery, which protects
the anonymity of protestors.

For all the power demonstrated by these examples,
the increased use of this technology is nevertheless widely perceived as a major risk to journalistic
practice, information integrity, and the availability
of accurate, reliable, and diverse information. This
perception is substantiated by the final report from
the United Nations Secretary-General’s High-level
Advisory Body on AI, which, in its comprehensive
assessment of expert risk perceptions, highlights
damage to information integrity as a substantial
threat across all societies and domains.

In particular, the rise of generative AI, which has augmented the ability to produce potentially harmful synthetic or altered content – combined with its widespread dissemination on digital platforms – has led
to a disruption in information integrity at scale. As an
ever-growing percentage of online content originates
from AI systems, there is increasing confusion regarding its authorship, source, and authenticity. Adding
to this problem is the proliferation of so-called ‘AI
slop’: vast amounts of easily created low-value
content – sometimes generated automatically by
bots – that flood digital spaces and make it harder
for authoritative information to be recognized. Compounding the challenge, synthetic outputs often
contain subtle errors which, once recycled into
future training data, gradually degrade information
quality and amplify hallucinations across successive
generations of AI.
Consider, for example, the latest iteration of
AI-generated video clips. These videos, which can
include dialogue, soundtracks, and sound effects,
are now nearly indistinguishable from those made
by human filmmakers and actors. In one striking
instance during a national election campaign,
an AI-generated deepfake video circulated online
showing the President of the United States endorsing a South African political party. The video was
convincing enough to persuade some viewers of its
authenticity. Experts caution that if such videos are
shared on social media with misleading captions
in the heat of a breaking news event, they could
plausibly stoke political division, social unrest, and
even violence. This risk is compounded by the anthropomorphic design of many
AI chat interfaces, which invite users to interact
as though with trustworthy humans rather than
programmed systems – an interactional style that
risks undermining public confidence in professional
journalism and authentic reporting.

Even amid widespread concerns regarding the
proliferation and impact of fabricated and misleading information, a striking paradox has emerged:
AI-powered platforms are, in some cases, perceived as
more trustworthy than traditional news outlets.
This shift in trust is further underscored by users’
tendency to uncritically accept content recommended
by AI systems, even when faced with inherent biases,
inaccuracies, or confabulations – a phenomenon
sometimes termed AI over-reliance. Even advanced
AI systems often produce incorrect or nonsensical
outputs,255 which – if not questioned or investigated
– could lead to erroneous decisions in important,
real-world contexts. This issue is not limited to the
average user: it also affects governments, institutions,
legal professionals, and journalists. In recent cases,
court filings in different regions have included fabricated legal citations, with some submissions citing
dozens of non-existent precedents generated by AI
tools. These incidents prompted disciplinary action
and formal warnings from judicial authorities, underscoring the urgent need for human oversight and
verification when relying on AI-generated content. A final, and arguably paramount concern addresses
the issues of diversity and algorithmic bias.
The foundational training of most advanced AI systems utilizes datasets overwhelmingly derived from English-language content and data originating from the Global North, thereby embedding an inherent bias. This methodology actively reinforces the values, interests, and predispositions of dominant cultural and political groups, concurrently marginalizing the linguistic, historical, and cultural characteristics of other communities. The cumulative effect is a discernible erosion of pluralism, which constricts the spectrum of voices and perspectives and impedes robust public discourse.
This phenomenon is increasingly well-documented, with abundant examples of misrepresentation and bias against women, various
religious groups, and vulnerable communities.
However, its multifaceted and pervasive nature
makes it difficult to track comprehensively, as it is
often deeply embedded in culture and language.
Illustratively, researchers found that when a popular chat application generated completions for
the prompt ‘Two Muslims walked into a…’, 66%
included violent language, a frequency three times
greater than for other religious groups. Furthermore, similar entrenched biases have been identified
against people with disabilities, for example, by
inferring negative sentiment from sentences that
incorporate disability-related terms.

This chapter begins with a simple fact: AI is here to
stay, and it is already widely used and recognized.
It explores the technology’s potential to strengthen
freedom of expression and expand access to information, while also examining how its growing role in
moderating and curating content is shaping public
narratives – and, in turn, our understanding of the
world. The chapter concludes by addressing the
challenges this evolving landscape poses for the
future of journalism.