Key issues
- ‘Information disorder’ describes the collective issues of misinformation, disinformation and malinformation, and the broad ways in which the information space is ‘polluted’.
- Information disorder has been described as a ‘vicious cycle’: it reflects broader societal issues and, in turn, perpetuates and entrenches them.
- Information disorder undermines the right to political participation, threatens social cohesion, amplifies social and political tensions and divisions, and disrupts people’s ability to make informed decisions (p. 3).
- Digital platforms and tools facilitate information disorder through:
  - increased speed and scale of content creation and dissemination
  - commodification of content
  - design features, including recommender algorithms
  - artificial intelligence.
- Increased transparency and risk-based regulation can hold platforms to account while protecting freedom of expression.
- Strengthening the broader information environment and championing information integrity are key to combating information disorder.
Introduction
An informed public is crucial to a well-functioning democracy. Yet, against a background of high political tension and diminishing trust, information disorder is undermining this foundation. Misinformation, disinformation and malinformation are threatening social cohesion, increasing polarisation and creating confusion. Addressing information disorder and strengthening information integrity are critical concerns for Parliament, and for the health of Australia’s democracy.
Definitions
Information
disorder is a term and conceptual framework coined by media researchers Claire Wardle
and Hossein Derakhshan – and increasingly used by experts and key
stakeholders – to describe the collective challenges of misinformation,
disinformation and malinformation. Rather than focusing solely on whether content is ‘true’ or ‘false’, the framework considers the broader ways in which the online information space is ‘polluted’, skewed and manipulated, and the social harms that may result.
Figure 1: 7 types of misinformation and disinformation
Source: Adapted from Claire Wardle, ‘Understanding information disorder’, First Draft, 22 September 2020.
The broad framework of information disorder may be applied to concepts such as algorithmically driven echo chambers, ‘astroturfing’, opaque and biased recommender algorithms, propaganda, and AI-driven content and campaigns. Emerging concerns include subtle inaccuracies in AI-generated content (‘careless speech’), where ‘accuracy’ and ‘intent’ are conceptually different from those of human-generated information.
Different types of information disorder are distinct and
warrant individual focus and research. As media historian Caroline Jack notes,
precise definitions matter:
The words we choose to describe media
manipulation can lead to assumptions about how information spreads, who spreads
it, and who receives it. These assumptions can shape what kinds of
interventions or solutions seem desirable, appropriate, or even possible (p. 1).
Yet it can be difficult to apply these definitions accurately to particular instances of disseminated content: intent and veracity can be hard, if not impossible (paragraph 10), to ascertain, and often depend on the perspective of the observer (p. 4).
Given the complexity and overlap of different forms of information
manipulation, considering the issue of information disorder more broadly –
without dismissing the nuances of different types of manipulation – may help
identify trends and structural issues that cut across these different forms.
How does information disorder arise?
Societal conditions
Information disorder is not a new phenomenon. Truth and
lies, spin and rhetoric have been a feature of public discourse for centuries.
Yet, the issue has intensified in recent years, prompting some to describe contemporary
society as a ‘post‑truth’
world.
Research shows information disorder is a complex issue that
is symptomatic of broader
societal challenges including declining social capital, civic engagement
and trust in science, as well as increasing inequality, political
alienation and polarisation
(p. 20).
Information disorder has been described as a ‘vicious cycle’: it reflects societal issues and, in turn, perpetuates and entrenches them. The speed of this cycle has been greatly accelerated by a changing
media environment underpinned by digital
platforms and tools. Understanding this dynamic is essential for developing
effective policy responses.
The role of online platforms
Digital platforms have transformed the information space.
Information can now be created and disseminated at extreme speed and scale, engagement has become a prime measure of information’s value, and a handful of private companies wield outsized influence over the information ecosystem.
As communications
scholar James W. Carey has observed, communication is not just about the
transmission of information but has a ‘ritualistic’ function, acting as a
vector for people to exhibit and negotiate values and identity. Social media accentuates
this performative role of communication (pp. 43–44), fuelling a culture of reaction
and sharing that contributes to the dynamics of information disorder.
Online platforms are not neutral. While the internet was once touted for its potential to democratise information, in reality the information presented to the public is skewed by commercial interests, and platform design enables manipulation by actors not working in the public interest. A few core businesses – including, for
example, Alphabet (Google) and Meta (Facebook and Instagram) – dominate
the market, shaping what people conceive of as ‘the internet’ and mediating
what information is made available and how.
Much of the information on the internet may be better conceived of as ‘content’ within an ‘attention economy’ that prioritises user engagement and revenue. Platforms –
and the algorithms that underpin them – are often geared towards ‘sensational,
emotive, [and] controversial’ content that attracts clicks and reactions, regardless
of the factual nature of the information. Recommender
systems have also been found to promote content that conforms to a user’s
previous behaviour and preferences and excludes
‘different viewpoints or valuable ideas contrary to a person’s existing
beliefs’ (p. 5). While the emergence
of alternative platforms may chip away at the dominance of ‘Big Tech’,
there is a fear that it could lead
to fragmented partisan communities.
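To make this dynamic concrete, the short Python sketch below ranks candidate posts purely by predicted engagement and similarity to a user’s past interests. It is a deliberately minimal toy, not any platform’s actual system: the weights, topic labels and sample posts are invented for illustration. The structural point is that accuracy appears nowhere in the scoring objective, so emotive content that confirms existing preferences rises to the top.

```python
# Toy illustration only: ranks posts by predicted engagement and
# similarity to past behaviour. Not any real platform's algorithm;
# all data and weights are invented. Note that accuracy appears
# nowhere in the scoring objective.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: set[str]          # coarse topic labels
    emotive_score: float      # 0..1, how sensational/emotive
    is_accurate: bool         # known to us, invisible to the ranker

def rank(posts: list[Post], past_topics: set[str]) -> list[Post]:
    def score(p: Post) -> float:
        # Engagement proxy: emotive content attracts clicks and reactions.
        engagement = p.emotive_score
        # Confirmation proxy: overlap with the user's prior interests.
        similarity = len(p.topics & past_topics) / max(len(p.topics), 1)
        return 0.6 * engagement + 0.4 * similarity  # accuracy unused

    return sorted(posts, key=score, reverse=True)

feed = rank(
    [
        Post("Shocking cover-up exposed!", {"politics"}, 0.9, False),
        Post("Budget office releases costing report", {"politics"}, 0.2, True),
        Post("New study on local air quality", {"science"}, 0.3, True),
    ],
    past_topics={"politics"},
)
for post in feed:
    print(post.text)
# The false but emotive post ranks first; the accurate report ranks below it.
```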
Artificial intelligence (AI) is further changing how
information is created and disseminated online. Generative AI can create highly
plausible synthetic media (including
disinformation) quickly, cheaply and at scale. Information can be targeted
to local contexts, AI-driven bots and followers can be purchased, and whole
production chains can be automated. AI chatbots and AI summaries are also altering how people search for information online, displacing traditional news outlets and transforming search engines into ‘answer engines’. As predictive machines, generative AI models have no gauge for ‘veracity’; they merely predict ‘likely word combinations’. Large language
models reproduce
biases in underlying training data and often fail to understand nuances in
sources. They can create outputs that ‘look
plausible but are far from it’, confidently presenting potentially
misleading or incorrect
information. Some researchers suggest that these are fundamental limitations
in the structure
and design of AI, and that current ‘safeguards’ are superficial
and ineffective.
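The ‘likely word combinations’ point can be illustrated with a toy model. The sketch below, using only the Python standard library, builds a word-pair (bigram) predictor from a tiny invented training text and samples continuations by frequency. Nothing in the procedure represents truth: because a falsehood dominates the training data, it dominates the output. This is a caricature, at miniature scale, of the limitation researchers describe in large language models.

```python
# Toy next-word predictor: illustrative only, not a real LLM.
# It samples continuations by frequency in its training text; there
# is no notion of 'true' or 'false', only 'statistically likely'.

import random
from collections import Counter, defaultdict

training_text = (
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon is made of rock ."
).split()

# Count which word follows which (a bigram model).
following: dict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def continue_text(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Pick proportionally to how often each continuation was seen.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))
# About two runs in three print 'the moon is made of cheese .':
# the falsehood dominates the training data, so it dominates the output.
```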
Social harms and democratic risks
Information disorder is now considered a key concern
worldwide. In 2024, the World
Economic Forum named misinformation and disinformation as the ‘biggest
short-term risks’ to global society. In his 2025
Annual Threat Assessment, ASIO’s Director-General Mike Burgess
highlighted the role of social media and online echo chambers, inflamed by misinformation
and disinformation, in spreading and cultivating political tensions,
conspiracies and grievances.
The precise impact of information disorder online is difficult to measure, and some argue that concerns about it may be overblown. However, the cumulative effect of information disorder on social
cohesion, public opinion and democratic processes appears clear. If ‘a
functioning democracy relies on a well-informed public’, then a ‘pervasively
misinformed’ public
will lead to poor quality societal decisions (p. 354). As the UN Secretary-General noted in 2022,
disinformation may be ‘undermining the right to political participation’, pose
threats to ‘inclusion and social cohesion’, ‘amplify tensions and divisions’,
and ‘affect the full range of human rights by disrupting people’s ability to
make informed decisions’ (p. 3).
Recommender algorithms, compounded by confirmation bias,
have been found to amplify and spread misinformation at scale, contributing to adverse
health effects and real-world violence (see Box 2). Algorithms can make previously
fringe or politically
extreme content easier to access and more mainstream. This can contribute
to and overlap
with hate speech. For example, investigations have found that Facebook’s
lack of content moderation in Myanmar enabled
the spread of ‘hateful and divisive rhetoric’ including misinformation (p. 339),
contributing
to serious human rights impacts (p. 45). With many major platforms winding
back content moderation and safety features, these risks are increasing.
Platforms and the politics of attention are vulnerable to
manipulation by malicious actors and coordinated operations. ‘AI
slop’, trolls, bots
and scams are polluting platforms. There are fears that malicious actors may
exploit the design of platforms and AI capabilities to deliberately ‘flood the
zone’ with biased or non-factual information – skewing narratives, stifling genuine debate,
overwhelming audiences and causing confusion. There are many
instances in Australia of fringe hyper-partisan accounts, amplified by what
appear to be bots or sock
puppet accounts, leading campaigns to make
misleading hashtags trend. These campaigns drown out genuine information
and debate. Access to vast amounts of user
data also facilitates hyper-targeted
communications or advertising, increasing the potential for manipulation
and erosion of trust.
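As a concrete illustration of hyper-targeting, the toy sketch below filters invented user profiles against simple segment rules so that a tailored message reaches only users inferred to be receptive. Real targeting systems are far more sophisticated, but the selection logic is similar, and the narrowness of delivery is what makes such messaging hard for the wider public or regulators to scrutinise.

```python
# Toy illustration of hyper-targeting: invented data and rules,
# not any platform's actual targeting system.

from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: int
    region: str
    interests: set[str]       # inferred from browsing/engagement
    distrusts_media: bool     # inferred attitude signal

users = [
    UserProfile(1, "QLD", {"politics", "fishing"}, True),
    UserProfile(2, "VIC", {"cooking"}, False),
    UserProfile(3, "QLD", {"politics"}, True),
]

def select_segment(profiles: list[UserProfile]) -> list[UserProfile]:
    """Select users inferred to be receptive to a tailored message."""
    return [
        u for u in profiles
        if u.region == "QLD"
        and "politics" in u.interests
        and u.distrusts_media
    ]

for user in select_segment(users):
    print(f"deliver tailored message to user {user.user_id}")
# Only users 1 and 3 receive the message: narrow delivery means the
# wider public (and regulators) may never see what was claimed.
```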
Information disorder abounds in response to data voids and news
deserts, with misleading or false content propagating in places where there
may be a lack of relevant information. This phenomenon is particularly
prevalent in response to breaking
news events (p. 16), where false
or unverified information and conspiracy
thinking quickly spreads before quality or complete information has time to
emerge. Breaking events also give rise to ‘disaster
disinformation’, where disinformation is shared online after disaster
events, feeding on people’s heightened emotions
and desire for information.
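A minimal sketch of the data-void mechanism, using an invented mini ‘search index’: when no authoritative content yet exists for a query, naive retrieval surfaces whatever has been published, however unreliable.

```python
# Toy illustration of a 'data void': an invented mini search index.
# When no authoritative content exists for a query yet, whatever has
# been published, however unreliable, fills the results.

documents = [
    {"text": "BREAKING: chemical plume is a government cover-up",
     "source": "anonymous forum"},
    {"text": "What they don't want you to know about the plume",
     "source": "partisan blog"},
    # Verified reporting on the event has not been published yet.
    {"text": "Annual report on city air quality standards",
     "source": "news outlet"},
]

def search(query: str) -> list[dict]:
    """Naive keyword match: returns every document mentioning the term."""
    return [d for d in documents if query.lower() in d["text"].lower()]

for result in search("plume"):
    print(f'{result["source"]}: {result["text"]}')
# Only the forum post and the blog match: the void left by slower,
# verified reporting is filled entirely by unverified content.
```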
Foreign
information manipulation and interference (FIMI) is also a key concern
around the world. While FIMI did
not prove to be an issue in the 2025 Australian election, there has been
clear evidence of interference in international
elections in 2025, suggesting that Australia may not be immune.
How to address the issue?
Transparency and accountability
Balancing human rights is a central consideration in information
regulation. The UN warns that content-based
regulation risks unreasonably restricting freedom of expression and protected
speech (p. 12). Instead, increased platform transparency is the approach widely recommended by global and national bodies, including the UN
(p. 6) and the Australian Parliament’s 2023
Senate Inquiry into Foreign Interference through Social Media.
Transparency of platform content and complaints – including consistent
and localised datasets (p. 1) – is critical for researchers, policymakers
and law enforcement to better understand the nature and extent of the issue and
to respond effectively and appropriately. It also increases the accountability
of platforms in addressing harmful misinformation.
Built-in transparency features – such as flagging
inauthentic accounts or AI-generated content, transparency
of advertisements and algorithms, and accurate source attribution – help the
public critically assess content and make better informed decisions.
However, balancing the needs for transparency and encrypted
services remains a conundrum
for digital regulation. This is especially relevant to information
disorder, as many fringe and anti-institutional groups, who may be susceptible
to misinformation, avoid
mainstream platforms in favour of encrypted platforms or private
communication channels.
Some jurisdictions, notably the European
Union (EU), have adopted risk-based regulation that requires certain large
platforms to undertake risk assessments related to misinformation and
disinformation and to take appropriate mitigation measures (see Articles 34 and
35). The EU’s Artificial
Intelligence Act applies a similar risk-based approach. Unlike arrangements in Australia, these regulatory regimes include enforcement powers. However, some platforms are finding ways to avoid
compliance. Indeed, as noted earlier, content moderation and safety measures
across several major digital platforms have
been rolled back in recent years (pp. 10–11).
Information integrity and resilience
Strengthening the broader information environment and championing information
integrity are also key to combating information disorder. Fundamental to
this is a well-supported, independent
and diverse media – the ‘fourth
estate’ of democracy. Governments and media should provide timely, factual information in accessible forms, including plain English (pp. 2–3). In Australia’s multicultural
context, non-English
language media must be considered in both
regulatory frameworks and content creation.
Efforts should
also address the ‘demand-side’
of information disorder – that is, the ‘societal vulnerabilities that
make individuals susceptible to false or misleading content’. This includes
promoting media
literacy, ‘prebunking’
strategies to ‘inoculate’ the public against misinformation
and understanding why
certain people may be inclined to seek out misinformation or engage
with conspiratorial thinking. Research links
these tendencies to various societal inequalities, political alienation,
and declining trust in democratic institutions. Stakeholders suggest that boosting
civic engagement and increasing transparency
of government are key to rebuilding public trust. Countries with
stable and trusted institutions and access to independent information have
been found to be the most resilient to misinformation (p. 15).
Parliament’s role
While there have been efforts
in Australia and internationally to combat the harms caused by information
disorder, it is clear that more action is needed, and soon.
In addressing the issue of information disorder, Parliament may
need to move beyond a narrow focus on misinformation and disinformation, where
‘truth’ and ‘intent’ are hard to gauge, and consider the broader challenge of a
polluted information space. Focusing on structural and systemic approaches – rather than individual pieces of content – can address the roots of the issue without
infringing on protected speech. Protecting the integrity of the information environment
is key to protecting freedom of expression, democracy and social cohesion.
Further reading
- United Nations (UN), United Nations Global Principles for Information Integrity: Recommendations for Multi-stakeholder Action (New York: UN, 2024).
- Jon Bateman and Dean Jackson, Countering Disinformation Effectively: An Evidence-Based Policy Guide (Washington: Carnegie Endowment for International Peace, 31 January 2024).
- Australian Communications and Media Authority (ACMA), ‘Online disinformation and misinformation’, ACMA website, last updated 24 September 2025.
- UK House of Commons, Science, Innovation and Technology Committee, Social Media, Misinformation and Harmful Algorithms, HC 441, 11 July 2024.
- Irene Khan (Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression), Disinformation and Freedom of Opinion and Expression, A/HRC/47/25 (United Nations Human Rights Council, 13 April 2021).
- Nell Fraser, ‘What’s next for misinformation regulation?’, FlagPost (Canberra: Parliamentary Library, 2 July 2025).