Misinformation
and disinformation are pressing issues in a polarised global environment,
where the weaponisation
of information directly threatens democratic processes. Building society's resilience to the dissemination of harmful misinformation is therefore a key strategy for strengthening democracy.
While the Federal Government has proposed various
legislative approaches, few actions have progressed and Australians remain highly concerned (p. 17) about
the prevalence
of online misinformation. Accordingly, the issue is likely to remain relevant to the Parliament as further responses are considered.
Not a new issue
The harms caused by misinformation have been acknowledged
across the political spectrum. In 2022, the former Morrison
Government committed to ‘introduce legislation… to combat harmful
misinformation and disinformation online’. The task was picked up by the subsequent
Albanese Government with its (ultimately discontinued) Communications
Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.
Both governments’ proposals responded to the Australian Communications and
Media Authority’s (ACMA) 2021
review of the Australian
Code of Practice on Disinformation and Misinformation, which recognised shortcomings
in the voluntary industry code.
Since this initial report, which has been followed by 2
more,
the perceived harms of misinformation have only increased. According to the
University of Canberra’s 2025 Digital
News Report (p. 17), Australians are now the most concerned about misinformation on the internet out of the 42 countries surveyed. The level of concern rose
from 64% of survey respondents in 2021 to 74% in 2025.
Recent government action
In 2024, the Government proposed 2 legislative approaches to
combatting harmful misinformation and disinformation online: the Combatting
Misinformation and Disinformation Bill 2024 and the ‘truth
in political advertising’ Bill. The Government decided
not to proceed with the former after it became clear the Bill lacked sufficient support, while
the latter lapsed at the dissolution of Parliament and may
be reintroduced.
Discussions
and critiques
of the Combatting Misinformation and Disinformation Bill were published widely
over the course of its debate, and the Senate
inquiry into the Bill received 105
public submissions. Concerns primarily centred on the Bill’s potential to censor
legitimate speech and content. For example, definitions of ‘misinformation’ and
‘disinformation’ – which included opinions and commentary – were regarded as
overly broad and ambiguous, while high penalties could lead
platforms to 'over-censor' content. The Parliamentary
Joint Committee on Human Rights also cautioned that 'questions remain as
to whether the scheme would constitute a proportionate limit on the right to
freedom of expression and the right to privacy in practice’ (p. 90).
Possible actions
In the absence of sufficient support for the above measures,
which in effect targeted specific online content, other policies
could be pursued that address the underlying social and systemic conditions
that allow misinformation to thrive.
As noted in the Combatting Misinformation and Disinformation Bill, verifiably false information does not in itself constitute misinformation; among other criteria, it is the dissemination of content containing such information that creates the risk of harm. That risk is therefore affected by how content
is shared, to whom, and in what context. A regulatory focus on the conditions
facilitating misinformation is thus well placed to address harms while minimising the risks to human rights, such as freedom of expression, that a content-focused approach poses.
In line with advice
from the United Nations (UN) (pp. 6–11), potential pathways to tackling
harmful misinformation include 'regulatory approaches focus[ed] on transparency';
‘promoting robust public information regimes and wide-ranging access to
information’; ‘protecting free and independent media and dialogue with
communities’; and ‘building digital, media and information literacy’.
Transparency obligations
The Combatting Misinformation and Disinformation Bill proposed
that the ACMA develop ‘digital platform rules’ to improve transparency for matters
including:
- risk management
- media literacy plans
- complaints and dispute handling
- record keeping
- data access schemes.
While these measures garnered a range
of views (pp. 59–74), they were generally less
controversial than the proposed codes and standards, which would in effect have required the removal of certain harmful content from online platforms, and could be progressed as standalone measures. Stakeholders
suggested that transparency measures would increase the accountability of
platforms to address harmful misinformation, shed greater light on the extent
of the issue, and empower users to critically engage with content (p. 64).
Transparency measures could also target platform
design; for example, by making recommender
algorithms transparent and clearly labelling bot accounts – a measure recommended
by Senator David Pocock (pp. 130–131). Many researchers have identified the role of recommender
algorithms (p. 18) and automated bots in boosting the spread of online misinformation. Specifically, recommender
algorithms often prioritise popular information that encourages
engagement, without consideration of factual accuracy. As algorithms and bots
are non-human actors, they
are not afforded human rights such as freedom of expression, making
regulation more feasible (pp. 130–131).
Similarly, regulation of generative artificial intelligence – including transparency around the source or provenance of information – may mitigate the risks of AI generating and perpetuating false information.
Promoting access to information and increasing media literacy
Research shows that false information often spreads when
there is a lack of
easily understandable or consistent
factual information. Conversely, timely and accessible factual information – such as government and health advice in simple English, community languages, and other accessible forms (pp. 2–3) – can counter the spread of misinformation. Similarly, well-funded
local journalism can build community trust in robust information sources
and ensure that stories are accurately communicated (pp. 17–22). Finally,
media
literacy is an important tool to improve awareness
of, and resilience to, misinformation (pp. 87–88). Recent initiatives in this
area include funding
for a National Media Literacy Strategy (p. 285) and a
dedicated media literacy resource added to the Australian Curriculum.
The re-elected Labor Government’s plan for addressing
misinformation is not yet clear, but the issue remains one of public concern. While legislative frameworks for removing specific online content lacked support, the alternative measures outlined above, which address the underlying drivers of misinformation, may prove more viable.