Appendix 3 - Digital platform actions in relation to online misinformation and disinformation


Google

1.1 Google has 'teams of experts around the world working in the fight against misinformation'. Since 2021, Google has had a Safety Engineering Center for Content Responsibility in Dublin, which it describes as 'a regional hub for Google experts working to combat the spread of illegal and harmful content'.[1]

1.2 Google's Jigsaw unit has collaborated with experts to study the effectiveness of prebunking—a technique designed to help people build resilience to misleading narratives before they encounter them. In particular, it alerts individuals to the false claims or tactics likely to be used in attempts to manipulate them, and refutes those claims or tactics.[2]

1.3 In October 2021, Google announced a new monetisation policy for Google advertisers, publishers and YouTube creators that prohibited ads for, and monetisation of, content contradicting 'well-established scientific consensus around the existence and causes of climate change'.[3]

1.4 However, according to the Center for Countering Digital Hate in January 2023, Google profited 'by running ads on search results promoting … climate disinformation' on the Daily Wire media outlet.[4] Further, a 2023 report from the Center for Countering Digital Hate and the Climate Action Against Disinformation coalition argued that 'YouTube is breaking its promise not to profit from ads on climate denial content', with researchers identifying 100 videos that breached the policy yet carried ads.[5]

1.5 Google Drive, Google Docs and Google Maps all have policies that prohibit users from distributing content that 'deceives, misleads or confuses users', including misleading content related to civic and democratic processes.[6]

1.6 Google informed the committee that its approach to managing misinformation and disinformation (mis/disinformation) involves three strategies: 'firstly, making quality count; secondly, counteracting malicious actors; and, thirdly, giving users more context'.[7] Google acknowledged that it does not currently employ, and has never employed, fact-checking as a strategy against mis/disinformation, but instead focuses its efforts 'on a range of issues relating to deceptive practices', including 'the exploitation of AI through deepfakes'.[8]

YouTube

1.7 Google-owned YouTube's misinformation policy states that certain 'types of misleading or deceptive content with serious risk of egregious harm are not allowed on YouTube'. This includes some types of misinformation that could cause real-world harm, content interfering with democratic processes, and some types of technically manipulated content. Users are able to report content violating the policy.[9]

1.8 Users who violate the Community Guidelines for the first time will likely receive a warning without penalty to their channel, with the option to undertake policy training that allows the warning to expire after 90 days (with the 90-day period starting from when the training is completed). If the same policy is violated within that 90-day period, the channel may be given a strike. If a channel receives three strikes within 90 days, it may be terminated. YouTube may also terminate a channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation.[10]
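The enforcement process described above is effectively a small state machine. The following Python sketch restates those rules for illustration only: it is not YouTube's implementation, and details such as the training-based expiry of warnings, per-policy tracking of repeat violations and the exact handling of the 90-day window are simplified assumptions.

    from datetime import datetime, timedelta

    STRIKE_WINDOW = timedelta(days=90)  # assumed: strikes are counted over 90 days

    class Channel:
        """Hypothetical model of one channel's enforcement state."""

        def __init__(self) -> None:
            self.has_active_warning = False
            self.strikes: list[datetime] = []
            self.terminated = False

        def record_violation(self, when: datetime, severe: bool = False) -> str:
            if severe:
                # a single case of severe abuse may end the channel outright
                self.terminated = True
                return "terminated"
            if not self.has_active_warning and not self.strikes:
                # first violation: a warning with no penalty to the channel
                self.has_active_warning = True
                return "warning"
            # later violations: keep only strikes within the 90-day window
            self.strikes = [t for t in self.strikes if when - t < STRIKE_WINDOW]
            self.strikes.append(when)
            if len(self.strikes) >= 3:
                self.terminated = True  # three strikes within 90 days
                return "terminated"
            return f"strike {len(self.strikes)}"

    channel = Channel()
    start = datetime(2025, 1, 1)
    for offset in (0, 10, 20, 30):  # four violations in one month
        print(channel.record_violation(start + timedelta(days=offset)))
    # prints: warning, strike 1, strike 2, terminated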

LinkedIn

1.9 LinkedIn states that it removes 'specific claims, presented as fact, that are demonstrably false or substantially misleading and likely to cause harm'. It also removes 'content that is synthetic or manipulated in a way to misrepresent or distort real-life events without clear disclosure of the fake or altered nature of the material'. In addition, content 'that is false or substantially misleading but not likely to cause harm is not eligible for distribution beyond the author's network'.[11]

Meta

1.10 Meta's policy on misinformation (as at 4 December 2025) begins with the statement that misinformation 'is different from other types of speech addressed in our Community Standards because there is no way to articulate a comprehensive list of what is prohibited'. It notes that 'what is true one minute may not be true the next minute. People also have different levels of information about the world around them, and may believe something is true when it is not'.[12]

1.11 Meta states that it removes 'misinformation where it is likely to directly contribute to the risk of imminent physical harm' and 'content that is likely to directly contribute to interference with the functioning of political processes'. In both instances, Meta states that it partners 'with independent experts who possess knowledge and expertise to assess the truth of the content and whether it is likely to directly contribute to the risk of imminent harm'.[13]

1.12 Meta focuses 'on reducing its [mis/disinformation] prevalence or creating an environment that fosters a productive dialogue', noting that in 'some cases, people share deeply-held personal opinions that others consider false or share information that they believe to be true but others consider incomplete or misleading'.[14]

1.13 Previously, Meta partnered with international fact-checking organisations that reviewed and rated climate change content in various languages. This included information that experts say undermines the existence and impacts of climate change, misrepresents scientific data and mischaracterises mitigation and adaptation efforts.[15] Information identified as false had a warning label applied, and Facebook reduced the visibility of that content so that fewer users would see it.[16]

1.14 In Australia, fact-checking for Meta's platforms was conducted by AAP FactCheck and Agence France-Presse.[17] While RMIT FactLab was also a partner in 2024, it ceased operations in January 2025.[18]

1.15 In January 2025, Meta CEO Mark Zuckerberg announced that the company would abandon the use of fact checkers on its US platforms, Facebook and Instagram, replacing the system with a 'community notes' function under which assessment of a post's accuracy is left to the users of the platforms themselves.[19] However, Meta advised the Australian Government that it had 'no immediate plan' to end fact-checking on its platforms in Australia.[20]

Meta Climate Science Information Center

1.16 In September 2020, Meta acknowledged that the company had a role to play in curbing climate change mis/disinformation online and announced that it would establish a Climate Science Information Center to connect its users to factual climate information.[21]

1.17 The centre provides a space for facts, figures and data from leading climate organisations such as the Intergovernmental Panel on Climate Change (IPCC), the UN Environment Programme, the National Oceanic and Atmospheric Administration, the World Meteorological Organisation (WMO), the Met Office and others.[22]

1.18 However, questions have been raised about the future status of the Climate Science Information Center, given Meta's announcement that it would suspend fact-checking on its US platforms.[23]

TikTok

1.19 TikTok's policy on Integrity and Authenticity, effective 13 September 2025, states that it does not 'allow misinformation that could cause significant harm to individuals or society, no matter the intent of the person posting it'. This includes 'harmful conspiracy theories', 'hoaxes' and 'false information related to public safety, crises, or major civic events—when such content may lead to violence or cause public panic'. TikTok states that it works 'with independent fact-checkers and experts to assess the accuracy of content, and we factor their assessments into our moderation decisions'.[24]

1.20 However, TikTok does allow:

personal opinions that do not include harmful misinformation;

people sharing personal medical experiences, as long as they do not promote misinformation or discourage professional care;

conversations about climate policy, weather or technology, as long as they do not deny or misrepresent scientific consensus; and

documentary or educational content reporting on or condemning misinformation.[25]

1.21 According to the Australian Communications and Media Authority (ACMA), TikTok is the only platform that 'explicitly bans climate change misinformation in its policies'. The ACMA explained that these policies refer to 'disinformation and misinformation that undermines well-established scientific consensus, so that's the basis on which it would make decisions about whether it breached its policies'.[26]

X (formerly Twitter)

1.22 X's rules, as of 5 December 2025, stipulate that users may not:

use X's services in a manner intended to artificially amplify or suppress information;

use X's services for the purpose of manipulating or interfering in elections or other civic processes; or

deceptively share synthetic or manipulated media that are likely to cause harm, with X stating that it 'may label posts containing synthetic and manipulated media to help people understand their authenticity and to provide additional context'.[27]

Anthropic

1.23 Anthropic is the developer of the AI assistant Claude, which can both generate content and perform automated tasks. Anthropic is incorporated as a public benefit corporation; its stated purpose is the responsible development and maintenance of AI for the long-term benefit of humanity. Anthropic's constitution includes a set of principles that guide the development of Claude's functions and the corresponding safeguards. According to Anthropic's representative, Mr Evan Frondorf, these principles require Claude to be factually accurate, represent consensus where it exists and be honest about the limits of its knowledge, but 'still engage with a range of perspectives, where appropriate, on contested topics'.[28]

1.24 Mr Frondorf conceded that malicious misuse of AI models is possible, and said this is why Anthropic deploys a range of safeguards to reduce misuse occurring in the first place, and to detect and shut down instances where it has occurred. These include detection tools, known as classifiers, which screen model outputs for policy violations and then block responses or flag content for review by enforcement staff. For content or activity that is not stopped at this first stage, Anthropic maintains a threat intelligence team that follows up on external leads or indications of abuse.[29]
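The two-stage safeguard pipeline Mr Frondorf described can be illustrated with a brief sketch. The Python below is a hypothetical simplification, not Anthropic's implementation: the policy categories, thresholds and keyword-based stand-in 'classifier' are invented for demonstration, and the second stage (the threat intelligence team) is represented only by the review queue that flagged content lands in.

    from dataclasses import dataclass

    BLOCK_THRESHOLD = 0.9   # assumed: scores at or above this are blocked outright
    REVIEW_THRESHOLD = 0.6  # assumed: scores at or above this go to human review

    @dataclass
    class ScreeningResult:
        action: str              # "allow", "block" or "review"
        scores: dict[str, float]

    review_queue: list[tuple[str, dict[str, float]]] = []  # stage 2 works from here

    def classify(text: str) -> dict[str, float]:
        """Stand-in for a learned policy classifier: returns a violation score
        per (invented) policy category. A real system would use a model."""
        lowered = text.lower()
        return {
            "election_misinformation": 0.95 if "rigged election" in lowered else 0.0,
            "harmful_instructions": 0.70 if "step-by-step attack" in lowered else 0.0,
        }

    def screen_response(text: str) -> ScreeningResult:
        scores = classify(text)
        worst = max(scores.values())
        if worst >= BLOCK_THRESHOLD:
            return ScreeningResult("block", scores)   # response is never shown
        if worst >= REVIEW_THRESHOLD:
            review_queue.append((text, scores))       # flagged for enforcement staff
            return ScreeningResult("review", scores)
        return ScreeningResult("allow", scores)

    print(screen_response("The weather is nice today.").action)               # allow
    print(screen_response("Proof the rigged election was hidden...").action)  # block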

1.25 Anthropic further noted that it partners with various groups to stress-test its models in areas such as election-related mis/disinformation or mental health issues.[30]

Footnotes

[1] Google, Google's approach to fighting misinformation online (accessed 4 December 2025).

[2] Google, Google's approach to fighting misinformation online (accessed 4 December 2025).

[3] Google Ads Help, Updating our ads and monetization policies on climate change, 7 October 2021 (accessed 5 December 2025).

[4] Center for Countering Digital Hate, Google runs ads on search queries for racist disinformation and conspiracies, 27 January 2023 (accessed 5 December 2025).

[5] Center for Countering Digital Hate and Climate Action Against Disinformation, YouTube's Climate Denial Dollars, May 2023, p. 3.

[6] Google, Privacy, terms & AI: Google Drive (accessed 4 December 2025); Google, Privacy, terms & AI: Google Docs (accessed 4 December 2025); Google Maps User Generated Content Policy Help, Prohibited and restricted content (accessed 4 December 2025).

[7] Ms Rachel Lord, Senior Manager, YouTube Government Affairs and Public Policy, Google, Committee Hansard, 5 March 2026, p. 1.

[8] Mr Jean-Jacques Sahel, Senior Manager, Public Policy, Google, Committee Hansard, 5 March 2026, p. 2.

[9] YouTube, Misinformation policies (accessed 4 December 2025).

[10] YouTube, Misinformation policies (accessed 4 December 2025).

[11] LinkedIn, False or misleading content (accessed 5 December 2025); LinkedIn, Professional community policies (accessed 5 December 2025).

[12] Meta, Misinformation: Policy details (accessed 4 December 2025).

[13] Meta, Misinformation: Policy details (accessed 4 December 2025).

[14] Meta, Misinformation: Policy details (accessed 4 December 2025).

[15] Meta, Stepping Up the Fight Against Climate Change, 14 September 2020 (accessed 19 August 2025).

[16] Meta, Our Approach to Climate Content, 4 November 2022 (accessed 19 August 2025).

[17] QUT, Meta is abandoning fact checking – this doesn't bode well for the fight against misinformation, 9 January 2025 (accessed 19 August 2025); Jason Pollock, 'Meta's Australian fact checkers stay, for now', Ad News, 9 January 2025 (accessed 5 January 2026).

[19] Brad Ryan, 'After Trump's election win, Meta is firing fact checkers and making big changes', ABC News, 8 January 2025 (accessed 19 August 2025).

[20] Joseph Olbrycht-Palmer, 'Meta has 'no immediate plan' to end fact-checking in Australia, Communications Minister Michelle Rowland says', The Australian, 14 January 2025 (accessed 19 August 2025).

[21] Meta, Stepping Up the Fight Against Climate Change, 14 September 2020 (accessed 19 August 2025).

[22] Meta, Stepping Up the Fight Against Climate Change, 14 September 2020 (accessed 19 August 2025).

[23] Jill Hopke, 'Climate misinformation is rife on social media – and poised to get worse', The Conversation, 18 January 2025 (accessed 19 August 2025).

[24] TikTok, Integrity and authenticity (accessed 5 December 2025).

[25] TikTok, Integrity and authenticity (accessed 5 December 2025).

[26] Ms Autumn Field, General Manager, Content Division, Australian Communications and Media Authority, Committee Hansard, 29 September 2025, p. 15.

[27] X Help Center, The X Rules (accessed 5 December 2025).

[28] Mr Evan Frondorf, Head, External Policy and Partnerships, Safeguards, Anthropic, Committee Hansard, 12 March 2026, pp. 1–2.

[29] Mr Evan Frondorf, Head, External Policy and Partnerships, Safeguards, Anthropic, Committee Hansard, 12 March 2026, pp. 2–3.

[30] Mr Evan Frondorf, Head, External Policy and Partnerships, Safeguards, Anthropic, Committee Hansard, 12 March 2026, p. 3.