Labor members' additional comments

While time constraints of this Inquiry did not permit a line-by-line negotiation of the final report between committee members, Labor members of the Committee support the intent of the final Committee Report and its recommendations.
Labor members thank government members, particularly the Chair, for the bipartisan spirit in which this inquiry was undertaken and the efforts that were made to ensure that issues raised by Labor members were heard in this committee process.
The harms currently being caused by social media platforms are by now well known. The era of self-regulation by these platforms has failed to deliver a safe online environment for Australians. All sides of politics are united on the need for government regulation to underpin the safety of these platforms, particularly for vulnerable groups.
Labor has a strong record of supporting online safety measures for Australians, both in government and from opposition, and Labor members have sought to contribute to this inquiry on a constructive, bipartisan basis.
While supporting the Committee Report’s recommendations, Labor members wish to make the following Additional Comments.

Disinformation and misinformation

While the Committee report notes that the broader, societal harms caused by social media platforms were not the focus of this inquiry, Labor members were concerned by the inadequacy of existing responses to the harms caused by the amplification of mis- and disinformation by social media platforms – particularly in the context of the upcoming Federal election.

Inconsistent Enforcement of Misinformation Policies and the Lack of Enforcement Transparency

The Committee heard significant evidence that social media platforms are inconsistently enforcing their published policies on misinformation and disinformation. Compounding this, the platforms offered little transparency about the enforcement actions taken in response to users who repeatedly share misinformation.
In the context of the upcoming Federal election, it was particularly disturbing that Google could not explain why it was unable to stop misinformation being amplified on YouTube through paid advertising.
When asked about the United Australia Party’s then $5 million spend on advertising across 57 YouTube videos since August 2021, six of which were removed for breaching YouTube’s COVID-19 misinformation policies, Google advised that it had increased investment in its automated machine learning and AI systems to detect misinformation. It is unclear why machine learning and AI systems were necessary to scrutinise the advertising of users with a lengthy history of distributing misinformation in contravention of the platform’s own community guidelines.
Google’s evidence was that the removal of these six UAP videos did not trigger its “three-strikes” policy on the sharing of misinformation because “if a number of videos are found to be violative at the same time, they’re bundled into one strike in a period of time.” Google did not expand on how many videos would need to be removed, or the period of time that would need to elapse, before a second or third strike was triggered.
Google was also unwilling to disclose how many strikes it has issued against the United Australia Party for the sharing of misinformation on YouTube, noting:
We preserve and balance the privacy of users on our platforms with the need for accountability to our members of parliament. Indeed, in a committee at which we appeared at the end of last year, we provided confidential information to committee members, to members of parliament, about particular strikes that had been issued. That in that way ensures that we balance the rights to privacy with the rights and the need for accountability in our policymakers and members of parliament.1
Labor members are of the view that this approach does not strike an appropriate balance between privacy and the public interest. In the context of the serious harms caused by misinformation during the COVID-19 pandemic, and the 2022 Federal election, there is a broader public interest in understanding Google’s specific approach to the enforcement of its misinformation policies against public figures and political actors.
It is not in the public interest for enforcement action taken against political actors to be a black box shielded from public scrutiny. Public confidence in the integrity of these matters relies on the public at large, not just the subjects of enforcement action, being able to understand the enforcement actions taken against political actors for identified breaches of the platforms’ policies. This is particularly so when these platforms are the recipients of tens of millions of dollars in advertising revenue from the subjects of enforcement action.
It does not help that YouTube’s problems with mis- and dis-information are well documented. The International Fact-Checking Network, a coalition of 80 international fact-checking groups including Australia’s RMIT ABC Fact Check, sent an open letter to the Chief Executive Officer of YouTube, stating:
YouTube is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves.2
Any steps taken by Google to provide transparency in its enforcement decision making would help build public confidence that the company is taking this issue seriously.

Limiting the Reach of Mis- and Disinformation – The Fact Checking Choke Point

The Committee also heard evidence with respect to the fact-checking processes run by the social media platforms. Each of the major platforms gave evidence that, once content was identified as misinformation through these processes, platform-specific measures were taken to limit the extent to which the reach of this content could be amplified. Despite this, none of the social media platforms gave any commitments about either the expected time frames in which these fact checks would occur or the resources that would be allocated to the task. In the context of a month-long election campaign, misinformation that takes weeks to be fact-checked under current processes has the potential to cause significant public harm before the platforms take action.
Given that the risks to social cohesion and democratic integrity from mis- and dis-information are amplified during periods of intense political debate, Labor members were concerned that Meta was unable to provide advice on the time taken to fact check information in the context of the 2022 election.
Meta advised the Committee that it is unable even to report on the average time taken to fact check mis- and dis-information as:
[O]ur fact checkers operate independently it’s really up to them to apply their good judgements and their editorial techniques in order to undertake whatever research they need to in order to verify a claim.3
Meta was also unable to provide answers on the time taken to fact-check and demote a particularly egregious piece of disinformation from the 2019 election campaign. It is easy to be cynical about the social media platforms’ intent to improve the timeliness of their fact-checking processes when they do not appear even to be measuring their own performance in this regard.
Labor Members request that social media platforms set public benchmarks for the performance of their fact checking and misinformation demotion processes in the context of the 2022 Federal Election.

Regulating Misinformation in Australia – Release the Report, Minister!

The inconsistency of the platforms’ response to mis- and dis-information has been compounded by the ambiguity of the regulatory context in Australia.
Despite the ACCC Digital Platforms Inquiry recommending an enforceable industry code on disinformation and misinformation in July 2019, the ACMA has still not been formally empowered by the government to act on these issues.
Instead, the Digital Industry Group Inc published a voluntary industry code, the Australian Code of Practice on Disinformation and Misinformation, on 22 February 2021. Many witnesses who appeared before the Committee expressed reservations about the limited obligations, and the limited scope for review and enforcement action, created under this code.
The ACMA provided the Minister for Communications, Urban Infrastructure, Cities and the Arts with a report on the impacts of misinformation in Australia, encompassing the impacts of misinformation during the COVID-19 pandemic and in the context of Australian electoral processes, on 30 June 2021. In this report, the ACMA also expressed a view about the adequacy of the measures included in the voluntary industry misinformation code.
Despite the salience of this ACMA report to both the ongoing COVID-19 pandemic and the imminent Federal election, the Minister has now been ‘considering’ it for nearly nine months.
Labor Members recommend that the Minister for Communications, Urban Infrastructure, Cities and the Arts immediately release the ACMA’s misinformation report and respond to the ACMA’s recommendations.

The Broader, Societal Harms of Social Media Platforms – Hate Speech and Violent Radicalisation

As outlined in the Committee Report, Australia’s existing online safety laws are currently focused on harms directed towards individuals. However, Labor members are of the view that greater attention should be paid to the broader, societal harms caused by social media, particularly harms to our social cohesion and our democracy. A glaring gap in the existing Australian regulatory regime, compared with those of other nations, is the treatment of hate speech targeting groups.
This is particularly concerning in the context of this report being handed down at the time of the third anniversary of the Christchurch terrorist atrocity, in which a 28-year-old Australian man entered two mosques in the New Zealand city of Christchurch and murdered 51 people and injured 40.
While the Abhorrent Violent Material legislation introduced after the Christchurch terrorist attack regulates materials that are the product of violent extremism, we have yet to address the online material that normalises hate and radicalises people to commit these acts of real-world violence.
The Committee heard evidence that the current online safety regime was not adequately equipped to deal with online abuse directed at groups of people rather than individuals. It further heard that abuse directed at groups on the basis of their ethnicity and religion – most commonly at people of Muslim and Jewish faith – was a particular problem to which there was little recourse.
The Committee heard from Ms Rita Jabri Markwell of the Australian Muslim Advocacy Network about the impact of this type of abuse, and of the subsequent failure of government to address it in the wake of the Christchurch attack: “it's made us feel really lonely. I don't know how else to describe it. It's kind of like you don't matter”.4
The status quo was exemplified by Ms Jabri Markwell’s experience in trying to have former Senator Fraser Anning’s account removed from Facebook. The Australian Muslim Advocacy Network successfully brought a vilification action against Mr Anning under the Queensland Anti-Discrimination Act. Mr Anning’s page had been publishing a constant stream of Islamophobia and content promoting the ‘great replacement’ hate narrative that inspired the Christchurch terrorist and a series of right-wing extremists before him. In response, the tribunal ordered the removal of 141 hate artefacts from Mr Anning’s page. Despite this, Facebook did not remove the page.5
The Online Hate Prevention Institute and the Executive Council of Australian Jewry also spoke about working with platforms and ISPs to have hate content, like terrorist manifestos, removed. They told the Committee that they had difficulty getting eSafety and the Australian Communications and Media Authority to assist in removing such content from the internet.6
Civil society groups and industry are in rare agreement on the need for action on this issue. In submissions to the inquiry, Reset Australia (a policy think-tank that advocates for technology policy reform) and DIGI (the industry lobby group for major technology platforms including Facebook, Twitter and Google) have both called for such reforms.
Reset Australia noted in their submission that:
Expanding the definitions of harms (and risks) addressed in Australia’s regulatory framework would better protect Australian communities and society at large. This means tackling mis and disinformation, and explicitly addressing hate speech.7
The platforms too have called for action in their individual capacities. Twitter noted in their submission that:
Conversely, Australia continues to utilise a very limited definition of hate speech under the Racial Discrimination Act 1975 (Cth) that is limited to race-based speech or behaviour, and does not include a number of the aforementioned categories, including sexual orientation, disability-based, religious-based or gender-based speech.8
Mr Josh Machin of Meta stated in evidence to the Committee that:
We'd encourage Australian policymakers to look at what could be done in relation to hate speech online and offline. State based discrimination laws are obviously intended to provide remedies for particular individuals who can show some disadvantage that they've received on the basis of that discrimination. I think the difficulty with hate speech is that it's not always targeted at individuals.9
Three years on from the Christchurch atrocity, and with right-wing extremism a growing concern for Australian security agencies, we need to address online hate speech.
Labor members consider that Australia’s online safety framework should be updated to enable action to be taken against group hate speech.
Labor members of the Committee are also of the view that further work needs to be done to improve the government’s efforts to share information and collaborate with industry to reduce exposure to terrorist and violent extremist content.
In evidence to the Committee the Department of Home Affairs stressed the importance of “constructive collaboration between government, the community and industry”.10 This is consistent with Recommendation 4.3 of the final report of the Australian Taskforce to Combat Terrorist and Extreme Violent Material Online, which commits government to proactively “share with digital platforms (where legally and operationally feasible) indicators of terrorism, terrorist products and depictions of violent crimes.”
Meta, the parent company of Facebook, Instagram, and WhatsApp, informed the Committee that:
The Department of Home Affairs has not shared any intelligence specifically about dangerous organisations or individuals with Meta (noting that there are referrals on other issues such as social cohesion and misinformation).11
This was confirmed by the Department of Home Affairs, which noted in response to questions on notice that it had shared two academic papers, a literature review, and materials publicly available through the Australian Security Intelligence Organisation’s Outreach portal.12
Labor members of the Committee recommend that the Department of Home Affairs evaluate its practices with respect to notifying social media providers of terrorist and violent extremist content on their platforms, with a view to improving intelligence flows from the Department to social media providers.
Labor members of the Committee recommend that the government immediately release its formal assessment of the actions detailed in the final report of the Australian Taskforce to Combat Terrorist and Extreme Violent Material Online.

The Importance of Multistakeholder Governance Models in Online Safety

As a general observation from the conduct of this inquiry, Labor members wish to underscore the ongoing importance of multistakeholder governance models for all parties with policy responsibility in this space, including governments, regulators and social media platforms.
The Committee heard repeated evidence that the kinds of online harms considered by this inquiry, and the impacts of potential policy and regulatory responses to these harms, manifest differently for different groups within the community.
It is important that groups that have been historically marginalised on the grounds of race, religion, sexuality, gender identity, disability and occupation (including sex workers), are included in multistakeholder governance processes at the dialogue, decision making and implementation stages, to ensure that the impacts of online harms and regulations on these groups are fully appreciated.
This is why Labor moved amendments formalising consultative committee processes within the Office of the eSafety Commissioner as part of the Parliamentary debate on the Online Safety Bill. However, this remains an ongoing challenge that requires further attention from all actors in this space.

Consolidating the Online Safety Regulatory Framework

Labor members acknowledge the Committee report’s suggestion that “the OSA should be reviewed in future to ascertain whether it had made an effect in relation to online safety and to examine whether a single regulatory framework would reduce complexity and confusion for providers.”
Labor members note that this is in line with the 2018 Report of the Statutory Review of the Enhancing Online Safety Act 2015 and the Review of Schedules 5 and 7 to the Broadcasting Services Act 1992 (Online Content Scheme) (‘the Briggs review’).
The Briggs review suggested that:
The majority of submitters to these reviews proposed taking Schedules 5 and 7 out of the Broadcasting Services Act and incorporating them within the Enhancing Online Safety Act after a thorough review to clean up and modernize both pieces of legislation, resulting in a single piece of legislation relating to online services and content. I support this approach most strongly and consider that the legislation should be redrawn into an integrated act which deals systemically with the challenge of online safety in its entirety.13
And further:
that there should be a single new fit for purpose and technology-neutral code of practice. This single code would fulfil a wider purpose than the current codes.14
Labor members welcome the Committee report’s recommendation that Australia’s online safety regulation be consolidated in a single regulatory framework. Such a consolidation would improve the effectiveness of the regime for the Australian public and make compliance with Australian regulatory obligations easier for social media platforms, particularly smaller new entrants. However, Labor members express concern that, more than three years after the Briggs review, the government has failed to follow through on its commitment to consolidate online safety regulatory regimes, and has instead continued to add parallel and overlapping regulatory frameworks.

Coordination between Regulators and Law Enforcement

It has been over three years since Ms Lynelle Briggs AO noted that “even though there are memorandums of understanding between the eSafety Office and all state and territory police services, the Department of Home Affairs’ submission to these reviews suggests that there is sufficient uncertainty around these arrangements.”15
Labor members are concerned that the evidence heard by this inquiry suggests that this recommendation has been ignored.
In the Senate Legal and Constitutional Affairs Legislation Committee, the eSafety Commissioner expressed the view that “it is up to the law enforcement agencies themselves, which I note have billions of dollars in funding, hundreds of thousands of workers to train, local police, and know what resources are available to them.”16
However, the eSafety Commissioner has failed to update these memoranda, noting in evidence to the Senate Legal and Constitutional Affairs Legislation Committee that “Our MOUs don’t go to the kinds of conversations and collaboration that we have [with NSW police].”17
Labor members recommend that eSafety immediately review its arrangements with state and territory police forces, to prevent further cases from “fall[ing] through the cracks” without receiving services from the responsible Commonwealth agency.

Technology Policy Coordination

Labor members note that the Committee heard repeated evidence about the impact that duplicative and parallel policy development processes were having on the sector. Google told the Committee that it was currently participating in nine separate regulatory processes between the Government and industry that address cyber safety and privacy. Meta told the Committee that there had been 18 major Government or parliamentary inquiries or consultations impacting digital platforms over the last three years.18
Beyond the immediate scope of this inquiry, technology journalist Stilgherrian has publicly tracked dozens of ‘cyber related’ bills, regulatory instruments, governmental and parliamentary inquiries, and policy consultations at the Commonwealth level in 2021 alone.
While there is clearly a need for significant policy action to address the substantial and evolving harms in this space, the lack of process coordination is undermining the effectiveness of regulation and creating real costs.
The Committee heard evidence from numerous stakeholders, corporate and civil society alike, that this has resulted in:
An unnecessary resource burden being imposed on stakeholders responding to these processes;
Inconsistent objectives being pursued by policy makers and regulators, undermining policy effectiveness and complicating compliance; and
Recommendations of lengthy policy development processes being rejected or forgotten and ultimately never implemented.
Unfortunately, an ad hoc, ‘announcement-led’ approach in a complex and rapidly evolving area has led to a series of technology policy coordination issues.
Technology Policy Coordination Issues
Online Safety Act
It took the Morrison government two years and four months to introduce the Online Safety Bill into the Parliament after it was recommended by the Briggs Review in October 2018.
During this delay, the eSafety Commissioner highlighted how online harms spiked during the pandemic lockdowns across child sexual abuse material (CSAM), cyberbullying and adult cyber abuse.
Age verification
There are currently five parallel and duplicative age/identity verification schemes being developed by four different departments and regulators.
Social Media (Anti-Trolling) Bill
The Bill sits outside the existing and ongoing state defamation reform process, and state Attorneys-General were not consulted prior to the release of the exposure draft.
The eSafety Commissioner was ‘concerned’ by the Bill, describing its marketing as an online safety initiative as ‘not ideal’ and as causing confusion among victims.
The Prime Minister repeatedly stated in December that the Bill would be ‘implemented’ or ‘dealt with’ by Parliament in February. It has not yet been brought on for debate.
Social Media Online Safety Select Committee Inquiry
Announced by the Prime Minister on 1 December 2021 with a reporting date of 15 February 2022.
As this reporting date was just three weeks after the Online Safety Act came into force, the Committee was unable to evaluate the effectiveness of the new regulatory framework as part of its work.
Digital Platforms Inquiry
More than two and a half years after the report was released by the ACCC in July 2019, much of the report remains unimplemented, with important recommendations relating to consumer privacy, disinformation, ad restrictions, digital literacy, and consumer choice yet to be actioned.
ACMA Disinformation Code
The ACCC’s Digital Platforms Inquiry recommended an enforceable code to counter disinformation in July 2019. It took the government a year and a half to deliver a voluntary opt-in code, designed by the industry body.
While the ACMA provided an evaluation of the code to the Minister nine months ago, it has not been released or acted upon.
Australian Taskforce to Combat Terrorist and Extreme Violent Material Online
The Taskforce’s report, outlining measures to be taken by governments and platforms to combat terrorism in the wake of the Christchurch massacre, was released in June 2019.
Despite committing to a review of the effectiveness of the measures agreed to in the report, with a view to regulating if further action was required, no assessment of these measures has been released in the more than two years since.
Privacy Act Review & Online Privacy Code
While both the economy-wide Privacy Act Review and the Online Privacy Code were recommended in the ACCC Platforms Report in July 2019, these reform processes are currently being undertaken at the same time.
Given the reliance of the Online Privacy Code on the Australian Privacy Principles, any changes to the APPs resulting from the Privacy Act Review would immediately render the Code outdated and inconsistent.
AHRC Human Rights and Technology Final Report
Released in May 2021, the AHRC Human Rights and Technology report was the result of three years of consultation. The report recommended a stronger regulatory regime for the use of AI.
The Government has not responded to the report and has indicated that it has no plans to review its voluntary AI ethics framework.
While the Committee heard that there is now a plethora of internal government policy coordination committees on these matters and a series of ad hoc, bilateral engagement forums between regulators, it is clear that these processes are not preventing the emergence of duplicative and inconsistent policy development processes.
The Department of Infrastructure, Transport, Regional Development and Communications informed the committee in response to questions on notice that despite the current volume of activity in this space, the Agency Heads Committee on Online Safety has only met once since the start of 2021 and the eSafety Advisory Committee has only met three times in that same period.19
It is worth comparing the current approach of the Commonwealth to technology policy coordination with that of financial sector regulation. In the financial sector, the Council of Financial Regulators operates as a coordinating body with terms of reference established under a memorandum of cooperation. The CFR charter provides that it is established to pursue a shared objective (the stability of the Australian financial system) and provides a forum to pursue this objective through:
Identification of important issues and trends;
Information exchange and coordination where members’ responsibilities overlap;
Harmonisation efforts to align reporting and regulatory requirements; and
Coordinating international engagement with peer bodies overseas.
Australian technology policy would benefit from a similar institution (potentially including the OAIC, eSafety, ACMA, ACCC, ACSC, and relevant Government Departments) to assist with the identification of emerging issues, coordination of regulatory processes and alignment of policy objectives.
Labor members note the announcement made near the conclusion of this inquiry that the ACCC, ACMA, the Australian Information Commissioner and the eSafety Commissioner intend to form a Digital Platform Regulators Forum to coordinate their actions. After the extensive evidence of repeated failures of technology policy coordination heard by this committee, this is a welcome development.
However, it must be recognised that the Federal government itself has been the source of the bulk of the policy coordination failures and process duplication in recent years. The success of any technology policy coordination institution will therefore depend on the involvement and buy-in of Commonwealth technology policy development agencies, and not just the regulators.
Labor Members recommend that the Government consider the establishment of a Council of Technology Regulators, modelled on the Council of Financial Regulators, to coordinate and align technology policy making in Australia.
Mr Tim Watts MP
Deputy Chair
Labor Member for Gellibrand
Ms Sharon Claydon MP
Labor Member for Newcastle

1 Ms Lucinda Longcroft, Director, Government Affairs and Public Policy, Google Australia and New Zealand, Committee Hansard, 20 January 2022, p. 6.
2 International Fact-Checking Network, ‘An open letter to YouTube’s CEO from the world’s fact-checkers’, Poynter, 12 January 2022, available at: https://www.poynter.org/fact-checking/2022/an-open-letter-to-youtubes-ceo-from-the-worlds-fact-checkers/ (accessed 15 March 2022).
3 Mr Josh Machin, Head of Public Policy, Meta, Committee Hansard, 2 March 2022, p. 25.
4 Ms Rita Jabri-Markwell, Adviser, Australian Muslim Advocacy Network, Committee Hansard, 22 December 2021, p. 26.
5 Ms Rita Jabri-Markwell, Adviser, Australian Muslim Advocacy Network, Committee Hansard, 22 December 2021, p. 21.
6 Dr Andre Oboler, Chief Executive Officer and Managing Director, Online Hate Prevention Institute, Committee Hansard, 22 December 2021, pp 13-14.
7 Reset Australia, Submission 12, p. 6.
8 Twitter, Submission 50, p. 8.
9 Mr Josh Machin, Head of Public Policy, Meta, Committee Hansard, 2 March 2022, p. 18.
10 Mr Brendan Dowling, First Assistant Secretary, Digital and Technology Policy Division, Department of Home Affairs, Committee Hansard, 1 February 2022, p. 1.
11 Meta, Submission 49.1, p. 26.
12 Department of Home Affairs, Submission 40.1, p. 8.
13 Lynelle Briggs AO, Report of the Statutory Review of the Enhancing Online Safety Act 2015 and the Review of Schedules 5 and 7 to the Broadcasting Services Act 1992 (Online Content Scheme), available at: https://www.infrastructure.gov.au/sites/default/files/briggs-report-stat-review-enhancing-online-safety-act2015.pdf (accessed 18 December 2021), p. 8.
14 Lynelle Briggs AO, Report of the Statutory Review of the Enhancing Online Safety Act 2015 and the Review of Schedules 5 and 7 to the Broadcasting Services Act 1992 (Online Content Scheme), available at: https://www.infrastructure.gov.au/sites/default/files/briggs-report-stat-review-enhancing-online-safety-act2015.pdf (accessed 18 December 2021), p. 14.
15 Lynelle Briggs AO, Report of the Statutory Review of the Enhancing Online Safety Act 2015 and the Review of Schedules 5 and 7 to the Broadcasting Services Act 1992 (Online Content Scheme), available at: https://www.infrastructure.gov.au/sites/default/files/briggs-report-stat-review-enhancing-online-safety-act2015.pdf (accessed 18 December 2021), p. 26.
16 Ms Julie Inman Grant, eSafety Commissioner, Office of the eSafety Commissioner, Inquiry into the Social Media (Anti-Trolling) Bill 2022, Senate Legal and Constitutional Affairs Legislation Committee, Committee Hansard, 10 March 2022, p. 18.
17 Ms Julie Inman Grant, eSafety Commissioner, Office of the eSafety Commissioner, Inquiry into the Social Media (Anti-Trolling) Bill 2022, Senate Legal and Constitutional Affairs Legislation Committee, Committee Hansard, 10 March 2022, p. 18.
18 Meta, Submission 49, p. 4.
19 Department of Infrastructure, Transport, Regional Development and Communications, Submission 44.1, pp 6-19.
