Additional comments from Senator David Pocock

Introduction

1.1 This committee inquiry attracted a wide range of deeply considered submissions from a notably diverse group of stakeholders, offering important insights on the fast-emerging application of Artificial Intelligence (AI).

1.2 I thank everyone who took the time to make a submission to the inquiry and those who provided evidence at the public hearings.

1.3 I especially thank the committee secretariat for their diligent work in a complex policy arena.

1.4 I also echo the committee’s concerns with the testimony of certain witnesses, notably Google, Meta and Amazon, who lacked transparency and were not forthcoming in the information they provided to the committee and in response to Senators’ questions. The continued disdain with which multinational tech and social media corporations treat sovereign governments, and those governments’ attempts to better regulate them, is of growing concern.

1.5 Their behaviour underscores the pitfalls in policy and regulatory frameworks that put the onus on social media companies to assess and regulate content.

1.6 It also highlights why further transparency, especially around algorithms and the content sources used by large language models, is both essential and urgent.

Recommendations

1.7 I support all of the Chair’s recommendations but believe some need to go further, faster.

1.8 The government’s interim response[1] to the Safe and responsible AI in Australia consultation was released in January this year. A voluntary AI Safety Standard[2] was released in September, when the term of the AI expert group also concluded.[3] Consultation on mandatory guardrails for safe and responsible AI concluded at the beginning of October. With the growing uptake of AI, legislation mandating how AI is used in high-risk settings needs to be an urgent priority for government.

Recommendation 1

1.9 That the government prioritise the introduction of legislation mandating guardrails for the safe and responsible use of AI in the next sitting period.

1.10 The first recommendation of the committee’s interim report was that, ahead of the next federal election, the government implement voluntary codes relating to watermarking and credentialing of AI-generated content.

1.11 As we enter the final sitting week for 2024—and potentially for this parliamentary term—no such legislation has been introduced.

1.12 The second recommendation in the committee’s interim report has also not been acted on. That recommendation was that the Australian Government undertake a thorough review of potential regulatory responses to AI-generated political or electoral deepfake content, including mandatory codes applying to the developers of AI models and to publishers, including social media platforms, and prohibitions on the production or dissemination of political deepfake content during election periods, for legislative response prior to the election of the 49th Parliament of Australia.

1.13 In response to questioning by Senator Shoebridge and me about the use of deepfakes in electoral material, former Australian Electoral Commissioner Tom Rogers told the hearing:

It is absolutely happening at an accelerated rate, particularly as our understanding of this technology increases…I think this is an issue for democracies globally, not just for Australia. We are witnessing this globally.

1.14 But the Commissioner, referencing the Electoral Act and its capacity to regulate the use of deepfakes, went on to say:

The purpose of the act isn't to regulate the content of political ads, as it's currently set up—that would be a content issue—so long as it was appropriately authorised. And we could work out who did the authorisation. That would meet the requirements of section 321, which is the authorisation section. It wouldn't fall foul of section 329, which has been very narrowly cast by the courts.[4]

1.15 AI and mis- and disinformation are threatening democracies around the world. Australia is not immune, but we are clearly underprepared.

Recommendation 2

1.16 That the government prioritise consideration of the Electoral Legislation Amendment (Electoral Communications) Bill 2024, with amendments that would ban the use of artificial intelligence in electoral matter, to take effect before the next federal election.

1.17 The committee heard strong evidence about the need for a national AI safety centre to address the growing concerns around AI safety and to ensure that Australia remains at the forefront of AI research and development while prioritising safety and ethics.

1.18 Mr Greg Sadler, Chief Executive Officer of the Good Ancestors Project, commended the establishment of the National AI Centre but said a more safety-focused centre was needed:

I think in the bureaucracy we've got great institutes like the National AI Centre that's focused on the adoption of AI, but you can see why they have that competing requirement with safety. We think that an Australian AI safety institute on the UK model that's tasked specifically to think about these frontier safety risks with next generation models is helpful.[5]

Recommendation 3

1.19 That the government fund a national AI safety centre in the next federal budget.

1.20 Another pressing issue identified during the inquiry was the need for immediate action to protect copyright, including:

establishing a clear and robust framework for protecting copyright;

implementing a national copyright education program;

establishing a national registry for copyright works; and

strengthening enforcement mechanisms to prevent copyright infringement.

1.21 The Australian Society of Authors recommends that the Australian Government establish a clear and robust framework for protecting copyright in the digital age, including measures to prevent copyright infringement and ensure fair compensation for creators.

1.22 The Copyright Agency submits that the Australian Government should implement a national copyright education program to raise awareness about the importance of respecting copyright and the consequences of copyright infringement.

1.23 Australian artists are particularly at risk from AI, and the Music Council of Australia recommends that the Australian Government establish a national registry for copyright works to provide a centralised database for creators to register their works and to facilitate the identification of infringing content.

1.24 In the same vein, the Australian Recording Industry Association submits that the Australian Government should strengthen enforcement mechanisms to prevent copyright infringement, including increasing penalties and providing greater resources for law enforcement agencies to investigate and prosecute copyright crimes.

Recommendation 4

1.25 That the government urgently bring forward a bill to amend the Copyright Act to strengthen prohibitions on copyright infringement, clarify how copyright law applies to generative AI and, separately, fund the establishment of a copyright register and greater enforcement and compliance activities.

Senator David Pocock

Member

Independent Senator for the Australian Capital Territory

Footnotes

[1] DISR, Safe and responsible AI in Australia consultation: Australian Government’s interim response, January 2024.

[2] DISR, Voluntary AI Safety Standard: Guiding safe and responsible use of artificial intelligence in Australia, September 2024.

[4] Mr Tom Rogers, Electoral Commissioner, Australian Electoral Commission, Committee Hansard, 20 May 2024, p. 27.

[5] Mr Greg Sadler, Chief Executive Officer, Good Ancestors Project, Committee Hansard, 16 August 2024, p. 51.