Dissenting report from Senator the Hon James McGrath and Senator the Hon Linda Reynolds CSC

Coalition Members of the Select Committee’s Reply to the Final Report’s Recommendations

Introduction

1.1The Coalition members of the Select Committee on Adopting Artificial Intelligence (the committee) hold that the governance of artificial intelligence (AI) is one of the 21st century’s greatest public policy challenges.

1.2Nevertheless, the Coalition members of the committee hold that any AI policy framework ought to safeguard Australia’s cyber security, intellectual property rights, national security, and democratic institutions without infringing on the potential opportunities that AI presents in relation to job creation and productivity growth.

1.3AI presents an unprecedented threat to Australians’ cyber security and privacy rights. AI technologies, especially large language models (LLMs) such as ChatGPT, are trained on substantial amounts of data in order to generate outputs. That is, for an LLM like ChatGPT to gain a predictive capacity, it must be ‘fed’ significant quantities of data to enable it to generate its own text, images or videos.

1.4One risk of LLMs is that these models become an amalgamation of the data they are fed. As a result, if the information fed into an LLM is biased or prejudicial, there is a significant risk that the model will replicate such biases and discrimination on a mass scale.

1.5However, one of the greatest risks associated with modern advancements in LLMs is the ‘inappropriate collection and use of personal information as well as the leakage and unauthorised disclosure or de-anonymisation of personal information’.[1]

1.6With little-to-no domestic regulation of LLMs, especially those owned and operated by multinationals such as Meta, Google, and Amazon, the storage and use of significant amounts of their users’ private data is a real risk. However, when asked about the extent to which they use their users’ private data in the development of their AI models, these organisations provided very unclear responses. Indeed, Meta did not even answer questions about whether it used private messages sent through Messenger or WhatsApp in training its LLM, Meta AI.[2]

1.7Beyond the severe privacy concerns raised by this type of conduct, because LLMs have not yet matured, there is a significant risk that private information on certain users may unintentionally form the basis of future outputs. Such a risk to the cyber security of the Australian people is unprecedented.

1.8Similarly, AI presents a significant challenge not just to Australia’s creative industries, but to the entire intellectual property rights structure in Australia more broadly. As the Final Report highlights, ‘a significant issue in relation to copyright arises where copyrighted materials are used to ‘train’ AI models’.[3]

1.9Indeed, the data that LLMs require to develop predictive capacity, including images and text, is often extracted from the internet with no safeguards as to whether that data is owned by another individual or entity.

1.10When Meta, Amazon, and Google were asked whether they use copyrighted works in training their LLMs, they either did not respond, stated that developing LLMs without copyrighted works is not possible, or stated that they had trained their LLMs on so much data that it would be impossible to know.[4] These potential violations of Australia’s copyright laws represent only the beginning of the threat that AI generation poses to the ongoing management of intellectual property rights in Australia.

1.11The Department of Home Affairs highlighted the severe national security risks presented by AI in its submission to the inquiry.[5] Recent exponential improvements in AI capabilities, coupled with the unprecedented level of publicly available personal and sensitive information on many Australians, mean that foreign actors now have the ability to develop AI capabilities to ‘target our networks, systems and people’.[6] That is, foreign actors could gain the ability to target specific Australians through AI capabilities trained on those Australians’ own private and sensitive data. The ability of foreign and malicious actors to use sophisticated AI technology for scamming and phishing represents a significant threat to Australia’s national security.

1.12As this inquiry into AI takes place in the context of the second anniversary of the public release of ChatGPT, these threats have been clear and in the public domain for 24 months. Yet the Federal Government has seemingly done absolutely nothing to deal with these threats to Australia’s cyber security, intellectual property rights, and national security across this entire two-year period.

1.13Indeed, 10 months ago, in January 2024, the Department of Industry, Science and Resources (DISR) stated that ‘existing laws do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur’.[7] And yet absolutely nothing has happened over these 10 months.

1.14The Federal Government has neglected its responsibility to deal with any of the threats that the exponential growth of the AI industry poses to the Australian people and their entities.

1.15The Coalition members of the committee hold that any AI policy framework ought to safeguard Australia’s cyber security, intellectual property rights, national security, and democratic institutions without infringing on the potential opportunities that AI presents in relation to job creation and productivity growth. The Coalition members of the committee apply this position to assess the Final Report’s recommendations.

Recommendation 1

1.16That the Australian Government introduce new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI, in line with Option 3 presented in the government’s Introducing mandatory guardrails for AI in high-risk settings: proposals paper.

1.17Though the Coalition members of the committee do not necessarily oppose the introduction of an AI Act, the Coalition members of the committee note that such whole-of-economy guardrails should only be used as a last resort.

1.18Option 3 of the Government’s Introducing mandatory guardrails for AI in high-risk settings: proposals paper calls for a whole-of-economy approach to dealing with AI, including the introduction of a new cross-economy AI Act as the regulatory framework to mandate guardrails for AI.

1.19In response to this proposition, the Final Report notes that the Financial Services Council (FSC) stated that the AI industry should not be ‘unduly burdened with red tape, particularly where industry-specific regulation already exists to mitigate the risks’.[8]

1.20Similarly, ‘the Governance Institute of Australia (GIA) drew attention to existing statutory frameworks that could be adapted to regulate AI, including the Corporations Act 2001, the Privacy Act 1988 and the Australian Consumer Law within the Competition and Consumer Act 2010. The GIA recommended that the government review the effectiveness of these existing schemes for regulating AI.’[9]

1.21Likewise, the Digital Industry Group Incorporated (DIGI) noted that ‘many uses of AI systems in Australia are already subject to regulatory frameworks’. Before any AI-specific laws are enacted, DIGI urged that consideration be given to clarifying and strengthening the adequacy of existing regulatory frameworks for the regulation of AI.[10]

1.22The Coalition members of the committee note the submissions by the FSC, GIA, and DIGI and thus hold that an AI Act should only be legislated to fill regulatory gaps that cannot be addressed through amendments to existing legislative frameworks.

1.23That is, as an AI Act would invariably infringe on private sector productivity, especially at a time when the Australian economy’s productivity growth is near-stagnant, such an Act should only be considered if absolutely necessary.

1.24As such, the Coalition members of the committee hold that an AI Act should only be used to fill regulatory gaps that cannot be addressed through AI regulatory schemes developed through amendments to existing laws and legislative frameworks.

Recommendation 2

1.25That, as part of the dedicated AI legislation, the Australian Government adopt a principles-based approach to defining high-risk AI uses, supplemented by a non-exhaustive list of explicitly defined high-risk AI uses.

1.26Two possible methods of classifying high-risk AI use are a principles-based approach and a list-based approach. Though Recommendation 2 suggests a blend of the principles-based approach and a non-exhaustive list-based approach, the Final Report does not delve into this Recommendation in great detail.

1.27However, the Coalition members of the committee hold that, if a principles-based approach were sufficiently robust, a non-exhaustive list of other high-risk AI uses would be redundant, as the principles-based approach would already capture any such high-risk uses.

Recommendation 3

1.28That the Australian Government ensure the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs).

1.29As with the response to Recommendation 2, if a sufficiently robust principles-based approach is taken to defining what is and is not high-risk AI, then a non-exhaustive list would not be required in the first instance.

Recommendation 4

1.30That the Australian Government continue to increase the financial and non-financial support it provides in support of sovereign AI capability in Australia, focusing on Australia’s existing areas of comparative advantage and unique First Nations perspectives.

1.31Consistent with many of the submissions to the inquiry, the Coalition members of the committee note the importance of developing a robust sovereign AI capability in Australia.

Recommendation 5

1.32That the Australian Government ensure that the final definition of high-risk AI clearly includes the use of AI that impacts on the rights of people at work, regardless of whether a principles-based or list-based approach to the definition is adopted.

1.33The Coalition members of the committee do not hold that all uses of AI by ‘people at work’ need to be characterised or treated as being ‘high-risk’.

1.34Such red tape would only erode the productivity benefits of AI or water down the legislative requirements for genuinely high-risk uses of AI.

1.35In its Safe and responsible AI in Australia consultation: interim response, DISR highlighted the importance of ‘minimising compliance costs for businesses that do not develop or use high-risk AI’.[11] The Coalition members of the committee agree with the department’s position in this case.

1.36With forecasts predicting that AI could create 200,000 new jobs and contribute up to $115 billion annually to Australia’s economy,[12] the minimisation of red tape compliance burdens is essential to fully embracing the benefits of AI.

1.37Blanket sector-wide restrictive regulations, as suggested in Recommendation 5, would hinder this objective. The Coalition members of the committee oppose this recommendation.

Recommendation 6

1.38That the Australian Government extend and apply the existing work health and safety legislative framework to the workplace risks posed by the adoption of AI.

1.39The Coalition members of the committee support the extension and application of the existing work health and safety legislative framework to the workplace risks posed by the adoption of AI.

1.40However, the Coalition members of the committee only support this recommendation on the proviso that the ‘workplace risks’ of the adoption of AI are legitimate workplace risks to health and safety.

1.41Several submitters to the inquiry were overly liberal with their descriptions of the workplace risks posed by the adoption of AI. One submitter categorised the utilisation of AI for the purposes of ‘keystroke monitoring and email monitoring’ as being ‘dehumanising, invasive and incompatible with fundamental rights’.[13] The Coalition members of the committee do not hold this view.

1.42Rather, the Coalition members of the committee hold that the work health and safety legislative framework ought to only apply to the adoption of AI where there is a legitimate threat to work health and safety.

Recommendation 7

1.43That the Australian Government ensure that workers, worker organisations, employers and employer organisations are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.

1.44The Coalition members of the committee support this recommendation.

1.45The Coalition members of the committee support consultation between the Federal Government and workers, worker organisations, employers and employer organisations when developing public policy in relation to AI.

1.46However, provided the relevant legislative frameworks are being followed, the Federal Government should not impose requirements or sanctions compelling private businesses to consult with the Federal Government or their employees on how they innovate their businesses using AI.

Recommendation 8

1.47That the Australian Government continue to consult with creative workers, rightsholders and their representative organisations through the Copyright and Artificial Intelligence Reference Group on appropriate solutions to the unprecedented theft of their work by multinational tech companies operating within Australia.

1.48The Coalition members of the committee oppose the intentional or unintentional breach of the Copyright Act 1968 by multinational technology companies using copyrighted works or data for the purposes of training their LLMs.

Recommendation 9

1.49That the Australian Government require the developers of AI products to be transparent about the use of copyrighted works in their training datasets, and that the use of such works is appropriately licensed and paid for.

1.50The Coalition members of the committee note the opacity of some of the multinational developers of general-purpose AI models, especially Meta, Google and Amazon. The Coalition members of the committee call on all developers to be upfront about their use of copyrighted works in their LLMs.

Recommendation 10

1.51That the Australian Government urgently undertake further consultation with the creative industry to consider an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems.

1.52The Coalition members of the committee support this recommendation.

Recommendation 11

1.53That the Australian Government implement the recommendations pertaining to automated decision-making in the review of the Privacy Act, including Proposal 19.3 to introduce a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made.

1.54The Coalition members of the committee note that the Attorney-General’s Department (AGD) is currently consulting on the policy and legal framework for the use of automated decision-making by government.[14] The Coalition members of the committee will hold off on a final position until this consultation process has concluded.

Recommendation 12

1.55That the Australian Government implement recommendations 17.1 and 17.2 of the Robodebt Royal Commission pertaining to the establishment of a consistent legal framework covering automated decision-making in government services and a body to monitor such decisions. This process should be informed by the consultation process currently being led by the Attorney-General’s Department and be harmonious with the guardrails for high-risk uses of AI being developed by the Department of Industry, Science and Resources.

1.56As with Recommendation 11, the Coalition members of the committee note that this recommendation is subject to an ongoing consultation process within the AGD. However, the Coalition members of the committee note that any automated decision-making (ADM) systems in government ought to be harmonious with the guardrails for high-risk uses of AI where such uses are consistent with the principles-based approach to defining high-risk AI.

Recommendation 13

1.57That the Australian Government take a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.

1.58The Coalition members of the committee support this recommendation.

Conclusion

1.59If this inquiry illustrated anything, it is that the governance of AI remains an intractable public policy problem.

1.60The inquiry has also demonstrated that AI poses an unprecedented risk to Australia’s cyber security, intellectual property rights, national security, and democratic institutions. Though it is essential that the Federal Government minimise compliance costs for businesses that do not develop or use high-risk AI, the Federal Government must act to address the significant risks that AI poses to Australia’s security and its institutions of governance.

1.61The Federal Government’s complete inaction on any AI-related policymaking whatsoever, despite its own admission 10 months ago that ‘existing laws do not adequately prevent AI-facilitated harms’, is a disgrace.[15]

1.62Nevertheless, the Coalition members of the committee would welcome the opportunity to work with the Government on tackling public policy challenges associated with the governance of AI in our contemporary society.

Coalition Members of the Select Committee’s Reply to the Interim Report’s Recommendations

Introduction

1.63The Coalition members of the committee hold that any electoral changes to improve Australia’s democracy ought to be assessed on the following four core principles:

Fair, open and transparent elections;

Equal treatment of all political participants;

Freedom of political communication and participation, without fear of retribution; and

Recognising freedom of thought, belief, association and speech as fundamental to free elections.

1.64Australia’s success as a liberal democracy is reliant on the effective operation of the Australian Electoral Commission (AEC), and the Federal Government more broadly, to satisfy and uphold these four principles.

1.65Ensuring that Australians have continued faith in the electoral system is paramount to maintaining their faith in Australia’s institutions of government.

1.66The Coalition members of the committee’s response to the five recommendations proposed in the Select Committee on Adopting AI’s Interim Report are guided by these four core principles.

1.67Recommendations 1 to 4 of the Interim Report largely argue for the need for mandatory credentialling and/or prohibitions on the dissemination of electoral matter developed using AI.

1.68Recommendation 1 recommends that, ahead of the next federal election, the government implement voluntary codes relating to watermarking and credentialling of AI-generated content.

1.69Recommendation 2 recommends that the Australian Government undertake a thorough review of potential regulatory responses to AI-generated political or electoral deepfake content, including mandatory codes applying to the developers of AI models and publishers including social media platforms, and prohibitions on the production or dissemination of political deepfake content during election periods, for legislative response prior to the election of the 49th Parliament of Australia.

1.70Recommendation 3 recommends that laws restricting the production or dissemination of AI-generated political or electoral material be designed to complement rather than conflict with the mandatory guardrails for AI in high-risk settings, the recently introduced disinformation and misinformation reforms, and foreshadowed reforms to truth in political advertising.

1.71Recommendation 4 recommends that the Australian Government ensure that the mandatory guardrails for AI in high-risk settings also apply to AI systems used in an electoral or political setting.

1.72Please note that the Coalition members of the committee’s comments relate to Recommendations 1 to 4 holistically.

1.73In response to the Interim Report in October, the Coalition members of the committee noted that they would largely reserve their final position on these recommendations until the United States’ policy response to AI could be holistically assessed following the US election.

1.74The Coalition members of the committee agree with the sentiment in the Final Report that:

Following the US election, the committee notes that AI appears not to have had a significant impact on the course or outcome of the electoral contest, and there were relatively few reports of the use of deepfakes or other AI-generated content designed to sow political disinformation or influence the minds of voters.[16]

1.75As such, the Coalition members of the committee hold that, though Australia’s regulatory structures do need to be reviewed, the broad absence of AI-generated political disinformation in the recent US election suggests that such disinformation is unlikely to be an imminent risk to Australia’s democracy.

1.76One of the greatest difficulties surrounding the implementation of voluntary codes or outright prohibitions relating to watermarking and credentialling of AI-generated content is that such codes would require a clear and dynamic definition of AI.

1.77For example, the Federal Government’s Electoral Legislation Amendment (Electoral Communications) Bill 2024 includes provisions that would require specific additional authorisations for any written, visual or audio electoral or referendum matter that is created or modified using digital technology.

1.78Under this proposal, for any electoral matter in the form of TV or radio advertisements, stickers, fridge magnets, leaflets, how-to-vote cards, etc., there would need to be a specific spoken or written authorisation stating that ‘the content of this advertisement was substantially or entirely created or modified using digital technology.’

1.79The Bill’s Explanatory Memorandum illustrates visual and audio electoral matter created or modified using digital technology with the examples of ‘deepfake videos depicting then-Prime Minister Rishi Sunak being promoted via paid social media posts, and voice-cloning falsely depicting then-Opposition Leader Sir Keir Starmer making disparaging remarks about staff.’ However, the Bill does not define ‘digital technology’ other than to note that it includes ‘artificial intelligence’, and the Bill likewise does not define ‘artificial intelligence’.[17]

1.80However, this Final Report defines AI as ‘an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming.’[18]

1.81Yet, even if this definition were to fill the gap in the proposed Bill, it could be read very broadly, such that most electoral matter would be covered, including matter produced using ubiquitous technologies such as predictive text or image adjustment.

1.82As such, prohibitions or requirements for explicit watermarking or authorisations for content created or modified using digital technology or AI would only lead people and entities to authorise all electoral and referendum matter as having been created or modified using digital technology, simply to avoid the possible consequences of omitting such an authorisation.

1.83The Coalition members of the committee oppose recommendations 1 and 3, as the Coalition members of the committee do not support the dystopian prohibitions on freedom of speech or the ill-thought-through authorisation requirements included in the Electoral Legislation Amendment (Electoral Communications) Bill 2024.

1.84However, the Coalition members of the committee agree with recommendations 2 and 4, noting the importance of a bipartisan and thorough review of potential regulatory responses to AI-generated political or electoral content, and of ensuring that the national mandatory guardrails for AI are emulated across electoral and political settings.

Recommendation 14

1.85The committee recommends that the government examine mechanisms, including education initiatives, to improve AI literacy for Australians, including parliamentarians and government agencies, to ensure Australians have the knowledge and skills needed to navigate the rapidly evolving AI landscape, particularly in an electoral context.

1.86While the Coalition members of the committee do not oppose this recommendation, it is particularly important in the electoral context that any AI education programmes are designed following extensive consultation with the opposition.

Conclusion

1.87In contrast to the theme of the Interim Report, the Coalition members of the committee hold that freedom of speech is not a mere constitutional guardrail; rather, freedom of speech is integral to the success of our liberal democracy.

1.88This is why the Coalition members of the committee strongly oppose the dystopian reforms in the government’s Electoral Communications Bill that purport to adjudicate truth in political advertising.

1.89Yet it is unsurprising that the Labor government is seeking to develop further dystopian mechanisms to control the Australian public. Indeed, this proposal plays into the consistent dystopian vision that the Labor party has for our country: a vision of less freedom, greater executive secrecy, and less transparency.

1.90Such a vision has been on broad display consistently throughout the Labor party’s term of government. Whether it be the Labor government’s appalling approach to answering questions on notice, its consistent refusals to satisfactorily respond to orders for the production of documents, its forcing of stakeholders to sign non-disclosure agreements to be included in consultations, or its creation of a handbook for officials on how to avoid answering questions at Senate estimates, this government has unswervingly favoured secrecy and duplicity over transparency and accountability.

1.91As such, it is unsurprising that the Labor party is now attempting to use further vehicles to censor the Australian public through laws that purport to adjudicate truth in political advertising.

1.92The Coalition members of the committee are concerned that, should the government introduce a rushed regulatory AI model with prohibitions on freedom of speech in an attempt to protect Australia’s democracy, the cure will be worse than the disease.

1.93The Coalition members of the committee would welcome the opportunity to work with the government on balancing how our freedom of speech can be protected in an AI world.

Senator the Hon James McGrath

Member

LNP Senator for Queensland

Senator the Hon Linda Reynolds CSC

Member

Liberal Senator for Western Australia

Footnotes

[1]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, p. 15.

[2]Meta, Answers to questions on notice (59), 27 September 2024 (received 24 October 2024), pp 1-9.

[3]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, p. 84.

[4]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, pp 84-85.

[5]Department of Home Affairs (DHA), Submission 55, p. 5.

[6]DHA, Submission 55, p. 5.

[7]Department of Industry, Science and Resources (DISR), Safe and responsible AI in Australia consultation: Australian Government’s interim response, January 2024, p. 18.

[8]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, p. 41.

[9]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, p. 41.

[10]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, p. 41.

[11]DISR, Safe and responsible AI in Australia consultation: Australian Government’s interim response, January 2024, p. 13.

[12]Mr Steven Worrall, Corporate Vice-President, Microsoft Pty Ltd, Committee Hansard, 16 August 2024, p. 35.

[13]Victorian Trades Hall Council, Submission 114, p. 5.

[14]Attorney-General’s Department (AGD), Use of automated decision-making by government: Consultation paper, November 2024.

[15]DISR, Safe and responsible AI in Australia consultation: Australian Government’s interim response, January 2024, p. 18.

[16]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, p. 2.

[17]Electoral Legislation Amendment (Electoral Communications) Bill 2024, Explanatory Memorandum, p. 12.

[18]Select Committee on Adopting Artificial Intelligence, Final report, November 2024, p. 4.