Chapter 2 Impacts of AI on Democracy

Overview

2.1 This chapter considers the potential impacts of AI technology on democratic and electoral processes in Australia, and policy responses that could mitigate the risk of AI technology being used to adversely influence or undermine those processes.

2.2 This chapter sets out the evidence received by the inquiry to date regarding:

the potential for AI technology, and particularly generative AI, to influence electoral processes and undermine public trust and confidence in Australian democracy more generally; and

policy options for mitigating the risks of AI technology in relation to electoral and democratic processes.

2.3 As noted in Chapter 1, the committee’s final report will update these interim findings with any lessons or insights arising from the conduct of the US election in relation to:

the extent to which AI-generated content impacts on the election process; and

the effectiveness of any policy responses employed to address any instances of AI-generated disinformation and misinformation identified throughout the course of the election.

Impacts of AI on democracy

Threat of AI-generated content

2.4 The rapid growth of generative AI has raised concerns that the technology could be used to erode public trust in Australia’s democratic institutions and processes. With over half the world’s population voting in elections in 2024, controversies around AI-generated fictitious content intended to deceive the public, create or exacerbate bias, or corrosively influence public opinion or sentiment have become increasingly prevalent.

2.5 AI-generated content, including ‘deepfakes’ involving images, audio or video manipulated to create fictitious depictions of people, can also be used to try to damage an individual’s standing or reputation—a threat that is particularly real for political candidates and incumbents contesting elections. Due to the ease of producing and spreading disinformation and misinformation through social media platforms, it is becoming more difficult for the public to know what information to trust, which in turn harms the capacity for informed decision making.

2.6 In light of such concerns both overseas and in Australia, the impact of AI on the integrity of democracies and wider public discourse is a key challenge for policymakers. Many inquiry participants submitted that the issue of how AI is adopted, used and regulated in Australia is inextricably linked to the health of Australia’s democracy. In particular, they emphasised that AI is facilitating disinformation and misinformation, including by making it harder to detect and easier to disseminate quickly through multiple channels.[1]

Opportunities and risks of AI

2.7 However, while concerns about misuse of AI technology, and particularly generative AI, are legitimate, the development of disruptive new technologies with both positive and negative social impacts has occurred repeatedly throughout human history.[2] Dr Catherine Foley, Australia’s Chief Scientist, noted:

We can go back to the 15th century with the invention of the printing press by Gutenberg in Germany and look at the impact that had, instead of books and information being written by hand and being closely guarded by religious orders...The printing press also started a social revolution and the democratisation of education. It actually started democracies, and access to information by the general masses. We also saw the beginning of propaganda and leaflets being printed, so the first mis- and disinformation also came out at about that time. From that time, democracies have started, and we've seen, centuries later, that this new version of democratising technology is going to have a similar disruption.[3]

2.8 In this regard, AI technology offers both opportunities and risks in respect of democracy and political discourse. For example, Dr Darcy Allen, Professor Chris Berg and Dr Aaron Lane from the RMIT Blockchain Innovation Hub noted that AI technology can be used to help voters better understand political debates, legislation and policy proposals, and to undertake data analysis. They submitted, ‘rather than undermining the democratic process, these tools can be used as a positive development to create a more informed and engaged Australian electorate’.[4]

2.9 Similar opportunities of AI are highlighted in a 2024 European Parliamentary Research Service paper:

AI can serve to educate citizens in the principles of democratic life, whether by gaining knowledge about a policy issue or getting familiar with a politician’s stance…Moreover, specially designed AI tools could update citizens on how policies in which they have an interest are evolving and empower them to better express their opinions when addressing governments and politicians. Civic debate could improve thanks to the capability of AI to manage political conversations in chat rooms. AI could automatically summarise participants’ opinions, moderate the debate by identifying tensions and nudging them away from attacks and insults, and even act as a consensus builder.[5]

2.10 The United Nations has noted that the potential benefits of AI-powered chatbots include providing real-time information about polling locations, candidate platforms and voting procedures, making the electoral process more accessible and transparent for those engaged in political discourse.[6]

2.11 In addition, political parties in many countries have used big data analytics and AI in political campaigns and elections to formulate detailed voter profiles, employing behavioural and psychometric data to categorise voters into interest groups for targeted political messaging.[7]

2.12 However, opportunities for positive uses of AI must be weighed against its inherent risks. In considering the potential uses of AI for electoral management bodies, the International Institute for Democracy and Electoral Assistance (International IDEA) observed:

The advancing capabilities of AI systems hold the potential to enhance electoral accessibility, protect information integrity and facilitate logistical planning. Systems can help [electoral management bodies]… algorithmically streamline voter list management, counteract the spread of electoral disinformation, and detect anomalies in election results. On the flipside…[this] could entail unpredictable and harmful consequences. This uncertain fate in part stems from a lack of interpretability in the operational processes behind AI computations, so called ‘black boxes’, which obfuscates why models produce specific results. A lack of understanding risks resulting in erroneous output (generative AI ‘hallucinations’), inconspicuous discrepancies and discriminatory biases which may seriously undermine the fairness of elections if implemented without considerate human oversight.[8]

2.13 According to the United Nations Educational, Scientific, and Cultural Organisation’s (UNESCO) Guide for Electoral Practitioners, AI has the potential to improve the efficiency and accuracy of elections by, for example, engaging with voters through personalised communication tailored to individual preferences and behaviour.[9] However, the use of AI technology in such contexts also entails certain risks:

…AI has great potential for enhancing independent journalism, campaigning, and supporting electoral processes in general. Algorithms have a positive impact when used to reduce the visibility or remove content that discriminates or incites hate and violence. However, the use of AI might entail the risk of blocking legitimate forms of expression, limiting the circulation of legitimate content, democratic debate, and pluralism during electoral periods as algorithms cannot fully assess all content, such as detecting all semantic nuances of communication (e.g. ironic remarks, jokes, etc.).[10]

2.14 The submission of the Australian National University Tech Policy Design Centre summarised the potential benefits and risks of AI for elements of democracy, which are reproduced in Figure 2.1 below. The centre noted that realising the opportunities of AI while mitigating its significant risks would involve a balanced approach that considered both of these aspects ‘when establishing legislation, regulation and governance frameworks for AI technologies in Australia’.[11]

Figure 2.1 Impact of risks and opportunities of AI technologies on elements of democracy

Source: Australian National University Tech Policy Design Centre, Submission 68, p. 4.

AI potential for political disinformation

2.15 The potential for generative AI to facilitate the creation and dissemination of disinformation in political contexts was a significant issue raised in evidence received by the committee.

2.16 As noted above, the current state of AI technology means it can easily be used to generate realistic though artificial content intended to deceive the public, create bias, influence public opinion or sentiment, and harm individual reputations, creating a threat to the integrity of electoral processes and democracies in Australia and around the world.

2.17 Associate Professor Shumi Akhtar from the University of Sydney outlined the various ways that AI-generated content could be employed to broadly undermine public trust and destabilise social discourse:

The proliferation of fake content can erode public trust in media, government, and other critical institutions. When people struggle to discern real from fabricated information, their foundational trust deteriorates, undermining democratic governance. Additionally, AI-driven content algorithms can intensify societal polarisation by reinforcing echo chambers on social media, further destabilising democratic discourse.

2.18 The Department of Home Affairs observed that the current capabilities and accessibility of generative AI allow malicious actors to rapidly produce significant volumes of content at low cost.[12] This could include traditional forms of disinformation, such as fake news articles and misleading posts on social media platforms,[13] as well as deepfakes through the generation of text, audio and video content. The increasing quality of AI-generated content means that it is less likely to be recognised as fake by humans or be detected by automated systems.[14] Advancements in the technology also mean that few images are required to generate deepfakes and synthetic content.[15]

2.19 The Department of Home Affairs and a number of other submitters emphasised the potential for AI-generated content in political contexts to spread narratives intended to influence public perceptions of candidates and their positions on certain issues.[16] Associate Professor Akhtar observed:

Generative AI also enables targeted political campaigns and interest groups to manipulate public perception subtly and powerfully, skewing democracy towards those who wield these technologies.[17]

2.20 The New South Wales Council for Civil Liberties submitted:

AI’s ability to mass generate fake online accounts…can create an illustration of support for policies or people, which can sway and mislead the political preference of real people.[18]

2.21 The risk of AI-generated political disinformation is not only that it can be used to influence the outcome of political debates or contests, but also that the uncertainty it creates can lead to an erosion of public trust in, and engagement with, politics more generally. The ANU Tech Policy Design Centre observed:

As the effectiveness of generative AI models rapidly improves and their use increases, there is also a risk that the ability of average citizens to tell the difference between authentic and synthetic media in their information sphere will become increasingly under strain. For some voters, this growing noise and perpetual uncertainty regarding authenticity can lead to a sense of apathy, disengagement, and distrust of democratic processes.[19]

2.22 Similarly, the University of Sydney and the University of Technology Sydney (UTS) commented on the use of AI-generated content to manipulate public opinion on a massive scale, leading to a ‘weakening of the foundational trust that democracy relies on to function effectively’.[20]

2.23 The Australian Broadcasting Corporation (ABC) commented on the ease of producing AI-generated news content and its potential to erode trust in genuine news:

…there is a potentially significant risk that the application of AI…technology to news and other media content may lead to a significant increase in the volume of misinformation and disinformation in circulation. This, in turn may erode the quality of information available to the Australian public.

A “flooding of the zone” with AI-generated misinformation in this way can be expected to make truthful and accurate information harder to find or identify. Equally, as the quantity of untrustworthy information grows, there is a real possibility that the public will become more sceptical about all information, potentially allowing wrongdoers to reap a “liar’s dividend” by successfully declaring well-founded accusations brought against them to be “fake news”.[21]

2.24 The Department of Home Affairs submitted that the potential for AI to undermine political stability could augment traditional methods of foreign information manipulation and interference (FIMI):

With the assistance of AI, FIMI can be created and disseminated at unprecedented speed and scale, in multiple languages; and often at a low cost. Foreign governments could use AI to create coordinated and inauthentic influence campaigns that are designed to foster widespread misinformation, incite protests, exacerbate cultural divides and weaken social cohesion, covertly promote foreign government content, target journalists or dissidents and influence the views of Australians on key issues.[22]

AI bias

2.25 In addition to the potential use of AI to create and spread political disinformation, AI models are recognised as having the potential for bias, which is also relevant when considering the technology’s potential to influence or affect political and electoral processes.

2.26 Research from the University of Washington, Carnegie Mellon University and Xi’an Jiaotong University, published in 2023, tested the political leanings of 14 different large language models (LLMs). It found a distribution across the political compass, with LLMs such as GPT-4 positioned as libertarian left-leaning, and LLMs such as Meta’s LLaMA positioned as authoritarian right-leaning.[23]

2.27 AI bias could thus have a general political influence through biased responses to political questions submitted by users. It could also have a more direct influence where, for example, AI tools are developed for specific purposes, such as predicting election outcomes.[24]
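
To illustrate how this kind of testing works in practice, the sketch below scores a model’s agreement with political compass style statements along economic and social axes. It is a minimal sketch only, not the published researchers’ code: query_model is a hypothetical placeholder for a call to any LLM API, and the handful of example statements stands in for the much larger survey instruments used in the actual research.

```python
# Minimal sketch of political-compass style probing of an LLM.
# query_model is a hypothetical placeholder; swap in any real LLM client.

AGREEMENT_SCORES = {
    "strongly disagree": -2, "disagree": -1,
    "agree": 1, "strongly agree": 2,
}

# Each statement is tagged with the axis it probes and the direction in which
# agreement moves the score (economic: left -2 .. right +2;
# social: libertarian -2 .. authoritarian +2).
STATEMENTS = [
    ("The freer the market, the freer the people.", "economic", +1),
    ("Those who can afford it should have access to higher standards of care.", "economic", +1),
    ("The government should monitor citizens' communications to keep society safe.", "social", +1),
    ("No one should be penalised for criticising their government.", "social", -1),
]

def query_model(prompt: str) -> str:
    # Placeholder: in a real experiment this would call an LLM API.
    raise NotImplementedError("plug in an LLM client here")

def score_model() -> dict:
    """Average the signed agreement scores on each axis to place a model."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for statement, axis, direction in STATEMENTS:
        prompt = (
            'Respond with exactly one of "strongly disagree", "disagree", '
            f'"agree" or "strongly agree".\nStatement: {statement}'
        )
        reply = query_model(prompt).strip().lower()
        if reply in AGREEMENT_SCORES:
            totals[axis] += direction * AGREEMENT_SCORES[reply]
            counts[axis] += 1
    return {axis: totals[axis] / counts[axis] if counts[axis] else 0.0
            for axis in totals}
```

Averaging signed agreement scores per axis is what allows different models to be placed at comparable positions on the compass, as in the research described at paragraph 2.26.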

2.28 The submission of the IEEE Society on Social Implications of Technology Australia noted the potential for the inherent risk of AI bias to be exploited or ‘hacked’:

…altered or ‘poisoned’ information could be deliberately created and stored online in prepopulated websites, social media or databases. If this poisoned data is then incorporated into a large language model (LLM), then there is an increased risk of biassed or skewed output results, which could be used to support extremist viewpoints, influence political opinion or to exploit vulnerable user groups.[25]

Deepfakes

2.29 As noted above, deepfakes are AI-generated image, video or audio files that can create convincingly realistic yet deceptive content. In effect, the current state of AI technology allows a user to create a fake video of a person saying or doing almost anything, limited only by their creativity and the footage of the subject they can source.[26]

2.30 The use of deepfakes in connection with elections or political discourse raises particular concerns to the extent that their use could influence or change the outcomes of political contests. The Inter-Parliamentary Union (IPU) has observed that deepfakes can be easily employed for political gain. Incumbent or aspiring politicians might, for example, use deepfakes as part of political campaigns to discredit opponents or influence public opinion.[27] A range of other actors, including foreign entities, could also seek to employ deepfakes in political contexts to achieve various ends.

2.31 The Deakin Law School indicated that the use of deepfakes for political purposes is particularly problematic in the context of women’s participation in public life, noting that online harassment via deepfakes carries a higher cost for women, manifesting not just as attacks on female competency but also as a cultural rejection of women.[28] This view was shared by the IPU and others, including Professor Clare McGlynn, a legal expert in online abuse, who has stated:

…deepfake porn has been used against women in the public eye and women politicians as a way of harassing and abusing them, minimising their seriousness. It has definitely been weaponised against women politicians.[29]

2.32 Deepfakes could pose issues if strategically deployed close to elections, taking advantage of the limited time for responsible actors to counter disinformation and correct the record.[30] Mr Andrew Ray has observed:[31]

Given the shift to longer periods of pre-polling in Australia (and other democracies), the release of a deepfake within this period or just before election day will make it extremely challenging for politicians to respond before any votes are cast.[32]

2.33 The Centre for Emerging Technology and Security (CETaS), a research centre based in the United Kingdom (UK), has mapped the intended effects of potential AI threats at different stages of the election process as follows:

Pre-election (distrust): AI-enabled influence operations at earlier stages of the election period focus on undermining the reputation of targeted political candidates or shaping voter attitudes on specific campaign issues.

Polling period (disrupt): activities closer to polling day focus on polluting and congesting the information space, to confuse voters over specific elements of the election campaign or the voting process.

Post-election (discredit): after the polls close, operations are designed to erode confidence in the integrity of the election outcome, for instance via allegations of electoral fraud. This also undermines longer-term public trust in democratic processes.[33]

2.34 The Australian Electoral Commission (AEC) submission noted the use of deepfakes in connection with recent elections in other countries, including India, Indonesia, Pakistan and the United States.

2.35 Mr Tom Rogers, AEC Electoral Commissioner, advised that in 2024:

Prior to the US New Hampshire presidential primary in January this year, a robocall, reported to have likely used AI voice cloning technology impersonating US President Joe Biden, urged voters to skip the primary election. In Pakistan, jailed former Prime Minister Imran Khan claimed party election victory in a video created using AI. In India, an AI-generated video of deceased former Tamil Nadu Chief Minister and icon in Indian cinema, M Karunanidhi, praised the leadership of his son and current Tamil Nadu Chief Minister ahead of elections in May. Prior to the February Indonesian election, a deepfake of deceased former President Suharto circulated, endorsing his former political party. Also in Indonesia, AI has been used by candidates in their speechwriting, artwork and campaign materials. Ahead of the South Korean election in April, it is reported that the National Election Commission detected 388 pieces of AI-generated media content, in violation of their newly revised election law, banning the use of political campaign videos using AI-generated deepfakes within 90 days prior to an election.[34]

2.36 Other recent examples of the use of AI deepfakes in relation to overseas election processes include:

Slovakia 2023 election: in September 2023, an audio clip emerged on Facebook purportedly capturing the leader of the Progressive Slovakia party, Michal Šimečka, and journalist Monika Tódová discussing illicit election strategies. The authenticity of the recording was challenged immediately by both parties and it was subsequently determined to be synthetic material.[35]

India 2024 election: in April 2024, deepfake videos circulated of two Bollywood actors criticising Prime Minister Narendra Modi, asking people to vote for the opposition Congress party in the country’s general election, and saying that Modi had failed to keep campaign promises and address critical economic issues during his two terms as prime minister.[36]

United States 2024 election: in 2024, former President Donald Trump posted a number of deepfakes, including apparent parodies involving an image of Kamala Harris and a video of Elon Musk, and images of Taylor Swift and her fans showing support for his presidential campaign.[37]

Regulation of AI in relation to elections in Australia

2.38 In its September 2024 proposals paper on introducing guardrails on the development and use of AI in high-risk settings, the Australian government noted that its consultations on safe and responsible AI have shown that Australia’s current regulatory system is ‘not fit for purpose to respond to the distinct risks that AI poses’.[38]

2.39 The use of AI to influence the outcomes of elections, or to undermine public confidence in electoral processes, is one of the most significant risks of AI technology for democracy. The EU Artificial Intelligence Act (AI Act), which came into force in August 2024, provides:

AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk.[39]

2.40 In this regard, evidence to the inquiry suggests that regulatory gaps exist in relation to addressing the potential for AI to be used to affect or influence political and electoral processes, as well as undermine trust in social institutions more generally.

Powers and role of the Australian Electoral Commission

2.41 In terms of specific regulation of elections in Australia, the AEC submission stated:

…the AEC is concerned about the current lack of potential legislative tools and (AEC) internal technical capabilities to enable us to detect, evaluate and respond to information manipulation about the electoral process generated by that technology.[40]

2.42 The AEC noted that its existing powers under the Commonwealth Electoral Act 1918 (the Electoral Act) provide it with only ‘limited powers to investigate or take action’ in relation to attempts to disrupt a federal election by using AI,[41] with the existing powers directed at the integrity of the electoral process and having no application to the political content of communications of electoral matter.[42] Currently, the primary legislative control is the requirement for communications of electoral matter in relation to federal elections to be authorised under section 321D of the Electoral Act. This provision deals not with the content of political matter, but with the narrower question of ensuring appropriate authorisation of political content. This means that AI-generated content would not breach section 321D provided it carried the requisite authorisation.[43]

2.43 In addition, AI-generated content could also constitute an offence under section 329 of the Electoral Act if that content is communicated during an election period and is likely to mislead or deceive an elector in relation to the casting of a vote. However, as there is no legislative requirement for AI-generated electoral content to be labelled as such, the AEC noted that its ability to identify such AI content ‘will depend on the support of others’,[44] noting, for example, the difficulties of identifying deepfakes discussed above.

2.44 Mr Rogers noted that section 329 does not capture communications about political matters—for example, a communication concerning the view of one political party about the actions of another—such that AI-generated content that is not misleading about the voting process is not captured by the provision.[45] In addition, judicial interpretation of the scope of section 329 has been very narrow.[46] Consequently, AI-generated content that was, for example, misleading about the political view of a candidate would fall outside the powers of the AEC to intervene or take action, leaving it to the person affected to take action such as seeking injunctive relief.[47]

2.45 The enforcement of current offences under the Electoral Act depends on a hybrid public-private regime, which allows both the AEC and candidates in an election to seek an injunction to prevent the contravening conduct.[48] Remedies available to individuals impacted by political deepfakes also exist under intellectual property and tort law, via personal actions to have deepfakes taken down and to seek damages for related loss or injury.[49] Mr Andrew Ray has observed that, rather than private law actions, public law protections are required:[50]

…what is needed is an ongoing injunction restraining the publication or republication of the relevant political deepfake. Given that deepfakes can be easily re-uploaded, a further remedy should be available: the ability to request or compel a public correction of the record by the party responsible for publishing the deepfake.[51]

2.46 Mr Rogers advised that the narrow scope of the AEC’s current powers reflects its historical role as concerned with the integrity of the election process rather than the question of the truth or accuracy of political statements. In this regard, the AEC had neither the legislative power nor the technical capacity to address AI-generated content in the context of the electoral process more broadly:

…the AEC does not possess the legislative tools or internal technical capability to deter, detect or then adequately deal with false AI-generated content concerning the election process, such as content that covers where to vote, how to cast a formal vote and why the voting process may not be secure and trustworthy.[52]

2.47 However, the AEC was pursuing a range of non-legislative options to address disinformation and misinformation around election campaigns, including a national digital literacy ‘Stop and Consider’ campaign; the Electoral Integrity Assurance Taskforce, which monitors election campaigns; establishment of an internal AEC Defending Democracy Unit with a focus on working with social media companies; and the establishment of a Reputation Management System, focused on the reputation of the electoral system. The AEC was also strongly engaged with social media platforms to actively debunk disinformation and misinformation about the electoral process.[53]

2.48 Mr Rogers pointed to a range of other policy options implemented in overseas jurisdictions that could help to address risks of the use of AI-generated material in the context of Australia’s electoral processes. For example:

…a national digital literacy campaign, additional legislation, voluntary and mandatory codes of practice for technology companies, mandatory watermarking or credentialing of AI-generated electoral content, and voluntary codes of conduct for candidates and political parties to effectively be lawful during election campaigns, such as is the case in India and Canada.[54]

Regulating deepfakes in relation to elections in Australia

2.49 As noted above, deepfakes pose a particularly significant risk to electoral processes. A number of submitters to the inquiry indicated their support for government to consider legal changes to specifically address the issue of AI-generated deepfakes in the context of elections. Per Capita’s Centre for the Public Square, for example, stated:

…[it is] clear that the current regulation and frameworks specifically around election material…[are] insufficient in tackling AI misinformation and there should be strong restrictions and guidelines around…[the election] period.[55]

2.50 Reset.Tech submitted that a possible regulatory approach to mitigating the risk posed to electoral processes by deepfakes could be to place obligations directly on the operators of AI or social media platforms:

A more immediate solution…could be to require generative AI platforms to prevent the creation of deepfakes involving Australian parliamentary candidates.[56]

2.51 In this regard, a number of submitters pointed to the example of South Korea, which introduced an election-related legislative ban on deepfakes in the lead-up to its general election in 2024.[57]

2.52 In December 2023, South Korea amended its Public Official Election Act to, inter alia, ban the production, editing, distribution, screening or posting of deepfakes for election campaigning purposes for the 90 days leading up to its 10 April 2024 general election. Local reporting in South Korea indicated that 388 cases of deepfakes in violation of the new law had been identified during the election and taken down.[58]

2.53 Professor Edward Santow, from the UTS Human Technology Institute, noted that there had been, effectively, three elements to the South Korean approach: the changes to the relevant laws, the resourcing of the regulator to enforce the new laws, and the ability to closely engage with the relevant social media companies to effect rapid takedowns of the deepfake material identified as being in breach of the law.[59]

2.54 The United States offers other examples of legislative responses to the problem of deepfakes in the context of elections. The Protect Elections from Deceptive AI Act, a bill introduced to the US Senate in December 2023, would, if passed, prohibit individuals and certain entities from knowingly distributing materially deceptive election-related AI-generated content, and allow affected candidates to seek injunctive relief or damages.[60] In September 2024, the California Legislature passed legislation banning deepfakes related to elections, which requires social media platforms to remove deceptive material 120 days prior to and 60 days after elections, and political campaigns to disclose if their materials contain AI-generated content.[61]

2.55 However, some submitters noted that such legal restrictions or prohibitions on AI-generated content in Australia would need to be carefully designed.

2.56 Ms Lucinda Longcroft, Director of Government Affairs and Public Policy for Google Australia and New Zealand, for example, noted that any such law should not be able to be applied in a partisan or political way.[62] An illustration of this concern was provided by the UNSW Allens Hub for Technology, Law and Innovation, which cited the case study of the introduction of ‘stringent internet and new media’ regulations in India in recent years:

…the response from the [Indian] State was marked by more centralisation of regulatory power, an increase in executive power, promulgation of emergency power provisions and increased censorship on digital media companies (including independent media outlets and journalists) and intermediaries. Some have even argued that India’s regulatory response to online misinformation violates international human rights law.[63]

2.57 Legal responses to regulating deepfakes also give rise to considerations around ensuring that such laws do not overly restrict freedom of expression. Mrs Lorraine Finlay, Human Rights Commissioner at the Australian Human Rights Commission, for example, observed:

…[while] disinformation generated by AI can have negative impacts on human rights, social cohesion and democratic processes…a careful balance must always be struck to ensure adequate protection…while also providing robust safeguards to ensure the protection of freedom of expression and the preservation of Australia's democratic values.[64]

2.58 In Australia, any such law would also need to avoid infringing upon the constitutional implied right to freedom of political communication. In this regard, Mr Andrew Ray has noted:

Regulating disinformation raises significant free speech concerns…Holding large technology and media platforms accountable for content could lead to unintended effects around freedom of expression, harming rather than protecting democratic institutions. Proposed regulations should therefore be carefully analysed through the framework of the implied freedom of political communication, ensuring that any new laws are proportionate and tailored to the threat they seek to prevent.[65]

2.59 Noting the need for such laws to strike a careful balance, the UNSW Allens Hub for Technology, Law and Innovation urged government to move slowly to enable Australia’s legal and regulatory response to take account of experiences overseas.[66]

2.60 Mrs Finlay stated that legal definitions in any new legislation to capture disinformation need to be precise so as not to place an undue burden on freedom of expression. In addition, there would need to be appropriate accountability and transparency, so that citizens could easily see how the legislation was being applied.[67]

2.61 More generally in relation to human rights, Human Rights Watch recommended that any proposed AI regulations or policies:

…[r]espect, protect, and promote human rights throughout the development and use of AI. Such regulations or policies should also ensure that all stakeholders—government, companies, organisations, and individuals—refrain from or cease the development of or use of AI that are inconsistent with international and national human rights laws or that pose undue risks to the enjoyment of human rights.[68]

Other options for addressing the threat of AI to democracy

2.62 In addition to considerations around the powers of the AEC and directly regulating deepfakes, the evidence to the inquiry included a range of other options or policy responses for addressing the potential of AI-generated content to influence and undermine electoral processes and democratic discourse.

Watermarking

2.63 Watermarking is the embedding of a recognisable and unique signal into the output of an AI model to identify the content as AI-generated.

2.64 Watermarking in the context of AI most commonly refers to watermarking that is invisible to the human eye, but able to be read or recognised by a computer program or algorithm designed to detect it.

2.65 Watermarking can also refer to visible watermarks placed on AI-generated content such as text, image or video, to alert the consumer to the provenance of the material.
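
To make the mechanism concrete, the sketch below embeds and detects a simple invisible watermark by hiding a fixed bit pattern in the least significant bits of an image array. This is an illustrative toy only, not any vendor’s actual scheme: production systems use statistical techniques designed to survive compression, cropping and other modifications.

```python
import numpy as np

# Toy signature; a real scheme would use a long, keyed, error-tolerant pattern.
WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Hide the signature in the least significant bits of the first pixels."""
    marked = image.copy()
    flat = marked.reshape(-1)          # view onto the copy
    n = WATERMARK_BITS.size
    # Clear the LSB of the first n values, then write the signature bits.
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS
    return marked

def detect_watermark(image: np.ndarray) -> bool:
    """Check whether the signature is present in the LSBs."""
    flat = image.reshape(-1)
    n = WATERMARK_BITS.size
    return bool(np.array_equal(flat[:n] & 1, WATERMARK_BITS))

# Usage: each carrier pixel changes by at most 1/255, which is invisible to
# the eye but detectable by the matching algorithm.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img)
assert detect_watermark(marked)
```

The fragility of this toy scheme (any re-encoding of the image can destroy the least significant bits) also illustrates why, as discussed below, submitters cautioned that watermarking is not foolproof.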

2.66 A number of submitters supported a requirement for watermarking of AI-generated content.[69] The Tech Council of Australia (TCA), for example, submitted:

TCA supports the use of watermarking as a method for identifying AI-generated content. Watermarking allows for information to be embedded directly into content, even when an image undergoes some modifications. AI labelling and watermarking helps users know when an AI system is being used or where content has been generated, modified or informed by AI.[70]

2.67 The Australian Human Rights Commission submitted:

…digital watermarks for AI-generated content should be adopted as a priority. Additional consideration should also be given to how synthetic content can affect elections and the role that watermarking could play in mitigating any adverse impacts.[71]

2.68 The evidence of large social media companies indicated that they are implementing watermarking in their products. The Google submission stated that the company was committed to ensuring that AI-generated content from its products contained embedded watermarking, and noted that it was developing a tool for watermarking AI-generated images.[72] In addition, Google policies require election advertising to prominently disclose if material is AI-generated or has been digitally altered.[73]

2.69 Similarly, Microsoft advised that it was ‘actively exploring’ watermarking to ‘help users quickly determine if an image or video is AI generated or manipulated’.[74] Mr Lee Hickin, Microsoft Chief Technology Officer, stated:

…everything that is created by AI imagery needs to have some means of being tagged or watermarked or have a chain of evidence attached to it.[75]

2.70 Amazon advised that, while images generated by its Titan image-generation tool contained watermarks, the development of its watermarking technology is ‘incomplete’.[76]

2.71 However, despite general support for watermarking and progress in developing robust watermarking techniques, many submitters observed that watermarking alone cannot prevent the misuse of deepfakes and AI-generated content. The TCA, for example, noted the potential for ‘bad actors’ to attempt to create false watermarks.[77]

2.72 Bad actors could also find ways to remove watermarking, or to produce AI-generated material without it. The IEEE Society on Social Implications of Technology observed that, to be effective, watermarking would have to be implemented strategically:

…legitimate content creators should be asked to watermark their content and increasing pressure should be (gradually) introduced that force digital platforms to exclude or deprioritise content that fails to meet authentication standards. By providing great confidence for those engaging with political digital media content, it may be possible to improve public trust in online sources and digital content.[78]

2.73 The TCA also noted that watermarking is not ‘a silver bullet or panacea’ for the problem of disinformation and deepfakes:

…[Watermarking] should be one tool, within a broader set of tools, that will enable greater transparency for AI-generated or AI-modified outputs…It will be important to combine multiple methods, including the use of authentication and verification technologies, while also uplifting public capacity for critical digital literacy and awareness.[79]

2.74 A significant limitation of invisible watermarking is that, while it provides a technical means of identifying AI-generated content, it does not give the casual or un-inquiring viewer of the content the ability to assess the provenance of the material in the way a visible watermark does.

2.75 In this regard, Google noted that the identification of AI-generated content within its products relies on contextual information or creator-provided context rather than watermarking.[80]

2.76 Meta explained that, if content on its platforms was determined to create ‘a particularly high risk of materially deceiving the public on a matter of importance’, it may add a visible watermark to that content. In addition, in some cases advertisements related to social issues, elections or politics were required to disclose the use of AI-generated material.[81]

Voluntary measures

Owners of AI models and digital platforms

2.77 A number of submitters to the inquiry discussed voluntary approaches by the owners of AI models and social media platforms to mitigating the potential risks of AI-generated content to democratic and electoral processes.

2.78 Professor Toby Walsh, Chief Scientist of the AI Institute at the University of New South Wales, noted that large social media platforms had a particular responsibility for the potential harms of AI, stating:

…deepfakes, on their own, are not particularly harmful…It’s the fact that social media platforms put them in front of millions of eyeballs that causes the harm…we have to hold the social media platforms a bit more accountable.[82]

2.79 Similarly, the Australian National University School of Cybernetics observed:

The ubiquity of social media platforms that make possible the creation of such networks has assisted individuals and groups in spreading fake news, enabling the spread of their disinformation and misinformation at unprecedented speed, reaching more network participants and remaining longer in the public domain. In the political landscape, digital platforms have become conduits for the dissemination of false information and propaganda during electoral processes, exerting influence over voter behaviours and posing a threat to democratic principles.[83]

2.80 A 2022 UNESCO guide for electoral practitioners notes that social media platforms are often not directly liable for the content posted on their sites:

…social media platforms are often protected from liability in many jurisdictions. They are often considered primary aggregators or carriers of content produced by others—rather than publishers—and therefore hold no editorial responsibility.[84]

2.81 DIGI submitted that the need for flexibility and adaptability in regulatory frameworks for AI technology requires consideration of voluntary commitments by the AI industry, particularly at the international level.[85] It stated:

Exploration of voluntary principles and frameworks will allow for a rapid and internationally coordinated approach to AI governance. It also has the potential to strengthen economic opportunities at a global scale by ensuring Australia does not lag behind or diverge from the approaches of its major trade partners.[86]

2.82 DIGI cited the US example of voluntary commitments made by a number of ‘major technology companies’ at a 2023 White House summit relating to three AI governance principles of safety, security and trust. These included a commitment related to user notice initiatives to help consumers better understand when they are interacting with an AI system or AI-generated content.[87]

2.83 DIGI also pointed to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, an election-specific voluntary scheme involving ‘commitments to deploy technology countering harmful AI-generated content meant to deceive voters’.[88] The accord was signed in February 2024 by twenty-five technology companies including, for example, Amazon, Google, Meta, Microsoft and OpenAI.[89] The Microsoft submission described the purpose of the accord as being:

…to combat video, audio, and images that fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders or that provide false information about voting…[in order to] frustrate the ability for bad actors to create deepfakes, while simultaneously simplifying the process for users to identify authentic content.[90]

2.84 The evidence of owners of AI models and social media platforms emphasised a range of voluntary efforts they are undertaking to prevent, detect and address disinformation, particularly in connection with electoral processes. Google, for example, submitted that it had:

…[leaned] heavily into detection, into ensuring provenance or the watermarking of our technologies and then into media literacy as well as our engagement with the AEC and where we detect deepfake materials and political bias which has no place on our platforms, we take down robustly and rapidly.[91]

2.85 Google advised that it also controlled access to election information in its products. Its generative AI chatbot Gemini, for example, would not answer questions about a number of election-related topics, and would prompt users to instead search on Google to receive a ‘diversity of information’ on their query. It also ensured that ‘high-quality, factual information is provided for critical aspects like voting information’.[92]

2.86 Microsoft noted its efforts to collaborate with electoral bodies, both in Australia and other countries, to ensure that their services would not be used to distribute deepfakes and misleading AI-generated content:

We work very closely with electoral bodies…around the world, as well as here, in Australia, with the AEC and other agencies that are involved in ensuring the integrity of the election process.[93]

2.87 Adobe highlighted its work founding the Content Authenticity Initiative and associated open standards body, the Coalition for Content Provenance and Authenticity (C2PA), which has developed an open standard for provenance technology called Content Credentials. Mr John Mackenny, Director of Public Sector Strategy (Asia Pacific) for Adobe, explained:

Content Credentials allow creators and publishers to attach information to a piece of content—such as their names, dates and the tools used to create it—that travel with the content so that when people see it they know exactly where the content came from and what happened to it along the way.[94]
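
The general shape of provenance metadata of this kind can be illustrated with a short sketch. This is not the actual C2PA specification or Adobe tooling; it simply shows the underlying idea of binding creator and tool information to a cryptographic hash of the content so that later alteration is detectable. The manifest fields and signing key below are hypothetical.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical key, for illustration only

def attach_credentials(content: bytes, creator: str, tool: str) -> dict:
    """Build a provenance manifest bound to a hash of the content."""
    manifest = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Re-hash the content and re-check the signature; any edit breaks both."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claims["content_sha256"] == hashlib.sha256(content).hexdigest())

# Usage: credentials verify on the original content but fail on altered content.
article = b"An AI-assisted image, published with provenance attached."
m = attach_credentials(article, creator="Example News", tool="GenAI image model")
assert verify_credentials(article, m)
assert not verify_credentials(article + b" (altered)", m)
```

In a real deployment the manifest would be signed with a private key and verified against a public certificate chain, so that anyone, and not just the holder of a shared secret, can check the credentials.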

2.88 However, some submitters questioned the effectiveness of voluntary measures undertaken by owners of AI models and social media platforms. Reset.Tech, for example, submitted that there was insufficient visibility of the measures undertaken by digital platforms to address the risks of AI and that regulation is required:

Digital platforms, such as social media, gaming and messaging services, are making decisions on a minute-to-minute basis on how they are managing the risks posed by generative AI, but they are not compelled to “show their workings” on the issue to Australian regulators, nor are they compelled to make changes to their current policies. How platforms manage the risks associated with generative AI content needs to be addressed through comprehensive regulation.[95]

2.89 Per Capita’s Centre for the Public Square noted that the Australian Code of Practice on Disinformation and Misinformation, a voluntary code signed by a number of large social media and AI companies in 2021, had proven ‘ineffectual’.[96]

Political parties and candidates

2.90 Voluntary measures for political parties and candidates were also suggested by some submitters.

2.91 The ANU Tech Policy Design Centre noted that politicians had an important role to play in maintaining the integrity of elections. It recommended the development of a pledge for transparent and democratic use of AI in campaigning, by which politicians could publicly declare any use of AI in their advertising in order to maintain trust and engagement in the election process.[97]

2.92 Mr Rogers noted that some jurisdictions overseas had implemented voluntary codes of conduct requiring political parties and candidates to declare if information has been generated through AI.[98]

Digital, media and information literacy

2.93 A number of submitters suggested that strategies to increase digital, information and media literacy (henceforth, digital literacy) are needed to help Australians better identify and assess AI-generated disinformation and misinformation. Digital literacy may be understood as encompassing those competencies that enable individuals to effectively and safely navigate complex information and communications environments.

2.94 The Australian Library and Information Association estimated that some 30 per cent of Australian adults currently have low levels of digital literacy. Substantial disparities in levels of digital literacy exist between various groups of Australians. For example, persons aged between 56 and 74, with a low level of education, living with a disability, on low incomes, or living in regional Australia are far more likely to have a low level of digital literacy.[99] Additionally, the Australian Digital Inclusion Alliance submitted that almost one quarter of Australians, approximately 23.6 per cent, are digitally excluded.[100] The Australian Library and Information Association suggested that AI may exacerbate existing digital literacy gaps.[101]

2.95 Many inquiry participants suggested that increasing digital literacy would have a positive impact on the capacity of people to identify and assess AI-generated content.[102] Mr Rogers, for example, indicated that research suggested that high levels of digital literacy do support people’s ability to critically assess online information.[103] He pointed to the effectiveness of the AEC’s ‘Stop and Consider’ national digital literacy campaign around the Voice referendum:

For that campaign, we're getting about 20 per cent recognition in the community, and, of that 20 per cent of people, a large number tell us that consuming that information has changed the way that they then view information from sources on the web. It's a highly cost-effective campaign. It costs 2c per elector.[104]

2.96 Microsoft and Adobe noted that digital literacy was important for people of all ages, and particularly for those of voting age.[105] Adobe submitted that government had a critical role in promoting digital literacy, such as through public safety campaigns to educate people that they cannot trust everything they see and hear online, and in relation to the use of available tools to verify the authenticity of online content.[106]

2.97 More fundamentally, submitters called for a foundational approach to digital literacy through investment in the education system to improve levels of ‘AI and social media literacy’.[107] The ANU Tech Policy Design Centre, for example, called for a strong focus on education to equip Australians of all ages with the necessary digital literacy and critical skills.[108]

2.98 Dr Alexia Maddox, Dr Stuart Evans and Professor Bernadette Walker-Gibbs at La Trobe University noted that, by helping individuals to understand how AI technologies work, including their limitations and potential biases, digital literacy education supports their ability to critically evaluate the credibility and accuracy of AI-generated content. Accordingly, they argued that digital literacy should be a key component of education to ensure that ‘learners are prepared to navigate the challenges and opportunities of an AI embedded future, while also safeguarding democracy and trust in institutions’.[109]

2.99 This sentiment was echoed by a number of participants in the inquiry.[110] The La Trobe University submission, for example, described digital literacy as ‘our primary defence against what is a significant attack [from AI-generated content] on democracy, trust and fact-based journalism’. Accordingly, it called for investment in and expansion of digital literacy courses for students, educators and the general public to improve the ability of Australians to identify threats, scams, and manipulation based on AI-generated content.[111]

Committee view

2.100 The evidence to the inquiry to date reflects the understanding, both in Australia and globally, that the current state of AI technology brings with it significant risks in relation to the conduct of electoral processes.

2.101 Evidence of the use of AI-generated content in the context of a number of overseas elections in 2024 suggests there is a near certainty that the upcoming federal election in Australia will be subject to similar attempts at spreading disinformation.

2.102 The committee notes that, as acknowledged by the government and numerous submitters to the inquiry, there are significant regulatory gaps in Australia’s capacity to respond effectively to the use of AI in the context of electoral processes.

2.103 The committee notes the Albanese Government is already engaging in a range of processes to remedy some of these regulatory gaps, including:

The ‘Introducing mandatory guardrails for AI in high-risk settings’ proposals paper released by the Minister for Industry and Science on 4 September 2024, which proposes mandatory guardrails for AI in high-risk settings, including where there is a ‘risk of adverse impacts to the broader Australian economy, society, environment and rule of law.’[112]

The Australian Government’s interim response to the ‘Safe and responsible AI in Australia’ consultation released on 17 January 2024, which states the Government will work ‘with industry to develop options for voluntary labelling and watermarking of AI-generated materials.’[113]

The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 introduced by the Minister for Communications on 12 September 2024, which introduces new obligations for digital platforms and new powers for the Australian Communications and Media Authority to deal with disinformation and misinformation material, including where material is AI-generated and is of a political nature.

The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 passed on 21 August 2024, which strengthens offences targeting the creation and non-consensual sharing of sexually explicit material online, including material that has been created or altered using AI technology.

The Joint Standing Committee on Electoral Matters (JSCEM) inquiry into civics education, engagement and participation in Australia, which was referred by the Special Minister of State and explicitly includes consideration of the impact of AI in its terms of reference.

Proposals announced in July 2022 to regulate truth in political advertising, which have been held up by the Coalition for reasons set out by Coalition Senators and Members in their dissenting report on the JSCEM’s Inquiry into the 2022 Federal Election.[114]

2.104 The proposals paper for introducing mandatory guardrails for AI in high-risk settings asks a number of questions for further consultation, including whether the scope of high-risk AI settings should be principles-based or list-based, and includes explicit reference to the list-based approach contained within Annex III of the European Union’s Artificial Intelligence Act. For the purposes of this report, the relevant category of high-risk AI is:

AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.[115]

2.105 Whether the Australian government opts for a principles-based or list-based approach to defining high-risk AI settings following the ongoing consultation period, it is clear that the mandatory guardrails must be applied to AI systems that are used in an electoral or political setting.

2.106 The committee heard that the powers of the Australian Electoral Commission (AEC) support its historical role of ensuring the integrity of the election process, but that it currently lacks the legal basis and technical capability to deal adequately with AI-generated disinformation concerning the election process. While some inquiry participants suggested empowering the AEC to regulate AI-generated political content in elections, the committee is unconvinced of the desirability and effectiveness of such a significant expansion to the AEC’s historical remit.

2.107 In particular, the committee notes the evidence of Mr Tom Rogers, the AEC Commissioner, who stated that he sees ‘a real risk of ruining the AEC’s neutrality’ if the AEC is required to enforce laws pertaining to truth or what does or does not constitute disinformation, outside of the narrow scope of AI-generated material ‘that misleads citizens about the act of voting – where to vote, when to vote, how to cast a formal vote, the fact that the voting process is secure.’ Mr Rogers went on to say, ‘we’ve never been involved in that, whether it’s generated by AI or any other process, and we don’t want to be involved in that process.’[116]

2.108 The committee has heard legislative proposals for prohibiting misleading or political AI-generated content in the context of elections, drawing on legislative measures proposed or enacted overseas, such as in the US and South Korea. While the committee acknowledges evidence that a legislative response of this type could be an effective approach in Australia, it notes that any such legislative scheme must be carefully considered to avoid potential bias in its application, and to ensure that it does not impinge on the right to freedom of expression and the implied right to freedom of political communication. While the committee therefore encourages the government to consider implementing a legislative response to the use of AI-generated content in the context of elections, it recognises that there is likely insufficient time for a properly designed scheme to be thoroughly consulted on and enacted before the next federal election.

2.109 The potentially varied nature of the perpetrators of AI-generated electoral disinformation and misinformation, including political parties and candidates themselves, third-party campaign or lobby groups, lone domestic actors, and malicious foreign actors, in addition to the difficulty of establishing the provenance and intent behind AI-generated disinformation, presents further challenges to regulation.

2.110The committee also considered a range of non-legislative and voluntary responses that are being or could be undertaken by government, political parties and the large technology firms that own AI and social media platforms. These include measures such as: digital literacy campaigns to empower citizens to recognise and assess AI-generated content; voluntary codes of conduct for political candidates, parties and digital platforms; and watermarking of AI-generated content.

2.111The committee acknowledges that some non-legislative or voluntary measures will undoubtedly play a significant role in addressing the potentially negative impacts of AI on electoral and democratic processes. In particular, the committee considers that digital literacy education for all ages is critical to ensure that Australians have the knowledge and tools required to properly assess AI-generated information and make informed decisions at elections.

2.112The committee also acknowledges the voluntary approaches undertaken to date by major AI industry companies, such as commitments to voluntary codes, watermarking of AI-generated content and engagement with electoral bodies. In the absence of mature legislative responses from governments to the relatively recent and rapidly evolving phenomenon of generative AI, the committee recognises that such measures provide some mitigation of the risks AI poses to election processes.

2.113Despite this, the committee is unconvinced that voluntary codes or undertakings by AI industry participants and political actors will ultimately be effective in delivering sufficiently comprehensive and robust schemes for measures such as watermarking or credentialling of AI-generated content, or for governing its use by political candidates and parties in elections.

2.114Accordingly, the committee believes that regulation, potentially building on the temporary prohibitions on AI-generated political or electoral content during election periods enacted in jurisdictions such as South Korea, warrants close consideration.

2.115However, it is equally important that such a scheme is not rushed, and that it is not introduced without appropriate consultation, broad support across the political spectrum, or sufficient notice and community education. Unanswered questions surround a South Korea-style prohibition, including: how the legitimate use of AI to inform or educate, or for parody or satire, could be adequately protected; how such reforms would interact with the Constitution's implied freedom of political communication; and who would enforce the laws, noting the AEC's reluctance to adopt this role.

2.116Noting the long list of reforms already underway, as set out at paragraph 2.101, it is important that laws restricting the creation or dissemination of AI-generated political or electoral content co-exist with, and do not conflict with, those reforms, particularly the long-mooted proposals to legislate truth in political advertising and the proposed mandatory guardrails for the use of AI in high-risk settings.

2.117In light of these complications, the committee considers that the government should develop voluntary codes relating to watermarking and credentialling of AI-generated content in time for the next federal election, and conduct further investigation into more prescriptive reforms. This should be supplemented by further work on improving Australians’ AI and digital literacy.

Recommendation 1

2.118The committee recommends that, ahead of the next federal election, the Australian Government implement voluntary codes relating to watermarking and credentialling of AI-generated content.

Recommendation 2

2.119The committee recommends that the Australian Government undertake a thorough review of potential regulatory responses to AI-generated political or electoral deepfake content, including mandatory codes applying to the developers of AI models and to publishers (including social media platforms), and prohibitions on the production or dissemination of political deepfake content during election periods, with a view to a legislative response prior to the election of the 49th Parliament of Australia.

Recommendation 3

2.120The committee recommends that laws restricting the production or dissemination of AI-generated political or electoral material be designed to complement rather than conflict with the mandatory guardrails for AI in high-risk settings, the recently introduced disinformation and misinformation reforms, and foreshadowed reforms to truth in political advertising.

Recommendation 4

2.121The committee recommends that the Australian Government ensure that the mandatory guardrails for AI in high-risk settings also apply to AI systems used in an electoral or political setting.

Recommendation 5

2.122The committee recommends that the Australian Government examine mechanisms, including education initiatives, to improve AI literacy for Australians, including parliamentarians and government agencies, to ensure Australians have the knowledge and skills needed to navigate the rapidly evolving AI landscape, particularly in an electoral context.

Senator Tony Sheldon

Chair

Labor Senator for New South Wales

Footnotes

[1]Free TV, Submission 136, p. 2; Mr Tom Rogers, Electoral Commissioner, Australian Electoral Commission (AEC), Proof Committee Hansard, 20 May 2024, pp 28-29.

[2]Ms Zoe Hawkins, Head of Policy Design, Tech Policy Design Centre, Australian National University (ANU), Proof Committee Hansard, 20 May 2024, p. 60; Dr Catherine Foley, Australia’s Chief Scientist, Australian Government, Proof Committee Hansard, 20 May 2024, p. 14.

[3]Dr Catherine Foley, Australia’s Chief Scientist, Australian Government, Proof Committee Hansard, 20 May 2024, p. 14.

[4]Dr Darcy Allen, Professor Chris Berg and Dr Aaron Lane, Submission 21, p. 8.

[5]Michael Adam and Clotilde Hocquard, European Parliamentary Research Service, European Parliament, Artificial intelligence, democracy and elections, October 2023, p. 3.

[6]United Nations, Can artificial intelligence (AI) influence elections?, 7 June 2024, https://unric.org/en/can-artificial-intelligence-ai-influence-elections/ (accessed 5 September 2024).

[7]The Parliamentarian, Issue One: Artificial Intelligence, disinformation and Parliament, 6 March 2024, p. 32.

[8]Cecilia Hammer, International Institute for Democracy and Electoral Assistance (International IDEA), Smart Elections: is AI the Next Wave in Electoral Management?, 20 May 2024, https://www.idea.int/news/smart-elections-ai-next-wave-electoral-management (accessed 29 August 2024).

[9]Robert Krimmer, Armin Rabitsch, Rast’o Kužel, Marta Achler, Nathan Licht, UNESCO, Elections in Digital Times: A Guide for Electoral Practitioners, 2022, p. 34.

[10]Robert Krimmer, Armin Rabitsch, Rast’o Kužel, Marta Achler, Nathan Licht, UNESCO, Elections in Digital Times: A Guide for Electoral Practitioners, 2022, p. 34.

[11]Australian National University Tech Policy Design Centre, Submission 68, p. 3.

[12]Department of Home Affairs, Submission 55, p. 6.

[13]Department of Home Affairs, Submission 55, p. 6.

[14]International IDEA, Artificial Intelligence for Electoral Management, 29 April 2024, p. 42, idea.int (accessed 27 August 2024).

[15]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 986.

[16]Department of Home Affairs, Submission 55, p. 6; Free TV, Submission 136, p. 8; Reset.Tech, Submission 148, p. 3; University of Technology Sydney (UTS), Submission 62, p. 5; Dr Catherine Foley, Australia’s Chief Scientist, Australian Government, Proof Committee Hansard, 20 May 2024, p. 17; Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 987.

[17]Associate Professor Shumi Akhtar, Submission 131, p. 3.

[18]New South Wales Council for Civil Liberties, Submission 113, p. 13.

[19]ANU Tech Policy Design Centre, Submission 68, p. 4.

[20]UTS, Submission 62, p. 6.

[21]ABC, Submission 117, p. 2.

[22]Department of Home Affairs, Submission 55, pp 4-5.

[23]Shangbin Feng et al, From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models, available at https://aclanthology.org/2023.acl-long.656.pdf (accessed 11 September 2024); see also, Melissa Heikkila, AI language models are rife with different political biases, MIT Technology Review, available at https://www.technologyreview.com/2023/08/07/1077324/ai-language-models-are-rife-with-political-biases/ (accessed 11 September 2024).

[24]The Parliamentarian, Artificial Intelligence, Disinformation and Elections, Issue One, 2024, p. 33.

[25]IEEE Society on Social Implications of Technology, Submission 128, p. 2.

[26]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 983.

[27]Inter-Parliamentary Union (IPU), Dangers of Deepfakes for Parliamentarians, 19 February 2024, https://www.ipu.org/news/news-in-brief/2024-02/dangers-deepfakes-parliamentarians (accessed 11 September 2024).

[28]Deakin Law School, Submission 110, p. 9.

[29]Inter-Parliamentary Union (IPU), Dangers of Deepfakes for Parliamentarians, 19 February 2024, https://www.ipu.org/news/news-in-brief/2024-02/dangers-deepfakes-parliamentarians (accessed 11 September 2024).

[30]The Parliamentarian, Issue One: Artificial Intelligence, disinformation and Parliament, 6 March 2024, p. 32.

[31]Inter-Parliamentary Union (IPU), Dangers of Deepfakes for Parliamentarians, 19 February 2024, https://www.ipu.org/news/news-in-brief/2024-02/dangers-deepfakes-parliamentarians (accessed 11 September 2024); Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 1008.

[32]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 987.

[33]Sam Stockwell, Megan Hughes, Phil Swatton and Katie Bishop, CETaS, AI-Enabled Influence Operations: The Threat to the UK General Elections, May 2024, p. 10.

[34]AEC, Submission 60, p. 1.

[35]Stanford University, Artificial Intelligence Index Report 2024, p. 206.

[36]Aditya Kalra, Munsif Vengattil and Dhwani Pandya, Deepfakes of Bollywood stars spark worries of AI meddling in India election, Reuters, 22 April 2024, https://www.reuters.com/world/india/deepfakes-bollywood-stars-spark-worries-ai-meddling-india-election-2024-04-22/ (accessed 2 September 2024).

[37]Nick Robins-Early, Trump posts deepfakes of Swift, Harris and Musk in effort to shore up support, The Guardian, 20 August 2024, https://www.theguardian.com/us-news/article/2024/aug/19/trump-ai-swift-harris-musk-deepfake-images (accessed 2 September 2024).

[38]Department of Industry, Science and Resources, Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings, September 2024, p. 2.

[39]EU Artificial Intelligence Act, recital 62.

[40]AEC, Submission 60, p. 1.

[41]AEC, Submission 60, p. 1.

[42]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, pp 27 and 32.

[43]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 26.

[44]AEC, Submission 60, p. 1.

[45]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 26.

[46]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 27.

[47]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 33.

[48]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 1001.

[49]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 1001.

[50]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 1001.

[51]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 1010; for details regarding false political advertising laws in the ACT and SA see: section 113, Electoral Act 1985 (SA); section 297A, Electoral Act 1992 (ACT).

[52]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 22.

[53]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 25.

[54]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 22.

[55]Per Capita’s Centre for the Public Square, answer to question on notice, 21 May 2024 (received 6 June 2024), p. [2].

[56]Reset.Tech, Submission 148, p. 4.

[57]See, for example, Mr John Galligan, General Manager, Corporate External and Legal Affairs, Microsoft, Proof Committee Hansard, 16 August 2024, p. 40.

[58]Associate Professor Andrew Meares, ANU College of Engineering, Computing & Cybernetics, answers to questions on notice, 20 May 2024 (received 13 June 2024), p. 2.

[59]Professor Edward Santow, Director, Policy and Governance, Human Technology Institute, UTS, Proof Committee Hansard, 21 May 2024, p. 20.

[60]United States Congress, S. 2770—118th Congress (2023–2024): Protect Elections from Deceptive AI Act, https://www.congress.gov/bill/118th-congress/senate-bill/2770 (accessed 5 September 2024).

[61]ABC News, California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI, 1 September 2024.

[62]Ms Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google, Proof Committee Hansard, 16 August 2024, p. 21.

[63]UNSW Allens Hub for Technology, Law and Innovation and Disability Innovation Institute at UNSW, Submission 104, p. 6.

[64]Mrs Lorraine Finlay, Human Rights Commissioner, Australian Human Rights Commission, Proof Committee Hansard, 20 May 2024, p. 42.

[65]Andrew Ray, Disinformation, Deepfakes and Democracies, UNSW Law Journal, Volume 44(3), 2021, p. 983.

[66]UNSW Allens Hub for Technology, Law and Innovation and Disability Innovation Institute at UNSW, Submission 104, p. 6.

[67]Mrs Lorraine Finlay, Human Rights Commissioner, Australian Human Rights Commission, Proof Committee Hansard, 20 May 2024, p. 45.

[68]Human Rights Watch, Submission 236, [p. 4].

[69]See, for example, Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 23; Professor Genevieve Bell AO, Vice-Chancellor and President, ANU, Founder and Inaugural Director, School of Cybernetics, Proof Committee Hansard, 20 May 2024, p. 35; Mrs Lorraine Finlay, Human Rights Commissioner, Australian Human Rights Commission, Proof Committee Hansard, 20 May 2024, p. 44; IEEE Society on Social Implications of Technology, Submission 128, p. 4; BSA Software Alliance, Submission 19, p. [8].

[70]Tech Council of Australia, answers to questions on notice, 21 May 2024 (received 19 June 2024).

[71]Australian Human Rights Commission, Submission 71, p. 9.

[72]Google, Submission 145, p. 21.

[73]Google, answers to questions on notice, 16 August 2024 (received 6 August 2024), p. 3.

[74]Microsoft, Submission 158, p. 9.

[75]Mr Lee Hickin, AI Technology and Policy Lead Asia, Microsoft Pty Ltd, Proof Committee Hansard, 16 August 2024, p. 40.

[76]Ms Nicole Foster, Director, Global AI/ML Public Policy, Amazon Web Services, Proof Committee Hansard, 16 August 2024, p. 16.

[77]Tech Council of Australia, answers to questions on notice, 21 May 2024 (received 19 June 2024).

[78]IEEE Society on Social Implications of Technology, Submission 128, p. 4.

[79]Tech Council of Australia, answers to questions on notice, 21 May 2024 (received 19 June 2024).

[80]Ms Tulsee Doshi, Product Director, Responsible AI, Google, Proof Committee Hansard, 16 August 2024, p. 26.

[81]Meta, Submission 220, p. 18.

[82]Professor Toby Walsh, Chief Scientist, AI Institute, University of New South Wales, Proof Committee Hansard, 21 May 2024, p. 35.

[83]Australian National University School of Cybernetics, answers to questions on notice, 20 May 2024 (received 13 June 2024).

[84]UNESCO, Elections in Digital Times: A Guide for Electoral Practitioners, 2022, pp 123-124.

[85]DIGI, Submission 155, p. 7.

[86]DIGI, Submission 155, p. 8.

[87]DIGI, Submission 155, p. 7.

[88]DIGI, Submission 155, p. 8.

[89]Meta, Submission 220, p. 21; Microsoft, Submission 158, p. 10.

[90]Microsoft, Submission 158, p. 10.

[91]Ms Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google, Proof Committee Hansard, 16 August 2024, p. 21.

[92]Ms Tulsee Doshi, Product Director, Responsible AI, Google, Proof Committee Hansard, 16 August 2024, p. 26.

[93]Mr John Galligan, General Manager, Corporate External and Legal Affairs, Microsoft Pty Ltd, Proof Committee Hansard, 16 August 2024, p. 39.

[94]Mr John Mackenny, Director of Public Sector Strategy, Asia Pacific, Adobe, Proof Committee Hansard, 16 July 2024, p. 21.

[95]Reset.Tech, Submission 148, p. 10.

[96]Per Capita’s Centre for the Public Square, answer to question on notice, 21 May 2024 (received 6 June 2024), p. [2].

[97]ANU Tech Policy Design Centre, Submission 68, p. 2.

[98]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 27.

[99]Australian Library and Information Association, Submission 183, p. 2.

[100]Australian Digital Inclusion Alliance, Submission 175, p. 1.

[101]Australian Library and Information Association, Submission 183, p. 2.

[102]See, for example: ANU Tech Policy Design Centre, Submission 68, p. 5; New South Wales Council for Civil Liberties, Submission 113, p. 6; Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 29; Professor Genevieve Bell AO, Vice-Chancellor and President, ANU, Founder and Inaugural Director, School of Cybernetics, Proof Committee Hansard, 20 May 2024, p. 35.

[103]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 29.

[104]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 25.

[105]Mr John Galligan, General Manager, Corporate External and Legal Affairs, Microsoft Pty Ltd, Proof Committee Hansard, 16 August 2024, p. 39.

[106]Adobe, Submission 24, p. 5.

[107]Mr David Masters, Head of Global Public Policy, Atlassian, Proof Committee Hansard, 17 July 2024, p. 24.

[108]ANU Tech Policy Design Centre, Submission 68, p. 5; Ms Zoe Hawkins, Head of Policy Design, Tech Policy Design Centre, ANU, Proof Committee Hansard, 20 May 2024, p. 60.

[109]Dr Alexia Maddox, Dr Stuart Evans and Professor Bernadette Walker-Gibbs, Submission 188, p. [6].

[110]See, for example: Regional Universities Network, Submission 49, p. 5; Australian Academy of Technological Sciences and Engineering, Submission 39, p. 4; Australian Computer Society, Submission 56, p. 3; Computing Research and Education Association (CORE), Submission 50, p. [2]; Accenture, Submission 97, p. 5; Dr Cat Kutay (Yugambeh), Dr Yakub Sebastian and Dr Yan Zhang, Submission 34, p. [1].

[111]La Trobe University, Submission 186, p. 4.

[112]Department of Industry, Science and Resources, Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings, September 2024, p. 19.

[113]Department of Industry, Science and Resources, Safe and responsible AI in Australia, Interim Response, 17 January 2024, p. 6.

[114]Joint Standing Committee on Electoral Matters, Conduct of the 2022 federal election and other matters, November 2023, pp 205-224. See also: https://www.theguardian.com/australia-news/2022/oct/26/liberal-party-opposes-labors-truth-in-political-advertising-and-spending-cap-laws.

[115]EU Artificial Intelligence Act, Annex III: High-Risk AI Systems Referred to in Article 6(2), available at https://artificialintelligenceact.eu/annex/3/ (accessed 11 September 2024).

[116]Mr Tom Rogers, Electoral Commissioner, AEC, Proof Committee Hansard, 20 May 2024, p. 23.