Chapter 3 - Risks and responses

3.1 The emergence of generative artificial intelligence (GenAI) as an educational tool has brought with it safety, wellbeing, and other concerns. The inherent challenges presented by GenAI affect all users, including students and educators. It is essential to be aware of risks pertaining to the technology itself, its use, and the data, in order to manage them.

3.2 Some of these challenges, which are all linked to safety, wellbeing, and security in various ways, include:
  • online safety and adverse impacts on personal development
  • overreliance on GenAI
  • mis- and disinformation
  • algorithmic bias and data-driven profiling
  • data capturing practices by educational technology (EdTech) companies
  • transparency, and the commercial interests of EdTech companies
  • data security, privacy and copyright.
3.3 Many of these risks stand to disproportionately impact vulnerable groups, including children, Aboriginal and Torres Strait Islander students, female students, and students from culturally and linguistically diverse populations.[1] Female students and students from culturally and linguistically diverse populations may be particularly affected due to being misrepresented.[2]

Context of safety and wellbeing

3.4 The Committee heard that people commonly do not feel safe when using artificial intelligence (AI). According to KomplyAi, on average, Australians distrust GenAI technology more than people in most countries.[3] Professor Nicholas Davis, Industry Professor of Emerging Technology and Co-Director of the Human Technology Institute at the University of Technology Sydney (UTS), also commented that:

…in terms of where we are today, from my discussions with teachers, schools, parents and others, as much anecdotally or more anecdotally than anything that is purely systematic, we're in a place where people are more scared and more confused than they were, rather than having deeper levels of clarity and understanding.[4]

3.5 Looking at the broader context, the Australian Government has been active in rolling out reforms regarding human safety and wellbeing with respect to technology. The Online Safety Act 2021 (Cth) (OSA) gives the eSafety Commissioner a suite of regulatory powers to protect Australians from online harm. The eSafety Commissioner claims that under the OSA they can remove abusive and harmful content, take enforcement action against those who fail to comply, and develop industry codes that cover the eight sections of the online industry.[5] The OSA is under review by the Department of Infrastructure, Transport, Regional Development, Communications and the Arts, with a report due to the Minister for Communications by 31 October 2024.[6]

3.6 Further, in June 2024, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 was tabled ‘to strengthen laws targeting the creation and non-consensual dissemination of sexually explicit material online, including material created or altered using generative AI, including deepfakes’.[7]

3.7 The eSafety Commissioner expressed concern about the potential for GenAI to amplify cyberbullying and cyber abuse, due to GenAI’s ‘capability to produce “human-like” interaction combined with novel high quality personalised content’. Although certain GenAI products have minimum age requirements, generally 13 or 18 years of age, companies like OpenAI are unlikely to adequately protect the minors who use them regardless. This is underpinned by the eSafety Commissioner’s observation that it receives reports about cyberbullying from children as young as eight on social media platforms, despite their minimum age requirements.[8]

3.8 Mobile phones have been banned in all Australian public schools, as the Australian Government hopes the bans will improve student and teacher wellbeing and reduce cyberbullying.[9] The Independent Education Union of Australia (IEUA) stated that the manipulation and setting up of Facebook sites and pages to bully students and teachers is a pervasive issue, but that schools should have policies in place to manage social media bullying. The IEUA cited the removal of mobile phones in schools as a means to address this.[10] However, the Queensland University of Technology raised concerns about nation-wide bans on the use of mobile phones in schools, citing equity concerns for students experiencing disadvantage.[11]

3.9 An emerging concern is the introduction of facial recognition technology in the classroom. Kristen Migliorini, Founder and Chief Executive Officer of KomplyAi, claimed there is a risk that facial recognition technology will be used to monitor student behaviour and concentration levels.[12] Facial recognition technology has previously been deployed in schools in Sweden to take student attendance. While this saved time, it meant that teachers were no longer interacting with students to find out what was happening in their lives, as it removed that informal structure. The technology was deployed to alleviate teacher workload, but was banned by a Swedish court over data protection concerns.[13]

Chatbots

3.10 GenAI-driven chatbots give rise to various safety and wellbeing concerns for students. There is a risk that GenAI could be trained on adult and inappropriate content that is incorporated into the datasets used to generate content.[14] Independent Schools Australia asserted that GenAI tools have the potential to produce highly realistic content, such as text, images or videos, that may affect the emotional or psychological wellbeing of students and influence their mental health or emotional stability.[15] Chatbots may have age-inappropriate conversations with children or display sexual or violent content to them. For example, the Australian Science and Mathematics School reported one incident of an image-based GenAI tool generating sexualised content.[16]

3.11 GenAI chatbots come across as having a ‘high level of authority, expertise, and competency’.[17] The Centre for Digital Wellbeing (CDW) raised concerns about the level of oversight of the relationship between a chatbot and a child, which could be ‘destructive to that child's mental health and wellbeing’.[18] This is because the user may not be able to discern the limits of the application’s knowledge, or of the dataset that underpins the chatbot, and this may disproportionately affect children and young people.[19]

3.12 GenAI chatbots may present with ‘human-like’ qualities to children, including mimicking common conversational traits that imply a personal or trusted relationship with the student.[20] Dr James Curran, Chief Executive Officer of the Grok Academy, highlighted that the models are built to be conversational tools, which makes it difficult to detect where they have wavered from the prompt. Dr Curran further explained that it is important to remember that the user is having a conversation with a system trained on the entirety of the internet, one that is skilled at predicting what the next most useful word will be.[21] There are further concerns about the ethical development of GenAI and how a chatbot directly engages with children when it uses biased data scraped from the internet.[22]

3.13 Chatbots can provide mental health and wellbeing advice, which has both advantages and disadvantages. The eSafety Commissioner explained that an AI chatbot can provide timely and relevant advice on mental health and wellbeing by offering referral services and reporting harm and abuse.[23] The Australian Academy of Technological Sciences and Engineering (ATSE) raised concerns about GenAI and mental health interventions:

There is an emerging risk that generative AI tools are interacting conversationally with users around mental health and wellbeing. This leads to risks that students may be encouraged to talk to an AI system rather than a human. While for some students discussing mental health issues with an AI may make them more comfortable to seek help for mental health issues, some students may be less likely to access timely interventions, might receive poor advice, or mental ill health may even be exacerbated by such interactions.[24]

3.14 The Committee heard that chatbots may be able to report and respond to concerns for the welfare and safety of children and young people. This may include ‘seeking help or making disclosures about experiences, events, or circumstances impacting their safety, health, mental health or wellbeing’.[25]

3.15 Pymble Ladies’ College (PLC) stated that GenAI can be used in a socio-emotional learning context to help students understand and manage their emotions. PLC contended that the technology can ‘track emotional progress over time and suggest techniques to manage emotions’, ‘provide interactive scenarios where students can practice emotional responses’, and ‘provide resources for self-help and coping strategies when it identifies emotional distress’.[26]

3.16 However, PLC also stated that GenAI’s understanding and interpretation of human emotion can be limited and lead to incorrect suggestions from the technology.[27] Under the EU Artificial Intelligence Act (EU AI Act), the use of AI to detect emotion in schools falls into the 'not acceptable at all' category.[28]

3.17 Evolved Reasoning provided a concrete example of how a GenAI tool could both help and adversely affect a child’s wellbeing. It explained that a school child may be given a GenAI tool called SARAH, which can help with their homework, check in with them, and provide guidance. SARAH would be able to detect what the student is good at and poor at, and provide a rating to the teacher and parents. SARAH may well stay with the child through to high school and then into their career.[29] This may give students a supportive and affirmative voice by their side through their schooling; but they may also learn that the world is full of people who say ‘good job’ or ‘go ahead’. SARAH is also drawing on a ‘fairly homogenous and limited dataset and a restricted worldview that's generated as a result of that dataset’.[30]

Dependency on GenAI

3.18 Many submissions raised concerns that students and educators might over-rely on GenAI, and that this would have flow-on effects. Students from The Grange P–12 College shared with the Committee that they wanted to determine how they use the technology.[31] They stated that GenAI should be used as a secondary resource to supplement evidence rather than substitute for it, and that all evidence should be corroborated.[32] One student said:

But I think any use of ChatGPT should be just guidance and not a crutch. We should utilise the other resources that we have, such as textbooks, our teachers and even other sources on the internet. If we are using ChatGPT in schools, it's important to emphasise that we shouldn't rely solely on that and that we should double check it.[33]

3.19 The Australian Council of State School Organisations (ACSSO) also asserted that GenAI tools should be used as a supporting resource and not as a substitute for face-to-face learning and in-person interactions.[34] Used as a supporting resource, GenAI could potentially enhance learning and the role of teachers; however, if students rely too heavily on GenAI, it could detract from teachers’ roles, even threatening to replace them.[35]

3.20 The National Tertiary Education Union (NTEU) did not consider GenAI an appropriate replacement for staff, as the technology did not ‘engage [students] in critical thinking, [or] produce genuine creativity or innovation’, and human staff are still required to monitor GenAI outputs.[36] As GenAI is trained on data, and all data is historical, ‘an over-reliance on AI may limit innovation, insight, and discovery’. As such, the Tertiary Education Quality and Standards Agency (TEQSA) considered it crucial to scaffold the introduction of GenAI technology throughout a student’s education journey so that they develop the critical thinking skills needed to progress.[37]

3.21 An over-reliance on GenAI can also adversely affect students’ problem-solving, interpersonal, and decision-making skills, and lead to complacency and disengagement from teaching material.[38] This may hamper human capacity through the reduction of individual capabilities and could risk the mass production of AI-generated content.[39] A related issue is the tendency for GenAI to ‘produce plausible but incorrect responses’ and to join discrete concepts in ways that merely appear logical. This may affect student learning and understanding, especially if students rely solely on GenAI.[40]

3.22 The Committee heard that if students become dependent on GenAI, they may be deterred from using and building skills that require effort and time.[41] Monash DeepNeuron and the Victorian Association for the Teaching of English pointed to the example of the normalisation of spelling and grammar checks and the proliferation of applications such as Grammarly.[42] Monash DeepNeuron asserted that the use of spelling and grammar checkers can lead to a decline in fundamental spelling and grammar skills, as they reduce surface errors in student work but do not correct errors at a cognitive level.[43] Rather, these skills need to be cultivated through project-based learning, inquiry-based approaches, and real-world problem-solving activities that demonstrate the limitations of the technology.[44]

3.23 It is therefore important to implement a balanced curriculum and foster skills such as collaboration, critical thinking, and creativity that GenAI cannot replicate.[45] Teachers should carefully monitor these activities to ensure the development of such skills amongst students.[46]

Mis- and disinformation

3.24 The ability of GenAI to proliferate mis- and disinformation was identified as a risk. Misinformation poses a risk to the health and safety of individuals, and of society more broadly, through the dissemination of ‘made-up news articles, doctored images and videos, false information shared on social media, and scam advertisements’. It becomes disinformation when misinformation is deliberately spread to cause ‘confusion and undermine trust in governments or institutions’.[47]

3.25 The Committee heard that mis- and disinformation can foster distrust and biases between people and cultures, leading to poor outcomes for students.[48] The spread of misinformation within school and wider communities can affect students’ wellbeing and their understanding of current events.[49] Furthermore, Monash DeepNeuron stated that when misinformation is used for propaganda and other political purposes, it can radicalise GenAI users.[50]

3.26 Another concern related to mis- and disinformation is the proliferation of deepfakes, which GenAI can create. A deepfake is a ‘digital photo, video, or sound file of a real person that has been edited to create a false depiction of them doing or saying something’.[51] The Australian Human Rights Commission (AHRC) submitted that GenAI can be misused to generate ‘high-quality, cheap and personalised content, including for harmful purposes’, such as deepfakes.[52] These tools have the potential to cause significant harm and can be used to exploit, harass, ridicule, and spread mis- and disinformation.[53]

3.27 The Commonwealth Department of Education (Commonwealth DoE) raised concerns about the use of GenAI to create deepfake material and noted that 70 per cent of Australians aged 18 to 24 years have experienced harassment or abuse online in a 12-month period.[54] The eSafety Commissioner has defined a deepfake as a ‘digital photo, video or audio file of a real person that has been manipulated to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say’, and cautions that GenAI tools allow deepfakes to be produced with greater ease and at scale, which could result in serious and widespread harm to educators.[55]

3.28 On the proliferation of deepfake apps, Associate Professor Erica Southgate asserted:

Deepfake apps will pose significant challenges to schools and other educational institutions as they are weaponised for bullying, harassment, and deception. The rapid human and bot spread of deep fakes will probably surpass the damage already occurring with student online bullying and will adversely affect staff who are targeted and the ethical culture of the educational institution. The anonymity through which deep fakes can be created will exacerbate the issue.[56]

3.29 Furthermore, GenAI cannot separate fact from fiction, truth from disinformation, or stories from news.[57] AI tools can also ‘hallucinate’ and produce factual errors in generated content, including fabricated moments in history and inaccurate scientific information.[58] The Tech Council of Australia emphasised that this is why GenAI models should not be considered ‘intelligent’, reiterating that they work on a predictive basis using the data they were trained on.[59]

3.30 Students with insufficient knowledge or skills may be unable to recognise opinions expressed as fact, whether from experts or amateurs, and are at risk of accepting misinformation at face value, especially if they trust AI-generated information.[60] Biased content in and of itself can further promote misinformation within student cohorts.[61]

3.31 The CDW pointed to Finland as an example of combatting disinformation. Finland has a strong focus on developing digital literacy capabilities and a healthy relationship with technology, which is embedded in every part of the school curriculum from K–12.[62]

3.32 The AHRC recommended that the use of GenAI to create deceptive or malicious content in education settings be prohibited, and that policies be developed to ensure content verification so that individuals can accurately identify GenAI content. The AHRC further noted that these reforms would be insufficient without digital literacy education and training that teaches GenAI users to identify false or manipulated content and to engage with technology responsibly and ethically.[63] The Australian Library and Information Association (ALIA) recommended implementing a program to monitor GenAI outputs in education settings, and that GenAI developers commit to improving their algorithms in response to the findings.[64]

3.33 TEQSA made several recommendations to the Committee, including:

  • the need for ‘transparent disclosure of the training data and algorithms that underpin educational products so that they can be genuinely evaluated by government and educational institutions to ensure they are free of bias’ with the onus on EdTech companies to make the information intelligible
  • the need for ‘developers to ensure that they are mindful of, and seek to eliminate, bias and discrimination through the data the model is trained on, the design of the model and its suggested applications’
  • a requirement for educational administrators and institutions to ensure models and their applications are evaluated for bias and that their use is governed by institutional policies, and that adherence is monitored.[65]

Transparency

3.34 Several submissions raised concerns about the lack of transparency in GenAI applications and how this may affect student welfare. Issues relating to data sources, built-in surveillance in the platforms, costs and the commercialisation of data, and applications of EdTech were identified. There is a need to ensure transparency in the gathering and aggregation of data, and in how that data may influence user decisions.[66]

3.35 Professor Davis explained issues of transparency:

Finally, on the behemoth point, the reason we have that competition problem is often that we don't have transparency about a level playing field in terms of outcomes and standards for what actually works. Secondly, a lot of tech companies are subsidising the use of services through the use of data at the back end—data broking, data leveraging and other areas. It goes back to the Privacy Act review and really protecting children's data from secondary use. Thirdly, we need transparency on the true cost of systems over time. At the moment, all our ChatGPT use is being subsidised by investors, stakeholders and one big tech company in the world.[67]

3.36 The Commonwealth DoE noted a lack of information about the development and commercialisation of GenAI models, which affects the Government’s ability to understand their potential effects.[68] There is currently a lack of transparency about the ‘scientific’ and ‘pedagogic logic’ behind a model, or about the data on which it has been trained. Similarly, Dr Jose-Miguel Bello y Villarino, Senior Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, asserted that there needs to be some transparency about what GenAI developers have embedded into an application, and what is missing.[69] If there is little transparency on sources and algorithms, large language models (LLMs) may acquire a ‘veneer of objectivity’, which can make students naive to quality and bias issues.[70]

3.37 The Committee heard that a lack of transparency in GenAI models makes it difficult for users, including children, teachers, and parents, to understand how the technology functions and how models arrive at specific outputs. This creates challenges around developer accountability, concealed bias, discrimination, and errors, and can lead to trust in the AI model without the critical judgement needed to confront the biases and false information that can be prevalent on the applications. When there is a lack of transparency in the decision-making process, ‘it becomes difficult to assess whether the system is making unbiased choices due to its ability to hide biases and discriminatory patterns’. Without transparency, there is no external oversight or means of correction.[71]

Algorithmic bias

3.38 GenAI systems ‘depend on robust and quality datasets to write, improve, and test algorithms’; this ensures accurate and reasonable outputs and minimises the risks of bias or incompleteness in results.[72] However, GenAI systems are often trained on large, imperfect datasets that ‘generate predictive outputs based on algorithms’ and can ‘systematically reinforce bias and prejudice, historical discrimination, and archaic practices’.[73] If misused or poorly designed, models can reinforce bias and disadvantage by excluding marginalised and underrepresented groups, or even by overrepresenting some groups.[74]

3.39 This issue of misuse perpetuating adverse outputs was highlighted by Dr Alexia Maddox, Senior Lecturer in Pedagogy and Education Futures at La Trobe University:

Again, I would very much reinforce this point: what is the data that these tools are learning on? The fact is that it's not just the data that gets ingested into the tools; it's also the data that people produce when they're using the tools.[75]

3.40 The following factors were identified as affecting the quality and accuracy of GenAI inputs and outputs:

  • the age or scope of the dataset, and the use of foreign data, such as US-based material or even older material from the public domain; these datasets do not represent a diverse sample and can be exclusionary[76]
  • factual inaccuracies, where GenAI can produce ‘plausible but incorrect responses’, and its inability ‘to join discrete concepts in ways that appear to be logical’, which may impact student learning[77]
  • the underrepresentation of Aboriginal and Torres Strait Islander peoples in data samples, and frequent factual inaccuracies about their cultural practices;[78] this may lead to Aboriginal and Torres Strait Islander students having a ‘poverty of connection to culture’ and a further erasure through the lack of visibility in GenAI datasets[79]
  • the lack of an accreditation or regulatory framework to standardise the AI tools available or to ensure that the training data is ethical and transparent.[80]
3.41 ALIA asserted that the majority of datasets have been scraped from the internet and have differing levels of transparency about their content. Content scraped from the internet can vary in quality and relevance to educational contexts and is often western-centric.[81] For example, GPT-3 was trained largely on text from the internet (85 per cent of the total), yet the training sets for GPT-4 have not been made public. Furthermore, users’ data may be scraped to inform GenAI tools, which may lead to inequitable outcomes for models trained on that data.[82]
3.42 GenAI also functions on a probabilistic model: the technology produces a ‘probable combination of pixels, words or other medium in response to a specific prompt’, which can lead to biased responses for students.[83] This means that the AI model learns ‘facts’ based on the quantity, not the quality, of content and outputs.[84]
3.43 The Committee heard that students who are exposed to biased GenAI outputs may be at risk of mirroring the misconceptions and stereotypes produced by the technology.[85] Even when aware of a bias or stereotype, people may still be receptive to it.[86] Algorithmic bias may entrench or obscure unfairness, which may ‘reinforce discriminatory practices and widen educational disparities’.[87] This could lead to adverse outcomes for students in areas including grading and university admissions, and affect personalised learning paths.[88]
3.44 Moreover, stakeholders highlighted that GenAI is fallible, and multiple submissions included examples of bias produced by the technology, including:
  • ChatGPT has a propensity to perpetuate gender and racial stereotypes, associating men with ‘doctors’ and ‘engineers’, women with ‘nurses’ and ‘teachers’, and people of colour with ‘thief’ or ‘criminal’[89]
  • ALIA asked ChatGPT to write a story about two children set in Australia; the tool wrote a piece using two anglicised names and, when asked to rewrite the story with different names, continued to provide traditional English-speaking names that are not necessarily representative of modern Australian society[90]
  • when prompted to generate images of a ‘kids soccer team having fun’, a GenAI tool showed only boys playing soccer and having fun.[91]
3.45 Conversely, PLC suggested that GenAI technology relies on human feedback to reinforce learning and can be quite circumspect, calibrating answers back to the centre. This was contrasted with platforms such as YouTube and TikTok, which are large algorithmic tools vying for users’ attention; on those platforms, the further down the rabbit hole a user goes, the more biased and extreme the content they will be shown.[92]
3.46 The AHRC considered it important to address bias in GenAI outputs to ensure that Australia’s education system is ‘fair, inclusive and promotes equal opportunities for all students’.[93] The Tech Council of Australia contended that educational institutions can create a knowledge base with trusted sources of information; consider the removal of inappropriate external sources from the tools, with a focus on sensitive topics; and introduce human review and the application of critical thinking skills to identify bias.[94]
3.47 The CDW suggested the development of comprehensive legislation for GenAI that draws on international best practice, such as the EU AI Act.[95] Similarly, the AHRC recommended continual evaluation and validation processes and regular independent auditing to ‘identify and mitigate algorithmic bias’.[96]

EdTech interests

3.48 In 2020, the Australian EdTech sector employed 13,000 people and generated $1.6 billion in domestic revenue and an additional $600 million from exports to the international market.[97] Submissions expressed concern that EdTech and commercial interests may affect the rollout of GenAI in the Australian education system.

3.49 Monash DeepNeuron highlighted that as GenAI services expand, they will become heavily commercialised.[98] In Monash DeepNeuron’s view, the EdTech sector has a history of prioritising commercial interests over student outcomes, which has led to the delivery of content that is ‘poorly tailored to student needs’.[99] There are also risks that GenAI will be controlled by overseas interests with commercial or profit-driven motives, who may not address concerns raised by education professionals.[100]

3.50 In its submission, the Centre for Research on Education in a Digital Society (CREDS) cited a review of the 100 most frequently used EdTech tools in the US, which found that only 26 of the 100 met the threshold for any level of learning. It was noted that poor application development can lead to underuse and poor use, and that applications may not represent Australian values or experiences. As such, it is important that investments in EdTech are underpinned by evidence that the tools will be used to support the outcomes they claim to target.[101]

3.51 The Committee heard that children are placed at particular risk if EdTech interests are allowed to grow unfettered. This is because a principal risk of EdTech is the sale or transfer of children’s personal data to third parties or, in the case of GenAI, ‘the use of student search queries being analysed to inform targeted advertising’. The AHRC noted that by a child’s 13th birthday, advertisers will have already gathered more than 72 million data points about them. It is therefore critical that data collected through EdTech is not used for other purposes and that children are protected from data surveillance.[102]

3.52 There is a prevailing sentiment that Australia has set, and will need to continue to set, a high quality threshold for EdTech products. Failure to set a sufficiently high threshold will see products sold at the lower quality level at which they already operate.[103] EdTech will become more advanced, sophisticated and intuitive as the technology grows and more AI components are built into these systems.[104]

3.53 Australia is well positioned to integrate GenAI EdTech into its education system through the Safer Technologies 4 Schools Framework, to which all Australian education ministers have signed up. A number of domestic and international EdTech companies have signed up to be accredited under this program, which operates under the auspices of the Commonwealth DoE.[105]

3.54 Dr Curran stated that it will be important to set strong standards that industry must meet in order to operate in the Australian market.[106] Professor Leslie Loble AM, Industry Professor at the University of Technology Sydney, supported the introduction of standards so that EdTech companies compete on quality: a company that has invested a huge amount of money in a product does not want to be undercut by one that has not invested and that sits at the lower end of the quality threshold.[107] Professor Loble recommended that educators ‘must retain authority and control over EdTech used in classrooms’, that quality tools be used with ‘effective use and integration into teacher-led instruction’, and that a strong network of policies, institutions and incentives be established to shape and govern the EdTech market.[108]

Student data

3.55 In its submission, ALIA raised concerns that EdTech products are collecting and monetising student data. ALIA asserted that the risk of collection and monetisation of student data will continue to increase given the fast-moving nature of the sector, where there is a significant first-mover advantage.[109] The NTEU stated that most advanced AI systems are being developed by foreign, for-profit entities that operate with little transparency about the types of data they collect and how it is used. This presents a problem when educational institutions engage external contractors to deliver teaching and student support.[110]

3.56 There are also concerns about algorithmic transparency in the grading and assessment of student work by AI systems. A lack of human presence in grading may also make the appeals process unfair and unclear.[111]

3.57 Stakeholders suggested ways to create transparency in GenAI use, including:

  • guidelines: transparency and accountability can be emphasised through clear protocols and guidelines which govern the use and reporting of AI-generated outputs[112]
  • transparent data use policies: ‘educational institutions and AI developers should be required to have clear and transparent data use policies’ including how data is collected, how data will be used, how long data will be retained, and what measures are being taken to protect data privacy[113]
  • open access: researchers and developers need to prioritise transparency and explainability by providing clear documentation, sharing methodologies, and engaging in open dialogue.[114] This would allow researchers and the broader public to understand what is happening behind the scenes to determine if the measures in place are suitable to regulate the technology[115]
  • transparency reports: AI organisations should publish transparency reports detailing how their systems are used, how algorithms function and how they affect users[116]
  • third party evaluations: there should be independent third-party evaluations of AI tools and systems to ensure that they are transparent, fair and accountable.[117]

Data security

3.58 The COVID-19 pandemic necessitated the adoption of EdTech products into Australian schools to manage online learning and establish the virtual classroom. The Committee heard that the speed of uptake of EdTech products by schools raises concerns about data privacy and the security of sensitive student information. The CDW asserted that 89 per cent of available EdTech platforms put children’s safety in danger by ‘monitoring them without their consent and allowing access from or selling the data to third parties’.[118] In its submission, Charles Sturt University reported:

Over four million Australian children’s data may have been compromised in 2022 due to unsolicited cookies integrated into EdTech products used in Australian schools, infringing on their privacy and exposing risks such as lack of informed consent, privacy erosion, and cyber security issues.[119]

3.59 The CDW stated that access to children’s data can leave them susceptible to commercial exploitation by exposing them to overt advertising or sponsored content. The adoption of this type of EdTech can be problematic because children under 12 ‘do not understand the pervasive nature of advertising and children 8 years and under cannot differentiate between content and advertising’, making them susceptible to microtargeted marketing.[120]

3.60 The use of GenAI in education raises issues about how data is stored, who can access it, and how it is used.[121] For example, data entered into GenAI tools may become the property of the owners of the tools, raising concerns about the privacy and security of the data; this may be particularly problematic where products build user profiles over a period of time.[122]

3.61 If the adoption of GenAI in the classroom becomes compulsory, there may be limited opportunities for teachers, children, or parents to opt out, or even to provide full consent to the use of the technology.[123] Most GenAI companies are aware of these issues and have set an 18+ age restriction for accounts.[124]

3.62 Even if students’ data is not sold, students may still be exposed to risk through the continuous gathering of personal data used to optimise the individual user experience.[125] There are cyber security and data security concerns that Australian schools may be under-resourced, or lack the expertise, to address.[126] In its submission, PLC raised concerns that additional costs will be incurred to manage security measures and encryption.[127]

3.63 PLC noted that limited access to data may affect AI's ability to provide personalised learning for students.[128] This is because GenAI tools require sensitive personal data, such as a student’s personal ID and academic records, to function effectively.[129] Some schools are cautious about integrating GenAI into teaching because of the large datasets required; this caution aims to protect the personal information of students, teachers and other individuals, as ‘mismanagement of data can lead to privacy breaches, misuse of information, or unauthorized access, compromising the trust between educational institutions and stakeholders.’[130]

Protecting privacy

3.64 Privacy is a recognised human right that is becoming increasingly important in a data-centric world. The AHRC stated that GenAI has the capacity to intrude on people’s privacy in new and concerning ways if not properly regulated.[131] ATSE submitted that issues relating to data privacy are compounded by the different approaches to privacy across Australian jurisdictions and internationally.[132]

3.65 Several submissions pointed to the need to review the Privacy Act 1988 (Cth) (the Privacy Act) with a view to strengthening privacy protections for children, particularly in relation to the use of GenAI.[133] Currently, there are no exemptions in Australia’s privacy, consumer protection, or anti-discrimination laws for AI development and deployment.[134]

3.66 Professor Davis noted that the Privacy Act is 25 years out of date in certain respects, which may not be conducive to regulating emerging technologies.[135] However, Dr Aaron Lane from the RMIT Blockchain Innovation Hub asserted that Australia does not necessarily need to update its privacy law, which already applies to GenAI.[136] The Privacy Act review by the Attorney-General’s Department (AGD) could establish a ‘robust data protection framework that outlines the rights of students in relation to personal data as well as establishing limitations to the collection, use and retention of data of minors’.[137] The Tech Council of Australia asserted that there is a need to consider arrangements that will apply to foundational and frontier models, domestically and at a global level.[138]

3.67 The Department of Industry, Science and Resources (DISR) is developing Australia’s position on GenAI. ATSE suggested that DISR develop enforceable data privacy standards to help regulate training data and user-inputted data in AI systems.[139] Standards for the safe and secure use of GenAI tools should also address the storage of personal data, interactions with GenAI, and the protection of intellectual property (IP).[140] The AHRC emphasised that standards should:

Expressly protect student data, limit access to sensitive information, and ensure that robust privacy and security measures are in place. Standards should be established to govern the collection, storage and use of personal information in the context of generative AI tools in education.[141]

3.68 The AHRC cautioned that any standards introduced should not be based on assumptions about what is in the best interests of children. Rather, children’s views should be actively considered, as an ‘adult’s interpretation of children’s privacy needs can impede the healthy development of autonomy and independence and restrict children’s privacy in the name of protection’. This can result in overly protectionist agendas that can be potentially harmful to children.[142]

3.69 Other measures to protect data and privacy include encryption and robust security protocols, collecting only necessary data, safeguarding sensitive information, and anonymising information wherever possible.[143] To adequately protect students from cyber security threats, Cooperative Research Australia stated that the Australian Government could extend the 2020–2030 Australian Cyber Security Strategy to protect AI models and education data from cyber threats and misuse.[144] Government could also provide both technical and financial support to educational institutions to protect students from cyber security threats.[145]

3.70 It may also be practical to adopt risk-based AI governance practices where appropriate. GenAI can be used in high-risk ways in education, such as the automation of decisions that will meaningfully impact a student’s wellbeing, and there should be a baseline expectation that organisations implement appropriate governance-based safeguards to identify and mitigate these risks.[146]

3.71 Educators and administrators have a responsibility to use GenAI tools ethically and responsibly. This includes obtaining appropriate permissions for data usage, ensuring transparency in AI-generated content, and being accountable for decisions made on the basis of GenAI outputs.[147] Data should only be used for educational purposes and be protected from unauthorised access.[148]

3.72 ACSSO advised that it is important to have strong data protection laws and regulations so that practices are transparent and individual users are informed about how their data is used.[149] Similarly, the CDW put forward that government can require AI developers and educational institutions to implement secure data storage and strong encryption practices, and to detail the types of data collected and their purposes, how the data will be used, how long it will be retained, and the measures taken to protect users’ privacy.[150] The IEUA also outlined the need for the sharing of personal data to meet the highest privacy standards by having clear limits on:

  • the type of data to be shared
  • where and how data will be stored
  • the length of time that data may be stored
  • the purpose for retrieving data
  • the personnel who can access the data; these limits must be provided to ensure clarity exists for those managing this matter within schools.[151]

3.73 The CDW suggested that ‘AI tools used in education should be assessed for their privacy impact by the Office of the Australian Information Commissioner’. Such an assessment could identify potential risks to data privacy and outline mitigation measures to take before implementation of the tool.[152]

Copyright

3.74 Throughout the inquiry, the use of copyrighted material was identified as a risk of GenAI use in the Australian education system. The Commonwealth DoE is currently working with AGD to engage with the education sector to manage copyright issues.[153] In December 2023, AGD announced the establishment of the Copyright and Artificial Intelligence Reference Group, which will take carriage of this issue.[154] As many stakeholders considered copyright and GenAI, some key themes are described below.

3.75 In Australia, for content to be protected by copyright, it must fall into one of eight categories: ‘a work—literary, dramatic, musical or artistic work, or subject matter other than works—a film, sound recording, broadcast or published edition’. It must also be ‘sufficiently ‘original’…be in ‘material form’…and have a sufficient connection to Australia’.[155]

3.76 The Copyright Advisory Group noted two main issues with GenAI and copyright. First, in its current form, Australian copyright law does not provide any exceptions that would allow AI platforms to use third-party material and datasets for machine learning. Second, there are issues around how to define the legal status of GenAI outputs and how they can be used in teaching and learning.[156]

3.77 The National Copyright Unit has been unable to provide definitive copyright advice due to the lack of clarity on the legal status of GenAI platforms and the processes used to generate content or modify existing works.[157]

3.78 The Committee also heard that there are administrative complexities, as obtaining a licence for every input to an AI system would be prohibitive. As such, ‘practical and legal access to rich datasets for the purpose of training AI systems and tools is imperative in order to serve the public interest and mitigate the potential of bias in our AI systems.’[158] Schools use large amounts of digital content that was not intended for commercial exploitation, much of it made freely available on the internet. Schools are currently expected to pay millions of dollars each year to copy, print, or email material for which the copyright owner had no expectation of payment.[159]

3.79 The Australian Publishers Association similarly raised concerns regarding educators using AI without adequate regulatory oversight, copyright and IP issues jeopardising the creation of new Australian learning materials, and the risk posed by unregulated GenAI to the quality, diversity and authenticity of educational content.[160]

3.80 As GenAI models are trained by ingesting large amounts of text to produce outputs, the models are reliant on the quality of the training dataset. The Australian Society of Authors advised that OpenAI has admitted that it could not have created its AI tools without using copyright materials as input.[161] Stakeholders also observed that AI platforms have used, stolen and pirated content without permission from creators or rightsholders, raising concerns about copyright infringement. It is argued that the tech sector has appropriated creators’ content without payment, which has the potential to significantly reduce the income of those in Australia’s creative industries and, in turn, compromise the quality of Australian educational content.[162] The Australian Society of Authors is aware of 130 authors who have had their work used without permission.[163]

3.81 Evidence requested by the Committee highlighted concerns about Indigenous Cultural and Intellectual Property (ICIP) and Indigenous Data Sovereignty. The Copyright Agency asserted that Aboriginal and Torres Strait Islanders are concerned about ‘maintaining authenticity in relation to their culture, and control over how aspects of their culture is used by others’.[164] GenAI may be used to ‘produce and perpetuate inauthentic and fake art, and appropriate Aboriginal and Torres Strait Islanders’ art, design, stories and culture without reference to Traditional cultural protocols’.[165] The risk that ICIP will be incorporated into GenAI models without appropriate attribution or acknowledgement should be minimised.[166]

Committee comment

3.82 The Committee heard extensively about a range of serious risks and challenges presented by GenAI in education. These can relate to the technology itself, the ways it is used, and the data inputs and outputs. Key concerns exist around student safety and wellbeing (such as deepfakes and cyberbullying), the potential for overreliance on GenAI, mis- and disinformation, algorithmic bias, data protection, and transparency.

3.83 There are additional risks and vulnerabilities associated with dealing with minors. For instance, the Committee notes the work underway by AGD on privacy and copyright concerns, including in relation to AI, and calls for a focus on children and GenAI as part of this process.

3.84 The Australian Government and other key players need to manage these risks as a matter of priority by implementing safeguards and restrictions to protect students and educators. Safety and related concerns are paramount. The Australian Government is already rolling out reforms regarding the safety and wellbeing of children and technology, such as those addressing deepfakes, cyberbullying and the use of mobile phones in the classroom. There are also concerns about the security of data, including ensuring that student data is not sold to third parties or stored offshore.

3.85 It is clear to the Committee that the Australian Government can play a leadership role in mitigating the challenges that arise from GenAI in education. The Australian Government can identify, coordinate, and help implement compulsory and voluntary guardrails. This includes a focus on the safe, responsible and ethical use of GenAI, the EdTech market (including developers, deployers and end-users), and the technology and data.

3.86 The Committee encourages the Australian Government to build a solid network of policies, regulations and incentives to shape and govern the market for GenAI products in the Australian education system. It is possible to regulate GenAI products in the education sector by focussing on a system-wide approach, without requiring sector-specific regulation. Any measures should be aimed at ensuring that EdTech companies and developers are transparent and fair, and are held accountable for addressing significant risks, including algorithmic bias and discrimination, data security and privacy. EdTech companies and developers should be able to respond to evidence about what constitutes high-quality educational tools to assist learning and teaching.

3.87 There has been an explosion of GenAI tools, and the Committee commends the guidance being developed on how to select appropriate tools for Australia’s education settings. It is important to set strong standards that industry has to meet to operate in the Australian market, and to have robust data protection frameworks.

3.88 Everyone has a role to play in safeguarding against risks, from students to educators to institutions. Take, for example, algorithmic bias: GenAI systems can produce unfair or discriminatory content and can show partiality in inappropriate contexts, perpetuating societal bias and inhibiting critical thinking. It is essential to mitigate the risks of bias, misinformation and disinformation in AI-generated outputs. It is the Committee’s view that educators should be able to teach students the skills required to critique AI-generated outputs, and educational providers should undertake regular independent audits of bias in the AI systems employed within their institutions to reduce these risks.

Recommendation 11

3.89 The Committee recommends that the Australian Government:

  • regulate EdTech companies and developers through a system-wide, risk-based legal framework
  • regulate unacceptable risks and high-risk AI systems in the education sector, mandate guardrails, and give the law extraterritorial effect
  • ensure EdTech companies and developers’ products meet established standards, including through testing and independent quality assurance
  • require EdTech companies and developers to share critical information about how their AI systems are trained, what data they have been trained on, and how algorithms function and affect users
  • require EdTech companies to complete a Gender Impact Assessment.

Recommendation 12

3.90 The Committee recommends that the Australian Government work with AI developers and educational institutions to create robust data protection frameworks. This includes, but is not limited to:

  • outlining students’ and other users’ rights regarding their personal data
  • identifying the measures taken to protect users’ privacy
  • limiting, and getting permissions for, the collection, use, and retention of students’ data, including:
    • that certain types of data be collected
    • that data should only be used for educational purposes
    • that data be protected from unauthorised access and that strong encryption practices be in place
    • where, how, and for how long data can be stored
    • the purpose for retrieving data and who can access the data
    • that users’ data is not stored offshore or sold to third parties.

Recommendation 13

3.91 The Committee recommends that the Australian Government work with educational providers to mitigate the risks of algorithmic bias and mis- and disinformation by:

  • training educators to teach students how to critique AI generated outputs
  • mandating that institutional deployers of AI systems in educational settings run regular bias audits and testing
  • prohibiting the use of GenAI to create deceptive or malicious content in education settings
  • completing risk assessments, for example identifying and seeking to eliminate bias and discrimination through the data the model is trained on, the design of the model and its intended uses
  • mandating ‘under the hood’ access to algorithmic information for independent researchers.

Recommendation 14

3.92 The Committee recommends that the Australian Government:

  • ensure that the privacy law reforms led by the Attorney-General’s Department include strengthening privacy protections for students, including minors, regarding the use of GenAI
  • encourage the Office of the Australian Information Commissioner to develop an impact assessment measure which can identify the data privacy risks of GenAI tool use in education, and which includes pre-deployment measures for the implementation of GenAI tools.

Footnotes

[1]Centre for Digital Wellbeing (CDW), Submission 83, p. 8; Federation of Parents and Citizens Associations of NSW (FPCA NSW), Submission 43, p. 5.

[2]School of Education, La Trobe University, Submission 91, p. 8.

[3]Mrs Kristen Migliorini, Founder and Chief Executive Officer, KomplyAi, Committee Hansard, 29 January 2024, p. 19.

[4]Professor Nicholas Davis, Industry Professor of Emerging Technology and Co-Director of Human Technology Institute, University of Technology Sydney (UTS), Committee Hansard, 20 March 2024, p. 3.

[5]eSafety Commissioner, Submission 84, pp. 10–11.

[6]Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Statutory Review of the Online Safety Act 2021, April 2024, viewed 30 July 2024.

[7]‘Australian Government targets sexually explicit deepfakes’, Gilbert and Tobin, 26 June 2024, viewed 31 July 2024.

[8]eSafety Commissioner, Submission 84, pp. 5–6.

[9]Campbell, M and Edwards, E, ‘We looked at all the recent evidence on mobile phone bans in schools – this is what we found’, The Conversation, 12 March 2024, viewed 30 July 2024.

[10]Ms Veronica Yewdall, Assistant Federal Secretary, IEUA, Committee Hansard, 11 October 2023, pp. 6–7.

[11]Queensland University of Technology, Submission 57, p. 5.

[12]Mrs Migliorini, KomplyAi, Committee Hansard, 29 January 2024, p. 19.

[13]Professor Kalvervo Gulson, Education Futures Studio, The University of Sydney Policy Lab, Committee Hansard, 30 January 2024, p. 19.

[14]Australian Science and Mathematics School (ASMS), Submission 31, p. 3.

[15]Independent Schools Australia (ISA), Submission 22, p. 10.

[16]ASMS, Submission 31, p. 3.

[17]eSafety Commissioner, Submission 84, p. 4.

[18]Ms Carla Wilshire OAM, Director, Centre for Digital Wellbeing (CDW), Committee Hansard, 4 October 2023, p. 7.

[19]eSafety Commissioner, Submission 84, p. 4.

[20]eSafety Commissioner, Submission 84, p. 4.

[21]Dr James Curran, Chief Executive Officer, Grok Academy, Committee Hansard, 20 March 2024, p. 7.

[22]Ms Wilshire, CDW, Committee Hansard, 4 October 2023, p. 7.

[23]eSafety Commissioner, Submission 84, p. 5.

[24]Australian Academy of Technological Sciences and Engineering (ATSE), Submission 14, p. 4.

[25]eSafety Commissioner, Submission 84, pp. 4–5.

[26]Pymble Ladies’ College (PLC), Submission 93, p. 33.

[27]PLC, Submission 93, p. 33.

[28]Professor Davis, UTS, Committee Hansard, 20 March 2024, p. 1.

[29]Dr Michael Kollo, Chief Executive Officer, Evolved Reasoning, Committee Hansard, 15 November 2023, p. 6.

[30]Dr Kollo, Evolved Reasoning, Committee Hansard, 15 November 2023, p. 6; Associate Professor Joanne O’Mara, President, Victorian Association for the Teaching of English (VATE), and Mr Leon Furze, Council Member, VATE, Committee Hansard, 15 November 2023, pp. 9–10.

[31]Deshnysri, Year 10 Student, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 7.

[32]Ean, Year 10 Student, and Leo, Year 11 Student, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 7.

[33]Amy, Year 12 Student, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 4.

[34]Australian Council of State School Organisations (ACSSO), Submission 25, p. 2.

[35]Mrs Lorraine Finlay, Human Rights Commissioner, Australian Human Rights Commission, Committee Hansard, 4 October 2023, p. 17; Maeve, Year 12 Student, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 2.

[36]National Tertiary Education Union (NTEU), Submission 52, p. 5.

[37]Tertiary Education Quality and Standards Agency (TEQSA), Submission 33, p. 6.

[38]University of Technology Sydney Centre for Research on Education in a Digital Society (UTS CREDS), Submission 19, p. 10; Monash DeepNeuron, Submission 75, p. 5; PLC, Submission 93, p. 8; Dr Pethigamage Perera, Submission 7, p. 4.

[39]UTS CREDS, Submission 19, p. 11.

[40]Australasian Academic Integrity Network (AAIN), Submission 58, p. 9.

[41]Monash DeepNeuron, Submission 75, p. 5.

[42]Victorian Association for the Teaching of English, Submission 10, p. 5.

[43]Monash DeepNeuron, Submission 75, p. 5.

[44]ACSSO, Submission 25, p. 4.

[45]PLC, Submission 93, p. 8.

[46]ACSSO, Submission 25, p. 4.

[47]‘Online Misinformation’, Australian Communications and Media Authority, 7 February 2024, viewed 24 April 2024.

[48]UTS CREDS, Submission 19, p. 12.

[49]ISA, Submission 22, p. 10.

[50]Monash DeepNeuron, Submission 75, p. 4.

[51]Department of Education (DoE), Submission 48, p. 8.

[52]Australian Human Rights Commission (AHRC), Submission 65, p. 10.

[53]DoE, Submission 48, p. 8.

[54]DoE, Submission 48, pp. 8–9.

[55]eSafety Commissioner, Submission 84, p. 2.

[56]Associate Professor Erica Southgate, Submission 72, p. 5.

[57]Grok Academy, Submission 94, p. 3.

[58]Tech for Social Good (TFSG), Submission 32, p. 6.

[59]Tech Council of Australia, Submission 90, pp. 4–5.

[60]Grok Academy, Submission 94, p. 3; Independent Education Union of Australia (IEUA), Submission 26, p. 4.

[61]Monash DeepNeuron, Submission 75, p. 4.

[62]Ms Wilshire, CDW, Committee Hansard, 4 October 2023, p. 8.

[63]AHRC, Submission 65, p. 11.

[64]Australian Library and Information Association (ALIA), Submission 51, p. 15.

[65]TEQSA, Submission 33, p. 9.

[66]South Australia Department for Education (SA DFE), Submission 2, p. 6.

[67]Professor Davis, UTS, Committee Hansard, 6 September 2023, p. 9.

[68]DoE, Submission 48, p. 8.

[69]Dr Jose-Miguel Bello y Villarino, Senior Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, The University of Sydney, Committee Hansard, 30 January 2024, p. 19; ARC Centre of Excellence for the Digital Child, Submission 13, p. 5.

[70]Edith Cowan University, Submission 17, p. 3.

[71]CDW, Submission 83, p. 8.

[72]Copyright Advisory Group (CAG), Submission 36, p. 8.

[73]AHRC, Submission 65, p. 10; Monash DeepNeuron, Submission 75, p. 4.

[74]AHRC, Submission 65, p. 10; CAG, Submission 36, p. 8; TEQSA, Submission 33, p. 7.

[75]Dr Alexia Maddox, Senior Lecturer in Pedagogy and Education Futures, School of Education, La Trobe University, Committee Hansard, 9 November 2023, p. 17.

[76]CAG, Submission 36, p. 8.

[77]AAIN, Submission 58, p. 9.

[78]AAIN, Submission 58, p. 9; Ms Ine Beerens, Senior Manager, Centre for Digital Wellbeing (CDW), Committee Hansard, 4 October 2023, p. 7.

[79]School of Education, La Trobe University, Submission 91, p. 9.

[80]Ms Beerens, CDW, Committee Hansard, 4 October 2023, p. 7.

[81]ALIA, Submission 51, p. 8; Tech Council of Australia, Submission 90, p. 5.

[82]UTS CREDS, Submission 19, p. 13.

[83]ALIA, Submission 51, p. 9.

[84]ARC Centre of Excellence for the Digital Child, Submission 13, p. 3.

[85]Monash DeepNeuron, Submission 75, p. 4.

[86]ALIA, Submission 51, p. 9.

[87]FPCA NSW, Submission 43, p. 5.

[88]PLC, Submission 93, p. 7.

[89]Monash DeepNeuron, Submission 75, p. 4.

[90]ALIA, Submission 51, p. 9.

[91]Mrs Migliorini, KomplyAi, Committee Hansard, 29 January 2024, p. 18.

[92]Mr Anthony England, Director, Innovative Learning Technologies, Pymble Ladies’ College, Committee Hansard, 29 January 2024, p. 3.

[93]AHRC, Submission 65, p. 10.

[94]Tech Council of Australia, Submission 90, p. 5.

[95]Ms Wilshire, CDW, Committee Hansard, 4 October 2023, p. 5.

[96]AHRC, Submission 65, p. 11.

[97]UTS CREDS, Submission 19, p. 5.

[98]Mr Nicholas Chan, Education Lead, Monash DeepNeuron, Committee Hansard, 9 November 2023, p. 22.

[99]IEUA, Submission 26, p. 3.

[100]Mr Kevin Bates, Federal Secretary, Australian Education Union, Committee Hansard, 2 November 2023, p. 1; IEUA, Submission 26, p. 3.

[101]UTS CREDS, Submission 19, p. 5.

[102]AHRC, Submission 65, p. 11.

[103]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, pp. 10–11.

[104]Ms Julie Birmingham, First Assistant Secretary, Teacher and Learning Division, DoE, Committee Hansard, 13 September 2023, p. 5.

[105]Ms Sally Webster, K–12 Schools Industry Lead, Australia and New Zealand, Amazon Web Services, Committee Hansard, 29 November 2023, p. 8.

[106]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, pp. 10–11.

[107]Professor Leslie Loble AM, Industry Professor, University of Technology Sydney (UTS), Committee Hansard, 20 March 2024, pp. 10–11.

[108]Professor Loble, UTS, Submission 49, p. 4.

[109]ALIA, Submission 51, p. 9.

[110]NTEU, Submission 52, p. 7.

[111]NTEU, Submission 52, p. 7.

[112]University of South Australia, Submission 29, p. 3.

[113]CDW, Submission 83, p. 9.

[114]ACSSO, Submission 25, p. 5.

[115]Ms Kelly Tallon, Manager, Regulatory Policy and Strategy, Office of the eSafety Commissioner, Committee Hansard, 4 October 2023, p. 11.

[116]CDW, Submission 83, p. 9.

[117]CDW, Submission 83, p. 9.

[118]CDW, Submission 83, p. 6.

[119]Charles Sturt University, Submission 98, p. 2.

[120]CDW, Submission 83, p. 6.

[121]PLC, Submission 93, p. 7.

[122]SA DFE, Submission 2, pp. 5–6.

[123]CDW, Submission 83, pp. 6–7.

[124]ATSE, Submission 14, p. 4.

[125]AHRC, Submission 65, p. 9.

[126]CDW, Submission 83, p. 7; AHRC, Submission 65, p.9.

[127]PLC, Submission 93, p. 7.

[128]PLC, Submission 93, p. 7.

[129]TFSG, Submission 32, p. 6.

[130]ISA, Submission 22, p. 10; Cooperative Research Australia (CRA), Submission 88, p. 6.

[131]AHRC, Submission 65, p. 9.

[132]ATSE, Submission 14, p. 4.

[133]AHRC, Submission 65, p. 9.

[134]Mr Ryan Black, Head of Policy and Research, Tech Council of Australia, Committee Hansard, 11 October 2023, p. 3.

[135]Professor Davis, UTS, Committee Hansard, 6 September 2023, p. 1.

[136]Dr Aaron Lane, RMIT Blockchain Innovation Hub, Committee Hansard, 9 November 2023, p. 20.

[137]CDW, Submission 83, p. 7.

[138]Mr Black, Tech Council of Australia, Committee Hansard, 11 October 2023, p. 3.

[139]ATSE, Submission 14, p. 4.

[140]AAIN, Submission 58, p. 9.

[141]AHRC, Submission 65, p. 9.

[142]AHRC, Submission 65, p. 6.

[143]PLC, Submission 93, p. 7; ACSSO, Submission25, p. 10.

[144]CRA, Submission 88, p. 7.

[145]CDW, Submission 83, p. 7.

[146]Amazon Web Services, Submission 85, p. 6.

[147]ISA, Submission 22, p. 10.

[148]CRA, Submission 88, pp. 6–7.

[149]ACSSO, Submission 25, p. 5.

[150]CDW, Submission 83, p. 7.

[151]IEUA, Submission 26 Attachment A, p. 4.

[152]CDW, Submission 83, p. 7.

[153]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 6.

[154]Mr Chris Davern, Assistant Secretary, Strategic Policy Branch, Strategy, Data and Measurement Division, Corporate and Enabling Services Group, DoE, Committee Hansard, 6 March 2024, p. 6.

[155]Australian Copyright Council, Submission 69, p. 2.

[156]CAG, Submission 36, p. 4.

[157]CAG, Submission 36, p. 5.

[158]CAG, Submission 36, p. 7.

[159]CAG, Submission 36, p. 13.

[160]The Australian Publishers Association (APA), Submission 101, pp. 1–2.

[161]Australian Society of Authors (ASA), Submission 102, p. 2.

[162]Copyright Agency, Submission 80, pp. 1–2; APA, Submission 101, pp. 2–3.

[163]ASA, Submission 102, p. 2.

[164]Copyright Agency, Submission 80, p. 2.

[165]ASA, Submission 102, p. 4.

[166]AAIN, Submission 58, p. 9.