Risks and responses
3.1The emergence of generative artificial intelligence (GenAI) as an educational tool has brought with it safety, wellbeing, and other concerns. The inherent challenges presented by GenAI affect all users, including students and educators. It is essential to be aware of the risks pertaining to the technology itself, its use, and the data involved, in order to manage them.
3.2Some of these challenges, which are all linked to safety, wellbeing, and security in various ways, include:
- online safety and adverse impacts on personal development
- overreliance on GenAI
- mis- and disinformation
- algorithmic bias and data-driven profiling
- data capturing practices by educational technology (EdTech) companies
- transparency, and the commercial interests of EdTech companies
- data security, privacy and copyright.
3.3Many of these risks stand to disproportionately impact vulnerable groups, including children, Aboriginal and Torres Strait Islander students, female students, and students from culturally and linguistically diverse populations. Female students and students from culturally and linguistically diverse populations may be particularly affected due to being misrepresented.
Context of safety and wellbeing
3.4The Committee heard that people commonly do not feel safe when using artificial intelligence (AI). According to KomplyAi, Australians on average distrust GenAI technology more than people in most other countries do. Professor Nicholas Davis, Industry Professor of Emerging Technology and Co-Director of the Human Technology Institute at the University of Technology Sydney (UTS), commented that:
…in terms of where we are today, from my discussions with teachers, schools, parents and others, as much anecdotally or more anecdotally than anything that is purely systematic, we're in a place where people are more scared and more confused than they were, rather than having deeper levels of clarity and understanding.
3.5Looking at the broader context, the Australian Government has been active in rolling out reforms regarding human safety and wellbeing with respect to technology. The Online Safety Act 2021 (Cth) (OSA) gives the eSafety Commissioner a suite of regulatory powers to protect Australians from online harm. The eSafety Commissioner advised that under the OSA it can remove abusive and harmful content, take enforcement action against those who fail to comply, and develop industry codes that cover the eight sections of the online industry. The OSA is under review by the Department of Infrastructure, Transport, Regional Development, Communications and the Arts, with a report due to the Minister for Communications by 31 October 2024.
3.6Further, in June 2024, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 was tabled ‘to strengthen laws targeting the creation and non-consensual dissemination of sexually explicit material online, including material created or altered using generative AI, including deepfakes’.
3.7The eSafety Commissioner expressed concern about the potential for GenAI to amplify cyberbullying and cyber abuse, given GenAI’s ‘capability to produce “human-like” interaction combined with novel high quality personalised content’. Although certain GenAI products have minimum age requirements, generally 13 or 18 years of age, companies like OpenAI are unlikely to adequately protect the minors who use them regardless. This is underpinned by the eSafety Commissioner’s advice that it receives reports of cyberbullying on social media platforms from children as young as eight, despite those platforms’ minimum age requirements.
3.8Mobile phones have been banned in all Australian public schools, as the Australian Government hopes this will improve student and teacher wellbeing and reduce cyberbullying. The Independent Education Union of Australia (IEUA) stated that the manipulation and setting up of Facebook sites and pages to bully students and teachers is a pervasive issue, but that schools should have policies in place to manage social media bullying. The IEUA cited the removal of mobile phones from schools as a means to address this. However, the Queensland University of Technology raised concerns about nation-wide bans on the use of mobile phones in schools, citing equity concerns for students experiencing disadvantage.
3.9An emerging concern is the introduction of facial recognition technology in the classroom. Kristen Migliorini, Founder and Chief Executive Officer of KomplyAi, warned of the risk that facial recognition technology could be used to monitor student behaviour and concentration levels. Facial recognition technology has previously been deployed in schools in Sweden to take student attendance. While this saved time, it removed the informal structure through which teachers interact with students and find out what is happening in their lives. The technology was deployed to alleviate teacher workload, but was banned by a Swedish court over data protection concerns.
Chatbots
3.10GenAI-driven chatbots give rise to various safety and wellbeing concerns for students. There is a risk that GenAI models are trained on datasets incorporating adult and otherwise inappropriate content, which can then surface in generated outputs. Independent Schools Australia asserted that GenAI tools have the potential to produce highly realistic content, such as text, images or videos, that may affect the emotional or psychological wellbeing of students and influence their mental health or emotional stability. Chatbots may have age‑inappropriate conversations with children or display sexual or violent content to them. For example, the Australian Science and Mathematics School found one incident of an image-based GenAI tool being able to generate sexualised content.
3.11GenAI chatbots come across as having a ‘high level of authority, expertise, and competency’. The Centre for Digital Wellbeing (CDW) raised concerns about the level of oversight in the relationship between a chatbot and a child, which could be ‘destructive to that child's mental health and wellbeing’. This is because users may be unable to discern the limits of the application’s knowledge, or of the dataset that underpins the chatbot, and children and young people may be disproportionately affected.
3.12GenAI chatbots may present with ‘human-like’ qualities to children, including mimicking common conversational traits that imply a personal or trusted relationship with the student. Dr James Curran, Chief Executive Officer of the Grok Academy, highlighted that the models are built to be conversational tools, which makes it difficult to detect where they have wavered from the prompt. Dr Curran further explained that it is important to remember that a user is having a conversation with a system trained on the entirety of the internet, a system that is skilled at predicting what the next most useful word will be. There are further concerns about the ethical development of GenAI and how a chatbot directly engages with children when it uses biased data scraped from the internet.
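Dr Curran’s point can be illustrated with a deliberately simplified sketch (not drawn from the inquiry evidence; the corpus, names and scale below are invented, and production models use neural networks over vastly larger datasets rather than a frequency table). It shows a system that simply reproduces the statistically most likely next word from its training text, with no notion of truth:

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; a real LLM's corpus is effectively "the internet".
corpus = (
    "the student asked the chatbot and the chatbot answered "
    "the student trusted the answer because the answer sounded confident"
).split()

# Count how often each word follows each other word (a bigram table).
following: defaultdict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training: plausible, never verified."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))       # frequency decides; truth never enters into it
print(predict_next("chatbot"))
```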
3.13Chatbots can provide mental health and wellbeing advice, which has both advantages and disadvantages. The eSafety Commissioner explained that an AI chatbot can provide timely and relevant advice on mental health and wellbeing by offering referral services and reporting harm and abuse. The Australian Academy of Technological Sciences & Engineering (ATSE) raised concerns about GenAI and mental health interventions:
There is an emerging risk that generative AI tools are interacting conversationally with users around mental health and wellbeing. This leads to risks that students may be encouraged to talk to an AI system rather than a human. While for some students discussing mental health issues with an AI may make them more comfortable to seek help for mental health issues, some students may be less likely to access timely interventions, might receive poor advice, or mental ill health may even be exacerbated by such interactions.
3.14The Committee heard that chatbots may be able to report and respond to concerns for the welfare and safety of children and young people. This may include ‘seeking help or making disclosures about experiences, events, or circumstances impacting their safety, health, mental health or wellbeing’.
3.15Pymble Ladies’ College (PLC) stated that GenAI can be used in a socio-emotional learning context to help students in understanding and managing their emotions. PLC contended that the technology can ‘track emotional progress over time and suggest techniques to manage emotions’, ‘provide interactive scenarios where students can practice emotional responses’, and ‘provide resources for self-help and coping strategies when it identifies emotional distress’.
3.16However, PLC also stated that GenAI’s understanding and interpretation of human emotion can be limited and lead to incorrect suggestions from the technology. Under the EU Artificial Intelligence Act (EU AI Act), the use of GenAI to detect emotion in schools falls under the 'not acceptable at all' category.
3.17Evolved Reasoning provided a concrete example of how a GenAI tool could both help and adversely affect a child’s wellbeing. It explained that a school child may be given a GenAI tool called SARAH, which can help with their homework, check in with them, and provide guidance. SARAH would have the ability to detect what the student is good at and poor at, and provide a rating to the teacher and parents. SARAH may well stay with the child through to high school and then into their career. As this example demonstrates, the tool may give students a supportive and affirmative voice by their side through their schooling; but they may also learn that the world is full of people who say, ‘good job’ or ‘go ahead’. SARAH is also drawing on a ‘fairly homogenous and limited dataset and a restricted worldview that's generated as a result of that dataset’.
Dependency on GenAI
3.18Many submissions raised concerns that students and educators might over-rely on GenAI, and that this would have flow-on effects. Students from The Grange P–12 College shared with the Committee that they wanted to determine how they use the technology. They stated that GenAI should be used as a secondary resource to supplement evidence rather than substitute for it, and that all evidence should be corroborated. One student said:
But I think any use of ChatGPT should be just guidance and not a crutch. We should utilise the other resources that we have, such as textbooks, our teachers and even other sources on the internet. If we are using ChatGPT in schools, it's important to emphasise that we shouldn't rely solely on that and that we should double check it.
3.19The Australian Council of State School Organisations (ACSSO) also asserted that GenAI tools should be used as a supporting resource and not as a substitute for face-to-face learning and in-person interactions. If used as a supporting resource, GenAI could potentially enhance learning and the role of teachers. If, however, students rely too heavily on GenAI, it could detract from teachers’ roles, even threatening to replace them.
3.20The National Tertiary Education Union (NTEU) did not consider GenAI an appropriate replacement for staff, as the technology did not ‘engage [students] in critical thinking, [or] produce genuine creativity or innovation’, and human staff are still required to monitor GenAI outputs. As GenAI is trained on data and all data is historical, ‘an over-reliance on AI may limit innovation, insight, and discovery’. As such, the Tertiary Education Quality and Standards Agency (TEQSA) considered it crucial to scaffold the introduction of GenAI technology throughout a student’s education journey so that they develop the critical thinking skills needed to progress.
3.21An over-reliance on GenAI can also adversely affect students’ problem-solving, interpersonal, and decision-making skills, and lead to complacency and disengagement from teaching material. This may hamper human capacity through the reduction of individual capabilities, and could risk the mass production of AI-generated content. A related issue is the tendency for GenAI to ‘produce plausible but incorrect responses’ and to join discrete concepts in ways that merely appear logical. This may affect student learning and understanding, especially if students rely solely on GenAI.
3.22The Committee heard that if students become dependent on GenAI, they may be deterred from building skills that require effort and time. Monash DeepNeuron and the Victorian Association for the Teaching of English pointed to the example of the normalisation of spelling and grammar checks and the proliferation of applications such as Grammarly. Monash DeepNeuron asserted that the use of spelling and grammar checkers can lead to a decline in fundamental spelling and grammar skills, as they reduce students’ surface errors but do not correct errors at a cognitive level. Rather, these skills need to be cultivated through project-based learning, inquiry-based approaches, and real-world problem-solving activities that demonstrate the limitations of the technology.
3.23It is therefore important to implement a balanced curriculum and foster skills such as collaboration, critical thinking, and creativity that GenAI cannot replicate. Teachers should carefully monitor these activities to ensure the development of such skills amongst students.
Mis- and disinformation
3.24The capacity of GenAI platforms to proliferate mis- and disinformation was identified as a risk. Misinformation poses a risk to the health and safety of individuals, and society more broadly, through the dissemination of ‘made-up news articles, doctored images and videos, false information shared on social media, and scam advertisements’. It becomes disinformation when misinformation is deliberately spread to cause ‘confusion and undermine trust in governments or institutions’.
3.25The Committee heard that mis- and disinformation can foster distrust and biases between people and cultures, leading to poor outcomes for students. The spread of misinformation within school and wider communities can affect students’ wellbeing and their understanding of current events. Furthermore, Monash DeepNeuron stated that when misinformation is used for propaganda and other political purposes, it can radicalise GenAI users.
3.26Another concern related to mis- and disinformation is the proliferation of deepfakes, which GenAI can create. A deepfake is a ‘digital photo, video, or sound file of a real person that has been edited to create a false depiction of them doing or saying something’. The Australian Human Rights Commission (AHRC) submitted that GenAI can be misused to generate ‘high-quality, cheap and personalised content, including for harmful purposes’, such as deepfakes. These tools have the potential to cause significant harm and can be used to exploit, harass, ridicule, and spread mis- and disinformation.
3.27The Commonwealth Department of Education (Commonwealth DoE) raised concerns about the use of GenAI to create deepfake material, noting that 70 per cent of Australians aged 18 to 24 years have experienced harassment or abuse online in a 12-month period. The eSafety Commissioner defined a deepfake as a ‘digital photo, video or audio file of a real person that has been manipulated to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say’, and cautioned that GenAI tools make it possible to produce deepfakes with greater ease and at scale, which could result in serious and widespread harm to educators.
3.28On the proliferation of deepfake apps, Associate Professor Erica Southgate asserted:
Deepfake apps will pose significant challenges to schools and other educational institutions as they are weaponised for bullying, harassment, and deception. The rapid human and bot spread of deep fakes will probably surpass the damage already occurring with student online bullying and will adversely affect staff who are targeted and the ethical culture of the educational institution. The anonymity through which deep fakes can be created will exacerbate the issue.
3.29Furthermore, GenAI cannot separate fact from fiction, truth from disinformation, or stories from news. AI tools can also ‘hallucinate’ content and produce factual errors in generated content, including fabricated moments in history and inaccurate scientific information. The Tech Council of Australia emphasised that this is why GenAI models should not be considered ‘intelligent’, reiterating that they work on a predictive basis using the data they were trained on.
3.30Students with insufficient knowledge or skills may be unable to distinguish opinion expressed as fact, whether from experts or amateurs, and are at risk of accepting misinformation at face value, especially if they trust AI-generated information. Biased content can, in and of itself, further promote misinformation within student cohorts.
3.31The CDW used Finland as an example of combatting disinformation. Finland’s approach specialises in developing digital literacy capabilities and a healthy relationship with technology, embedded in every part of the school curriculum from K–12.
3.32The AHRC recommended that the use of GenAI to create deceptive or malicious content in education settings be prohibited, and that policies be developed to ensure content verification so that individuals can accurately identify GenAI content. The AHRC further noted that these reforms would be insufficient if there were no digital literacy education and training that teaches GenAI users to identify false or manipulated content and to engage with technology responsibly and ethically. The Australian Library and Information Association (ALIA) recommended implementing a program to monitor GenAI outputs in education settings and for GenAI developers to commit to improving their algorithms in response to the findings.
3.33TEQSA made a number of recommendations to the Committee, including:
- the need for ‘transparent disclosure of the training data and algorithms that underpin educational products so that they can be genuinely evaluated by government and educational institutions to ensure they are free of bias’ with the onus on EdTech companies to make the information intelligible
- the need for ‘developers to ensure that they are mindful of, and seek to eliminate, bias and discrimination through the data the model is trained on, the design of the model and its suggested applications’
- a requirement for educational administrators and institutions to ensure models and their applications are evaluated for bias and that their use is governed by institutional policies, and that adherence is monitored.
Transparency
3.34Several submissions raised concerns about the lack of transparency in GenAI applications and how this may affect student welfare. Issues were identified relating to data sources, built-in surveillance in the platforms, costs, the commercialisation of data, and the applications of EdTech. There is a need to ensure transparency in the gathering and aggregation of data, and in how that data may influence user decisions.
3.35Professor Davis explained issues of transparency:
Finally, on the behemoth point, the reason we have that competition problem is often that we don't have transparency about a level playing field in terms of outcomes and standards for what actually works. Secondly, a lot of tech companies are subsidising the use of services through the use of data at the back end—data broking, data leveraging and other areas. It goes back to the Privacy Act review and really protecting children's data from secondary use. Thirdly, we need transparency on the true cost of systems over time. At the moment, all our ChatGPT use is being subsidised by investors, stakeholders and one big tech company in the world.
3.36The Commonwealth DoE noted a lack of information about the development and commercialisation of GenAI models, which affects the Government’s ability to understand their potential effects. There is currently a lack of transparency about the ‘scientific’ and ‘pedagogic logic’ behind a model, or about what data it has been trained on. Similarly, Dr Jose-Miguel Bello y Villarino, Senior Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, asserted that there needs to be some transparency about what GenAI developers have embedded into an application, and what is missing. Little transparency on sources and algorithms may lend large language models (LLMs) a ‘veneer of objectivity’, which can make students naive to quality and bias issues.
3.37The Committee heard that a lack of transparency in GenAI models makes it difficult for users, including children, teachers, and parents, to understand how the technology functions. This can make it challenging for users to understand how the models arrived at specific outputs, which raises challenges around developer accountability, concealed bias, discrimination, and errors. It can also lead to trust in the AI model without the critical judgement needed to confront the biases and false information that can be prevalent on these applications. When there is a lack of transparency in the decision-making process, ‘it becomes difficult to assess whether the system is making unbiased choices due to its ability to hide biases and discriminatory patterns’. Without transparency, there is no external oversight or means of correction.
Algorithmic bias
3.38GenAI systems ‘depend on robust and quality datasets to write, improve, and test algorithms’. This ensures accurate and reasonable outputs and minimises the risks of bias or incompleteness in results. However, GenAI systems are often trained on large, imperfect datasets that ‘generate predictive outputs based on algorithms’ and can ‘systematically reinforce bias and prejudice, historical discrimination, and archaic practices’. If misused or poorly designed, models can reinforce bias and disadvantage by excluding marginalised and underrepresented groups, or even by overrepresenting some groups.
3.39This issue of misuse perpetuating adverse outputs was highlighted by Dr Alexia Maddox, Senior Lecturer in Pedagogy and Education Futures at La Trobe University:
Again, I would very much reinforce this point: what is the data that these tools are learning on? The fact is that it's not just the data that gets ingested into the tools; it's also the data that people produce when they're using the tools.
3.40The following factors were identified as affecting the quality and accuracy of GenAI inputs and outputs.
- The age or scope of the dataset, and the use of foreign data, such as US-based material or older material from the public domain. These datasets do not represent a diverse sample and can be exclusionary.
- Factual inaccuracies, where GenAI produces ‘plausible but incorrect responses’, and its inability ‘to join discrete concepts in ways that appear to be logical’, which may impact student learning.
- Aboriginal and Torres Strait Islander peoples are underrepresented in data samples, and there are often factual inaccuracies about their cultural practices. This may lead to Aboriginal and Torres Strait Islander students having a ‘poverty of connection to culture’ and a further erasure through a lack of visibility in GenAI datasets.
- The absence of an accreditation or regulatory framework to standardise the AI tools available or to ensure that training data is ethical and transparent.
3.41ALIA asserted that the majority of datasets have been scraped from the internet and have differing levels of transparency about their content. Content scraped from the internet can vary in quality and relevance to educational contexts and is often western-centric. For example, ChatGPT3 was trained with text from the internet (85 per cent of the total); yet the training sets for ChatGPT4 are not public. Furthermore, users’ data may be scraped to inform GenAI tools, which may lead to inequitable outcomes for models trained on that data.
3.42GenAI also functions on a probabilistic model. The technology produces a ‘probable combination of pixels, words or other medium in response to a specific prompt’, leading to biases in the responses students receive. This means that the AI model learns ‘facts’ based on the quantity, not the quality, of content and outputs.
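This quantity-over-quality point can be illustrated with a minimal sketch (illustrative only; the statements and counts below are invented for demonstration): a probabilistic model reproduces whatever appears most often in its training data, with no independent check of accuracy.

```python
import random

# Suppose the training data contains a claim many times in an incorrect form
# and only rarely in the correct form. The model's output distribution
# mirrors those counts; nothing in the sampling step tests which is true.
completions = {
    "claim repeated often (incorrect)": 90,  # seen 90 times in training
    "claim stated rarely (correct)": 10,     # seen 10 times in training
}

statements = list(completions)
weights = list(completions.values())

# Sampling reproduces the training distribution: the frequent (wrong)
# version is produced roughly 90 per cent of the time.
sample = random.choices(statements, weights=weights, k=1000)
for statement in statements:
    share = sample.count(statement) / len(sample)
    print(f"{statement}: {share:.0%}")
```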
3.43The Committee heard that students who are exposed to biased GenAI outputs may be at risk of mirroring the misconceptions and stereotypes produced by the technology. Even when aware of a bias or stereotype, people may still be receptive to it. Algorithmic bias may entrench or obscure unfairness, which may ‘reinforce discriminatory practices and widen educational disparities’. This could lead to adverse outcomes for students in areas including grading and university admissions, and affect personalised learning paths.
3.44Moreover, stakeholders highlighted that GenAI is fallible, and multiple submissions included examples of bias produced by the technology, including:
- ChatGPT has a propensity to perpetuate gender and racial stereotypes, associating men with ‘doctors’ and ‘engineers’, women with ‘nurses’ and ‘teachers’, and people of colour with ‘thief’ or ‘criminal’
- ALIA asked ChatGPT to write a story about two children set in Australia. The tool wrote a piece using two anglicised names and when asked to rewrite the story with different names, continued to provide traditional English-speaking names that are not necessarily representative of modern Australian society
- when prompted to produce images of a ‘kids soccer team having fun’, a tool showed only boys playing soccer and having fun.
3.45Conversely, PLC suggested that GenAI technology relies on human feedback for reinforcement learning and can be quite circumspect, calibrating answers back to the centre. This was contrasted with platforms such as YouTube and TikTok, which are large algorithmic tools vying for users’ attention: the further down the rabbit hole a user goes on those platforms, the more biased and extreme the content they will be shown.
3.46The AHRC considered it important to address bias in GenAI outputs to ensure that Australia’s education is ‘fair, inclusive and promotes equal opportunities for all students’. The Tech Council of Australia contended that educational institutions can create a knowledge base with trusted sources of information; consider the removal of inappropriate external sources from the tools, with a focus on sensitive topics; and introduce human review and the application of critical thinking skills to identify bias.
3.47The CDW suggested the development of comprehensive legislation for GenAI that leans on international best practice, such as the EU AI Act. Similarly, the AHRC recommended continual evaluation and validation processes, and regular independent auditing, to ‘identify and mitigate algorithmic bias’.
EdTech interests
3.48In 2020, the Australian EdTech sector employed 13,000 people and generated $1.6 billion in domestic revenue and an additional $600 million from exports to the international market. Submissions expressed concern that EdTech and commercial interests may affect the rollout of GenAI in the Australian education system.
3.49Monash DeepNeuron highlighted that as GenAI services expand, they will become heavily commercialised. In Monash DeepNeuron’s view, the EdTech sector has a history of prioritising commercial interests over student outcomes, which has led to the delivery of content that is ‘poorly tailored to student needs’. There are also risks that GenAI will be controlled by overseas interests with commercial or profit-driven motives, who may not address concerns raised by education professionals.
3.50In its submission, the Centre for Research on Education in a Digital Society (CREDS) cited a review of the 100 most frequently used EdTech tools in the US, which found that only 26 of the 100 met the threshold for any level of learning. It was noted that poor application development can lead to underuse and poor use, and that applications may not represent Australian values or experiences. As such, it is important that investments in EdTech are underpinned by evidence that the tools will be used to support the outcomes they claim to target.
3.51The Committee heard that children are placed at particular risk if EdTech interests are allowed to grow unfettered. This is because a principal risk of EdTech is the sale or transfer of children’s personal data to third parties or, in the case of GenAI, ‘the use of student search queries being analysed to inform targeted advertising’. The AHRC noted that by a child’s 13th birthday, advertisers will have already gathered more than 72 million data points about them. It is therefore critical that data collected through EdTech not be used for other purposes and children are protected from data surveillance.
3.52There is a prevailing sentiment that Australia has set, and will need to continue to set, a high quality threshold for EdTech products. Failure to set a sufficiently high threshold will see products continue to be sold at the lower quality level at which they already operate. EdTech will become more advanced, sophisticated and intuitive as the technology grows and more AI components are built into EdTech systems.
3.53Australia is well positioned to integrate GenAI EdTech into its education systems through the Safer Technologies 4 Schools Framework, to which all Australian education ministers have signed up. A number of domestic and international EdTech companies have sought accreditation under this program, which operates under the auspices of the Commonwealth DoE.
3.54Dr Curran stated it will be important to set strong standards that industry has to reach to operate in the Australian market. Professor Leslie Loble AM, Industry Professor at the University of Technology Sydney, supported the introduction of standards so that companies can compete on quality: an EdTech company that has invested heavily in a product does not want it undercut by one that has not and that operates at the lower end of the quality threshold. Professor Loble recommended that educators ‘must retain authority and control over EdTech used in classrooms’, that quality tools be used with ‘effective use and integration into teacher-led instruction’, and that a strong network of policies, institutions and incentives be established to shape and govern the EdTech market.
Student data
3.55In its submission, ALIA raised concerns that EdTech products are collecting and monetising student data. ALIA asserted that this risk will continue to increase given the fast-moving nature of the sector, where there is a significant first-mover advantage. The NTEU stated that most advanced AI systems are being developed by foreign, for-profit entities that operate with little transparency about the types of data they collect and how it is used. This presents a problem where educational institutions engage external contractors to deliver teaching and student support.
3.56There are also concerns about algorithmic transparency in the grading and assessment of student work by AI systems. A lack of human presence in grading may also make the appeals process unfair and unclear.
3.57Stakeholders suggested ways to create transparency in GenAI use, including:
- guidelines: transparency and accountability can be emphasised through clear protocols and guidelines which govern the use and reporting of AI-generated outputs
- transparent data use policies: ‘educational institutions and AI developers should be required to have clear and transparent data use policies’ including how data is collected, how data will be used, how long data will be retained, and what measures are being taken to protect data privacy
- open access: researchers and developers need to prioritise transparency and explainability by providing clear documentation, sharing methodologies, and engaging in open dialogue. This would allow researchers and the broader public to understand what is happening behind the scenes to determine if the measures in place are suitable to regulate the technology
- transparency reports: AI organisations should publish transparency reports detailing how their systems are used, how their algorithms function, and how they affect users
- third party evaluations: there should be independent third-party evaluations of AI tools and systems to ensure that they are transparent, fair and accountable.
Data security
3.58The COVID-19 pandemic necessitated the adoption of EdTech products into Australian schools to manage online learning and the establishment of the virtual classroom. The Committee heard that the speed of uptake of EdTech products by schools raises concerns about data privacy and the security of sensitive student information. The CDW asserted that 89 per cent of the EdTech platforms available put children’s safety in danger by ‘monitoring them without their consent and allowing access from or selling the data to third parties’. In its submission, Charles Sturt University reported:
Over four million Australian children’s data may have been compromised in 2022 due to unsolicited cookies integrated into EdTech products used in Australian schools, infringing on their privacy and exposing risks such as lack of informed consent, privacy erosion, and cyber security issues.
3.59The CDW stated that access to children’s data can leave them susceptible to commercial exploitation by exposing them to overt advertising or sponsored content. The adoption of this type of EdTech can be problematic as children under age 12 ‘do not understand the pervasive nature of advertising and children 8 years and under cannot differentiate between content and advertising’, making them susceptible to microtargeted marketing.
3.60The use of GenAI in education raises issues about how data is stored, who can access it, and how it is used. For example, data entered into GenAI tools may become the property of the tools’ owners, raising concerns about the privacy and security of the data; this may be particularly problematic where products build user profiles over a period of time.
3.61If the adoption of GenAI in the classroom becomes compulsory, there may be limited opportunities for teachers, children, or parents to opt out, or even to provide full consent to the use of the technology. Most GenAI companies are aware of these issues and have set an 18+ age restriction for accounts.
3.62Even if students’ data is not sold, students may still be exposed to risk through the continuous gathering of personal data used to optimise the individual user experience. There are cyber security and data security concerns that Australian schools may be under-resourced, or lack the expertise, to address. In its submission, PLC raised concerns that additional costs will be needed to manage security measures and encryption.
3.63PLC noted that limited access to data may affect AI's ability to provide personalised learning for students. This is because GenAI tools require sensitive personal data to function effectively such as a student’s personal ID and academic records. Some schools are cautious about integrating GenAI into teaching because of the large datasets required. This is to protect the personal information of students, teachers and other individuals as ‘mismanagement of data can lead to privacy breaches, misuse of information, or unauthorized access, compromising the trust between educational institutions and stakeholders.’
Protecting privacy
3.64The right to privacy is a recognised human right that is becoming increasingly important in a data-centric world. The AHRC stated that GenAI has the capacity to intrude on people’s privacy in new and concerning ways, if not properly regulated. ATSE submitted that issues relating to data privacy are compounded by the different approaches to privacy across Australian jurisdictions and internationally.
3.65Several submissions pointed to the need to review the Privacy Act 1988 (Cth) (the Privacy Act) with a view to strengthening privacy protections for children, particularly in relation to the use of GenAI. Currently, there are no exemptions in Australia’s privacy, consumer protection, or anti-discrimination laws for AI development and deployment.
3.66Professor Davis noted that the Privacy Act is 25 years out of date in certain respects, which may not be conducive to regulating emerging technologies. However, Dr Aaron Lane from the RMIT Blockchain Innovation Hub asserted that Australia does not necessarily need to update its privacy law, which already applies to GenAI. The Privacy Act review by the Attorney-General’s Department (AGD) could establish a ‘robust data protection framework that outlines the rights of students in relation to personal data as well as establishing limitations to the collection, use and retention of data of minors’. The Tech Council of Australia asserted that there is a need to consider arrangements that will apply to foundational and frontier models, domestically and at a global level.
3.67The Department of Industry, Science and Resources (DISR) is developing Australia’s position on GenAI. ATSE suggested that DISR develop enforceable data privacy standards to help regulate training data and user-inputted data in AI systems. Standards for the safe and secure use of GenAI tools should also address the storage of personal data and interactions with GenAI, and the protection of intellectual property (IP). The AHRC emphasised that standards should:
Expressly protect student data, limit access to sensitive information, and ensure that robust privacy and security measures are in place. Standards should be established to govern the collect[ion], storage and use of personal information in the context of generative AI tools in education.
3.68The AHRC cautioned that any standards introduced should not be based on assumptions about what is in the best interests of children. Rather, children’s views should be actively considered, as an ‘adult’s interpretation of children’s privacy needs can impede the healthy development of autonomy and independence and restrict children’s privacy in the name of protection’. This can result in overly protectionist agendas that can be potentially harmful to children.
3.69Other measures to protect data and privacy include encryption and robust security protocols, collecting only necessary data and safeguarding sensitive information, and anonymising information wherever possible. To adequately protect students from cyber security threats, Cooperative Research Australia stated that the Australian Government can extend the 2020–2030 Australian Cyber Security Strategy to protect AI models and education data from cyber threats and misuse. Government can also provide both technical and financial support to educational institutions to protect students from cyber security threats.
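As an illustration of the data-minimisation and anonymisation measures described above, the sketch below (illustrative only; the record fields, salt handling and redaction rules are invented for this example) pseudonymises a student identifier and drops direct identifiers before a record is shared with an external GenAI tool:

```python
import hashlib

SALT = b"school-held-secret"  # held by the school; never sent with the data

def pseudonymise_id(student_id: str) -> str:
    """Replace a real student ID with a salted one-way hash."""
    return hashlib.sha256(SALT + student_id.encode()).hexdigest()[:12]

def minimise(record: dict) -> dict:
    """Keep only the fields a tutoring tool needs; drop direct identifiers."""
    return {
        "pseudonym": pseudonymise_id(record["student_id"]),
        "year_level": record["year_level"],
        "topic": record["topic"],
        # name, address and date of birth are deliberately not forwarded
    }

record = {
    "student_id": "S1234567",
    "name": "Full Name",  # never leaves the school
    "year_level": 9,
    "topic": "fractions",
}
print(minimise(record))
```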
3.70It may also be practical to adopt risk-based AI governance practices where appropriate. GenAI can be used in high-risk ways in education such as the automation of decisions that will meaningfully impact a student’s wellbeing, and there should be a baseline expectation that organisations can implement appropriate governance-based safeguards to identify and mitigate these risks.
3.71Educators and administrators have a responsibility to use GenAI tools ethically and responsibly. This includes obtaining appropriate permissions for data usage, ensuring transparency in AI-generated content, and being accountable for decisions made based on GenAI outputs. Data should only be used for educational purposes and be protected from unauthorised access.
3.72ACSSO advised that it is important to have strong data protection laws and regulations so that practices are transparent and individual users are informed about how their data is used. Similarly, the CDW put forward that government can require AI developers and educational institutions to implement secure data storage and strong encryption practices, and to detail the types of data collected and their purposes, how the data will be used, how long it will be retained, and the measures taken to protect users’ privacy. The IEUA also outlined the need for the sharing of personal data to meet the highest privacy standards by having clear limits on:
- the type of data to be shared
- where and how data will be stored
- the length of time that data may be stored
- the purpose for retrieving data
- the personnel who can access the data.
The IEUA stated that these limits must be provided to ensure clarity exists for those managing this matter within schools.
3.73The CDW suggested that ‘AI tools used in education should be assessed for their privacy impact by the Office of the Australian Information Commissioner’. Such an assessment could identify potential risks to data privacy and outline mitigation measures to take before implementation of the tool.
Copyright
3.74Throughout the inquiry, the use of copyrighted material was identified as a risk of GenAI use in the Australian education system. The Commonwealth DoE is currently working with AGD to engage with the education sector to manage copyright issues. In December 2023, AGD announced the establishment of the copyright and AI reference group, which will take carriage of this issue. As many stakeholders considered copyright and GenAI, some key themes are described below.
3.75In Australia, for content to be protected by copyright, it must fall into one of eight categories: ‘a work—literary, dramatic, musical or artistic work, or subject matter other than works—a film, sound recording, broadcast or published edition’. It must also be ‘sufficiently ‘original’…be in ‘material form’…and have a sufficient connection to Australia’.
3.76The Copyright Advisory Group noted two main issues with GenAI and copyright. First, in its current form, Australian copyright law does not provide any exceptions that would allow AI platforms to use third-party material and datasets for machine learning (ML). Second, there are issues around how to define the legal status of GenAI outputs and how they can be used in teaching and learning.
3.77The National Copyright Unit has been unable to provide definitive copyright advice due to the lack of clarity on the legal status of GenAI platforms and the processes used to generate content or modify existing works.
3.78The Committee also heard that there are administrative complexities, as obtaining a licence for every input to an AI system would be prohibitive. As such, ‘practical and legal access to rich datasets for the purpose of training AI systems and tools is imperative in order to serve the public interest and mitigate the potential of bias in our AI systems.’ Schools use large amounts of digital content that was not intended for commercial exploitation, much of it made freely available on the internet. Schools are currently expected to pay millions of dollars each year to copy, print, or email material that carried no expectation of payment to the copyright owner.
3.79The Australian Publishers Society similarly raised concerns regarding educators using AI without adequate regulatory oversight, copyright and IP issues jeopardising the creation of new Australian learning materials, and the risk posed by unregulated GenAI to the quality, diversity and authenticity of educational content.
3.80As GenAI models are trained by ingesting large amounts of text to produce outputs, the models are reliant on the quality of the training dataset. The Australian Society of Authors advised that OpenAI has admitted that it could not have created AI tools without using copyright materials as input. Stakeholders also observed that AI platforms have used, stolen and pirated content without permission from creators or rightsholders, raising concerns about copyright infringement. It is argued that the tech sector has appropriated creators’ content without payment, and this has the potential to significantly reduce the income of those in Australia’s creative industries, in turn compromising the quality of Australian educational content. The Australian Society of Authors is aware of 130 authors who have had their work used without permission.
3.81Evidence requested by the Committee highlighted concerns about Indigenous Cultural and Intellectual Property (ICIP) and Indigenous Data Sovereignty. The Copyright Agency asserted that Aboriginal and Torres Strait Islander peoples are concerned about ‘maintaining authenticity in relation to their culture, and control over how aspects of their culture is used by others’. GenAI may be used to ‘produce and perpetuate inauthentic and fake art, and appropriate Aboriginal and Torres Strait Islanders’ art, design, stories and culture without reference to Traditional cultural protocols’. The risk that ICIP will be incorporated into GenAI models without appropriate attribution or acknowledgement should be minimised.
Committee comment
3.82The Committee heard extensively about a range of serious risks and challenges presented by GenAI in education. These relate to the technology itself, the ways it is used, and the data inputs and outputs. Key concerns exist around student safety and wellbeing, such as deepfakes and cyberbullying, the potential for overreliance on GenAI, mis- and disinformation, algorithmic bias, data protection, and transparency.
3.83There are additional risks and vulnerabilities associated with dealing with minors. For instance, the Committee notes the work underway by AGD on privacy and copyright concerns, including in relation to AI, and calls for a focus on children and GenAI as part of this process.
3.84The Australian Government and other key players need to manage these risks as a matter of priority by implementing safeguards and restrictions to protect students and educators. Safety and related concerns are paramount. The Australian Government is already rolling out reforms regarding the safety and wellbeing of children in relation to technology, such as around deepfakes, cyberbullying and the use of mobiles in the classroom. There are also concerns about the security of data, including ensuring that student data is not sold to third parties or stored offshore.
3.85It is clear to the Committee that the Australian Government can play a leadership role in mitigating the challenges that arise from GenAI in education. The Australian Government can identify, coordinate, and help implement compulsory and voluntary guardrails. This includes a focus on the safe, responsible and ethical use of GenAI, the EdTech market—including developers, deployers and end-users—and the technology and data.
3.86The Committee encourages the Australian Government to build a solid network of policies, regulations and incentives to shape and govern the market for GenAI products for the Australian education system. It is possible to regulate GenAI products in the education sector by focussing on a system-wide approach, without requiring sector-specific regulation. Any measures should be aimed at ensuring that EdTech companies and developers are transparent and fair, and held accountable for addressing significant risks, including algorithmic bias and discrimination, data security and privacy. EdTech companies and developers should be able to respond to evidence about what constitutes high-quality educational tools to assist learning and teaching.
3.87There has been an explosion of GenAI tools, and the Committee commends the guidance being developed on how to select appropriate tools for Australia’s education settings. It is important to set strong standards that industry has to meet to operate in the Australian market, and to have robust data protection frameworks.
3.88Everyone has a role to play in safeguarding against risks, from students to educators to institutions. Take, for example, algorithmic bias. GenAI systems can produce unfair or discriminatory content and can show partiality in inappropriate contexts, perpetuating societal bias and inhibiting critical thinking. It is essential to mitigate the risks of bias, and of misinformation and disinformation, in AI-generated outputs. It is the Committee’s view that educators should be able to teach students the skills required to critique AI-generated outputs, and educational providers should undertake regular independent audits of bias in the AI systems employed within their institutions to reduce these risks.
3.89The Committee recommends that the Australian Government:
- regulate EdTech companies and developers through a system-wide, risk-based legal framework
- regulate unacceptable risks and high-risk AI systems in the education sector, mandate guardrails, and give the law extraterritorial effect
- ensure EdTech companies and developers’ products meet established standards, including through testing and independent quality assurance
- require EdTech companies and developers to share critical information about how their AI systems are trained, what data they have been trained on, and how algorithms function and affect users
- require EdTech companies to complete and provide a Gender Impact Assessment.
3.90The Committee recommends that the Australian Government work with AI developers and educational institutions to create robust data protection frameworks. This includes, but is not limited to:
- outlining students’ and other users’ rights regarding their personal data
- identifying the measures taken to protect users’ privacy
- limiting, and getting permissions for, the collection, use, and retention of students’ data, including:
- the types of data that may be collected
- that data should only be used for educational purposes
- that data be protected from unauthorised access and to have strong encryption practices in place
- where, how, and for how long data can be stored
- the purpose for retrieving data and who can access the data
- that users’ data is not stored offshore or sold to third parties.
3.91The Committee recommends that the Australian Government work with educational providers to mitigate the risks of algorithmic bias and mis- and disinformation by:
- training educators to teach students how to critique AI generated outputs
- mandating that institutional deployers of AI systems in educational settings run regular bias audits and testing
- prohibiting the use of GenAI to create deceptive or malicious content in education settings
- completing risk assessments
- for example, identifying and seeking to eliminate bias and discrimination through the data the model is trained on, the design of the model and its intended uses
- mandating ‘under the hood’ access to algorithmic information for independent researchers.
3.92The Committee recommends that the Australian Government:
- ensure that the privacy law reforms led by the Attorney-General’s Department include strengthening privacy protections for students, including minors, regarding the use of GenAI
- encourage the Office of the Australian Information Commissioner to develop an impact assessment measure which can identify the data privacy risks of GenAI tools used in education, and which includes pre-deployment measures for the implementation of GenAI tools.