Chapter 2 - Integration practices and policies

Domestic uptake of GenAI in education

2.1The adoption of generative artificial intelligence (GenAI) in the Australian education system varies widely across jurisdictions and institutions, for reasons including technical challenges, resource constraints, and differing attitudes. Initially, some Australian public and independent schools, and some universities, banned the use of GenAI, while other schools and higher education (HE) institutions embraced it.[1]

2.2Now, students and staff in many schools, TAFEs, and universities are experimenting with GenAI to perform tasks of differing complexity. Some of these institutions have begun ‘teaching practical courses to prepare students for life in an AI-driven world’.[2] The Committee heard that the most utilised GenAI tool in the Australian education system is ChatGPT, which generates text.[3] Uptake nonetheless remains uneven across educational settings in Australia.

  • Early childhood education and care (ECEC): stakeholders had difficulty identifying examples of use, indicating that there is only limited use.[4] They drew a distinction between the use of GenAI by children in ECEC and its use by educators or other staff, who could employ it as an educational tool and to reduce administrative burden.[5]
  • Schools: some jurisdictions initially banned GenAI in schools.[6] Since then, South Australia (SA)[7] and New South Wales (NSW)[8] have held pilot trials of GenAI in schools. There are basic issues around access to the technology in the Northern Territory (NT).[9] The evidence indicates that GenAI is used less by younger primary school students than by their older counterparts.[10] Grok Academy supported ‘K-12 teachers with free professional learning resources and students with free online and unplugged, self-paced cloud learning resources aligned with the national Curriculum’.[11] Some schools, like Pymble Ladies’ College (PLC), are ahead of the curve in adopting and integrating GenAI.
  • HE: evidence suggests discrepancies in the uptake of GenAI across the HE sector, ranging from well-progressed adoption to none.[12] Some institutions, like the University of Sydney and the University of Melbourne (UoM), already have their own policies and approaches towards GenAI, while other universities and affiliated student groups, such as Monash University (MU) and Monash DeepNeuron, are collaborating on how best to use GenAI in educational settings.[13] The Tertiary Education Quality and Standards Agency (TEQSA) has been stepping in to promote greater consistency.

Guidance on integration

2.3As GenAI is already being used in the Australian education system by many students and educators, the Committee heard repeated calls for consistent guidelines, policies, and guardrails to help maximise the technology’s benefits whilst mitigating its risks.[14] Educational providers and educators are also asking for support to select appropriate GenAI tools.[15]

Age suitability

2.4During the inquiry, the Committee heard mixed views about the ages at which it is suitable to integrate GenAI tools into education. These views ranged from:

  • it may be appropriate to restrict use for children in ECEC and in primary school;[16]
  • primary school students could use GenAI if there is age-appropriate training;[17] and
  • it may not be necessary to have age limitations at all.
2.5One key concern raised at both PLC and The Grange P–12 College was the risk of overreliance on GenAI and possible negative impacts on students’ development.[18] A student at The Grange P–12 College cautioned against students using artificial intelligence (AI) at a young age, stating:

If we do start implementing AI at early ages, it makes [students] think that they'll have it with them through all the stages of life and it doesn't prepare them for the real world… you have to rely on yourself and your independence.[19]

2.6There are many other considerations relevant to age suitability for the use of GenAI:

  • Greater risks exist for children, including around vulnerability and safety.[20]
  • There are many privacy risks around students’ personal data, including for profiling and grooming.[21]
  • Children need to ‘develop healthy social-emotional skills and become critical and creative learners before they experience the world of GenAI’.[22]
  • There are risks around screen time for children.[23]
  • Traditional learning methods and teaching practices retain their utility.[24]
  • Human interaction, creating relationships with other children and adults, and a play-based curriculum, are central for younger children.[25]
  • Informed consent is required, including from students themselves.[26]
  • Children who do not get parental consent would not have access to GenAI tools like their peers.[27]
2.7The Australian Human Rights Commission (AHRC) has recommended consistent national guidelines to ensure the responsible and ethical use of GenAI tools in the Australian education system, including guidance on the age at which it is appropriate for students to start using GenAI.[28] Independent Schools Australia (ISA) called for ‘age-appropriate implementation of AI tools in education [that are] evidence-based, reviewed and evaluated’.[29]
2.8The Commonwealth Department of Education’s (Commonwealth DoE) Australian Framework for Generative AI in Schools does not specify at what age students should start using GenAI. It is pertinent, however, that certain GenAI tools impose minimum age requirements.[30] For example, OpenAI’s website states that ‘ChatGPT is not meant for children under 13, and [it] require[s] that children aged 13 to 18 obtain parental consent before using ChatGPT’.[31] On those terms, primary school students should not be using it, and high school students can access it only with consent from their parents or guardians.

ECEC and HE

2.9The Commonwealth DoE advised that there are no current plans for a taskforce or a further framework—like the Australian Framework for Generative AI in Schools—for ECEC or HE.[32] However, the eSafety Commissioner and the national regulators for ECEC and HE—the Australian Children's Education and Care Quality Authority (ACECQA) and TEQSA respectively—have been active in these sectors.

2.10Regarding ECEC, ACECQA conducted a review in December 2023 into child safety arrangements under the National Quality Framework (NQF). It considered whether the NQF was fit for purpose in light of the emergence of AI.[33] The NQF provides a regulatory scheme of laws and regulations, quality standards, and approved learning frameworks, aimed at protecting children’s safety, health and wellbeing.[34]

2.11ACECQA’s review identified AI as an emerging issue and referred to guidance on GenAI risks for ECEC from the eSafety Commissioner’s GenAI Position Statement.[35] It found that approved providers and educators do not always have the confidence and skills to ensure a child-safe online environment, including regarding the risks of AI.[36] In early 2024, Education Ministers agreed in principle to the recommendations of ACECQA’s final report, and the Commonwealth DoE has indicated that the NQF will be updated.[37]

2.12It is also important to highlight that, throughout the inquiry, stakeholders had difficulty identifying examples of GenAI use in ECEC.[38] GenAI poses greater risks to children in ECEC than to older students. There could, however, be a role for ECEC educators to use it, for instance to reduce administrative burden.[39]

2.13In the HE sector, institutions have started to implement their own GenAI use policies. Some university peak bodies argued that each institution should take a localised approach, while other institutions sought clear and consistent standards across the sector.[40] TEQSA has provided guidance, as have other key bodies like the Australian Academic Integrity Network (AAIN).[41] TEQSA has been holding conferences and publishing materials online about GenAI use in higher education settings.[42] TEQSA’s Chief Commissioner wrote to all HE providers requesting action plans on how they are addressing the risks posed by GenAI, especially risks to academic integrity and to the award of degrees.[43]

2.14On 25 February 2024, the Minister for Education, the Hon Jason Clare MP (Minister Clare), released the Australian Universities Accord Final Report (the Accord), which contains forty-seven recommendations for HE reform.[44] The Accord noted the rapid development of GenAI in the HE sector, how it challenges ‘traditional approaches to teaching and assessment’, and its potential to improve research productivity.[45] Minister Clare stated that ‘the Accord will… help us build a better and fairer education system where no one is held back, and no one is left behind’.[46]

Australian Framework for Generative AI in Schools

2.15In December 2023, the Australian Government released the Australian Framework for Generative AI in Schools, which came into effect in January 2024. The framework encourages the use of GenAI in all Australian schools and aims to guide the responsible and ethical use of GenAI tools, helping students, schools, and society realise the benefits of the technology while recognising its risks. It can be used by school leaders, teachers, support staff, service providers, students, parents and guardians, and policy makers.[47]

2.16The National AI in Schools Taskforce, comprising representatives from all jurisdictions, developed this evidence-based guidance. The framework will be reviewed every 12 months or as needed, and is based on three goals: education outcomes, ethical practices, and equity and inclusion. It contains 25 guiding statements aligned to the following principles:

  • teaching and learning
  • human and social wellbeing
  • transparency
  • fairness
  • accountability
  • privacy, security and safety.[48]
2.17Some expert panel members shared differing views about the framework. Dr James Curran, Chief Executive Officer of Grok Academy, said it was ‘allowing schools to have more principled conversations about where they're going’.[49] Associate Professor Julia Powles, Director of the University of Western Australia Tech and Policy Lab, commented that it ‘needs to be a meaningful guiding framework rather than what it is right now...’.[50] Professor Leslie Loble AM, Industry Professor at the University of Technology Sydney (UTS), thought it was a ‘fabulous first step’ and supported further work. However, Professor Loble cautioned against creating an even greater workload for educators, for instance through undertaking quality assurance of GenAI tools.[51]

Implementation and GenAI tools

2.18The Commonwealth DoE advised that the taskforce is creating an implementation plan for the framework. States, Territories, and non-government school authorities are responsible for their own education systems and will need to implement it themselves. The Commonwealth DoE highlighted the need for clear expectations about the kinds of GenAI applications available to schools, and for national technical standards that schools can understand. For instance, Minister Clare recently advised that schools should not use GenAI products that sell students' data.[52]

2.19To help implement the Australian Framework for Generative AI in Schools, Education Ministers provided $1 million to Education Services Australia (ESA) to set ‘product expectations’ for GenAI tools in education, including to protect students’ data and privacy.[53] The English Teachers Association NSW added that ‘decisions about the suitability of tools could be made at scale to ensure that they are trustworthy and equitable without undermining teachers' pedagogy’.[54]

2.20Professor Nicholas Davis, Industry Professor of Emerging Technology and Co-Director at the Human Technology Institute, UTS, also highlighted the opportunity for the Australian Government to set product standards:

… there are currently no standards for efficiency, effectiveness, performance and pedagogical efficacy of ed tech and similar products in Australia. It is a fantastic opportunity for the federal government to set the standards for what is expected, including the transparency of those systems and proving that there is some theory behind them… Currently, your average school is very poorly placed to do thoughtful procurement of these systems, so advice on standards et cetera would be critical.[55]

2.21Minister Clare stated that Australia is ‘entering an age where AI has got to be part of education’.[56] There are contrasting views about who should be responsible for providing GenAI tools to schools, and how they should be rolled out. Several options for accessing GenAI tools in education are identified below.

  • Individuals and educational institutions could use publicly available GenAI tools for free. However, they raise more risks than bespoke products do. Another option would be to pay for premium subscriptions for those tools.[57]
  • Schools or State/Territory governments could run pilot programs and scale them. For example, the South Australia Department for Education (SA DFE), NSW, and PLC have been pioneering this approach.
  • The federal government could build a foundation model from scratch or work with companies to re-train an existing foundation model/large language model (LLM) to include particular inputs and filters/constraints on data.[58]
2.22Professor Loble warned that existing GenAI products are ‘not quality education products’ and that behind them sit LLMs of differing quality. Professor Loble argued that when products are procured for schools, the inclusion of upfront requirements, such as independent quality assurance, is crucial, further stating that ‘it's really important that we know that that is a product that is linked to the best evidence we've got about how students learn and what will support the professionalism and agency of teachers’.[59]
2.23The quality of GenAI education tools could also be improved if the foundation models were trained on datasets based on the national curriculum. This would promote data inputs that are relevant and local to Australia, as well as inclusive, such as being sensitive to gender and culture.[60] It could also help mitigate some of the risks associated with the outputs of GenAI tools. As TEQSA commented:

… consideration should be given the data on which AI is trained to ensure local contexts are adequately represented. This is important to avoid erasing Australian and indigenous culture in a sea of US-centric internet content. Setting down requirements for those creating AI models to be purposeful and considered about the training data can help create inclusive and diverse AI systems.[61]

2.24Professor Davis supported the idea of the government training an LLM. Professor Davis explained that it would be worth it as ‘a public good for Australia, for our neighbourhood and for our relationships to have the research and the investment in training and validating systems … [that] can be used at low cost by anyone who wants them’.[62] This option would promote equitable access to a high-quality GenAI product in the Australian school system.

2.25Dr Curran explained that it would cost over $100 million to build a foundation model. Dr Curran stated that the federal government would possibly need ‘to rely on a small number of companies with the resources to be able to build these foundational models’.[63] He further stated:

Finally, on the government platform; to be clear, when I said $100 million, that wasn't to say that I don't think we should do it. But the amounts of money we're talking about to do that are serious. Thinking about some of our other large infrastructure projects—and we should think about this like an infrastructure project on the scale of the NBN—I suspect that these projects do take longer and are far harder than we think. I think a more likely scenario is to choose a partner and say, 'We have some particular constraints on what we want in the training data.' Whether that's with OpenAI, or Amazon or anyone else, we'd say, 'We want to pay for a model to be retrained that has a much higher-quality filter on the text that you've included in the fundamental model.'[64]
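The ‘higher-quality filter’ on training data that Dr Curran describes can be made concrete with a short sketch. The example below is a minimal, hypothetical illustration only: the heuristics, thresholds, and function names are assumptions for exposition, not anything proposed in evidence to the Committee.

```python
# Hypothetical sketch of a training-data quality filter of the kind that
# could be applied before re-training a foundation model for school use.
# All terms, thresholds, and heuristics here are illustrative assumptions.

BLOCKLIST = {"gambling", "explicit"}   # placeholder unsuitable topics
MIN_WORDS = 50                         # discard fragments too short to help

def passes_quality_filter(document: str) -> bool:
    """Return True if a candidate training document passes basic checks."""
    words = document.split()
    if len(words) < MIN_WORDS:         # too short to be useful
        return False
    lowered = document.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False                   # unsuitable subject matter
    # Crude readability proxy: average word length within a plausible range.
    average_word_length = sum(len(w) for w in words) / len(words)
    return 3.0 <= average_word_length <= 9.0

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only the documents that pass the quality filter."""
    return [doc for doc in documents if passes_quality_filter(doc)]
```

Production-scale data curation combines many such rules with classifier-based filters and human review; the point of the sketch is simply that constraints on training data can be expressed as explicit, auditable checks applied before a model is retrained.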

2.26The importance of content filtering is highlighted by SA’s GenAI in schools pilot project, as described below.

Box 2.1 SA DFE pilot project

The South Australia Department for Education (SA DFE) ran a GenAI in schools pilot project. The project began in 2023 ‘with a proof of concept with Microsoft to integrate the ‘Open AI’ platform (the platform currently hosting ChatGPT) into the department’s Microsoft Azure Tenancy (private cloud).’ The trial included support for teachers and students to use it, and observations were recorded about its impacts.

The SA DFE explained that the approach of having a customised GenAI chatbot:

  • ‘allows greater control over what data or information can be accessed through the platform
  • provides the department with control over the data received through the platform
  • reduces the possibility of inappropriate content being provided, meaning it is more appropriate for teaching and learning purposes’.[65]

The SA DFE’s pilot project, which was considered a success, highlights the importance of data inputs:

From a technology perspective, the performance went well. Our guardrails were robust and there was high usage. The product was highly reliable, and the content filtering worked well. We did refine as we went along, making sure that the content filters were finely tuned at all times and were blocking what they needed to, but not blocking what they didn't need to. The educator and student experiences were positive. They reported that they actively enjoyed using EdChat and found it to be useful for both teaching and learning.[66]
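A customised chatbot of the kind described in Box 2.1 typically wraps the underlying model with input and output screening. The sketch below is a minimal, hypothetical illustration: the filter terms and the stubbed call_model function are assumptions for exposition, not details of the SA DFE deployment.

```python
# Hypothetical sketch of content-filtering guardrails around a school chatbot.
# The blocked terms and the stubbed model call are illustrative assumptions,
# not details of the SA DFE EdChat system.

BLOCKED_INPUT_TERMS = {"weapon", "self-harm"}   # placeholder blocked topics
BLOCKED_OUTPUT_TERMS = {"gambling"}             # placeholder blocked topics

def violates(text: str, blocked_terms: set[str]) -> bool:
    """Very crude keyword screen standing in for a real content classifier."""
    lowered = text.lower()
    return any(term in lowered for term in blocked_terms)

def call_model(prompt: str) -> str:
    """Stub for the hosted LLM endpoint (e.g. in a private cloud tenancy)."""
    return f"Model response to: {prompt}"

def guarded_chat(student_prompt: str) -> str:
    """Screen the prompt, call the model, then screen the response."""
    if violates(student_prompt, BLOCKED_INPUT_TERMS):
        return "That request can't be answered here. Please ask your teacher."
    response = call_model(student_prompt)
    if violates(response, BLOCKED_OUTPUT_TERMS):
        return "The response was blocked by the content filter."
    return response
```

Real deployments replace the keyword screens with classifier-based filters and log blocked requests so the filters can be retuned, consistent with the SA DFE’s account of refining its content filters during the trial.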

Integrating GenAI into curriculum

Country case studies

2.27In 2019, Singapore became the first country in South-East Asia to develop a national AI strategy.[67] Singapore’s strategy includes the use of AI in education as a national priority.[68] Singapore’s policies highlight the need for data privacy, transparency, and accountability.[69] Singapore is giving its students more agency in their learning, while still maintaining the fundamentals of education, that is, literacy and numeracy, and a strong curriculum.[70] The Singaporean Government also provides national professional development (PD) for existing and pre-service teachers to improve their comprehension and use of AI tools.[71]

2.28The Singaporean Ministry of Education is partnering with industry to develop AI tools to assist with teaching and learning.[72] The Australian Council of State School Organisations (ACSSO) commented that ‘these tools are aligned with the curriculum and are subject to rigorous evaluation to ensure they meet educational objectives’.[73] Singapore held a pilot project on personalised education through adaptive learning and assessment. The pilot project was ‘so successful in improving student educational outcomes and assisting teachers with their workload’ that the Singaporean Government has since invested in a ten-year collaboration with an EdTech company, and the National University of Singapore is also involved.[74]

2.29Finland has ‘integrated AI tools into the curriculum to enhance student personalised learning experiences’.[75] There is a focus on disinformation and building a healthy relationship with technology from K–12.[76] Students also learn about ethical considerations of AI and how to use tools safely.[77]

2.30ISA noted that other countries have similar approaches:

As of 2021, eleven countries have officially endorsed and implemented a K–12 AI curriculum, including India, China, Belgium, and South Korea with other countries such as Germany trialling pilot programs to allow teachers and students to explore the possibilities of AI in education within specific guidelines.[78]

Curricula in Australia

2.31In the domestic context, AI is being integrated into the national curriculum. The Australian Curriculum describes what students should learn in schools, as set by the Australian Curriculum, Assessment and Reporting Authority (ACARA). It is up to State and Territory government authorities to decide how to implement the Australian Curriculum, and this varies between jurisdictions.[79] Moreover, teachers decide how to deliver curriculum content through their teaching practices and activities, and this also differs.[80] There is a need to create more consistency, and the Commonwealth DoE said the focus should be on the ‘translation piece’: how the Australian Curriculum is implemented and delivered, including ‘guidance given to people around the kinds of tools and processes they might use’.[81]

2.32ACARA stated that AI and other emerging technologies are accounted for in the recently revised Australian Curriculum. Learning about AI and other emerging technologies is covered by the Foundation to Year 10 Australian Curriculum,[82] which addresses how AI works, types of AI (digital tools and AI systems), and the responsible use and applications of AI.[83] The Australian Curriculum covers fundamental knowledge and skills regarding AI through explicit content in the mathematics and technologies learning areas. It also connects to cross-curriculum priorities and other areas like science and humanities, and can be captured by teaching general capabilities, such as digital literacy, ethical understanding, and critical and creative thinking.[84]

2.33The Commonwealth DoE believed the Australian Curriculum sufficiently covered digital technologies,[85] whereas other stakeholders called for further updates to the Australian Curriculum. For instance, the Australian Academy of Technology Sciences and Engineering (ATSE) stated that ‘current content on programming and coding within the Australian Curriculum, needs to be supplemented with specific AI education’.[86] Noting that the Australian Curriculum was recently revised and already reflects AI, further updates could be made in the next revision.

2.34At the HE level, TEQSA is responsible for developing standards. Stakeholders argued that these standards will need to consider, amongst other things, the possible impacts of AI on learning outcomes.[87] Some stakeholders argued for building AI literacy into tertiary curricula.[88] The Australian Academy of Science argued for the integration of GenAI in HE, stating that:

AI literacy must be promoted to the specific curricula and be embedded and scaffolded across all user degree programs so that students can progressively advance and develop these capabilities together with the other core skills.[89]

2.35Similarly, ATSE stated:

Higher education providers, similarly, need to integrate AI skills and competencies across all courses as a core component of the curriculum. Crucially, the curriculum needs to reflect the rapid future development of AI tools and equip students with the skills they need to respond flexibly as these tools continue to develop...[90]

2.36Some HE courses will quickly become outdated and will need redesigning, requiring ‘the development of more agile systems of governance that are capable of being more responsive to changing context while upholding the integrity of the qualification’.[91] Educators will need to continue to be instrumental in designing and regularly reviewing content, adapting content that GenAI produces, and providing quality assurance.[92] The design process needs to factor in ethical and responsible uses of GenAI tools by students.[93]

Managing the use of GenAI

2.37As discussed, GenAI tools are being used across the education system in Australia and in other countries, and are being integrated into curricula through frameworks and policies. Stakeholders called for rules and guardrails to help manage the rollout of this technology, and noted the particular vulnerabilities of minors. Australia is developing its own approach, and is looking at other key jurisdictions.

Australia’s approach to safe and responsible AI

2.38Following extensive consultation, the Department of Industry, Science and Resources (DISR) released Australia’s AI Ethics Principles in 2019.[53] These voluntary principles are designed to ensure that AI is safe, secure and reliable. The principles are used by educational institutions, such as the Group of Eight (Go8) universities.[94] The Australian Framework for Generative AI in Schools was designed to align with these principles.

2.39DISR advised that it is considering what safe and responsible AI means from a regulatory perspective.[95] In 2023, under the Minister for Industry and Science, the Hon Ed Husic MP (Minister Husic), DISR released Supporting responsible AI: Discussion paper (the discussion paper).[96] The discussion paper explored whether Australia’s regulatory system was fit for purpose to deal with new AI technologies, including GenAI, and took a system-wide approach rather than a sector-specific one.[97]

2.40The paper outlines the following themes:

  • Opportunities and challenges: the safe and responsible deployment and adoption of AI will allow Australia to improve economic and social outcomes. However, there are significant risks, such as bias and misleading outputs.
  • Domestic and international landscape: Australia can be a leader in AI, and can pursue this by continuing to engage bilaterally, regionally, and multilaterally. The paper outlines key partners’ policies on GenAI.
  • Managing potential risks of AI: various options for consideration by the Australian Government include regulation, industry self-regulation, collaboration and engagement, technical standards, assurance frameworks and bans.[98]
2.41Australia does not have any AI-specific legislation.[99] DISR flagged that several incentive structures exist to promote safe products entering the market, such as those provided for in consumer laws. Another example of an incentive structure is that, if an AI product has ‘some sort of limited adverse impact, there may be redress available under Australia's existing suite of technology neutral laws’.[100]
2.42In response to its discussion paper, DISR received over 500 submissions, and heard from 345 virtual town hall attendees and over 200 roundtable attendees.[101] In January 2024, the Australian Government Interim Response (Interim Response) was released, which stated that:
  • While AI will expand Australia’s economy, there is low public trust that AI systems are being designed, developed, deployed, and used safely and responsibly.
  • Many AI applications do not pose an inherent risk that would require a regulatory response, and low-risk AI should continue to flourish unimpeded.
  • Only 33% of Australians agree that Australia has adequate guardrails for AI. There is broad consensus that voluntary guardrails are insufficient. Mandatory guardrails should apply to high-risk applications of GenAI.
  • The regulatory framework does not sufficiently address the risks presented by AI. Existing laws do not adequately prevent AI-facilitated harms before they occur.
  • The government needs to work closely with international partners to establish safety mechanisms and testing for models developed overseas that will be built into Australian applications.[102]
2.43The Interim Response committed Australia to taking a risk-based approach to AI, and to avoiding unnecessary or disproportionate burdens for businesses, the community, and regulators. The Interim Response called for consistency with the Bletchley Declaration on the opportunities and risks posed by AI, and for a human-centric approach to regulation.
2.44The Interim Response proposed the following measures to ensure responsible AI implementation:
  • undertake ongoing auditing and performance monitoring of AI systems to strengthen guardrails
  • define ‘high-risk’ AI in the Australian context
  • develop an AI Safety Standard, implement risk-based guardrails for industry, and consider watermarking or similar data provenance mechanisms
  • establish an interim expert advisory group to support the development of AI guardrails
  • reform Australia’s privacy laws
  • work with other Government agencies to address issues raised during consultation.[103]
2.45The Australian Government has committed to exploring ‘the case for mandating guardrails for the design, development and deployment of AI in high-risk settings’. DISR is consulting across government on these guardrails, including with the Commonwealth DoE.[104] Minister Husic has since stood up an expert advisory group, which will advise on immediate work on transparency, testing, and accountability, including options for AI guardrails in high-risk settings to ensure that AI systems are safe.[105]
2.46As DISR noted, the ‘high-risk’ approach aligns with that taken in European Union (EU) legislation.[106] Australia, like many countries, is looking at the EU Artificial Intelligence Act (EU AI Act), the most comprehensive regulatory framework on AI systems. DISR contends that whether that Act provides the gold standard depends on how it is implemented, and that:

… should Australia—and governments are still consulting on this and thinking about this—go down the path of creating mandatory guardrails in this space, one of the considerations will be whether, if an organisation or a product has gone through a similar level of due diligence… that should be recognised in Australia.[107]

2.47DISR pointed out that while the department’s work is not sector-specific and does not have a particular focus on education, there are intersections with the Committee’s inquiry. A key theme is the need to balance the opportunities and risks associated with AI. DISR highlighted that:

The considerations have largely been focused on what government might consider or conceptualise as high risk—which could include certain applications in the context of education—and what sort of predeployment guardrails, such as testing, risk assessments, accountability measures, reporting, transparency requirements, might apply. Also under consideration are what regulatory mechanisms are available to government…. there has obviously been some consideration of education, given that has come up in consultations.[108]

Other considerations

2.48There is currently a lack of regulation of foundation models, which affects the broader digital supply chain and the safe use and management of GenAI. KomplyAi asserted that new regulation will be required, and potentially an Act of Parliament to govern AI.[109] Kristen Migliorini, founder and Chief Executive Officer of KomplyAi, stated:

Legislation is appearing in jurisdictions around the world. I think regulation in AI is super complex. Some have introduced regulations, but, in my view, flexibility is one of the keys. AI is borderless and rapidly evolving, so Australia has a unique challenge due its place near the end of a supply chain, in many respects.[110]

2.49KomplyAi highlighted the following areas for potential regulation based on current overseas policies:

  • consideration of certain prohibited AI activities
  • exemptions for internal research and development without prejudice to commercialisation
  • treatment of open source software
  • risk classifications for intersecting educational activities and AI, such as higher-risk requirements for the use of this AI in admissions and academic assessment.[111]
2.50Tech for Social Good (TFSG) noted that there may be a ‘governance and regulatory vacuum’, as any regulatory response may lag behind the deployment of the technology.[112] The organisation emphasised the importance of establishing strong partnerships between regulators, vendors, schools, and government agencies to ‘create productive environments for consensus-building and codesign’. This collaboration, including with philanthropic organisations, can ‘bring GenAI technologies that are safe and secure to classrooms in a way that maximises their potential as an educative tool’.[113]
2.51Stakeholders put forward various regulatory approaches to the Committee.
  • Safe AI: PLC contended that government should take a ‘safety first’ approach to GenAI: ‘if it’s not safe, it shouldn’t be used’.[114]
  • Soft law: TFSG suggested that the government consider soft law approaches to regulation including industry codes, standards, model governance frameworks and official guidelines. TFSG asserted that these mechanisms can ‘provide bridging guidance between broad regulatory obligations and the specific context, allowing them to be tailored so they remain fit-for-purpose in the education sector’.[115]
  • Introducing regulation: The National Tertiary Education Union supported the development of regulatory guardrails and implementation of good practice principles that are rooted in ethical frameworks including equity, accessibility and inclusion, prevention of bias and discrimination, and transparency and accountability.[116] On the other hand, TFSG asserted that new regulatory models can be introduced to fill gaps in existing laws where soft law is insufficient.[117] TFSG asserted ‘[t]here should be a focus on the immediate gaps in knowledge, skills, and understanding to mitigate risks and encourage best practices in the short term’.[118]
  • Using existing laws: The Tech Council of Australia contended that the best way to regulate GenAI is to build upon and clarify existing laws and to participate in international standard-setting processes. This mitigates issues with a one-size-fits-all approach.[119]
  • International norms: The Tech Council of Australia also recommended that Australia takes an approach that is consistent with international norms and standards, especially from an economic perspective. If Australia creates a bespoke model that does not align with international norms, it may create barriers to investment and the deployment of GenAI technology.[120]

Privacy and copyright reform

2.52The Attorney-General’s Department (AGD) is leading privacy reforms that include considerations relevant to AI. Following the review of the Privacy Act 1988 (Cth), the Attorney-General released the Privacy Act Review Report (Review Report) on 16 February 2023.[121] If the proposed legislative changes identified in the Review Report are adopted, they could have significant consequences for operators and users of AI.[122]

2.53The AHRC noted that the Review Report includes proposals to strengthen privacy protections regarding AI, and contended that it was ‘likely that outcomes from the Review Report will directly impact privacy, security and data protection for children and certain AI tools’.[123] The Centre for Digital Wellbeing (CDW) urged the Australian Government to use the review process to create ‘a robust data protection framework that outlines the rights of students in relation to personal data as well as establishing limitations to the collection, use and retention of data of minors’.[124]

2.54AGD has also identified key issues of copyright and AI to further explore. These issues include ‘the material used to train AI models, transparency of inputs and outputs, the use of AI to create imitative works, and whether and when AI-generated works should receive copyright protection’.[125]

2.55The Australian Government stood up a Copyright and AI Reference Group (the Reference Group) in December 2023, tasked with better preparing for copyright challenges arising from AI. The Reference Group was established after the Attorney-General held a series of roundtables involving over 50 peak bodies and other organisations.

International approaches

2.56There have been significant developments globally regarding GenAI, and in relation to education. It is important to consider international approaches and best practices when examining how Australia should manage GenAI in education. Australia can learn from multilateral, regional and country-specific efforts identified by stakeholders. This includes emerging international standards from certain jurisdictions and standards-setting bodies.[126] DISR is cognisant that interoperability between jurisdictions would aid many developers and deployers of AI systems as they operate transnationally and are subject to different regulatory schemes.[127]

Multilateral efforts

2.57There are various multilateral initiatives on the management of AI that Australia supports. DISR and the Commonwealth DoE highlighted the Bletchley Declaration, which Australia signed alongside the EU and 27 other countries in November 2023.[128] The Bletchley Declaration is about AI safety and, as Amazon Web Services (AWS) articulated, ‘the need for an evidence and scientific based identification of risks relating to AI and the need for a risk and principles based approach to addressing those risks’.[129] DISR underlined the need for Australia’s evolving response to AI to be consistent with the Bletchley Declaration.[130]

2.58The United Nations Educational, Scientific and Cultural Organization (UNESCO) has been at the forefront of AI and education,[131] and cautioned that ‘the speed at which generative AI technologies are being integrated into education systems in the absence of checks, rules or regulations, is astonishing’.[132] In 2023, it conducted a global survey of 450 schools and universities, and found that under 10% of respondents had policies or formal guidance relating to the use of GenAI.[133]

2.59UNESCO released AI and education: guidance for policy-makers (2021) which suggests that ‘policymakers should strategically review how AI can transform the role of teachers and how they can prepare to work in education settings…’.[134] Some other helpful policies, resources and activities of UNESCO include:

  • ChatGPT and artificial intelligence in higher education: quick start guide (2023)
  • Recommendation on the ethics of artificial intelligence (2022)
  • The Beijing Consensus on AI and Education (2019).
2.60UNESCO has also been organising international forums on the use of AI and education since 2019.[135]
2.61Stakeholders flagged that the Organisation for Economic Co-operation and Development (OECD) has a body of relevant work. The AAIN highlighted the following OECD guidance:
  • AI language models: Technological, socio-economic and policy considerations (2023)
  • OECD Framework for the Classification of AI systems (2022)
  • Recommendation of the Council on Artificial Intelligence (2019).[136]
2.62The OECD highlighted that, given the emerging nature of GenAI in education, there was a lack of evidence of international best practices for implementation, evaluation of outcomes, and specific lessons for Australia. It outlined some of its relevant projects and initial findings.[137] Australia is participating in the OECD’s High Performing Systems for Tomorrow Phase II project, which is investigating best practice for the use of AI in secondary schools.[138] The Australian Education Union is involved with the OECD in creating policy around the use of GenAI.[139]
2.63The United Nations Children’s Fund (UNICEF) released its Policy guidance on AI for children in 2021.[140] UNICEF has also created a Learning Innovation Hub that aims to improve K–12 education worldwide by using tested EdTech, investing in pilot projects, and generating evidence.[141]

European Union

2.64As mentioned, the EU has the most comprehensive legislation on AI in the world, and DISR and the Commonwealth DoE are following its implementation closely.[142] DISR is leading work on safe and responsible AI in Australia, and is considering the ‘high-risk’ focused approach taken by the EU.[143] The EU also has other relevant work. For example, the European Commission released Ethical Guidelines on the Use of Artificial Intelligence and Data in Teaching and Learning for Educators in October 2022, which sit within the EU’s Digital Education Action Plan 2021–2027.[144] Additionally, the Council of Europe published a study on Artificial Intelligence and Academic Integrity in April 2023.[145]

2.65The European Parliament approved the EU AI Act in March 2024, which will be confirmed as law upon completion of the final steps. The EU AI Act will apply to all AI systems across all sectors that impact people in the EU. This includes AI systems built and operated from within the EU or elsewhere. That is, the EU AI Act will apply to the EU’s 27 Member States, as well as to entities with AI systems in the EU, such as Australian companies. It creates significant compliance obligations and financial penalties.[146]

2.66In summary, the EU AI Act:

  • Takes a risk-based approach: it has four categories of risk, summarised in the sketch following this list. It prohibits unacceptable risks (e.g. manipulative AI), regulates high-risk AI systems, places lighter obligations on limited-risk AI systems (e.g. companies must ensure that end-users know they are engaging with AI chatbots and deepfakes), and does not regulate minimal-risk AI systems.
  • Focuses on providers (developers) of high-risk systems: it regulates providers that plan to place high-risk AI systems on the market or put them into service in the EU, and third-country providers where the high-risk AI system’s output is used in the EU.
  • Creates rules for users (deployers) of high-risk AI systems: it applies to users in the EU, and third country users if the AI system’s output is used in the EU.
  • Regulates general purpose AI (GPAI) model providers: it imposes various obligations, such as on the data used for training, copyright, and evaluations and reporting.[147]
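The Act’s tiered structure can be summarised as a mapping from risk category to obligation. The sketch below is an illustrative simplification only: the example systems and obligations are paraphrased from the summary above, and are not a complete or authoritative statement of the Act.

```python
# Illustrative simplification of the EU AI Act's four-tier risk structure.
# Examples and obligations are paraphrased from the summary above; this is
# not legal advice or a complete statement of the Act.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["manipulative AI"],
        "obligation": "prohibited outright",
    },
    "high": {
        "examples": ["AI used in admissions or academic assessment"],
        "obligation": "regulated, with compliance obligations on providers and users",
    },
    "limited": {
        "examples": ["chatbots", "deepfakes"],
        "obligation": "transparency: end-users must know they are engaging with AI",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "not regulated under the Act",
    },
}

def obligation_for(tier: str) -> str:
    """Return the simplified obligation attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]

if __name__ == "__main__":
    for tier, details in RISK_TIERS.items():
        print(f"{tier}: {details['obligation']} (e.g. {', '.join(details['examples'])})")
```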
2.67Several stakeholders supported Australia adopting a similar approach to the EU. For instance, TEQSA approved of the EU’s risk-based approach and ‘clear human-centric regulation of current and future AI applications’.[148] The CDW commended the EU’s requirement that high-risk AI systems ‘undergo a rigorous process before entering the market… including impact assessments’.[149] The CDW asserted that the EU’s model would help address ethical concerns and bias, and encouraged Australia to consider the EU AI Act, in particular its:
  • risk-based approach
  • robust data protection framework
  • third party and independent impact and audit assessments
  • mandatory transparency data use policies and reports
  • requirements for human oversight of AI systems depending on the risk category
  • accountability measures.[150]
2.68KomplyAi encouraged Australia to draw from various jurisdictions, including the EU. It noted that the EU takes a good approach to competition and to ‘foundational models and some of the big tech providers’. It suggested Australia take a risk-based approach and ‘nominate the high-risk activities that we feel need to have a governance regime in place in terms of testing, certifications to market, labelling and transparency, and governance documentation…’.[151]
2.69Further, KomplyAi stated that DISR could consider the EU’s nomination of educational activities in the Act’s high-risk section. It explained that DISR could also potentially introduce some of the EU’s prohibitions on illegal practices, especially those relating to children. Examples of illegal activities in the education sector could include ‘AI systems that infer the emotions of natural persons in the education institution’, like facial recognition, or ‘AI systems that use covert or manipulative methods to greatly influence a user's decision making abilities’.[152]
2.70Dr Curran flagged some complexities around the interpretation of these concepts:

… education-related aspects actually straddle the dangerous, 'no, don't do it at all: dangerous' category and the high-risk category… emotion detection is in the 'not acceptable at all' category in schools, but the high-risk category includes things like high-risk testing environments that might limit your access to further education opportunities. It will be very interesting to see exactly how those get interpreted, because there are a lot of tools out there that could arguably say they're already doing some of these things and have to switch off some of these features.[153]

2.71The Regional Universities Network welcomed the EU’s focus on ‘the ethical and societal implications of generative AI’. The Regional Universities Network further conveyed that while it does not support regulation specific to HE, it does support an EU model, stating:

the need for institutions to act quickly and have flexibility in decision making regarding generative AI, and [it] would not be supportive of strict regulation specific to higher education environments. However, a broad, society-wide regulatory framework for AI such as the risk-based model proposed in the EU could help to mitigate some of the challenges and risks of the technology.[154]

United States

2.72DISR and the Commonwealth DoE are closely monitoring regulatory developments in the United States (US), especially from an interoperability perspective.[155] The Commonwealth DoE is looking carefully at the US Executive Order on the Safe, Secure and Trustworthy Development and Use of AI Technologies, issued in 2023.[156] The Executive Order mainly:

  • creates new standards for AI safety to protect against risks, such as by requiring powerful developers to share critical information with the government
  • helps consumers and workers, including through workforce training and personalised tutoring in schools. It aims to ‘shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools’
  • protects individuals’ privacy, with a special focus on children
  • advances equity and civil rights, such as by addressing algorithmic discrimination through training, technical assistance, and coordination
  • encourages innovation and competition, such as by promoting a fair, open, and competitive AI ecosystem and supporting domestic research
  • advances the country’s global leadership, such as by engaging bilaterally and multilaterally and hastening standards development and implementation.[157]
2.73The US Government introduced the AI Risk Management Framework in 2023,[158] and released an accompanying publication in 2024 to help organisations identify and manage GenAI risks.[159] In 2023, it announced the establishment of seven AI centres, with two focussed on education.[160] The US Government also released a report in 2023 on Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, which sets out guidelines for the ethical use of AI in education.[161] ACSSO explained that:

These guidelines emphasise the importance of transparency, accountability, and privacy protection. They recommend that schools and educational institutions develop clear policies and procedures for using AI tools, including informed consent from students and families and regular audits to ensure compliance.[162]

United Kingdom

2.74The first bilateral agreement for evaluating the safety of AI tools and systems was signed by the US and the United Kingdom (UK) in April 2024, building on the Bletchley Declaration. The UK Secretary of State for Science, Innovation and Technology, Michelle Donelan, asserted that AI was ‘the defining technology challenge of our generation’. A British Broadcasting Corporation article stated that the AI tech giants, which are mainly US-based, ‘are still cooperating with the concept of regulation, but regulators have yet to curtail anything these companies are trying to achieve’. There is an unwillingness among the AI tech giants to share information about the data they use to train their AI tools.[163]

2.75The UK has many other initiatives regarding AI including:

  • its Department for Education released a Departmental Statement on generative artificial intelligence in education in March 2023. It then put out a ‘call for evidence’ from educators and experts about risks and opportunities of GenAI[164]
  • creating a coordination and expert advisory model to assist regulators to understand the technology and how to enforce regulations and promote compliance by companies[165]
  • a Centre for Data Ethics and Innovation that considers the effects of AI on various sectors, including the education sector[166]
  • the Russell Group of universities, like Australia’s Go8, released principles on the use of AI in education in July 2023.[167]
2.76The UK has taken a collaborative approach, utilising working groups and communities of practice. For instance, Jisc is a long-established not-for-profit organisation that focuses on digital, data and technology issues in HE, research and innovation.[168] Jisc provides guidance to universities, instead of each institution needing to work out its own approach; Griffith University stated that this ‘is the bare minimum of what we should be doing’.[169] EdTech UK has hubs across the UK; builds communities of practice of educators, businesses and researchers; and advocates and provides advice to help close digital gaps, including in AI.[170]

Canada

2.77DISR and the Commonwealth DoE are observing Canada’s recent significant legislative developments.[171] Canada also currently lacks a regulatory framework specific to AI; however, it has proposed the Artificial Intelligence and Data Act (AI and Data Act), introduced as part of Bill C-27, the Digital Charter Implementation Act, 2022.[172] The proposed AI and Data Act was one of the first proposed national regulatory frameworks on AI, and is yet to come into effect. In the interim, Canada has provided the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems to give Canadian companies common standards.[173]

2.78Regarding the proposed AI and Data Act, the Canadian Government said:

It is designed to protect individuals and communities from the adverse impacts associated with high impact AI systems, and to support the responsible development and adoption of AI across the Canadian economy. It aligns with the EU's… AI Act by taking a risk-based approach and would be supported by industry standards developed over the coming years.[174]

2.79The proposed AI and Data Act would align with the EU’s approach to create interoperability and consistency with international best practices. As the AI and Data Act’s list of high-impact systems is proposed to be subject to amendment, AI systems in education deemed to be high-impact could be captured in future.[175] Given that ‘the previous iteration of AIDA basically miss[ed] the rise of generative AI entirely’, some proposed changes to the AI and Data Act establish distinct yet similar requirements for AI chat systems that are based on LLMs.[176] Canada has proposed a new regulator, the AI and Data Commissioner, who would have some responsibilities around education and upskilling.[177]

2.80Some stakeholders commended various aspects of the proposed AI and Data Act. DISR stated that the pre-deployment risk-assessment tests in the AI and Data Act, as well as in the EU AI Act, function as useful guardrails. DISR noted that these Acts focus on ‘built-in discrimination based on either the data that's chosen or the people that have designed or developed that AI model’.[178] KomplyAi also commented that the proposed AI and Data Act helpfully:

… looks at the effect and the impact of technology rather than regulating every widget and gadget. So you start to identify a risk profile of the activity type, the novel characteristics of the AI, the scale of impact, the type of harm, if you're engaging with more vulnerable people and the type of data that you're using.[179]

China

2.81China aims to be a global leader in AI by 2030 and is very active in managing GenAI.[180] Professor Davis advised the Committee that China has a GenAI draft law under discussion, which will soon be implemented. It is quite comprehensive, takes a rules-based (not a risk-based) approach, and sets key standards.[181]

Committee comment

2.82Since this inquiry began, there have been significant domestic and international developments in practices and policies regarding GenAI in the education system. The Committee commends the work underway by relevant Australian Government departments, including the Commonwealth DoE, DISR, and AGD, and regulators like TEQSA, ACECQA, and the eSafety Commissioner.

2.83The Committee recommends that GenAI in education be made a national priority. The Committee supports the use of GenAI tools in the Australian education system given the opportunities presented, although it recognises the significant challenges involved and need for guardrails. It is imperative that Australia forges ahead to safely and ethically maximise the benefits while mitigating the risks of GenAI in the education system.

2.84The Committee supports equity of access for all students and educators to high-quality and suitable GenAI products. The Committee considers that the best way to implement GenAI education tools in the school system is by creating and implementing guidelines and policies like the Australian Framework for Generative AI in Schools, setting product standards as ESA is doing, and integrating GenAI into the curriculum.

2.85The Committee notes that the Australian Curriculum has recently been revised and supports further revision to remain fit-for-purpose. The Committee encourages the Australian Government to further integrate personalised education GenAI tools, like study buddies, into the school curriculum and practice. The Committee recognises TEQSA’s work in promoting greater consistency in standards for GenAI in HE.

2.86Many students and staff in Australian schools, TAFEs, and universities are already experimenting with GenAI. In response to the proliferation of GenAI tools and their uptake, many stakeholders are calling for government-led collaboration to help people engage with the technology safely, responsibly, and ethically. The Committee anticipates an increase in the uptake of GenAI tools in schools nationally, following the release of the Australian Framework for Generative AI in Schools. The Australian Government can assist in various ways with the implementation of this framework, for instance through working with key partners to provide training and setting up support hubs.

2.87The Committee considered the suitability of GenAI tools for different ages of students. The Committee recognises the additional vulnerabilities surrounding children and the use of AI, such as issues around privacy and exploitation. The Committee believes that children in ECEC should not be exposed to GenAI until a framework is developed or the NQF is updated, and recognises that staff could use the technology to reduce administrative burdens. The Committee supports the foundational concepts and ethics of GenAI being introduced in primary school, and generally accepts students’ use of certain GenAI education-specific tools under supervision.

2.88Some educational providers and educators are seeking assistance in the selection of GenAI tools to use or with the provision of tools. The Committee notes ESA’s work in setting product expectations to assist schools in selecting GenAI tools and underlines the need to not select tools that store data offshore or sell data to third parties. Government procurement for schools should be designed to include requirements and standards that ensure that the product both responds to possible significant risks and is based on evidence about what constitutes a high-quality educational product.

2.89To make GenAI education tools fit-for-purpose in Australian schools, foundation models should be trained on data that is based on the Australian Curriculum. This can help make the tools relevant and local to Australia, as well as inclusive, like being sensitive to gender and culture. Such efforts can promote the benefits of the technology being realised, can mitigate some risks presented by GenAI, encourage equity of access to a high-quality GenAI tool, and serve Australia’s economic and future workforce interests.

2.90The Committee heard a range of views about who should provide and pay for GenAI tools for use in the education system, and which products to use. Individual students and educators can access free GenAI products, or pay for premium versions, but they are generally not education-specific products. Some States and schools are already running pilot programs to incorporate GenAI education tools into schools. This can feed into an evidence base about what works and the impacts. TAFEs and universities should obtain licences to quality GenAI products for their staff and students to use.

2.91There is an urgent need to create, implement, and enforce mandatory and voluntary guardrails. The Committee supports a coordinated and proactive approach, especially between Commonwealth, State and Territory governments, regulators, industry, educational institutions, educators, and international partners.

2.92Stakeholders have opposing views on possible regulatory options. Considerations include whether the Australian Government should legislate on matters relating to GenAI in education, whether there should be a system-wide or sector-specific approach, and whether obligations should fall on educational institutions and EdTech companies. Universities were vocal about retaining flexibility and avoiding strict regulation, although some called for consistent standards.

2.93The Committee recognises that DISR is leading exploratory work on regulatory approaches for safe AI. The Committee supports this work and encourages the Australian Government to consider all of the measures proposed in its Interim Response, specifically in regard to the Australian education system.

2.94The Committee notes that the Australian Government is committed to taking a risk-based approach to safe AI, and that DISR is considering what constitutes high-risk AI systems in an Australian context. The EU AI Act is the most comprehensive regulatory model in the world and offers guidance to Australian policymakers. In line with the EU legislation, the Committee agrees with stakeholders and recommends that the Australian Government regulate high-risk AI systems and unacceptable risks in the Australian education system, especially given the vulnerability of minors.

2.95There is general support for the Australian Government to draw from international regulatory approaches, such as those of Canada and the US. This is important from a pragmatic interoperability perspective, to promote harmonisation between regulatory systems given that the technology operates transnationally.

Recommendation 1

2.96The Committee recommends that the Australian Government:

  • consider making the use of GenAI in education a national priority
  • create safeguards for all users, especially minors
  • maximise the opportunities of GenAI education-specific tools and integrate such tools into the school curriculum and practice.

Recommendation 2

2.97The Committee recommends that the Australian Government work with State and Territory Governments to ensure that all Australian schools are funded to 100 per cent of the Schooling Resource Standard.

2.98This could support access to high-quality educational GenAI tools by students and educators, especially in marginalised communities.

Recommendation 3

2.99The Committee recommends that the Australian Government in conjunction with the States and Territories:

  • monitor current pilot programs and evaluate the different approaches to using GenAI education tools in schools, including as a study buddy
  • build high-quality GenAI education products that use datasets based on the curriculum and meet ESA’s product standards, informed by the learning outcomes of current pilot programs.
2.100The evaluation should include consultations with the State and Territory Governments implementing GenAI pilot projects about lessons learned, and how best to design the procurement process.

Recommendation 4

2.101The Committee recommends that the Australian Government work with key partners to promote GenAI tools that are fit for purpose, meaning they are:

  • quality education products in terms of the design and alignment with educational outcomes
  • built with a higher-quality filter to restrict the data used to train the underlying LLM
  • trained on datasets based on the Australian Curriculum, so inputs are:
    • local—reflecting the Australian context, including the curriculum and Indigenous knowledge
    • inclusive—for example, gender and disability inclusive.

Recommendation 5

2.102The Committee recommends that the Australian Government provide more support to implement the Australian Framework for Generative AI in Schools, including to:

  • expedite the taskforce’s creation of an implementation plan for the framework and ESA’s work on product standards
  • provide funding to set up virtual and physical hubs that offer expert and technical advice and support to institutions
  • in conjunction with others—provide GenAI literacy and training to leaders, teachers, support staff, students, parents and guardians, and policy makers
  • ensure that guiding statements in the framework which general educators are not qualified to implement apply instead to technical staff.

Recommendation 6

2.103The Committee recommends that the Australian Government encourage consistent guidance and uptake of GenAI:

  • in school education—by working with ACARA to integrate AI literacy across all subjects in the next curriculum review cycle, and by updating the curriculum regularly to reflect rapid technological developments and the knowledge and skills required
  • in HE—including by updating the threshold standards, recognising TEQSA’s leadership role and efforts.

Recommendation 7

2.104The Committee recommends that the Australian Government:

  • allow the use of GenAI by educators and staff in ECEC for certain purposes, such as reducing administrative burden, and defer the use of GenAI by children in ECEC until a framework is developed or the NQF is updated
  • allow students in primary school to have access to bespoke GenAI tools, but restrict certain features and build in more safeguards to make those tools age-appropriate, noting that primary school students should not have access to certain GenAI products, like ChatGPT, that have minimum age requirements.

Recommendation 8

2.105The Committee recommends that the Australian Government promote safeguards by working with:

  • the eSafety Commissioner, and resourcing the Commissioner to support education providers by giving further guidance on how to use GenAI ethically, safely, and responsibly in educational settings
  • State and Territory education departments to develop and implement ethical, safe, and responsible AI practices, and voluntary and mandatory guardrails
  • education providers and the EdTech industry to safely integrate GenAI into Australian schools, universities, and TAFEs, with appropriate internal and external support and safeguards, to:
    • realise the benefits of GenAI to educators, other staff, researchers, and students, and to Australia broadly
    • actively mitigate risks, including the potential for misuse.

Recommendation 9

2.106The Committee recommends that the Australian Government, utilising DISR’s expert advisory group:

  • identify unacceptable risks in the education sector, including placing the use of GenAI to detect emotion in schools under an unacceptable risk category, in line with the EU’s approach
  • explicitly consider the design, development, and deployment of AI systems that could be categorised as high-risk in the education sector
  • have specific regard to the vulnerability of children
  • identify pre-deployment guardrails for GenAI products for use in the Australian education system.

Recommendation 10

2.107The Committee recommends that the Australian Government work closely with key international partners:

  • including the EU, Canada, and US, to promote interoperability regarding requirements and guardrails for GenAI products
  • including non-governmental stakeholders, to share best practice, identify opportunities, and bolster the evidence base of the impacts of GenAI in education.

Footnotes

[1]Australian Academy of Technological Sciences and Engineering (AATSE), Submission 14, pp. 1–3; Independent Schools Australia (ISA), Submission 22, p. 15; Australian Human Rights Commission (AHRC), Submission 65, p. 11.

[2]Ms Delia Browne, Director, National Copyright Unit, Copyright Advisory Group, Committee Hansard, 29 January 2024, pp. 8–10.

[3]The University of Sydney, Submission 44, Appendix C, p. 1.

[4]Ms Veronica Yewdall, Assistant Federal Secretary, Independent Education Union of Australia, Committee Hansard, 11 October 2023, p. 8; Mr Chris Davern, Assistant Secretary, Strategic Policy Branch, Department of Education (DoE), Committee Hansard, 6 March 2024, pp. 4–5.

[5]Mr Davern, DoE, Committee Hansard, 6 March 2024, p. 5; Mr Brad Hayes, Federal Secretary, Independent Education Union of Australia (IEUA), Committee Hansard, 11 October 2023, p. 8; Ms Julie Birmingham, First Assistant Secretary, Teaching and Learning Division, Department of Education (DoE), Committee Hansard, 13 September 2023, p. 5.

[6]Curtin University, Submission 41, p. 3.

[7]South Australia Department for Education (SA DFE), Submission 2, p. 7; Dr James Curran, Chief Executive Officer, Grok Academy, Committee Hansard, 20 March 2024, p. 1.

[8]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, p. 1.

[9]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 2.

[10]Mr Anthony England, Director, Innovative Learning Technologies, Pymble Ladies’ College (PLC), Committee Hansard, 29 January 2024, p. 5.

[11]AWS, Submission 85, p. 5.

[12]Mr Samuel Nikolsky, Director, Wyndham Tech School, Victoria University, Committee Hansard, 13 March 2024, p. 21.

[13]Ms Raphaella Revis and Mr Leonid Shchurov, University of Technology Sydney, Committee Hansard, 30 January 2024, p. 35.

[14]Professor Nicholas Davis, Industry Professor of Emerging Technology and Co-Director, Human Technology Institute, University of Technology Sydney (UTS), Committee Hansard, 20 March 2024, p. 7.

[15]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 6.

[16]ISA, Submission 22, p. 4; Maeve, Year 12 Student, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 4.

[17]Independent Education Union of Australia, Submission 26, p. 3.

[18]Pymble Ladies’ College, Submission 93, p. 8.

[19]Maeve, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 4.

[20]Professor Leslie Loble AM, Industry Professor, University of Technology Sydney (UTS), Submission 49, p. 3.

[21]Australian Council of State School Organisations (ACSSO), Submission 25, p. 20; Australian Children’s Education & Care Quality Authority, ‘Review of Child Safety Arrangements under the National Quality Framework’, December 2023, viewed 13 August 2024, p. 27.

[22]ISA, Submission 22, p. 4.

[23]Mr Davern, DoE, Committee Hansard, 6 March 2024, pp. 4–5.

[24]Maeve, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 4; Samidha, Year 12 Student, The Grange P–12 College, Committee Hansard, 13 March 2024, p. 4.

[25]Mr Hayes, IEUA, Committee Hansard, 11 October 2023, p. 8; Mr Davern, DoE, Committee Hansard, 6 March 2024, pp. 4–5.

[26]Regional Universities Network (RUN), Submission 40, p. 3.

[27]RUN, Submission 40, p. 3.

[28]Mrs Lorraine Finlay, Human Rights Commissioner, Australian Human Rights Commission (AHRC), Committee Hansard, 4 October 2023, pp. 16–17.

[29]ISA, Submission 22, p. 4.

[30]Association of Heads of Independent Schools of Australia, Submission 82, p. 5; Australian Science and Mathematics School, Submission 31, p. 2.

[31]OpenAI Help Centre, ‘Is ChatGPT safe for all ages?’, OpenAI, viewed 13 August 2024.

[32]Mr Davern, DoE, Committee Hansard, 6 March 2024, p. 5.

[33]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 5.

[34]The Hon Jason Clare MP, Minister for Education, ‘Report into Safety in Early Childhood Education and Care Settings’, Media Release, 21 December 2023, viewed 13 August 2024.

[35]eSafety Commissioner, ‘Generative AI – position statement’, eSafety Commissioner, 15 August 2023, viewed 21 May 2024; Mr Davern, DoE, Committee Hansard, 6 March 2024, p. 5.

[36]ACSSO, Submission 25, p. 20; Australian Children’s Education & Care Quality Authority, ‘Review of Child Safety Arrangements under the National Quality Framework’, December 2023, viewed 13 August 2024, p. 4.

[37]Mr Davern, DoE, Committee Hansard, 6 March 2024, p. 4.

[38]Mr Hayes, IEUA, Committee Hansard, 11 October 2023, p. 8; Mr Davern, DoE, Committee Hansard, 6 March 2024, pp. 4–5.

[39]Mr Hayes, IEUA, Committee Hansard, 11 October 2023, p. 8.

[40]Mr Nikolsky, Victoria University, Committee Hansard, 13 March 2024, p. 21; Group of Eight (Go8), Submission 63, p. 2.

[41]AHRC, Submission 65, p. 17.

[42]Mr Davern, DoE, Committee Hansard, 6 March 2024, p. 4.

[43]Mr Davern, DoE, Committee Hansard, 6 March 2024, p. 4.

[44]The Hon Jason Clare MP, ‘Release of the Australian Universities Accord’, Media Release, 25 February 2024, viewed 30 May 2024.

[45]Department of Education (DoE), Australian Universities Accord Final Report, 25 February 2024, viewed 26 August 2024, pp. 61–62.

[46]The Hon Jason Clare MP, ‘Release of the Australian Universities Accord’, Media Release, 25 February 2024, viewed 30 May 2024.

[47]DoE, Australian Framework for Generative Artificial Intelligence (AI) in Schools, 31 January 2024, viewed 21 May 2024.

[48]DoE, Australian Framework for Generative Artificial Intelligence (AI) in Schools, viewed 21 May 2024.

[49]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, p. 1.

[50]Associate Professor Julia Powles, Director, Tech and Policy Lab, University of Western Australia, Committee Hansard, 20 March 2024, p. 5.

[51]Professor Loble, UTS, Committee Hansard, 20 March 2024, p. 8.

[52]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 2.

[53]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, pp. 1–2.

[54]English Teachers Association NSW, Submission 64, p. 6.

[55]Professor Davis, UTS, Committee Hansard, 6 September 2023, p. 1.

[56]National Catholic Education Commission, ‘Artificial intelligence here to stay says Federal Education Minister’, Media Release, 8 June 2023, viewed 13 August 2024.

[57]RMIT Blockchain Innovation Hub Researchers, Supplementary Submission 18.1, p. 1.

[58]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, p. 7.

[59]Professor Loble, UTS, Committee Hansard, 20 March 2024, p. 8.

[60]Centre for Digital Wellbeing (CDW), Submission 83, p. 12; Tertiary Education Quality and Standards Agency (TEQSA), Submission 33, p. 9.

[61]TEQSA, Submission 33, p. 9.

[62]Professor Davis, UTS, Committee Hansard, 20 March 2024, p. 4.

[63]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, p. 2.

[64]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, p. 7.

[65]SA DFE, Submission 2, p. 7.

[66]Ms Julia Oakley, Executive Director, System Performance, South Australia Department for Education, Committee Hansard, 5 February 2024, p. 3.

[67]ISA, Submission 22, p. 12.

[68]Professor Loble, UTS, Committee Hansard, 6 September 2023, p. 11.

[69]ISA, Submission 22, p. 12.

[70]Professor Loble, UTS, Committee Hansard, 6 September 2023, p. 11.

[71]ACSSO, Submission 25, p. 8; Associate Professor Kate Thompson, Associate Professor of Digital Pedagogies, School of Teacher Education and Leadership, Queensland University of Technology, Committee Hansard, 5 February 2024, p. 10.

[72]ACSSO, Submission 25, p. 8; Claire Field, Submission 70, p. 19.

[73]ACSSO, Submission 25, p. 8.

[74]Claire Field, Submission 70, p. 19.

[75]ISA, Submission 22, p. 12.

[76]Ms Carla Wilshire OAM, Director, Centre for Digital Wellbeing (CDW), Committee Hansard, 4 October 2023, p. 8.

[77]ACSSO, Submission 25, p. 8; Ms Wilshire, CDW, Committee Hansard, 4 October 2023, p. 8.

[78]ISA, Submission 22, p. 12.

[79]Australian Curriculum, Assessment and Reporting Authority (ACARA), Submission 16, pp. 3–4.

[80]ACARA, Submission 16, p. 2.

[81]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 4.

[82]ACARA, Submission 16, pp. 1–3.

[83]‘Understand this Curriculum connection: Artificial Intelligence (AI)’, Australian Curriculum, viewed 13 August 2024.

[84]ACARA, Submission 16, p. 2.

[85]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 4.

[86]AATSE, Submission 14, p. 2.

[87]National Tertiary Education Union (NTEU), Submission 52, p. 8.

[88]AATSE, Submission 14, p. 2.

[89]Professor Philip Poronnik, Chair, National Committee for Biomedical Sciences, Australian Academy of Science, Committee Hansard, 2 November 2023, p. 6.

[90]AATSE, Submission 14, p. 3.

[91]TEQSA, Submission 33, p. 4.

[92]University of Southern Queensland, Submission 6, p. 1.

[93]Ms Merryn Cagney, Co-manager, Law and Ethics Committee, Monash DeepNeuron, Committee Hansard, 9 November 2023, p. 24; TEQSA, Submission 33, p. 4.

[94]Go8, Submission 63, p. 2.

[95]Mr Lucas Rutherford, General Manager, AI Governance Branch, Department of Industry, Science and Resources (DISR), Committee Hansard, 6 March 2024, p. 11.

[96]DISR, Australia’s AI Ethics Principles, 7 November 2019, viewed 13 August 2024.

[97]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 7.

[98]DISR, Australia’s AI Ethics Principles, viewed 13 August 2024.

[99]A Lundie, ‘Uncovering the Future of AI Regulation: The Australian Privacy Act Review’, Herbert Smith Freehills, 20 April 2023, viewed 13 August 2024.

[100]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 10.

[101]DISR, Australia’s AI Ethics Principles, viewed 13 August 2024.

[102]DISR, Australia’s AI Ethics Principles, viewed 13 August 2024.

[103]DISR, Australia’s AI Ethics Principles, viewed 13 August 2024.

[104]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 7.

[105]The Hon Ed Husic MP, Minister for Industry and Science, ‘New artificial intelligence expert group’, Media Release, 14 February 2024, viewed 13 August 2024.

[106]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 7.

[107]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 10.

[108]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, pp. 7–8.

[109]KomplyAi, Submission 56, p. 4.

[110]Mrs Kristen Migliorini, Founder and CEO, KomplyAi, Committee Hansard, 29 January 2024, p. 19.

[111]KomplyAi, Submission 56, p. 5.

[112]Tech for Social Good (TFSG), Submission 32, p. 10.

[113]TFSG, Submission 32, p. 12.

[114]Mr England, PLC, Committee Hansard, 29 January 2024, p. 2.

[115]TFSG, Submission 32, p. 12.

[116]NTEU, Submission 52, p. 9.

[117]TFSG, Submission 32, p. 13.

[118]TFSG, Submission 32, p. 10.

[119]Mr Ryan Black, Head of Policy and Research, Tech Council of Australia, Committee Hansard, 11 October 2023, p. 3.

[120]Mr Black, Tech Council of Australia, Committee Hansard, 11 October 2023, p. 3.

[121]Attorney-General’s Department, ‘Review of the Privacy Act 1988’, 16 February 2023, viewed 13 August 2024.

[122]A Lundie, ‘Uncovering the Future of AI Regulation: The Australian Privacy Act Review’, viewed 13 August 2024.

[123]AHRC, Submission 65, p. 9.

[124]CDW, Submission 83, p. 4.

[125]The Hon Mark Dreyfus KC MP, Attorney-General, ‘Copyright and AI reference group to be established’, Media Release, 5 December 2023, viewed 13 August 2024.

[126]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 6.

[127]Ms Wilshire, CDW, Committee Hansard, 4 October 2023, p. 6.

[128]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, pp. 1–2; DISR, Australia’s AI Ethics Principles, viewed 13 August 2024.

[129]Ms Min Livanidis, Head of Digital Trust, Cyber and Data Policy, Australia and New Zealand, Amazon Web Services, Committee Hansard, 29 November 2023, p. 6.

[130]DISR, Australia’s AI Ethics Principles, viewed 13 August 2024.

[131]Mrs Kristen Migliorini, Founder and Chief Executive Officer, KomplyAi, Committee Hansard, 29 January 2024, p. 20; Mrs Finlay, AHRC, Committee Hansard, 4 October 2023, p. 17.

[132]S Giannini, ‘Generative AI and the future of education’, UNESCO, July 2023, viewed 13 August 2024, p. 4.

[133]AHRC, Submission 65, p. 16.

[134]DoE, Submission 48, p. 6.

[135]AHRC, Submission 65, p. 16.

[136]Australasian Academic Integrity Network, Submission 58, p. 12.

[137]Organisation for Economic Co-operation and Development, Submission 59, p. 1.

[138]DoE, Submission 48, p. 9.

[139]Mr Kevin Bates, Federal Secretary, Australian Education Union, Committee Hansard, 2 November 2023, p. 4.

[140]United Nations International Children’s Emergency Fund, Policy guidance on AI for children, November 2021, viewed 13 August 2024.

[141]Claire Field, Submission 70, p. 21.

[142]RUN, Submission 40, p. 4; Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 8.

[143]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 7.

[144]TEQSA, Submission 33, p. 8.

[145]Claire Field, Submission 70, p. 11.

[146]EY, The European Union Artificial Intelligence Act, 2 February 2024, viewed 26 August 2024, p. 15.

[147]Future of Life Institute, EU Artificial Intelligence Act – High-level Summary, 27 February 2024, viewed 13 August 2024.

[148]TEQSA, Submission 33, p. 8.

[149]CDW, Submission 83, p. 13.

[150]CDW, Submission 83, pp. 4–5; Ms Wilshire, CDW, Committee Hansard, 4 October 2023, p. 5.

[151]Mrs Migliorini, KomplyAi, Committee Hansard, 29 January 2024, p. 19.

[152]Mrs Migliorini, KomplyAi, Committee Hansard, 29 January 2024, p. 19.

[153]Dr Curran, Grok Academy, Committee Hansard, 20 March 2024, p. 1.

[154]RUN, Submission 40, p. 5.

[155]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 8; Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 2.

[156]Ms Birmingham, DoE, Committee Hansard, 6 March 2024, p. 2.

[157]The White House, FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, 30 October 2023, viewed 13 August 2024.

[158]University of South Australia, Submission 29, p. 9.

[159]National Institute of Standards and Technology, AI Risk Management Framework, viewed 13 August 2024.

[160]Professor Shazia Sadiq, Fellow, Australian Academy of Technological Sciences and Engineering, Committee Hansard, 5 February 2024, p. 15.

[161]University of Technology Sydney, Centre for Research on Education in a Digital Society (UTS CREDS), Submission 19, p. 1; Claire Field, Submission 70, p. 11.

[162]ACSSO, Submission 25, p. 7.

[163]L McMahon and Z Kleinman, ‘AI Safety: UK and US sign landmark agreement’, BBC, 2 April 2024, viewed 13 August 2024.

[164]UTS CREDS, Submission 19, p. 1; Claire Field, Submission 70, p. 11.

[165]Mr Black, Tech Council of Australia, Committee Hansard, 11 October 2023, p. 3.

[166]ISA, Submission 22, p. 12.

[167]Claire Field, Submission 70, p. 11.

[168]‘About us’, Jisc, viewed 13 August 2024.

[169]Professor Elizabeth Burd, Provost, Griffith University, Committee Hansard, 5 February 2024, p. 11.

[170]Dr Teresa Swist, Co-lead, Education Futures Studio, The University of Sydney, Committee Hansard, 30 January 2024, p. 18.

[171]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 8; Ms Birmingham, DoE, Committee Hansard, 6 March 2024, pp. 1–2.

[172]The Australian Academy of Humanities (AAH), Submission 45, pp. 2–3.

[173]Government of Canada, Artificial Intelligence and Data Act, 27 September 2023, viewed 13 August 2024.

[174]Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, 13 June 2023, viewed 13 August 2024.

[175]K Bennett, et al, ‘AI regulation in Canada: What’s happening now?’, DLA Piper, viewed 13 August 2024.

[176]AAH, Submission 45, pp. 2–3; K Bennett, et al, ‘AI regulation in Canada: What’s happening now?’, DLA Piper, viewed 13 August 2024.

[177]AAH, Submission 45, pp. 2–3.

[178]Mr Rutherford, DISR, Committee Hansard, 6 March 2024, p. 8.

[179]Mrs Migliorini, KomplyAi, Committee Hansard, 29 January 2024, p. 19.

[180]Ms Vicki Thomson, Chief Executive, Group of Eight Universities, Committee Hansard, 20 September 2023, p. 1.

[181]Professor Davis, UTS, Committee Hansard, 6 September 2023, p. 8.