Frameworks and initiatives
2.1 This chapter provides an overview of the current legal landscape that underpins the digital transformation of workplaces. Although many states and territories have legislation and regulatory bodies to enforce compliance across workplaces, this chapter focuses on Commonwealth legislative and regulatory frameworks. It considers Australian Government-led reforms and initiatives to support safe and responsible digital transformation. The chapter also considers stakeholder views on regulatory reform, and the need to build public trust in workplace technologies.
The legislative and regulatory landscape
2.2 The digital transformation of workplaces cuts across various legislative and regulatory frameworks, including:
- workplace and industrial relations
- work health and safety (WHS)
- privacy and surveillance
- business and competition
- intellectual property (IP) and copyright
- industry-specific laws.
Workplace relations
2.3 The Fair Work Act 2009 (Cth) (Fair Work Act) is the principal legislative mechanism governing Australian workplaces and workplace relations. The Fair Work Act empowers the creation of employment standards and agency bodies, including:
- the National Employment Standards
- modern awards for specific industries and occupations
- enterprise agreements
- the Fair Work Commission (FWC)
- the Fair Work Ombudsman
- the Fair Work Division of the Federal Court
- the Federal Circuit and Family Court.
2.4 The FWC is Australia’s workplace relations tribunal and registered organisations’ regulator. It can decide disputes about whether the introduction of AI or ADM constitutes a major workplace change that may trigger consultation requirements. The FWC has already determined that changes to production, programming, organisation structure, and certain technologies are major changes for the purpose of consultation obligations. The National Employment Standards, set out in the Fair Work Act, are the minimum employment entitlements for all employees. These include flexible working arrangements, leave entitlements, maximum weekly hours, and notice of termination and redundancy.
2.5 The Fair Work Ombudsman, the independent regulator of workplaces, enforces the compliance framework under the Fair Work Act and the standards set by the FWC. This is typically done through issuing compliance notices for breaches of regulations, including underpayment or non-compliance with employment conditions.
2.6 In addition to legislative frameworks, there are industry-specific modern awards and workplace-specific enterprise agreements. These contain certain obligations to consult with workers where major changes will have a significant effect on them.
2.7 The workplace relations framework recognises the need to manage the power and control imbalance between employers and workers. The UNSW-UTS Trustworthy Digital Society noted that compliance with, and improvement of, this framework is a ‘complex and arduous task’. It requires constant collaboration between policy makers, regulators, organisations, and workers. This complexity is also partly due to all states and territories, except Western Australia, having referred their workplace and industrial relations powers to the Commonwealth. The various exclusions and inclusions adopted by states and territories mean this framework operates differently across jurisdictions.
Work health and safety
2.8 The Work Health and Safety Act 2011 (Cth) provides for a balanced and consistent national framework to secure the health and safety of workers and workplaces. In addition, the following anti-discrimination legislation makes discrimination in employment based on protected attributes unlawful:
- Sex Discrimination Act 1984 (Cth)
- Racial Discrimination Act 1975 (Cth)
- Age Discrimination Act 2004 (Cth)
- Disability Discrimination Act 1992 (Cth).
2.9 The national policy to improve WHS and workers’ compensation arrangements is developed by Safe Work Australia, a government statutory agency. Safe Work Australia also led the development of the model WHS laws in 2011 to harmonise WHS laws across all states and territories. These comprise the model WHS Act, Regulations, and Codes of Practice.
2.10 The model WHS laws, which have been adopted across all jurisdictions except Victoria, and anti-discrimination laws are technology-neutral to accommodate ongoing digital transformation. State and territory regulators hold powers to enforce compliance with WHS laws. This is especially necessary to address physical and psychosocial harms arising out of the introduction of technology in workplaces.
2.11 Anti-discrimination laws mostly rely on individual complaints. The problem with relying on individuals to raise issues of discrimination in the workplace, as identified by Associate Professor Alysia Blackham, is that ‘few people do, even when they know they have been discriminated against’. This means there are limited test cases to provide clarity regarding the application of WHS laws to technology-enabled workplace discrimination.
Privacy
2.12 The collection and handling of data in Australia carries with it various privacy protection obligations. Australian workers and the public are subject to a broad privacy law framework, which differs across states and territories.
2.13 The Privacy Act 1988 (Cth) (Privacy Act) promotes and protects the privacy of individuals. This is primarily through regulating how certain entities, including Australian Government agencies, organisations with an annual turnover of more than $3 million, and some other organisations, handle sensitive and personal information. The Privacy Act also establishes the Australian Privacy Principles, which provide guidance on how to best protect this information.
2.14 The Office of the Australian Information Commissioner (OAIC), the privacy and information regulator, provides entities with guidance on how the Privacy Act obligations apply to them. The OAIC also has a complaints function, through which people can report breaches of their privacy.
2.15 Small and medium-sized enterprises (SMEs) make up approximately 95 per cent of Australian businesses, employing a majority of Australian workers. These workers are not protected by the Privacy Act because SMEs are exempt from its application. Employee records, which refer to personal information relating to a worker’s employment, are also exempt from protections and obligations under the Privacy Act.
2.16 The Australian Privacy Principles do not protect individuals against being subjected to ADM. They also do not require consent or disclosure where an entity is using AI in the collection of data other than sensitive information. Consent is, however, required where information previously collected is to be used for a secondary purpose that an individual would not reasonably expect.
2.17 Although the Privacy Act does not cover employee records, the Fair Work Act does place obligations on employers handling employee records. The focus of these provisions, however, is on record keeping for compliance with workplace laws as opposed to protecting the privacy of employees’ information.
Surveillance
2.18 The surveillance law framework is significantly fragmented across Australia, without an overarching Commonwealth legislative system. Only New South Wales, Victoria and the Australian Capital Territory have specific legislation to govern how organisations can monitor and conduct surveillance in their workplaces, often on a notice and consent basis. These laws include restrictions on surveillance in sensitive locations, such as bathrooms.
2.19 The QUT Centre for Decent Work noted the inconsistency of surveillance laws across jurisdictions:
Current surveillance laws in Australia provide a complex system with little uniformity across states, varying levels of application for the workplace, and the potential for error in interpretation. This is despite recommendations spanning ten years to replace existing state surveillance laws with a Commonwealth Act that is technology-neutral.
Business and competition
2.20 The Competition and Consumer Act 2010 (Cth) regulates how businesses operating in Australia deal with customers, competitors and suppliers.
2.21 The Australian Competition and Consumer Commission (ACCC) is the national competition regulator, whose work ‘prevent[s] the concentration of power and wealth that distorts society’. The ACCC conducts extensive inquiries into digital transformation, including recent inquiries exploring the risks and opportunities of AI and ADM.
2.22 The Commonwealth Ombudsman has developed the Better Practice Guide to support Australian organisations with the uptake of automated decision making. This guide assists organisations to identify potential negative outcomes, mitigate any associated risks, and assess the effectiveness of digital technologies.
Intellectual property and copyright
2.23 In Australia, IP and copyright laws have long been considered ‘clear, simple, and fit for purpose’ to preserve the economic rights that ‘protects, and monetises Australia’s creative industries’. As Ms Grainne Brunsdon, Chief Operating Officer, Screen Australia, highlighted, this framework is currently being tested in the context of AI.
2.24 The Copyright Act 1968 (Cth) is the foundation of copyright law in Australia. Copyright gives exclusive rights, or ownership, to the copyright owner, who charges fees or royalties in exchange for permission, or a licence, to use their copyright works. Copyright is one mechanism that provides for the protection of IP in relation to creative works.
2.25 There are existing exceptions to copyright infringement, including use:
- for a specified purpose (for example, reporting the news)
- by particular sectors (for example, libraries, educational institutions, government)
- of particular types of copyright material (for example, taking photos of public sculptures).
These exceptions do not permit the theft or copying of creative works for the development and training of AI systems.
2.26 Extensive protection mechanisms exist to allow creatives to seek remedies where their copyright has been breached. As stated by APRA AMCOS, these mechanisms are administered by a collection of ‘mature and established collecting societies’. It is unclear whether these remedies are sufficient to counteract the machine learning capabilities of AI, which continuously replicate creative works as part of algorithm training.
Reforms and initiatives
2.27 In response to ongoing digital transformation, the Australian Government is progressing several legislative, regulatory and policy reforms and initiatives. The primary focus is to ensure that the rollout of digital tools and AI systems across Australian workplaces is carried out in a fair, ethical, safe, and responsible manner.
2.28 The Australian Government acknowledges that AI presents broad opportunities and risks for Australian workplaces. As noted by the Digital Transformation Agency (DTA), the ‘Australian Government is committed to embracing AI while acting as an exemplar in transparency, risk management and governance in the adoption of this technology’. Recognising that AI is a pivotal component of the digital transformation of Australian workplaces, the Australian Government has committed budget funding to support the uptake of AI, including:
- 2023–24 Budget: $17 million to support deployment of AI by SMEs
- 2024–25 Budget: $4.2 million to clarify and strengthen existing laws addressing risks and harms with AI.
Department of Industry, Science and Resources
2.29 DISR is leading the Australian Government’s efforts to ensure that AI systems are being rolled out in different settings safely and responsibly. It is taking a risk-based approach, which would allow low-risk AI development and use to continue freely, while high-risk development and use would be more strictly regulated.
2.30 DISR developed Australia’s AI Ethics Principles, which are voluntary principles designed to prompt organisations to consider the impact of AI systems. The principles are designed to ensure AI systems benefit humans and society more broadly.
2.31 In 2023, DISR released its Safe and Responsible Use of AI Discussion Paper. It sought community and industry views on how opportunities for AI can be leveraged safely and responsibly while mitigating risks. Public consultation found that existing regulatory frameworks are not fit for purpose to respond to the distinct risks arising from AI.
2.32 The Australian Government’s interim response to the Safe and Responsible Use of AI Discussion Paper committed to action to:
- deliver regulatory clarity and certainty
- support and promote best practice for safety
- ensure government is an exemplar in the use of AI
- engage internationally on how to govern AI.
2.33 DISR also established the AI Expert Group, made up of 12 appointees across industry, academia, and law. The AI Expert Group is advising the Australian Government on possible definitions of high-risk use of AI that would attract new mandatory guardrails, and on possible regulatory options.
2.34 Clarity about high-risk settings is important for the fair, ethical, and safe use of AI. As Mr Anthony Murfett, Head of Division, Technology and Digital, DISR, detailed:
We’re very mindful of the opportunity, about taking that principles based approach versus prescription, and it is about that flexibility. So if government decides to implement mandatory guard rails or regulation there is flexibility, and it also comes down to good legal design, to think through how we put that in place. The proposal paper that’s being developed will explore trying to define high risk, the level of prescription, so we can give certainty. If there’s no certainty, it’s harder for businesses and consumers to know what they need to be mindful of.
2.35 Since appearing before the Committee, DISR released its Proposals Paper for introducing mandatory guardrails for AI in high-risk settings. This proposals paper sets out examples of how AI can be used safely and responsibly in high-risk settings. DISR sought public consultation on the definition of high-risk, the guardrails, and regulatory options for mandating the guardrails. Consultation will inform the next steps taken by DISR to ensure AI is appropriately regulated. The proposed mandatory guardrails include that organisations developing or deploying high-risk AI systems:
1. establish, implement, and publish an accountability process including governance, internal capability, and a strategy for regulatory compliance
2. establish and implement a risk management process to identify and mitigate risks
3. protect AI systems, and implement data governance measures to manage data quality and provenance
4. test AI models and systems to evaluate model performance and monitor the system once deployed
5. enable human control or intervention in an AI system to achieve meaningful human oversight
6. inform end-users regarding AI-enabled decisions, and interactions with AI and AI-generated content
7. establish processes for people impacted by AI systems to challenge use or outcomes
8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks
9. keep and maintain records to allow third parties to assess compliance with guardrails
10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.
2.36 DISR used its Business Research Innovation Initiative to encourage SMEs to use AI to improve their compliance with modern award obligations. This year’s initiative had the highest number of applicants. This may indicate an increased appetite among SMEs to take up AI to improve compliance and market competitiveness.
Department of Employment and Workplace Relations
2.37 In 2021, DEWR worked with government agencies, employer bodies, unions and technology sector bodies to co-design the Regulatory Technology Roadmap for modern awards (the Roadmap). The Roadmap identifies opportunities for government to support the uptake of regulatory technology (RegTech) solutions to enhance compliance with modern awards. Progress made on specific Roadmap initiatives is outlined in Figure 2.1 below:
Figure 2.1: Regulatory Technology Roadmap for modern awards

Source: DEWR, Submission 3, p. 8.
2.38 DEWR provided further support for Australian businesses and employers through the development of the Australian Digital Capability Framework (the ADC Framework). The ADC Framework, based on the European model, uses common language to identify, classify, and describe the digital skills required of the Australian workforce. The core digital skills focus on information and data literacy. The ADC Framework is available for all employers and the national skills system.
2.39 In 2024, the Australian Government progressed significant reforms to the Fair Work system via the Fair Work Legislation Amendment (Closing Loopholes No. 2) Act 2024 (the Closing Loopholes Act). The Closing Loopholes Act establishes protections for gig economy workers and other employee-like workers, and empowers the Fair Work Commission to:
- set minimum standards for employee-like workers performing digital platform work
- resolve disputes around unfair deactivation from digital labour platforms consistent with the Digital Labour Platform Deactivation Code
- allow digital labour platforms and organisations representing employee-like workers to make collective agreements.
2.40 The Digital Labour Platform Deactivation Code, to be made by the Minister responsible for administering the Fair Work Act, must provide for matters including rights of response relating to procedural fairness.
2.41 The Closing Loopholes Act also established the right to disconnect. This provides employees with the right to refuse to monitor, read or respond to contact or attempted contact from their employer or another person if the contact is related to their work. The FWC is currently working to include this right in all modern awards.
Digital Transformation Agency
2.42 The DTA is leading the Australian Government’s digital and ICT transformation, and provides strategic, policy, and expert investment advice for government agencies. In July 2023, the DTA worked with DISR to co-design and release the interim guidance on government use of public generative AI tools (the guidance). The guidance applies to Australian Public Service (APS) employees using publicly available generative AI tools at work, and draws on principles of fairness and human-centred values.
2.43 The guidance provides that APS employees must be able to explain, justify and take ownership of advice and decisions made using AI, and must not input any classified, personal, or otherwise sensitive information into public generative AI systems, to avoid inadvertent disclosure. It emphasises the importance of avoiding transposing bias in training data into AI processes, to mitigate against biased outcomes.
2.44 The DTA continued its work with DISR by co-leading the AI in Government Taskforce, established for 12 months until July 2024. The taskforce was tasked with exploring the opportunities for innovation and use of AI in government, ensuring that the use of AI in government is safe and responsible, and developing guidance and trials based on Australia’s AI Ethics Principles.
2.45 In late 2023, the DTA began coordinating a six-month Australian Government pilot trial of Copilot for Microsoft 365, a generative AI platform embedded into Microsoft 365 tools. The trial, which involved 60 government agencies, focused on how to enhance the work of the public service without jeopardising the accountability of decision makers. As Ms Lauren Mills, Branch Manager, Artificial Intelligence, DTA, stated:
No matter what technology we use, whether it’s AI or other technology, we are accountable for those decisions. Maintaining that concept of having a human in the loop in all decision-making and that continuity of accountability that rests with us has been a strong focus of our work.
2.46 The DTA has also piloted the Australian Government AI Assurance Framework. The AI Assurance Framework is designed to guide government agencies through impact assessments against Australia’s AI Ethics Principles for specific AI uses. It is designed to complement and strengthen existing frameworks and practices, and leverages DISR’s Safe and Responsible Use of AI consultation. Activities undertaken under the AI Assurance Framework are aligned with a whole-of-economy approach, and with the cross-jurisdictional work of the Data and Digital Ministers Meeting to develop a consistent national framework for assurance of AI use in government.
Attorney-General’s Department
2.47 In 2022, the Attorney-General’s Department (AGD) began a comprehensive review of the Privacy Act. It explored the efficacy of legal obligations regarding the protection of personal information, whether legal recourse should be strengthened for breaches of privacy, and the effectiveness of enforcement powers and certification schemes.
2.48 The Australian Government’s response to the review focused on recommendations to enhance transparency, individuals’ rights to information about ADM, and gaps in the legislative framework that undermine obligations to protect individuals’ privacy. AGD is progressing legislative reforms to enact these recommendations, including:
- updating the definition of personal information to ensure information generated using technology is captured
- establishing a fair and reasonable test to apply to all entities, which requires any collection, handling or use of data to be necessary for business function
- creating new individual rights, including the right to meaningful information where ADM has significant effects, the right to object to the collection, use, and disclosure of personal information, and the right to seek erasure of personal information
- enhancing privacy protections in the workplace context to counterbalance the existing employee records exemption
- developing specific obligations for SMEs proportionate to the risk they pose
- establishing a specific requirement to seek consent to trade in individuals’ personal information, and prohibiting the trading of children’s information
- establishing a requirement for entities covered by the Privacy Act to stipulate maximum and minimum retention periods for any personal information held
- creating a tort of serious invasion of privacy that would include intrusion upon seclusion, would apply broadly to include the private sector, and would not be limited to misuse of personal information
- increasing the range of civil penalties that may be enforced by the OAIC for breaches of the Privacy Act.
2.49 As a key feature of the Privacy Act reforms, enacted in part through the Privacy and Other Legislation Amendment Act 2024, the fair and reasonable test would consider whether an entity’s collection, use, and disclosure of personal information is reasonably necessary for a business to function. Ms Virginia Jay, Acting Assistant Secretary, Privacy Reform, AGD, explained:
Factors that would be relevant to a ‘fair and reasonable’ consideration would be what a reasonable person would expect the risk of adverse impact or unjustified adverse impact or harm, whether the collection intrudes upon the personal affairs of the individuals to an unreasonable extent and whether there are less-intrusive means of achieving the same ends at a comparable cost with comparable benefits.
2.50 The fair and reasonable test reflects community expectations regarding ongoing and implicit consent in the handling of personal information. This is particularly so where information could be used for secondary purposes that a person would not reasonably expect.
2.51 Despite SMEs making up the vast majority of Australian businesses, they are exempt under the Privacy Act. AGD is looking to develop specific obligations under the Act for small businesses proportionate to the risk they pose. SMEs will need assistance to get up to speed with any new privacy obligations around the collection, storage, and use of employee data. This is also important given incidents of data theft from organisations.
2.52 In addition, the Australian Government may be able to support SMEs in meeting privacy obligations by helping them access and use tools like RegTech to improve regulatory compliance, providing education and training for employers, establishing hubs with centralised assistance and expertise, and offering AI services on a pay-as-you-go basis instead of as a purchased product.
eSafety Commissioner
2.53 The eSafety Commissioner is working with industry to enable the adoption of a Safety by Design approach to the development of tech applications and platforms. This encourages the identification of possible harms and implementation of risk-mitigating and transparency measures. This approach is built on the following guiding principles:
- service provider responsibility: applications and platforms are responsible for user safety
- user empowerment and autonomy: users are autonomous and empowered with safety tools
- transparency and accountability: applications and platforms are transparent and are accountable for their actions.
2.54 The eSafety Commissioner has developed the Women in the Spotlight program, which provides resources and training to respond to gendered online abuse and workplace technology-facilitated sexual harassment. This program elevates and protects women whose work requires an online presence, and offers social media self-defence training to enhance safe and effective social media use.
2.55 The eSafety Commissioner also has regulatory powers to force the removal of image-based abuse materials, including sexualised deepfakes, shared without consent. It also has a cyber abuse reporting scheme for adults, and encourages information-sharing by workplaces for reporting and escalation of technology-facilitated abuse.
Treasury
2.56 In August 2023, the Treasury developed the Measuring What Matters framework, which identifies the key indicators for a prosperous Australia. One key indicator is digital preparedness. According to the Australian Digital Inclusion Index data, 23.6 per cent of Australians remain digitally excluded, with 9.4 per cent of the population remaining highly digitally excluded. Digital exclusion is more prevalent among marginalised cohorts, including First Nations people, people with disabilities, people living in public housing, people who have not completed secondary school, and people over 75 years old.
Regulating technology
Stakeholder perspectives
2.57 There are ongoing government efforts to ensure that appropriate safeguards are developed and implemented to allow for the safe and responsible digital transformation of workplaces. However, stakeholders have mixed views on whether existing legislative and regulatory frameworks are sufficient, and on whether further regulation is even desired.
2.58 Some warn against the risks of overregulating. Proponents of this position include some employers, employer associations, agencies, and technology developers. They assert that existing frameworks capture these emerging technologies and are capable of successfully mitigating risks. Further regulation could stifle the benefits of AI and ADM, including productivity gains and innovation opportunities for Australian workplaces. It could also create excessive barriers for businesses seeking to navigate their compliance obligations in an already complex environment.
2.59 Mr Bran Black, Chief Executive Officer, Business Council of Australia, emphasised that overregulation can be undesirable:
in the first instance, [overregulation] can stifle innovation, and in the second instance, it’s so prescriptive that it misses some things that should properly be classified as high risk.
Mr Black argued that Government support should instead be provided to businesses to enhance their uptake of AI and ADM, and enable retraining and upskilling of workers.
2.60 Other stakeholders argue that existing frameworks and protections do not mitigate the broad harms arising from AI and ADM, and that these risks can be substantial and far-reaching. This is the dominant position among workers, unions, industry researchers, and academics. Regulation is considered the most viable path to create certain and consistent change and protection. Among other things, it can mitigate data and privacy breaches, and avoid the reinforcement of social biases.
2.61 As research experts on Australia’s gig economy noted, ‘from an international comparative perspective Australia is lagging behind when it comes to providing workers, and its citizens more broadly, with appropriate regulatory protections’.
2.62 A primary concern is that the failure to institute robust regulatory guardrails specific to AI and ADM will give employers and technology developers unilateral power to drive changes to work. It is feared that workers will not be duly regarded despite their vulnerability to harms arising out of rushed or poor design and implementation of these technologies.
2.63 The Victorian Trades Hall Council warned against the unfettered discretion of employers to introduce these technologies:
Australia is embarking on an employer-led arms race. Workplace technologies conform to path dependence – once installed, they are very difficult to wind back. When a rival firm is undertaking task breakdown, surveillance, automation and work intensification, others will be pushed to adopt these technologies to survive. Failure to counterbalance these market forces with worker voice and regulation is setting Australia on a dangerous path that we are running out of time to correct.
2.64 Accenture, a consulting firm, argued that achieving what it regards as a Net Better Off workplace, where workplace technology leaves both business and workers better off, requires substantial investments in training, leave entitlements, and flexible arrangements when introducing technology. These investments in human capital ‘should exceed technology investments made by as much as nine-fold’.
Public views
2.65 Another issue is that Australians are among the most sceptical of AI in the world, which can make regulation more important. Research conducted in 2024 by the Institute for Human-Centered AI found that Australians have the highest rate of scepticism, with 69 per cent of Australians reporting feeling ‘nervous’ about AI use and services, up 18 per cent since 2022. Research by the Lowy Institute found that 52 per cent of Australians believe the potential risks of AI outweigh any potential benefits.
2.66 While Australian businesses have been given ‘relatively free reign to impose AI’ and ADM in their workplaces, workers remain largely sceptical of business preparedness for this change. In a survey conducted by the Future Skills Organisation, 71 per cent of respondents said they believe their employers ought to provide greater support for generative AI adoption at work. The Institute for Human-Centered AI also found that only 38 per cent of Australians trust companies to appropriately protect their data.
2.67 Persistent public scepticism of AI is exacerbating stunted and inconsistent uptake across industries. DISR attributes low adoption rates of AI in the workplace to low public trust, particularly in environments where use of AI could be considered high-risk. Allaying public concern about AI and ADM requires improving trustworthiness and social acceptance. As Dr Emmanuelle Walkowiak stated, ‘the role of collective negotiation in the deployment of technologies is central to its social acceptance’.
2.68 The UNSW-UTS Trustworthy Digital Society explained that regulation coupled with education can increase public confidence in the adoption of these technologies:
AI regulation and frameworks can only build trustworthiness if they are clearly communicated and explained to the community, including communities who have historically been excluded or disadvantaged by transitions to new workplace technologies.
2.69 Regulators can also help to build public trust in technology. The Business Council of Australia referred to Mr Stephen King, Commissioner at the Productivity Commission, who stated:
One of Australia’s advantages is the strength and expertise of its regulators. They can play a key role, showing how AI is covered by existing rules, issuing guidelines, working with industry to evaluate risks, and running test cases where coverage is unclear. This process will help build trust in AI as consumers see that broad rules of business behaviour are largely unchanged. New regulation is only useful if existing regulations … are inadequate.
2.70Overcoming public concern will prove difficult if efforts are not able to match pace with the constant evolution of AI and ADM. As Dr Asif Gill, Professor and Head of Discipline, Software Engineering, UNSW-UTS Trustworthy Digital Society, explained, ‘trustworthiness is not something that can be built, it can be demonstrated’, but this takes time and collaboration, which is not always afforded.
Reform and emerging technology
2.71Globally, governments have struggled to regulate the technology industry in a way that establishes effective and proportionate safeguards for users while preserving an environment that fosters innovation for developers. Striking this balance is difficult.
2.72This is partly due to the rapid speed at which technology is developing. As Mr Benjamin Fairless, a gig-economy transport worker, shared:
it has often been the case that parliament and government are behind the eight ball when it comes to technology. That happened when Uber was introduced into Australia; it took regulators some time to catch up.
2.73Governments have historically favoured a light-touch approach to regulating emerging technology for fear of inhibiting progress and innovation. This approach can overestimate the benefits and underestimate the harms arising from the use of these technologies.
2.74Mr Bernie Smith, NSW Branch Secretary and Treasurer, Shop, Distributive and Allied Employees’ Association, recalled the risk of allowing unregulated development of technology with social media:
When social media took off, it was heralded as a potential golden age of participatory democracy, social connection and access to information but only if it was allowed to expand unfettered by regulation. Sadly, the reality of what has unfolded is very different to that. It’s a mixed bag, including, sometimes, a digital sewer of misinformation and exploitation—an unregulated digital wild west. We should not make the same mistake or fall for the same line that the only way AI can be implemented is if it’s unregulated. This is not true.
2.75The difficulty for regulation of AI and ADM is the fact that they are already commonplace for Australian workers. Research found that 84 per cent of knowledge workers in Australia are already using generative AI in their workplaces, and 90 per cent of employers in Australia expect to rely on AI-related solutions in their workplaces within the next five years. Research by the Tech Council of Australia found that there are already 33,000 people working in AI-related roles, with this number expected to grow to 200,000 by 2030.
2.76Nevertheless, the Australian Government is joining other governments around the world in heeding warnings against the unfettered development of this technology. As demonstrated in figure 2.3, prepared by Accenture, legislative reform is underway globally to establish strict guardrails around the development and use of AI:
Figure 2.3Number of AI-related bills passed into law internationally in 2016-23 and 2023

Source: Accenture, Submission 21, p. 9.
Even though these technologies will continue to ‘routinely outpace Government responses, regulations, and outcome auditing’, any effort to regulate AI and ADM in the workplace, no matter the pace, will have lasting implications for businesses, workers, and the public.
Committee comment
2.77The Committee recognises that the current regulatory landscape in Australia is not sufficient to deal with the impacts of ADM and AI on workplaces. The rapid digital transformation has exposed some concerning regulatory gaps in worker protections. It is critical that reforms relating to the use of technology in the workplace capture all workplaces and all workers.
2.78The complex web of Commonwealth, state, and territory legislative and regulatory frameworks presents challenges. Better consistency across jurisdictions is essential to ensuring clarity and certainty of compliance obligations for workers and employers.
2.79The Committee mainly received evidence related to Commonwealth regulation, including possible reforms to the Fair Work Act 2009 (Cth), National Employment Standards, modern awards for high-risk industries, and the Privacy Act 1988 (Cth).
2.80In response to the rapidly evolving digital transformation, the Australian Government is progressing many reforms and initiatives. The primary focus is to ensure that the rollout of digital tools and AI systems across Australian workplaces is carried out in a fair, ethical, safe, and responsible manner.
2.81The Committee acknowledges the important work underway by many departments and agencies. The Committee acknowledges DTA’s work to make the APS an exemplar in transparency, risk management and governance for the proper use of AI in the workplace. Efforts regarding the digital transformation of workplaces by DEWR, the eSafety Commissioner and the Treasury are also commended.
2.82The digital transformation leverages big data. The public has become more conscious of how their data is being handled, and has expressed concern about possible misuse. AGD’s privacy law reforms could lead to better protections for employee data.
2.83DISR is leading whole-of-government efforts to regulate AI. It found that regulatory frameworks in Australia are not fit for purpose to respond to the distinct risks arising from AI. The Committee notes that the Australian Government is committed to taking a risk-based approach to safe and responsible AI, and that DISR is considering what constitutes high-risk settings and uses of AI systems in an Australian context. DISR’s AI Expert Group is advising the Australian Government on possible reform.
2.84DISR’s policy proposal paper sought public feedback on definitions of high-risk AI, potential mandatory guardrails, and possible regulatory options. The Committee considers that high-risk AI systems in workplaces should be specifically included. One example is to classify AI systems used for employment-related purposes as high-risk, consistent with the European Union’s approach.
2.85The Committee supports DISR’s proposed mandatory guardrails, which are based on extensive public consultation. They are preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle. Under this framework, deployers of AI could include employers.
2.86Employer decision making, which can be informed by technology, can have life-altering impacts on workers and their families, including termination of employment. As demonstrated by the Robodebt scheme, ADM remains largely incapable of replicating the ethics and empathy found in human decision making.
2.87Whether through the Fair Work Act or another regulatory mechanism, employers must be held accountable for ADM or AI-driven decision making. This would help counter the development of technologies that prioritise profit and productivity over the human costs.
2.88The Committee heard mixed views towards regulation. On the one hand, further regulation could stifle innovation relating to AI and ADM, and create additional barriers for businesses needing to comply in an already complex environment. On the other hand, regulation could create certainty and consistency around protection of rights, mitigation of risks, and maximisation of benefits.
2.89To achieve an acceptable middle ground, the Australian Government will need to navigate this contentious landscape as it seeks to leverage the opportunities of these technologies in a safe and responsible manner, and to regulate high-risk settings and uses. The Committee is of the view that regulatory change is required, but that businesses must be supported to thrive. Certainty through regulation can help build public trust in AI and ADM, maximising the benefits of these technologies while mitigating harms.
2.90The Government’s recent introduction of a social media ban for children under 16 years of age responds to the risks of unfettered technology producing reprehensible and irreversible harms. Early supporters of social media did not anticipate the explosion of misinformation and disinformation, violent extremism, and offensive material. Nor could they have anticipated the extent to which this technology would be used in ways that have led to vast social problems, including cyber fraud, cyberbullying, and online harassment.
2.91It is not enough to rely on technology developers and deployers to mitigate the risks of technologies like AI and ADM. Prioritising innovation at the expense of people’s right to participate in a safe, ethical, and fair digital workplace overestimates the promised benefits and underestimates the immense risks.
2.92The Committee recommends that the Australian Government:
- classify AI systems used for employment-related purposes as high-risk, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination
- adopt and implement the Department of Industry, Science and Resources’ proposed mandatory guardrails for high-risk AI
- increase the membership of the Department of Industry, Science and Resources’ AI Expert Group to include greater industry representation, particularly from high-risk industries.
2.93The Committee recommends that the Australian Government review the Fair Work Act 2009 (Cth) to ensure decision making using AI and ADM is covered under the Act, and employers remain liable for these decisions.
2.94The Committee recommends that the Australian Government work with states and territories to ensure greater consistency and modernisation of relevant legislation to enhance employee protections regarding the use of emerging technologies in the workplace.
2.95The Committee recommends that the Fair Work Commission review the National Employment Standards to respond to the adverse effects of significant job redesign caused by emerging technologies by enhancing worker entitlements like flexible work arrangements.
2.96The Committee recommends that the Australian Government review modern awards for high-risk industries to ensure workers are protected where AI has significantly transformed job design.
2.97The Committee recommends that the Australian Government consider developing information campaigns about the use of AI and ADM in workplaces to build public trust in these technologies, especially to:
- raise awareness of how the technologies work, including their opportunities and challenges, and Australian Government initiatives and reforms
- assist employers to understand their obligations under the Fair Work Act 2009 (Cth)
- convey how reforms and initiatives enhance inclusive and sustainable workforces
- encourage training and upskilling in high-risk settings and industries.