Chapter 4
Impacts of AI on industry, business and workers
4.1 This chapter considers the impacts of artificial intelligence (AI) on industry, businesses and workers, including its potential impacts on productivity, jobs and workplace conditions generally.
4.2 The chapter also considers the particular impacts of AI on:
the creative industries, which are already dealing with significant disruption and issues arising from the use of AI, and particularly generative AI; and
the healthcare sector, as an example of a high-risk sector in which the significant opportunities for beneficial uses of AI technology must be balanced against very serious risks.
Benefits and risks of AI for industry, business and workers
4.3 AI has for some years been used in a wide range of industry and business sectors. More recently, the potential applications of AI in these settings have increased markedly with the advent of large language models (LLMs) that support generative AI models such as ChatGPT-4, which can produce natural language outputs in response to user queries or inputs.
4.4 Many inquiry participants pointed to the broad adoption of AI by industry and business in Australia as well as globally, and noted AI's vast potential to promote further innovation, growth and productivity gains across all sectors of the economy.
4.5 However, many stakeholders, while acknowledging the potential benefits of AI, expressed serious concerns about the potentially negative impacts of AI on workplaces and the rights and conditions of workers, and the risk of AI having a disruptive effect on particular industries and professions.
Productivity
4.6 The committee heard that the application of AI technologies in industry and business offers great potential for productivity improvements.
4.7 The submission of the Productivity Commission (PC) observed:
The contribution AI could make to the Australian economy is likely to be sizeable…[While it] is difficult to make a robust forward-looking estimate of the productivity gains on offer from AI as trends in uptake are still forming and AI technologies are rapidly evolving…[one] estimate suggested generative AI could add up to $115 billion in productivity gains to the Australian economy by 2030 (a 5% uplift in [gross domestic product (GDP)]).
4.8 The Australian Chamber of Commerce and Industry (ACCI), while noting that AI is already improving business productivity, also highlighted estimates of AI's considerable impact on future productivity in Australia and globally:
McKinsey estimates that generative AI could contribute between USD 2.6 trillion and USD 4.4 trillion annually to the global economy. In the Australian context…another McKinsey study…found that adopting AI and automation could add an additional $170 billion to $600 billion to Australia's GDP by 2030…
4.9 Mr Steven Worrall, Corporate Vice-President of Microsoft, spoke of the potential for substantial productivity and employment growth from the 'responsible development and adoption of AI'. Noting Australia's track record as 'a rapid adopter of technology', Mr Worrall stated:
Australia has an incredible foundation to build on. Forecasts predict that AI could create 200,000 new jobs and contribute up to $115 billion annually to our economy. This innovation and productivity gain that I'm hearing about from customers large and small, in both the public and the private sectors, from the use of AI is truly remarkable.
4.10 The PC submission explained that AI's potential to improve productivity arises chiefly through its application to the 'augmenting and automating' of certain work tasks, which increases productivity by freeing up workers' time and thereby allowing workforces to be used more efficiently. 'Augmenting' a work task can be understood as typically involving AI-assisted human decision-making, and 'automation' as where AI does the decision-making itself, though usually with a human involved or 'in the loop'.
4.11 Mr Bran Black and Ms Melanie Silva, the heads of the Business Council of Australia and Google Australia respectively, have observed that AI boosts productivity by 'freeing up workers to focus on more creative and human elements of their jobs' and by allowing workers to 'work smarter not harder'.
4.12 The PC submission observed that, through augmentation and automation, 'AI has the potential to address some of Australia's most enduring productivity challenges—namely skill and labour gaps, and slow service sector productivity growth'. It noted, for example:
Generative AI technologies have great potential for application in the services sector which makes up about 80% of production and 90% of employment in Australia…In the health sector, there is scope for greater AI use that would improve aspects such as routine record keeping and clinical coding, medication alerts and treatment adherence, management of hospital bed capacity and identification of patients at risk of deterioration to improve prioritisation of resources. Many similar examples exist across other parts of the services sector.
4.13 ACCI described the potential 'productivity, safety and health benefits' of job augmentation as an 'underappreciated opportunity', and considered that AI has more potential for job augmentation than job automation. The ACCI submission observed:
According to the International Labour Organisation (ILO), only 2.3% of jobs worldwide could be fully automated today, whereas 13% of jobs could be boosted by AI. The ILO also found that generative AI is more likely to augment jobs by automating tasks, [rather than to]…fully automate a job, making it redundant. The complementing, rather than substitution, of jobs, will see benefits for job quality and work intensity.
4.14 The joint submission from the RMIT Blockchain Innovation Hub and RMIT Digital also noted that, in contrast to job automation, job augmentation would not necessarily lead to job losses overall:
…simply because generative AI increases worker productivity does not mean that robots will take our jobs en masse. Unlike technologies that purely automate, generative AI applications typically require a process between a prompting-human and the technology. Generative AI is applied as a process of co-production…Many of the productivity improvements through generative AI will come through replacing tasks not jobs. Co-production is not merely about automating processes but enhancing them through a deep understanding of the nuances involved in each task.
4.15 ACCI argued further that increased use of AI by industry and business would lead to the creation of AI-related jobs. Its submission stated:
…according to the World Economic Forum (2023), 50% of employers worldwide expect AI to foster job creation, with many new opportunities arising in the fields of AI development, machine learning specialists, and sustainability and business analysts. This will have a positive knock-on effect on jobs which consist in social interactions: 92% of US-based executives agreed that people skills are more important than ever...
Impact of AI on jobs and workplaces
4.16 However, some inquiry participants raised concerns about the potential for AI to impact negatively on jobs and workplaces.
Job losses
4.17 The Victorian Trades Hall Council (VTHC) submission outlined the potential for automation to lead to job losses across many different industries and professions:
Potentially most alarming is the prospect of employers using AI to destroy thousands of livelihoods through automation. For industries such as trucking, warehousing and logistics, the prospect of widespread job-loss has been raised as a concern for years. More recently, advances in generative AI have threatened the arts and creative fields, graphic design and writing, legal services, education and administrative services. As the capabilities of AI continue to advance in unpredictable and dynamic ways, health care, financial services, retail, transportation, engineering, science, banking, telecommunications, public administration and computer technology industries are all likely to face systematic disruption. Experts warn of rapid and extensive job losses throughout the workforce with little safety net.
4.18 The Media, Entertainment & Arts Alliance (MEAA) argued that certain sectors, including 'finance, banking, advertising, administration, and customer service', are already reducing workforces by replacing workers with automation products.
4.19 The submission of Ms Chelsea Bonner commented specifically on the impact of automation on 'creative industries such as fashion media and the arts', which she described as 'particularly vulnerable to disruption by AI technologies'. Her submission noted that job losses in these industries would impact women in particular:
…these sectors employ a significant percentage of female workers in roles that are now at risk of being automated. For instance, modelling, content creation, and administrative roles within these industries could see high displacement rates…
The replacement of human models, performing artists and sex workers with AI-generated alternatives leads to job losses, disproportionately affecting women's employment and thereby their financial stability…The introduction of AI in these sectors will exacerbate these issues by reducing the demand for human talent, thus further suppressing wages and job security for women.
4.20 The Finance Sector Union of Australia (FSUA) commented more generally on the potential for job losses due to AI automation to impact disproportionately on certain types of jobs and vulnerable or disadvantaged groups:
It is…clear that certain categories of jobs are far more susceptible to impacts from AI. These tend to be roles with relatively lower education and training requirements. This means that the impacts of AI may be disproportionately felt by people with particular education and experience levels, who may find it more difficult to obtain other employment. There is also a risk that these impacts will be felt more by people of lower socioeconomic groups, worsening inequality.
4.21 The FSUA observed that it is unlikely that the workers whose roles are replaced by AI will be able to move into newly created AI-related roles.
4.22 The MEAA commented that the potential impact of job losses due to automation was not only that it might create a 'class of unemployed workers' but also, more likely, that 'it will flood the pool of workers competing for low-skill and low-wage work, further driving down wages and conditions of an already precarious sector of the economy'.
4.23 A range of inquiry participants also highlighted the potential for automation to displace entry pathways into industries, such as apprenticeships and trainee schemes, with low-skilled work increasingly able to be undertaken by AI rather than by human employees beginning their careers.
Workplace impacts
4.24 The committee heard that the use of AI in workplaces for various purposes also has the potential to impact negatively on employees.
4.25 Mr Joseph Mitchell, the Assistant Secretary of the Australian Council of Trade Unions (ACTU), noted that AI will likely affect almost every industry not only through potential job losses due to automation but also, for example, through the potential use of AI for workforce management and planning:
In some industries, the introduction of AI is intended to lead to the automation of processes and tasks, creating the potential for job losses…and in almost every industry we are likely to see the potential for AI to manage the workforce and arrange the performance of work.
4.26 Deakin Law School observed that AI is already widely used to automate recruitment, staff layoffs, rostering and surveillance of staff activity. The VTHC raised significant concerns about the issue of AI-driven workplace surveillance, which it described as 'dehumanising, invasive and incompatible with fundamental rights'. The VTHC observed that AI has the potential to give employers 'a supernatural level of insight about workers' including, for example, their 'out-of-work activities…and likelihood to engage in industrial action'. It submitted:
Legislation has not kept pace with the intrusive methods employers are using to surveil workers. Workplace surveillance includes but is not limited to keystroke monitoring, email monitoring, the collection of behavioural, social and emotional data, use of cameras and AI technology to track workers in workplaces (which is particularly common in warehouses and retail) and tracking outputs in real time. This then generates data which can be used to evaluate worker performance and inform critical choices relating to their employment.
4.27 For example, Amazon—the world's largest retailer, with a growing footprint in Australia—has used AI-powered surveillance cameras and wearables in delivery vans and warehouses in the United States for at least three years, and earlier this year was fined €32 million by the French data protection authority for 'excessively intrusive' surveillance of warehouse workers. When asked about its surveillance practices in Australia, Amazon said that it does not use the technology subject to the French fine in Australia and that it would support 'reasonable limits regarding surveillance'.
4.28 The Shop, Distributive and Allied Employees Association (SDA) highlighted the potentially negative impacts on workers of AI-led rostering systems, which could reduce opportunities for consultation, disadvantage workers on casual and part-time arrangements, and affect workers with caring responsibilities:
The use of apps or other electronic means for communicating rosters and roster changes doesn’t provide for proper consultation with the employee, despite requirements to do so under legislation, Awards, and many of the Enterprise Agreements that our members work under. Lack of consultation has a significant impact on employee schedule control and a worker’s ability to obtain a roster that enables them to meet caring responsibilities.
It has also led to workers being forced to constantly check the app, especially for casuals and those part timers on low base contracts who need additional shifts to survive. The use of computerisation and apps also impinges on an employee’s time outside of work, putting more pressure on them while caring.
4.29 While expressing concern about the potential for AI to reinforce structural inequality and undermine human rights, Mrs Lorraine Finlay, the Australian Human Rights Commissioner, also noted AI's potential for positive impacts on workers and workplaces through uses that can, for example, increase workplace accessibility and workforce participation for people with disability:
[AI]…technologies can be used in ways that enhance worker rights in the workplace. If I can give one example…we refer to the potential of accessibility for Australians with disability in terms of the use that might be made of assistive technologies.
4.30 The ACTU also noted that there were workplace uses of AI on the 'positive side of the ledger' that could 'offer a beacon to which we should aim to guide the development of AI'. These were generally uses of AI for augmentation, or to 'complement rather than supplant human ingenuity and labour', such as robot waiters programmed by people with disabilities to enable their work participation, and AI tools that assist medical professionals to assess and interpret diagnostic scans.
4.31 Mr Mitchell observed that, ultimately, whether the consequences of the adoption of AI in workplaces are positive or negative will be determined by the 'choices that are made about how AI is regulated and the terms upon which it is adopted'. He noted:
AI will create new power dynamics in Australia and exaggerate existing ones in ways that could lead to worse outcomes if left unmitigated. It will create new challenges for workers, employers and policymakers to overcome. Some of those newly expressed power dynamics are playing out right now.
Workforce consultation and training
Consultation
4.32 The submission of the ACTU observed that engagement and consultation with workers, described as 'worker voice', in relation to the use of AI in workplaces has been shown to mitigate the risk of negative impacts on workers:
One trend that is emerging in relation to the adoption of AI in workplaces is…that worker voice mitigates several risks to working conditions associated with AI in workplaces. According to the data, workplaces where there is worker voice that have adopted AI–for example through works councils, trade union representation or health and safety structures–record a reduced probability of certain health and safety risks, such as exposure to heavy loads, painful positions, high noise, fumes, vapours or chemical products, or long working hours. This fits within a broader pattern of workplaces which are consulted about the adoption of AI being more likely to report that AI has had a positive impact.
4.33 Similarly, the VTHC considered that 'workforce consultation and training is imperative', and cited the example of the Swedish mining industry, where the inclusion of workers in introducing automation had 'improved workplace safety, protected jobs and streamlined production'. By comparison, 'rushed' efforts to introduce automation in the UK manufacturing sector without 'meaningful inclusion' of workers had resulted in 'an overestimation of the capabilities of AI and mass lay-offs only to have to rehire those workers a few months later'. The VTHC concluded:
Direct engagement with workers is essential to limiting the disruptive effects of automation and building public trust in the digital transformation.
4.34 Professor Nicholas Davis, Industry Professor of Emerging Technology and Co-Director of the Human Technology Institute at the University of Technology Sydney, described his organisation's research indicating that workplace consultation in relation to the use of AI is currently inadequate:
Our research across nursing, retail and, indeed, the public service shows that, despite the fact that tech companies are saying that artificial intelligence offers the greatest opportunity for workplace productivity and economic uplift in Australia in decades, workers are invisible bystanders in this conversation. They are not consulted, they are not engaged and they are not then playing the two roles that are absolutely essential in this revolution, which are tailoring and using the technology to get the best out of [it]… and, just as important, putting in place those practices and monitoring and governance systems to identify when people might be hurt and prevent that from happening.
4.35 Mr Mitchell called for a whole-of-government approach that builds workforce consultation and participation into the frameworks and standards for using and managing AI in the workplace. He stated:
We need to seed workers' voices into that [framework] because workers are experts in their industry. They have a say and a right to have a say in the future of the work they do and the industries that they work within…
When it comes to each workplace, the current frameworks for representation…need to be future proofed. We need to ensure…clear representation rights for workers around AI so that we have…transparency, consultation and negotiation around the impacts of AI in the workplace and on the work that we do…We want to make sure that [workers’ rights] aren't eroded by the introduction of new technologies…
4.36 A number of inquiry participants pointed to the regulation of occupational health and safety (OH&S) as providing a potential model for the use and management of AI in workplaces. At the federal level, for example, the Work Health and Safety Act 2011 provides a broad, risk-based framework for ensuring the health and safety of workers, workplaces and the general public, based around effective representation, consultation and cooperation between workers and employers.
4.37 Ms Elizabeth O'Shea, the Chair and Founder of Digital Rights Watch, commented on the 'parallels with occupational health and safety' in terms of the importance of consultation with workers around the introduction of AI to workplaces:
In how we use these [AI] tools in the workplace, it's imperative that management…consider and engage with workers early, rather than imposing these tools on workers and then either being surprised that it doesn't work or being surprised that people are feeling exploited, further disenfranchised and as if they are not having their workplace rights enforced. In this early stage of the industry, there's a real opportunity here to impose [consultation] requirements on workplace uses of AI that are consultative, for benefits in both directions: for people who work in these workplaces and for the objective of the workplace.
4.38 Mr Peter Lewis, the Convenor of the Centre of the Public Square program at Per Capita, noted that OH&S-style schemes could introduce positive workplace cultures for managing the potential risks of AI across the workforce, which could help to overcome risk aversion.
4.39 In an answer to a question on notice, the Human Rights Law Centre (HRLC) observed that the regulatory scheme for OH&S could suitably be applied to AI, as both are fundamentally concerned with risk management and safety:
The principles underlying Australia’s Workplace Health and Safety (WHS) laws, which focus on protecting workers by requiring duty holders to eliminate or minimise risks, provides a valuable framework for developing a risk-based regulatory model for AI. Although WHS laws and AI regulation address different domains, both prioritise risk management and safety…
By applying this risk-focused mindset, AI regulation can ensure that potential harms are addressed effectively, promoting safety and accountability in the development and deployment of AI technologies. This approach will foster a culture of continuous improvement and compliance, paralleling the proactive risk management seen in WHS laws.
4.40 Reflecting this same OH&S perspective on AI, the VTHC called for amendments to model OH&S laws to include consideration of the risks that the use of AI poses to workers. Taking the example of AI-driven surveillance of workers, the VTHC submission stated:
Workplace surveillance has also been shown to have a demonstrated impact on workers’ occupational health and safety, leading to overwork, stress and burnout. For example, it is not uncommon for Amazon workers subject to surveillance in warehouses to receive warnings if they take too long to go to the bathroom or speak with their co-workers. Model occupational health and safety laws should be amended to recognise the risk to psychosocial health posed by AI-driven surveillance technologies.
4.41 The HRLC further noted that OH&S-style regulation of AI in the workplace would be consistent with broader risk-based schemes of AI regulation, such as the European Union's AI Act, which requires 'AI developers and deployers to identify, assess, and mitigate risks associated with their systems'.
4.42 This point was also made by Australian vendors of AI products. Mr Michael Gately, the Chief Executive Officer of Trellis Data, noted that risk assessment of AI fits neatly within the concepts of broader product liability and OH&S frameworks:
The idea that AI is like a manufactured product and therefore fits under that manufacturing OH&S set of frameworks we already have is brilliant. That is exactly where we should be…[AI should be treated] as a product in the market so that if it causes harm—just like a defect in automobile manufacturing…[the manufacturer has to] remedy that and pay due [compensation] costs…That is entirely appropriate and will ensure that tech companies…do the right thing when they deploy these products, knowing that there is that liability that goes with it.
4.43 Mr John Leiseboer, the Chief Technology Officer of Quintessence Labs, similarly agreed that 'OH&S is a very important requirement to be met when these sorts of products are being used in a workplace environment'.
4.44 Mr Michael Harmer, the Chairman of Harmers Workplace Lawyers, also recently voiced his support for an OH&S-style solution to the risks posed by AI, during his remarks to the Australian Institute of Employment Rights' annual Ron McCallum debate, saying:
Australian legislation should all move to the model of our safety legislation, all reasonably practical steps to ensure safety, not just safety, but fairness. And that should not just be under our workplace relations system, but under all our law because there is no aspect of prescriptive law that can keep up with the speed of technological AI change in this country.
Training
4.45 In addition to workplace consultation, inquiry participants stressed the importance of workplace training to retrain and reskill workers whose jobs are replaced by AI.
4.46 The submission of the Australian Services Union, for example, called on government to:
…ensure current and future workers receive relevant training so they can best participate in this ever-changing landscape. Workers whose roles may involve the use of AI in the future or whose future employment prospects might be diminished by the adoption of AI should be given every opportunity to receive comprehensive training or retraining.
4.47 Mr Bernie Smith, the Secretary of the New South Wales Branch of the SDA, acknowledged the inevitability of the displacement of workers in particular sectors due to AI. Mr Smith noted that responses to technological change should focus on reskilling displaced workers to take up new jobs, potentially including the new jobs created by AI within their own organisation, rather than on redundancy processes.
4.48 Similarly, Per Capita stated that AI is 'set to disrupt many industries, resulting in job losses or job displacements', and stressed the importance of retraining aimed at readying workers to move into AI-related roles:
While some are counting on AI also creating a host of new jobs, we need to develop programs and initiatives that account for these in a real, tangible way, not just as a hopeful premise. There should also be training programs that help transition potentially displaced workers to ready them for more AI related roles.
4.49 ACCI also argued that retraining is a key element in successfully responding to the displacement of workers by the adoption of new technologies, emphasising its broader benefits for maintaining the competitive advantages of the Australian economy:
[With past technological advances, the] capacity of economies to adopt greater automation and realise its benefits has depended on embracing retraining opportunities, and investment in research and education. Building capacity, and the adoption of new technologies, not only results in a more highly trained workforce, but ensures that a country can remain highly competitive externally, and safeguards against the possibility of jobs moving overseas.
4.50 The submission of the Future Skills Organisation (FSO) indicated that, in relation to training more generally, 'the Australian training system is facing challenges keeping pace with changes brought by rapidly moving technologies such as AI'. The FSO noted figures suggesting that Australia's AI workforce has grown from around just 800 workers in 2014 to over 33,000 in 2023, with estimates that AI will create up to 200,000 jobs in Australia by 2030. The challenge of providing the training needed for AI-related jobs is compounded by the broader, growing need for a digitally enabled workforce.
Impacts of AI on specific industries
4.51 The evidence to the inquiry shows that, for certain industries, the impacts of AI are already manifest. These are generally industries in which the nature of the work is well suited to AI and is already being performed by it, meaning that they are already contending with the workforce and workplace impacts discussed above, as well as with more industry-specific issues arising from the particular nature, character or context of the work performed in that industry. As an illustration of such matters, the following section considers the particular impacts of AI on the creative industries.
Creative industries
4.52 The creative industries can be understood generally as those businesses for which art or creativity is central to the products and services they produce. This includes a diverse range of businesses relating to, for example, music, games development, graphic design, architecture, book publishing, film and television, and fashion. Creative workers account for a significant proportion of the Australian workforce; the Australian Copyright Council (ACC), for example, states that its 24 affiliate organisations represent more than one million workers.
4.53 The committee notes that generative AI in particular is able to augment and automate certain creative processes, and in this regard the creative industries are at the forefront of dealing with the impacts of introducing AI into the workplace. As noted in Chapter 1, generative AI refers to AI models, such as ChatGPT-4, which generate novel content such as text, images, audio or code in response to human prompts or inputs. Generative AI technologies are built on LLMs, which are developed by being trained on vast amounts of data.
4.54 The submissions and evidence received from inquiry participants representing or involved with the creative industries identified some opportunities and, more significantly, existential risks arising from the use of AI technology in those industries.
Opportunities for the use of AI in creative industries
4.55 A number of submitters and witnesses acknowledged potential benefits from the use of AI in the context of the creative industries.
4.56 The submission from the National Association for the Visual Arts (NAVA) observed that, from 'aiding in the creative process to exploring new avenues for income generation, the utilisation of generative AI holds significant potential for artists worldwide'. Similarly, Ms Chelsea Bonner commented that, used ethically, AI could contribute broadly to increased productivity across the range of businesses and services supported by the artistic and creative industries:
…generative AI can produce content rapidly, from digital artwork to music and literary works, reducing the time and labour costs associated with these creative processes. For industries like advertising and media, AI can streamline production workflows, leading to considerable cost savings and increased output.
4.57 The Copyright Agency submission pointed broadly to AI's potential to increase productivity and income-earning opportunities in the creative industries:
People working in Australia’s creative industries welcome the benefits that a responsible Australian AI industry has the potential to deliver, including increased productivity, reductions in inequalities in a range of areas (including education) and opportunities to license their content to improve the quality and Australian-ness of locally developed AI tools.
4.58 The Australia New Zealand Screen Association (ANZSA) submission noted that in the audiovisual industry AI has been used 'for many years' to 'enhance aspects of the filmmaking process and entertain audiences', particularly in relation to special effects but also for other purposes:
AI has been used in a number of ways in production, such as to predict resource usage, optimisation of shooting schedules, and predicting complexity of VFX [or visual effects] shots. AI is also used in fairly routine post-production work like colour correction, detail sharpening, de-blurring, or removing unwanted objects. Some uses are more involved, like aging and de-aging an actor.
4.59 A 2024 report by A New Approach outlined a broad range of tasks to which AI is already being applied in the arts, culture and creative sector, including:
creation of arts and culture;
discovery of content via search engines;
preservation of language and heritage;
automated content recommendation and moderation on digital platforms;
automated speech recognition, captioning and transcription;
machine translation of text and speech; and
classification ratings in video and games.
4.60 The evidence provided to the inquiry confirmed that there is already significant use of AI in the creative industries. For example, the MEAA cited a recent survey showing that 22 per cent of its members were already using AI in their work. Similarly, NAVA advised that 40 per cent of respondents to a 2023 survey indicated they had used AI for augmenting written tasks such as editing and grant applications, or for content development and ideation.
4.61 In terms of AI's financial impacts, a 2024 Creative Australia survey of workers across the arts sector found that, while AI is expected to increase income-earning opportunities for the creative industries generally, there were different expectations as to the artistic occupations to which those opportunities would flow. For example, 43 per cent of composers thought that AI would increase their personal income-earning opportunities, compared to only 29 per cent of writers. Notwithstanding the survey's results, the submissions of guilds and other groups representing creative workers overwhelmingly expressed concern about the risks posed by generative AI.
Risks of AI to the creative industries
4.62 Despite evidence of significant existing use of AI in some creative industries, and its potential to improve productivity and income-earning opportunities in some areas, many inquiry participants raised very significant concerns about the risks of AI to the creative sector.
4.63 A survey of Australian Writers' Guild members found that 94 per cent of respondents believed their livelihoods as creative workers would be negatively impacted by AI technology, and 95 per cent expressed concerns about a reduction in the quality of stage and screen projects. These findings were echoed by surveys in other creative professions, with over 90 per cent of production designers expressing concern about the impact of AI on their livelihoods and those of their crews, and 82 per cent of music creators saying AI may mean they can no longer make a living from their work.
Impacts of augmentation and automation
4.64 As with the impacts on workplaces more generally, discussed above, many workers in the creative industries are concerned that the ability of AI to augment and automate creative processes will negatively affect employment and income-earning opportunities in their industry.
4.65 The Australian Society of Authors (ASA), for example, observed:
AI-content is cheap to make since no compensation need be paid to writers or artists. An increase in AI-generated books and articles will make the challenges of discoverability and dilution of audiences even tougher for professional writers. An abundance of cheap AI-generated content will lead to a consumer expectation about how much books should cost, putting downward pressure on the cost of human-created content. The richness and diversity of Australian literature is at risk.
4.66 Screen Producers Australia (SPA), while recognising the great opportunities of generative AI for the screen industry, cautioned:
Broad and aggressive adoption of these systems could have a large and negative impact on the labour market within the screen industry, removing employment opportunities for creatives and crew members, while also removing career entry pathways into the industry.
4.67 Accordingly, SPA called for the adoption of AI systems by production companies 'in a way that empowers the creatives and crew they employ, rather than replace them…[so that all] participants in the screen industry ecosystem…benefit from the opportunities these AI systems present'.
4.68 The ASA and NAVA submissions both highlighted the particular risk of generative AI to First Nations creators, with the ASA noting its ability to be used to 'produce and perpetuate inauthentic and fake art, and [to] appropriate Aboriginal and Torres Strait Islanders' art, design, stories and culture without reference to Traditional cultural protocols'. NAVA observed:
First Nations artists in Australia are already harmed by the physical reproduction of Aboriginal and Torres Strait Islander arts and crafts by non-Indigenous people on a large scale, [and] generative AI platforms offer a faster and easier method of output.
4.69 Similarly, the National Aboriginal and Torres Strait Islander Music Office (NATSIMO) stated that a large proportion of Aboriginal and Torres Strait Islander communities make a living from their art, and that the devaluation of this work by generative AI would cause wide-ranging harm to these communities:
If that economic value is lost, the impacts are potentially enormous – on health, on mental health, and in many other areas. These concerns are not just limited to community members practicing in the creative space – everything is connected.
Copyright
4.70 Copyright is a type of intellectual property owned by the authors of original artistic works in fixed expressions or mediums such as books, plays, paintings, photos, songs, sound recordings and computer programs. In simple terms, copyright ownership provides artists with the exclusive economic rights to perform, license and sell their work. As noted in the ACC submission, licensing the use of works by the copyright holder is a key source of the income that sustains artists and the creative industries more generally:
Charging fees or receiving royalties in exchange for permission (or a ‘licence’) are among the more common ways that copyright owners derive income from their creative material. In this context, copyright is the framework which supports and incentivises the creation of new copyright materials.
Copyright protection for works created with assistance from AI
4.71 A number of inquiry participants commented that there is presently a lack of clarity under Australia's copyright framework as to the extent of protection afforded to works created by humans with the assistance of AI. The SPA submission explained:
Copyright can only subsist in material that is created by a human author. Therefore, materials created through a process with little or no human input lack authorship and are not protected by Australian copyright law. However, the Copyright Act 1968 (Cth) is silent on the level of human authorship required to give rise to copyright protection.
4.72 The BSA Software Alliance submission observed that, if the use of AI to augment or facilitate creative or artistic works were to disqualify a work from copyright protection, this could undermine the system of copyright protection and the creative industry more generally:
Copyright plays a key role in businesses’ ability to protect creative material, including software code. The use of AI should not prevent a work developed in conjunction with human creativity from being eligible for copyright protection. If copyright protection is not available simply because AI was used in the creative process, it will limit the responsible use of AI and the purpose of copyright laws. As a result, the portions of the work that are influenced by human creativity should be protected by copyright laws. Lack of copyright protection may also cause innovators to seek out jurisdictions with laws and policies that are more protective of intellectual property.
Use of copyrighted materials to train AI models
4.73 A significant copyright issue arises where copyrighted materials are used to 'train' AI models.
4.74 To develop an LLM of the kind on which generative AI systems like ChatGPT-4 are built, the model is trained by being fed vast amounts of content, such as text or images, to develop its predictive capacity to the point where it can generate natural language text or, depending on its design, other outputs such as images or music. The content used to train LLMs is diverse and can be sourced from, for example, books, articles, images and large datasets. In a practice known as 'scraping', content that may include copyright material is often taken directly from the web for the purposes of training AI models.
4.75 The submission of the Australian Writers' Guild Authorship Collecting Society, the Australian Screen Editors Guild, the Australian Production Design Guild and the Australian Cinematographers Society (the Guilds and Cinematographers) observed that AI companies 'have conceded that their models rely on the unauthorised and unremunerated use of copyrighted work', with OpenAI, for example, stating it would be 'impossible to train today's leading AI models without using copyrighted materials'.
4.76 Appearing at a hearing of the inquiry, Ms Lucinda Longcroft, Director, Government Affairs and Public Policy, Australia and New Zealand, Google, conceded that the company uses copyrighted work to train its AI products without authorisation or remuneration, arguing that the exclusion of such works from AI training datasets could significantly impair the utility of AI:
…copyright law in most parts of the world…[persists for] at least 70 years after the death of an author or after it's published. If we were to exclude works that are still under copyright…that would mean that data relating to modern events or cultural or social issues such as LGBTQI rights, for example, would be excluded from those datasets. It is predictable that the models would then show bias or have gaps or ignorance about those interests and about that large and important part of our society. We train our models on that large corpus of publicly available data in order to ensure that they are providing the most socially beneficial uses in their outputs.
4.77 When directly asked whether Amazon uses copyrighted work to train its AI products without authorisation or remuneration, the company refused to answer the question but assured the committee that it takes the issue very seriously, despite a former executive recently alleging that the company instructed its LLM teams to ignore copyright laws:
We don’t disclose specific sources of our training data, but as a rightsholder ourselves, we take IP related concerns seriously and respect the rights of artists and creators.
4.78 Meta is currently subject to numerous lawsuits in the United States for training its AI products on a database of over 200,000 pirated books, including up to 18,000 Australian works. When asked to confirm whether Meta trains its AI products on copyrighted data without authorisation or remuneration, Meta said its LLM has exploited so much data that it would be too hard to tell:
The scale of data required to train generative AI models makes the documentation and disclosure of individual training data infeasible. Given the massive scale of data involved, it is impossible to definitively know whether specific publicly-available data is protected by copyright or not.
4.79 Some of the big tech platforms developing LLMs also act as content publishers; for example, Google publishes copyrighted content on YouTube and YouTube Music, and Amazon does likewise on Kindle and Audible. When asked whether they use the copyrighted content on these platforms to train their AI models, both Google and Amazon declined to respond.
4.80 Creative industry stakeholders raised concerns that the use of copyright material to train AI without authorisation amounts to a breach of copyright. The ACC submission, for example, stated:
The ingestion [into an AI system] of third-party copyright material (i.e. copyright material that the AI developer did not create) without the licence of copyright owners, may constitute an infringement of copyright.
4.81 The ACC noted that a core right of a copyright holder is the right to reproduce their work, and that the 'large scale reproduction of copyright material' for training AI models therefore 'exposes AI developers to liability for copyright infringement'. While Australia's copyright framework includes 'fair dealing' exceptions to copyright infringement where material is used for research or study, criticism or review, or parody or satire, and that use is 'fair in all the circumstances', the ACC considered that the practice of scraping copyright material is 'unlikely to fall under any of these exceptions, or be considered as a fair use of that material':
In terms of the requirement that the dealing be ‘fair’…[the Australian courts are unlikely to find that] a dealing is ‘fair’ where the ‘scraping’ of copyright material is used to develop a technology that produces something that effectively competes with the copyright owners’ material, and without the licence of or remuneration to, the copyright owner.
4.82 The view that the use of copyright material to train AI without the permission of copyright holders constitutes a breach of copyright was strongly supported by creative industry stakeholders. For example, the combined submission from the Guilds and Cinematographers stated:
Generative AI ‘scrapes’, ‘mines’, ‘listens to’, ‘trains on’, or to use another word, copies, existing artistic work either used without the consent of the authors or which has been pirated and illegally published online. In both these cases, an unauthorised reproduction of copyrighted work has occurred and therefore an author’s copyright has been infringed.
Remuneration of copyright holders
4.83 Noting the importance of copyright as a source of income to sustain artists and the creative industries, many creative industry stakeholders pointed to the financial consequences of the unauthorised use of copyrighted material to train AI. The Australian Recording Industry Association (ARIA), for example, observed:
[The use]…of copyright materials… to train AI models without authorisation and compensation…[is] to the detriment of artists and rightsholders whose works have been used by AI developers.
4.84 The Guilds and Cinematographers submission noted, with specific reference to the screen industry, that the scraping of copyrighted work to train AI circumvents the usual requirement for artists to receive 'fair remuneration and an appropriate credit' for the use of their work. For this reason, the Australian Publishers Association submitted that the illegal ingestion of copyrighted content to train AI constitutes an existential threat to the 'sustainability both of AI and of the creative industries on which AI depends'.
4.85 Further, the Guilds and Cinematographers submission argued that copyright holders should also be entitled to remuneration for outputs generated by an AI system that rely on the copyrighted material on which it was trained:
Since any ‘successful’ AI output requires successful (human) input, the commercial success of any AI generated content is also directly tied to the substantive success of the original works that are scraped by the model. In simpler terms: generative AI could only ‘write’ a successful screenplay because it is replicating successful screenplays written by people…Therefore, an original author who consents for their work to be used should be entitled to ongoing payments when their work is used by generative AI platforms to produce outputs that are commercially exploited.
4.86 Accordingly, the Guilds and Cinematographers called for an opt-in system requiring AI developers to seek the permission of copyright holders to train AI systems on copyrighted material, as well as a requirement for remuneration and royalties to be paid in relation to any AI-generated outputs based on that material.
4.87 Ensuring that the use of copyright material to train AI systems is captured within Australia's copyright framework was supported by many inquiry participants. ARIA, for example, submitted:
A regulatory framework that prioritises transparency and accountability, regarding the content used for training AI models, is essential for ensuring adherence with copyright and other laws, including the enforcement and licensing of rights...
4.88 Similarly, the MEAA submitted:
…the ongoing and prior use of creative work [by AI] must be subject to consent and compensation, as well as the ability to opt out. Text and Data Mining (TDM) exceptions should be strictly limited, and any existing exemptions should be revised around this new technology and require informed consent by owners of IP rights, particularly with any content being used for self-training purposes. This should include voice and sound data including music and visual art.
4.89 In contrast, some inquiry participants considered that the use of copyright material by AI should not constitute a breach of copyright or provide a basis for remuneration of copyright holders.
4.90 For example, the submission from the Schools and TAFE Copyright Advisory Group (CAG) noted that in jurisdictions such as the US, copyright laws allow for the training of AI on copyright materials without breaching copyright. In this regard, CAG considered that Australia's copyright framework is a barrier to the development of the AI industry in Australia:
…in the United States, AI developers are relying on the fair use exception as a defence to claims of copyright infringement by rightsholders in material used to train AI models…[whereas no] equivalent fair use exception exists in Australia…The result is that Australia has a much stricter and less flexible copyright framework than other jurisdictions, which in CAG’s view imposes significant impediments to the development, operation and use of AI systems in Australia.
4.91 This argument was rejected by the Copyright Agency, which highlighted that there is 'a vast range of content available for lawful use by AI developers, including under efficient and fair licensing arrangements'. It also noted that the UK recently rejected calls to broaden the AI exception to its copyright laws, that Japan is considering scaling back its AI exception and that, under the more permissive US regime, there are more than 24 copyright cases in train against AI developers.
4.92 Ms Nicole Foster, Director of Global AI/Machine Learning (ML) Public Policy at Amazon Web Services, claimed that the more restrictive copyright regime in Australia could operate as a barrier to AI systems being developed and trained on data that is culturally relevant and representative of Australian society. Ms Foster considered that the 'availability of content [for training AI] is going to be…key in ensuring that non-dominant cultures are represented' in AI technologies. Nevertheless, when asked whether Amazon's concern for Australian cultural representation would extend to remunerating Australian creators for work taken from them without authorisation, Amazon declined to respond.
4.93 In light of the effect of more restrictive copyright law on AI development, some inquiry participants called for Australia's copyright laws to be amended to allow the use of copyrighted material for AI training. The CAG submission, for example, called for 'reforms to Australian copyright law…to level the playing field with other jurisdictions'.
4.94 However, other inquiry participants opposed such calls. The Guilds and Cinematographers submission, for example, stated:
We are strongly opposed to any suggestion that ‘generative’ AI systems should be allowed to use copyrighted works without permission from, or remuneration being paid to, the authors of those works.
4.95 Similarly, ARIA submitted:
Australian copyright law should continue to incentivise creativity and prioritise human artistry, creativity and labour. Existing fair dealing provisions should not be changed to enable training of AI applications and systems without consent and transparency to the detriment of creators and rightsholders.
4.96 A number of inquiry participants noted that copyright holders had experienced significant difficulties in trying to ascertain whether their material had been used to train AI systems, or in challenging its use for such purposes. Accordingly, transparency would be a critical element if copyright law is to effectively regulate and capture the use of copyrighted material by AI. The MEAA, for example, stated:
…it is crucial that summaries of training datasets are made publicly available so that creatives can ascertain whether their work has been used in the training process. If not, it will not be possible to know the extent of use.
4.97 The submission of the Attorney-General's Department (AGD) noted that the issues identified by AI and creative industry stakeholders in relation to Australia's copyright framework are 'complex, global, and contested'. It advised that AGD is consulting with stakeholders on copyright issues, including through a series of copyright roundtables held in 2023 and through the Copyright and Artificial Intelligence Reference Group (CAIRG), which was established in December 2023 to 'better prepare for future copyright challenges emerging from AI' and to advise government on the key copyright policy problems and potential solutions.
4.98 While stakeholders generally approved of the work of CAIRG as an ongoing consultation mechanism with the creative sector, some suggested that the impacts of AI on the creative industries require a more far-reaching, whole-of-government approach.
Copyright in relation to output of generative AI models
4.99 A further copyright issue arises in relation to the outputs of generative AI models. As noted in the ACC submission:
If generative AI reproduces a ‘substantial part’ of existing copyright material in the output, depending on the nature of the (text or image) prompt [input by the user], the user may be liable for copyright infringement.
4.100 The ACC noted that in such cases the owner of the generative AI platform may also be liable for copyright infringement, on the basis that they 'had the power to put in place measures to prevent an infringement of copyright and failed to take reasonable preventative steps to do so' (authorisation liability). In addition, copyright offences could apply to the distribution of AI-generated outputs that substantially reproduce copyright materials.
4.101 The MEAA submission noted that, while the outputs from generative AI are meant to be 'synthetic', meaning they are 'not meant to closely resemble the materials they themselves were trained on', in some instances 'AI models have been known to produce outputs that contain copyrighted material':
…several audit studies have shown that AI models–through the use of selective prompts–can generate copyrighted material originally used in training [which has resulted in a number of lawsuits]…
4.102 Ms Foster of Amazon Web Services, however, advised the committee that AI products could be designed to operate with protections such as 'memory suppression…to prevent the [AI] models…from outputting any copyrighted content'. Consistent with this, Amazon, like other developers, indemnifies users of its generative AI products against intellectual property claims from third parties. However, the fine print of the indemnification terms includes a range of exclusions, including where the user generates content that it 'know[s] or reasonably should know may infringe or misappropriate another party's intellectual property rights', suggesting that whatever protections Amazon has in place do not truly stop infringing content from being produced.
4.103 The SPA submission suggested that production companies are 'abstaining' from using AI for some purposes, given the current potential for copyright infringement through the use of AI-generated outputs.
AI deepfakes or mimicry of artists
4.104 The issue of AI outputs potentially infringing copyright is further complicated by the potential for AI to generate 'deepfakes' of artists, or outputs that closely mimic or resemble the style of copyrighted material on which it has been trained. The MEAA submission noted:
Another issue occurs when the output is not directly reproduced from training materials but clearly mimics the style or likeness of a creator or performer. For example, many are concerned about the capacity of AI to produce work ‘in the style of’ particular actors, performers, musicians, artists, or writers.
4.105 The Guilds and Cinematographers submission observed that, drawing on the body of an artist's work, AI has the capacity to produce 'new' works that mimic the creative elements constituting an artist's distinctive style:
For some of our best-known creative practitioners, their existing corpus of work has a distinctive ‘voice’ (which will incorporate audio-visual as well as written elements) and this forms part of their commercial appeal as a creative. It is intrinsic to their future work, and a key factor in their ongoing and future engagement. AI can be used to replicate an individual creative’s artistic or ‘authorial voice’ (and future works in this voice) simply by requesting an output in the style of a particular author or artist.
4.106In relation to AI’s capacity to create deepfakes of music—songs or music that strongly resembles the style and sound of an artist or band—the submission of the Australasian Performing Right Association and Australasian Mechanical Copyright Owners Society (APRA AMCOS) commented on the ‘ever-increasing quality of the audio being generated’ and the ‘speed at which deepfake music is going viral’, noting that:
…it is abundantly clear that deepfake music is using unauthorised datasets to train AI models to produce imitations of popular artists. The protected creative work of human practitioners is being used without permission to generate AI content that directly damages and dilutes an artist’s profile, brand, market, and economic livelihood.
4.107In addition to the issue of AI being trained on artists’ work without authorisation or payment, APRA AMCOS warned that the ease and low cost of producing deepfake music could have broader commercial and financial implications for music artists.
Deepfake music can be cheap to create and is royalty-free, which runs the risk of incentivising music streaming platforms to allow deepfake music since no compensation need be paid to writers, performers, publishers, or record labels…
4.108AI deepfakes also pose significant threats for voice actors, with AI able to quickly and easily clone human voices from small samples. The Australian Association of Voice Actors (AAVA) submission stated:
The emergence of AI technology threatens to undermine [Voice Actors] work by enabling the creation of synthetic clones of their voices, without their consent. Disreputable companies are right now stealing current Voice Actor work and feeding it into AI machine learning to breathe life into a clone of the human artist.
4.109Inquiry participants also noted that, unlike copyright protection of artistic works, the legal protections afforded to a person’s likeness, or to the intrinsic character or qualities of their voice or appearance, are less clear and accessible. On this matter, Mr Joseph Mitchell, Assistant Secretary of the Australian Council of Trade Unions, told the committee:
The theft of voice, body and movement is something acutely felt by creative workers. You should not need the power and resources of Scarlett Johansson to sue OpenAI for the theft of her voice. For creative workers in Australia, the ownership of their creative and cultural capital is paramount and must be protected by law.
4.110The AAVA submitted that deepfakes of artists’ voices ‘not only jeopardises…[their] economic interests’ but also ‘raises profound ethical concerns regarding the unauthorised use of their likeness.’ It observed:
A Voice Actor’s sound, their timbre, their tone is to them like a line of code is to Microsoft–it is their property...
Healthcare sector
4.111In addition to the concerns outlined above about the potential impacts of AI on workers and workplaces, the introduction of AI poses particular challenges in high-risk settings, in which the significant opportunities for beneficial uses of AI must be weighed against the risk of very serious harms or consequences.
4.112To illustrate such matters, the following section considers the evidence received by the inquiry in relation to the use of AI in the healthcare sector.
Opportunities for use of AI in the healthcare sector
4.113The submission of the Department of Health noted that, while AI is already used in the healthcare sector for some purposes, the ‘rapid development of commercial AI solutions reveals opportunities for generative AI to solve urgent and emerging challenges in the Australian health system’. It noted that a research report released by the Productivity Commission in May 2024, titled Leveraging digital technology in healthcare (2024 PC healthcare report), found that the healthcare sector ‘has the most potential to benefit from AI adoption’.
4.114Inquiry participants identified a wide range of potential uses of AI in healthcare. Professor Steve Robson, the President of the Australian Medical Association, observed:
There is no doubt the rollout of artificial intelligence as a routine part of medical care has the potential to deliver extraordinary innovation in health care in Australia. It's likely to be transformative for patients, doctors, all health professionals and probably the entire economy.
4.115The Queensland Nurses and Midwives’ Union (QNMU), for example, pointed to ‘significant opportunities in the appropriate application of AI in healthcare’:
…AI-enabled health technologies have the potential to reshape healthcare delivery, improve patient outcomes, and enhance the efficiency of healthcare…There are already examples of AI tools being used to improve healthcare delivery, such as the early detection of Alzheimer’s disease, melanoma and skin lesions and analysing medical images to detect anomalies.
4.116Similarly, the Department of Health submission stated:
The safe adoption of AI has the possibility to solve urgent and emerging challenges in our health system and alleviate the pressure on our healthcare workforce. AI technology could address increased expectations for personalised health services, improved access to care, rising costs and the growing complexity of care for people with chronic conditions.
4.117AI can also be applied to aspects of healthcare administration. For example, in hospitals and medical practices, AI could be used to predict pre-admission rates; allocate hospital beds; schedule appointments; register patients; draft referral letters and care plans; and manage patient billing.
4.118The Department of Health noted that AI also has potential applications on the consumer side of the healthcare sector:
For consumers, AI might assist in navigating an increasingly complex health system, allow for real time language translation into a preferred language and use of health care outside of traditional business hours. Populations with the greatest potential to benefit from AI include people in regional communities, shift workers and those who speak languages other than English who may have difficulty using services.
4.119The 2024 PC healthcare report noted that the rapidly increasing scope of applications for AI could ‘free up the health workforce and prioritise resources to enhance the quality of care’. The report stated that AI has the potential to ‘enhance productivity in almost every aspect of the healthcare sector’, including keeping well, early detection and diagnosis of disease, decision-making, treatment, end of life care, [and] research and training.
4.120Given its potential benefits, some inquiry participants cautioned about being too slow to adopt AI in the healthcare sector. The Australian Centre for Health Engagement, Evidence and Values, for example, noted that Australia has ‘lagged the world’ in the development and implementation of AI in healthcare. The Royal Australian and New Zealand College of Radiologists (RANZCR) submission, noting AI’s potential to address workload pressures in their profession, warned of the potential for missed opportunities:
The safe implementation of AI could prove to be a contributing factor in assisting radiologists and other medical specialists in managing their increasing workloads effectively. Failure to implement AI technology in radiology practices not only poses risks but also represents missed opportunities for enhancing patient outcomes, streamlining healthcare delivery, and providing healthcare workers with the required tools to do a better job.
4.121The Australian Nursing and Midwifery Federation (ANMF) suggested that uptake of AI systems in Australia is ‘inhibited’ by ‘a lack of trust in confusing, often untranslatable, models; data security and privacy concerns; health inequity concerns due to underlying data biases; and poor government regulation’. Indeed, the evidence of most health-sector stakeholders stressed the need to address the significant risks of AI before it is implemented in healthcare settings.
Risks of AI in the healthcare sector
4.122The Department of Health noted that ‘health care is recognised as a high-risk use case for AI’. It observed:
The application of AI in health care presents heightened ethical, legal, safety, security and regulatory risks. The risks for health care are heightened because of the direct effect [sic] on patient safety...
4.123Healthcare-sector stakeholders acknowledged that the potential benefits of AI come with ‘enormous challenges’. The Australian Centre for Health Engagement, Evidence and Values, for example, observed that ‘healthcare and public health, while potentially offering pathways to benefit, are also high-risk and high-stakes areas for any application of AI’.
Privacy and data security
4.124As noted in Chapter 2, AI technologies involve the use of significant amounts of personal data, from the large data sets used to train and operate AI systems, to the personal information that is entered into AI systems and used to generate outputs for various purposes.
4.125The QNMU observed:
…significant risks attach to the use of personal healthcare information and patient data to train and use AI systems for healthcare applications…[including the] privacy of the underlying data upon which AI applications are trained, but also concerns around the use of information entered into AI systems (e.g., medical records).
4.126The ANMF outlined the potential risks of misuse or mishandling of personal information used by or contained in AI systems:
…in the context of health care…[there are] serious concerns regarding the privacy of personal medical data. The sharing of large health data repositories to inform systems, such as machine learning, is often done without the permission or knowledge of patients, and with advanced AI tools that are capable of identifying individuals even in de-identified datasets, concerns and hesitancy to provide information are warranted. Further, personal clinician or patient use of AI tools, particularly free and open-source AI, if not used with appropriate precautions can result in personal health data becoming publicly available.
The risks of AI tools that contain clinical data being hacked and used for malicious purposes also pose serious risks to patients' privacy and well-being. Further, companies' data mining and selling private patient data for profit is of major concern.
4.127The QNMU considered that ‘ensuring that sensitive patient information remains confidential and [is] used responsibly is essential to building trust in AI technologies’, and expressed its support for ‘privacy law reforms to strengthen existing frameworks to address data privacy risks and harms related to AI’.
4.128Similarly, the Consumers Health Forum submitted:
Data safety and privacy are of paramount importance for consumers. AI is bound to collect extensive amounts of data when utilised in clinical settings, and consumers have the right to know where and how this data is stored and used.
…Specific legislation that safeguards data collected and used by AI throughout its entire lifecycle, from data collection to storage to data elimination, needs to be implemented…[and legislation] must clearly state who can access data collected via AI and how data is collected, stored and used.
Automation and accountability
4.129Inquiry participants raised concerns about the potential impacts of automation in the healthcare sector, including in relation to its impact on jobs and career pathways, the traditional relationship of care between health professionals and patients, and the accountability of health professionals for decision-making.
4.130The submission of the ANMF noted that automation in the healthcare sector would ‘create workforce redundancies’ leading to loss of employment and income for workers. It observed:
The wider implementation of AI will require the reskilling of the workforce as jobs become gradually replaced by autonomous AI systems and new jobs are developed. This will necessitate strategic planning in how AI systems are implemented throughout the workforce and investments to support those affected.
4.131While noting the potential for AI and automated systems to increase access to affordable healthcare, the ANMF cautioned that automation in pursuit of cost-saving could in fact increase the inequity of the healthcare system:
Artificial intelligence as a method for increasing equitable access, a common selling point for such systems in the healthcare setting, raises several concerns. While such systems are highly regarded for their affordability and offering opportunities for those who are disadvantaged to have some level of care, these systems should not replace a person's access to human practitioners as a means of cost-saving. Unnecessary gatekeeping of human practitioners through the design of autonomous systems to service the health needs of the disadvantaged should not restrict access to human/preferred care and perpetuate inequities. The adoption of AI technologies in healthcare and beyond should not be such that those with greater means and resources stand to benefit more than those with less.
4.132Others noted the importance of maintaining the human element of healthcare. The QNMU, for example, observing that the nursing and midwifery professions are ‘deeply rooted in values of empathy, compassion, and the ability to form meaningful connections with patients’, commented:
The introduction of AI-driven technologies could lead to a loss of the human element, potentially affecting patient satisfaction and overall wellbeing…AI must be used to complement and support professional roles, without compromising the human connection that remains irreplaceable in healthcare delivery and central to the nursing and midwifery professions.
4.133Similarly, Ms Annie Butler, the Federal Secretary of the Australian Nursing and Midwifery Federation, emphasised the importance of retaining the essential human aspect of the healthcare experience for patients:
…if you're in hospital… you don't often remember the machines and all the things that were done to you. You often remember the hand that touched you…[Our concern is therefore] about making sure we don't allow AI to dehumanise the delivery of care and take away the thing that matters so often most [to] people, particularly to elderly residents in nursing homes...[We should use AI] as a copilot and never allow it to take over so that the clinician remains at the forefront guiding the overall delivery of care and an entire patient journey.
4.134The QNMU also opposed the development or use of AI ‘solely…for the exclusive substitution or replacement of professional roles’:
…AI must never replace human-delivered care and clinical decision making, but rather be used as a tool to contribute to quality improvement and clinical care optimisation.
4.135In addition to such concerns about ‘dehumanising’ healthcare work, the Australian Centre for Health Engagement, Evidence and Values noted that automation creates the risk of deskilling healthcare workers, which could ‘compromise decision making across various stages of clinical management, and potentially undermine patient safety’.
4.136Automation was also seen as raising significant questions of accountability around healthcare decision-making, particularly where decisions lead to mistakes or poor outcomes. The Department of Health referred to the problem of ‘automation bias’—that is, the ‘tendency for humans to over-rely on, and delegate responsibility to, decision support systems’—which creates risks for patients where AI systems make errors, as well as ‘complicating accountability’. On this issue, the QNMU submitted:
It remains unclear who would be responsible for any errors or adverse events caused by the AI systems and how to establish a clear framework for liability and regulation…
4.137Given this, the QNMU called for ‘clinical and regulatory oversight of AI system outputs’ to ensure that AI system recommendations are ‘safe, appropriate, and relevant to the patient’.
Bias, discrimination and error
4.138As noted in Chapter 2, a major and widely recognised risk of AI is the capacity of AI systems to generate results or decisions that are biased or erroneous. The problem of bias can arise from AI design or from bias within the data used to train an AI system, and can lead to discriminatory outcomes where human decisions are based on the outputs of that system.
4.139The ANMF submission observed that, while AI can match or even outperform human practitioners in certain tasks—for example, in diagnosing certain illnesses—the accuracy of such systems is highly dependent on the quality and representativeness of the data on which the AI system is trained. It explained:
If the dataset [used by an AI model] lacks a diversity of presentations across a diverse sample set…the model has the potential to develop biases and inaccuracies among certain groups.
4.140The Department of Health submission noted that AI bias in healthcare settings can lead to worse care and health outcomes for certain groups:
…biased algorithms can lead to exacerbation of inequities, existing social inequalities, and disparities in patient care, especially in underrepresented populations. For example, a machine learning algorithm was found to be less accurate at the detection of melanoma in darker skinned individuals, as it had mainly trained on fair skinned patients. AI may also predict greater likelihood of disease because of gender or race when those are not causal factors.
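To illustrate the mechanism described in the preceding paragraphs, the following minimal sketch (not drawn from any submission; the model, groups and numbers are entirely hypothetical) shows how a simple classifier fitted to training data dominated by one patient group can exhibit materially lower accuracy for an underrepresented group:

# Hypothetical illustration of dataset bias (not from any submission).
# A one-parameter "diagnostic" model is fitted to training data that is
# 95% group A and 5% group B; group B's diagnostic feature is offset,
# so the threshold learned from the majority performs worse for B.
import random
import statistics

random.seed(42)

def make_patients(n, signal_shift):
    """Return n patients as (feature, has_condition) pairs; signal_shift
    models a systematic difference in the feature for one group."""
    data = []
    for _ in range(n):
        has_condition = random.random() < 0.5
        base = 1.0 if has_condition else 0.0
        data.append((base + signal_shift + random.gauss(0, 0.5), has_condition))
    return data

train = make_patients(1900, 0.0) + make_patients(100, 0.6)  # 95% A, 5% B

def accuracy(data, threshold):
    return statistics.mean((feature > threshold) == sick for feature, sick in data)

# "Training": choose the threshold that maximises accuracy on the
# (mostly group A) training set.
threshold = max((t / 100 for t in range(-100, 200)), key=lambda t: accuracy(train, t))

print(f"learned threshold: {threshold:.2f}")
print(f"accuracy, group A: {accuracy(make_patients(5000, 0.0), threshold):.1%}")
print(f"accuracy, group B: {accuracy(make_patients(5000, 0.6), threshold):.1%}")

On this synthetic data, the threshold learned from the majority group misclassifies a large share of the underrepresented group, mirroring the melanoma detection example cited by the Department of Health.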
4.141Similarly, the ANMF noted that AI systems are influenced by bias towards underrepresented populations in health research generally, which ‘must be addressed prior to the wider implementation of AI models based on this data’:
As white people have been the primary reference group in clinical assessments, AI models based on this data will reflect these biases. Historical data, on which AI models are based, are racially biased. For example among women with breast cancer, black women had a lower likelihood of being tested for high-risk mutations compared with white women, leading to an AI algorithm that depends on genetic test results being more likely to mischaracterize the risk of breast cancer for black patients than white patients. Discrimination in medical research also includes dangerous prejudices against gender and sexually diverse people which must be unpacked and disentangled from data sets before they are implemented into AI systems.
4.142Further, the Department of Health noted the potential for AI to produce not just biased but completely erroneous outputs, which can also have significant consequences for patient safety and health outcomes:
In some cases, [AI] outputs can be entirely wrong, commonly referred to as hallucinations. This may pose serious patient safety risks when AI software is used to give clinical decision-making support, for example differential diagnosis or disease screening tools. AI algorithm failure could lead to incorrectly categorising a patient resulting in unnecessary, delayed or ineffective treatment.
4.143Given the significant consequences of AI bias, discrimination and error in healthcare settings, the ANMF and others emphasised the ‘need for guidelines for the development and rigorous testing of AI models before their implementation’, as well as incorporating ‘a human in the loop…during the design and use of AI technologies’.
Transparency
4.144As noted in Chapter 2, the need for transparency in the development and operation of AI systems is ‘a key principle at the highest levels of international governance and for industry when it comes to responsible AI adoption’. The evidence of healthcare sector stakeholders revealed a broad consensus on the need for transparency of AI systems used in healthcare settings to mitigate the significant potential risks to patient safety.
4.145For example, the Department of Health noted the importance of healthcare providers being informed of any limitations in the data used to train or operate AI systems to allow them to understand potential biases of the system and avoid discriminatory and unsafe outcomes for patients.
4.146Stakeholders also stressed the need for healthcare practitioners to be able to understand the algorithms or ‘logic’ by which AI system outputs are produced, especially when used to support decision-making in clinical contexts. The QNMU, for example, considered that AI should be subject to ‘greater testing, transparency and oversight’ to allow practitioners to verify or validate the reliability of the algorithms used to generate outputs:
AI standards must require transparency and accessibility for health practitioners and users to be informed about AI supported clinical decision[s], including the right to access information about how an AI-assisted decision was made, where that decision affects them.
4.147Similarly, the Department of Health stated:
Data standardisation, stewardship and interoperability are important steps in optimising data quality for trusted AI outputs…Achieving transparency in AI systems through responsible disclosure is essential to ensure that users understand what the system is doing and why. Understanding processes and input data helps consumers and healthcare providers to build confidence in the technology. The requirements for transparency in health care are crucial since the decisions directly affect people's lives.
Regulation of AI in the healthcare sector
4.148In light of the risks outlined above, healthcare sector stakeholders identified the need for government to develop a regulatory framework to ensure the safe, ethical and effective use of AI in healthcare settings, with current regulatory arrangements acknowledged as being insufficient. The QNMU, for example, noted:
The deployment and application of AI remains largely unregulated and there is lack of transparency regarding the ethical principles of how AI technologies are developed and no real nationally coordinated governance and regulatory arrangements in place to ensure the ongoing efficacy and ethical safeguards of AI.
4.149The submission of the Department of Health cited studies showing low levels of confidence in Australia and overseas regarding the use of AI in healthcare due to its potential risks, noting that ‘if patients and clinicians do not trust AIs, their successful integration into clinical practice will ultimately fail’.
4.150While generally acknowledging the difficulty of ensuring that AI ‘regulation is future-proofed and can meet unforeseen challenges without restraining innovation’, inquiry participants called for strong and comprehensive regulation of AI in the healthcare sector due to its significant potential risks for human health and care outcomes. The QNMU, for example, stated that AI requires a ‘higher burden of regulatory compliance due to the potential risks and harms to patient health’, calling for a comprehensive regulatory framework to address the range of risks presented by AI:
Regulatory frameworks should ensure appropriate implementation of ethical principles pertaining to AI in healthcare, including elimination of biases, maintaining privacy of patient and practitioner data and the establishment of national governance and accreditation frameworks…[Regulatory] frameworks or guidance must ensure that the rights of patients are protected, and improved health outcomes are achieved. This will require AI to be developed and regulated to ensure specific and appropriate safeguards, such as human intervention during decision-making processes and upholding that health professionals are the ultimate decision makers for clinical care.
4.151The ANMF submitted that the ‘complexity’ of regulating AI highlights the need for national standards for ‘the governance of healthcare-based AI systems to ensure their capability to translate to safe and effective clinical services’, and called for the development of such safeguards ‘in consultation with consumers, industry, peak bodies, and other key stakeholders’.
4.152The importance of consultation with the healthcare industry in the development of the regulatory scheme for AI was a strong theme in the evidence of inquiry participants, with the QNMU, for example, also calling for consultation to ‘better evaluate the opportunities, impacts and regulatory requirements specific to the healthcare environment’.
4.153In this regard, the Department of Health submission pointed to ongoing consultation with the healthcare sector on regulating AI, which commenced with the government’s Safe and responsible AI in Australia discussion paper released in June 2023. The department noted that a number of submissions to the consultation had ‘advocated for a risk-based approach to regulation that ensures the ethical implementation of AI in healthcare…and supports national governance establishment’. Support for a risk-based regulatory approach was also expressed in some submissions to the inquiry, with the Consumers Health Forum (CHF), for example, calling for Australia to adopt a risk-based regulatory framework for AI similar to the approach taken by the European Union’s Artificial Intelligence Act.
Committee view
4.154The evidence received by the inquiry has shown that, while AI is already used in many industries, generative AI systems like ChatGPT-4, which have come to such prominence in recent years, offer myriad new workplace applications and have the potential to drive significant improvements in the productivity of Australian businesses and workers. These productivity gains will be delivered through the use of generative AI to both augment and automate work tasks, thereby freeing up workers to be employed more efficiently and to undertake higher-value tasks.
Job losses and training
4.155However, the committee heard significant concerns about the potential for the use of AI in workplaces to impact work and jobs. AI will primarily be used to support and augment existing jobs by automating or streamlining specific tasks; however, there will also be some job losses where entire roles are able to be fully automated by AI. While the committee acknowledges that there will be high growth in the AI-related jobs that support the development and deployment of AI in workplaces and beyond, AI automation will tend to replace jobs with lower education and training requirements, or jobs that are particularly well suited to being performed by generative AI systems. In this regard, the committee notes with concern that job losses flowing from the use of AI in the workplace are likely to have a disproportionate impact on vulnerable groups, such as women and people in lower socioeconomic groups.
4.156Further, the automation of low-skilled jobs has the potential to disrupt entry pathways into industries through apprenticeships and trainee schemes, thereby undermining the career prospects of young people and the longer-term viability of workforces, as well as contributing to the problem of social inequality more generally.
4.157Given the likelihood of job losses arising from the adoption of AI in workplaces, inquiry participants called strongly for government to ensure that robust policies, programs and supports are in place to provide for the training and reskilling of workers whose jobs are replaced or impacted by AI.
Workplace impacts
4.158The committee is also concerned about evidence regarding the impacts of AI on workers’ rights and working conditions, particularly where AI systems are used for workforce planning, management and surveillance in the workplace. The committee notes that such systems are already being implemented in workplaces, in many cases pioneered by large multinational companies seeking greater profitability by extracting maximum productivity from their employees. The evidence received by the inquiry shows there is considerable risk that these invasive and dehumanising uses of AI in the workplace undermine workplace consultation as well as workers’ rights and conditions more generally.
Workforce consultation
4.159Given the potential impacts of AI on workplaces and workers, many inquiry participants stressed the importance of consultation with workforces in relation to proposed uses of AI in the workplace. Submitters and witnesses noted that consultation with workers reduces the potential for AI to negatively impact workers, increases the prospects of AI systems being safely and successfully introduced to workplaces, and provides for appropriate transparency and consultation where the implementation of AI will lead to job losses.
4.160However, the committee heard that consultation with workforces around AI is presently insufficient, with a lack of a nationally consistent framework for ensuring transparency, consultation and negotiation with workers in relation to the use and management of AI in the workplace. A number of inquiry participants suggested that the regulation of AI in the workplace could be profitably informed by Australia’s approach to regulating OH&S, which is done through nationally consistent laws establishing a risk-based framework for effective representation, consultation and cooperation between workers and employers in relation to OH&S. As the introduction and use of AI in workplaces is fundamentally a matter of risk management and safety, the committee considers that there is significant merit in developing OH&S-style approaches to the regulation of AI in the workplace.
4.161In addition to workplace consultation, inquiry participants pointed to the need for government to put in place robust policies and supports so that workers are consulted prior to any redundancies or restructures, and to provide for the training and reskilling of workers whose jobs are replaced.
Impacts of AI on creative industries
4.162While much of the evidence received by the inquiry addressed the prospective or anticipated impacts of AI on Australian industry, business and workers at a general level, a significant body of evidence was received identifying the impact that AI is already having on the creative industries in Australia.
4.163In this regard, the committee notes that generative AI may be well suited to augmenting and automating certain tasks in the creative industries—particularly tasks auxiliary to the primary creative process—which could deliver some efficiency and productivity gains.
4.164However, creative industry stakeholders almost unanimously expressed grave concerns about the impact of generative AI on jobs, career pathways, the quality of creative outputs and the health of the creative industry labour market generally.
Copyright
4.165A particularly pressing concern for the creative industry is the issue of copyright infringement arising from the use of copyrighted materials to train AI systems. The committee heard that copyright is a fundamental source of income that sustains artists and the creative industries more generally, providing creators of artistic works with the exclusive economic rights to perform, sell and license the use of their works to third parties.
4.166The committee heard that, as copyright can only apply to works created by human authors, at present there is a lack of clarity in Australia’s copyright laws regarding the extent of copyright protection, if any, that is afforded to works created by humans with the assistance or augmentation of AI. Creative industry stakeholders therefore called for copyright law to be clarified as to the extent of copyright protection afforded to works created with assistance from AI.
4.167A more far-reaching and entrenched concern in relation to copyright arises from the use of copyrighted materials to train AI systems without the permission or authorisation of the copyright holder. The committee heard that it is widely accepted that large amounts of copyrighted material have been used without permission to train the foundation models or LLMs on which generative AI systems like ChatGPT-4 are built. While in countries like the US the use of copyrighted materials in this way may not infringe the copyright holders’ rights—although this notion is being challenged by dozens of lawsuits brought by US creative workers—such uses are likely to amount to a breach of copyright under Australia’s more stringent copyright laws.
4.168However, the committee heard that a lack of transparency around precisely what materials are used to train AI models has made it difficult for Australian copyright holders to ascertain if their works have been used to train AI models, and therefore to pursue compensation for any such infringement of their copyright. In addition, this issue is further complicated by the question of whether and to what extent copyright holders should be entitled to financial compensation for the outputs of generative AI systems that are based on copyrighted material.
4.169The views of inquiry participants on how to resolve the issues relating to copyright were mixed. The majority of creative industry stakeholders, particularly those directly representing creative workers and rightsholders, called for increased transparency and disclosure around the materials used to train AI, and for Australia’s existing copyright and royalty schemes to continue to be used and adapted, as necessary, to ensure that copyrighted materials cannot be used to train AI or formulate the output of AI systems without the permission and remuneration of copyright holders.
4.170On the other hand, some stakeholders—predominantly those representing AI developers, and production companies who stand to profit from the automation of creative work—were concerned that Australia’s more stringent copyright protections in relation to the use of copyrighted materials to train AI would stifle the development of AI systems in Australia, and therefore recommended that the copyright law be amended to provide an exemption for such uses.
4.171The committee notes that the impacts of AI on copyright, and particularly the use of copyrighted material to train AI systems, raise complex legal matters relating to the objects and design of Australia’s copyright framework, as well as to the development of AI systems and the AI industry in Australia. The committee recognises that the resolution of these issues involves important policy questions, and notes that consultation with the creative industries is presently occurring through the Copyright and Artificial Intelligence Reference Group (CAIRG) established by the Attorney-General’s Department.
Deepfakes or mimicry of artists
4.172The capacity of AI to produce deepfakes or outputs that closely resemble artists’ copyrighted work or style was another significant issue raised by creative industry stakeholders. The committee heard that AI can generate deepfakes or outputs that convincingly reproduce the style of an artist with relative ease, and without the payment of any compensation or remuneration, thereby undermining artists’ brands and livelihoods.
4.173In the case of certain creative professions, such as voice artists, the ability of AI systems to copy or deepfake a person’s voice not only creates the potential for devastating loss of earnings, but also raises significant moral concerns about the commercial appropriation of intrinsic aspects of a person’s being or likeness.
Impacts of AI on the healthcare sector
4.174The inquiry also received a considerable amount of evidence regarding the potential opportunities and risks of the use of AI in the healthcare sector. The committee heard that the healthcare sector can potentially realise some of the greatest benefits from the use of AI, including in relation to medical research, preventative health, diagnostics, chronic disease management, medical administration and patient access to medical services.
4.175However, inquiry participants recognised that the risks of AI in healthcare are correspondingly high, given the potential for errors or adverse outcomes to affect patient safety and health outcomes. In this regard, the concerns raised by submitters and witnesses about the specific risks of AI in the healthcare sector illustrate how the general risks of AI discussed in Chapter 2 can apply in high-risk settings. Key risks identified by healthcare sector stakeholders included privacy and data security; automation and accountability; bias, discrimination and error; and transparency.
4.176In terms of regulating AI in healthcare settings, there was broad agreement among inquiry participants that the current regulatory arrangements in Australia are insufficient to manage the risks of AI in healthcare, with some noting that low levels of public and stakeholder confidence in the management of AI’s risks represent a barrier to the successful integration of AI into healthcare settings. Accordingly, the committee identified widespread support for strong and comprehensive risk-based regulation of AI, based on appropriate industry consultation and evaluation to ensure that regulatory arrangements are well calibrated to address the respective opportunities and risks of AI in healthcare settings.
Regulating the impacts of AI on industry, business and workers
4.177As set out in Chapter 2, since 2019 the Australian government has implemented a number of policy proposals and initiatives seeking to introduce frameworks and guidance for industry, business and government on the responsible and ethical development and implementation of AI. These include, for example:
the release of the AI Ethics Framework in November 2019, setting out principles to guide businesses and government in the responsible design, development and implementation of AI; and
the establishment of the National Artificial Intelligence Centre in 2021 to support and accelerate Australia’s AI industry, including by helping small and medium businesses to adopt AI by addressing barriers to the implementation of AI technology.
4.178In June 2023, the government commenced the consultation on safe and responsible AI in Australia, designed as the vehicle to inform a comprehensive policy response to the regulation of AI in Australia. Following an interim government response in January 2024, the government released its Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings in September 2024 (the high-risk AI proposals paper), confirming the government’s commitment to a risk-based approach to AI focused on regulating AI in high-risk settings, seeking views on proposed principles for assessing whether AI systems should be classified as high risk, and proposing three options for implementing mandatory guardrails for AI for further public consultation.
4.179The committee notes that the high-risk AI proposals paper broadly recognised the potential risks to industry, business and workers of AI systems in the workplace. The proposals paper noted that using AI systems in employment settings can have ‘substantial impacts on a person’s opportunities…[including] in recruitment and hiring, promotions, transfers, pay and termination’, as well as the risk of inequitable impacts where AI is implemented without sufficient consultation with workers:
Adopting AI in the workplace can also affect workers, who may feel excluded from discussions around how AI is integrated into business contexts. When a poorly designed AI system is adopted at scale, it can cause systemic social inequality and marginalisation of groups including women, people of colour and people with disabilities.
4.180The committee further notes that the government’s proposed principles for assessing whether AI systems should be classified as high risk would allow consideration of a number of the workplace impacts and risks identified by inquiry participants. These include:
the risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations (principle (a)), which could allow assessment, for example, of AI bias leading to discriminatory outcomes in employment recruitment;
the risk of adverse impacts to an individual’s physical or mental health or safety (principle (b)), which could allow assessment, for example, of the impacts of AI-driven worker surveillance on the mental health of workers, and the impacts of AI bias on different cultural groups or physiologies;
the risk of adverse impacts to groups of individuals or collective rights of cultural groups (principle (d)), which could allow assessment, for example, of AI bias in healthcare settings on vulnerable and marginalised groups; and
the risk of adverse impacts to the broader Australian economy, society, environment and rule of law (principle (e)), which could allow assessment, for example, of the impact of deepfakes on particular creative professions.
4.181The high-risk AI proposals paper stated that an assessment of whether a proposed workplace use of AI should be considered high risk would require consideration of:
the type of impact it would have on people;
any potential discriminatory impacts on people from a particular cohort;
any society-wide impacts based on the scale of the deployment; and
the severity and extent to which the risks are likely to occur.
4.182Workplace uses of AI that could be considered high risk under the proposed principles could include, for example, an ‘automated CV scanning service’ that determines an individual’s suitability for a job, an ‘automated rostering system’ that does not take into account an employee’s caring duties, and an ‘automated AI system for evaluation of worker performance’. In contrast, an AI system that automatically pre-fills payroll information based on work attendance data would be unlikely to be classified as high risk by reference to the proposed principles.
4.183The proposition that the use of AI in automating payroll processes is inherently lower risk is concerning, given the severe infringement on workplace rights and economic security that could arise if an employee’s pay is incorrectly processed. This highlights the risks of attempting to pick and choose the elements of AI use in the workplace that should or should not be subject to consultation, transparency and accountability requirements.
4.184The committee believes the use of AI in the workplace presents unique challenges, because the relationship between employers and employees is itself unique, owing to the imbalance of bargaining power and the asymmetry of information available to workers about how AI dictates and influences their working life.
4.185In the high-risk AI proposals paper, the Australian Government asks whether it should adopt a principles-based approach or a more explicit list-based approach to defining uses of AI as high risk. The committee believes that, regardless of the approach chosen, it should be patently clear that any use of AI which may impact people’s rights at work is within the scope of the definition.
4.186That the Australian Government ensure that the final definition of high-risk AI clearly includes the use of AI that impacts on the rights of people at work, regardless of whether a principles-based or list-based approach to the definition is adopted.
4.187The committee also believes Australia’s industrial framework must be updated for the impending AI era. Australia has a longstanding and uncontroversial tripartite approach to OH&S regulation, in which there are positive duties on employers to identify and minimise risk; provisions requiring adequate workforce consultation, cooperation and representation; and compliance enforcement mechanisms including the right to cease work where there is a serious and imminent risk to safety.
4.188The proposition that this existing approach to OH&S regulation could be applied to manage the workplace risks posed by AI was supported by a broad range of stakeholders, including trade unions, local AI vendors, not-for-profit organisations, workplace lawyers and think tanks.
4.189That the Australian Government extend and apply the existing work health and safety legislative framework to the workplace risks posed by the adoption of AI.
4.190There are numerous issues relating to the use of AI at work that require serious regulatory consideration which, due to time constraints, have not been explored in sufficient detail by this committee but are currently being examined by the House Standing Committee on Employment, Education and Training’s Inquiry into the Digital Transformation of Workplaces. These issues include the loss of jobs and the related need for training and reskilling; the impact of algorithmic management of work; and whether new workplace rights—for example, rights protecting employees from excessive workplace surveillance—are required to respond to the changing nature of work.
4.191While the appropriate regulatory response to these issues is outside the scope of this inquiry, the principles outlined in the high-risk AI proposals paper of consultation, transparency and accountability should inform the Australian Government’s regulatory approach.
4.192That the Australian Government ensure that workers, worker organisations, employers and employer organisations are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.
4.193There is no part of the workforce more acutely and urgently at risk of the impacts of unregulated AI disruption than the more than one million people working in the creative industries and related supply chains. If the widespread theft of tens of thousands of Australians’ creative works by big multinational tech companies, without authorisation or remuneration, is not already unlawful, then it should be. This question is complicated by the absolute lack of transparency that LLM developers have shown in Australia and around the world.
4.194The notion put forward by Google, Amazon and Meta—that the theft of Australian content is actually for the greater good because it ensures the representation of Australian culture in AI-generated outputs—is farcical. Big tech companies are not investing billions of dollars in AI as a philanthropic exercise, but because of the enormous commercial potential that it represents. If the platforms are interested in supporting Australian creators, they should begin by fairly licensing their work in line with Australia’s existing copyright framework.
4.195This hypocrisy was best highlighted by a comment made by Google’s Product Director for Responsible AI, Ms Tulsee Doshi, at the committee’s hearing on 16 August 2024, at which she was asked why Google is refusing to be transparent about its training data. Ms Doshi responded: ‘we need to always make sure that we’re balancing the needs and privacy of our users and also recognising the importance of protecting IP and information that contributes to industry competitiveness.’ In the same breath, Google says that it cannot be transparent about the copyrighted data it has taken to train its AI products, because it needs to protect its own IP.
4.196The committee supports the ongoing detailed consultation that is taking place on these issues through the CAIRG, and urges the Government to heed the calls by creative workers, rightsholders and their representative organisations to ensure AI developers are transparent about their exploitation of copyrighted works, and that such works are appropriately licensed and paid for, in line with existing copyright frameworks. The optimal approach to ensuring remuneration for AI-generated commercial outputs that have relied on copyrighted inputs also warrants further investigation.
4.197That the Australian Government continue to consult with creative workers, rightsholders and their representative organisations through the CAIRG on appropriate solutions to the unprecedented theft of their work by multinational tech companies operating within Australia.
4.198That the Australian Government require the developers of AI products to be transparent about the use of copyrighted works in their training datasets, and ensure that the use of such works is appropriately licensed and paid for.
4.199That the Australian Government urgently undertake further consultation with the creative industry to consider an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems.