Chapter 5 - Additional challenges

5.1 Further risks arising from digital transformation must be mitigated to ensure the safe and responsible use of technologies. This chapter considers various harms that can arise from the poor design, implementation, and use of AI and ADM in the workplace. It explores the importance of job security and satisfaction, consultation, transparency and accountability, and effects on marginalised cohorts.

Job satisfaction and displacement

5.2 Digital transformation can lead to job creation and increased worker satisfaction. However, many workers are concerned about how AI and ADM could lead to displacement and decreased job quality, especially through automation.

5.3 There is currently a lack of data to verify or refute these concerns. This does not devalue the real fears that Australian workers have as AI and ADM drive rapid change.[1] It does highlight the importance of strengthening the research base to better understand the impacts of these technologies in work contexts.[2]

Job displacement

5.4 Automation has been a key driving force in the evolution of jobs. The Shop, Distributive and Allied Employees’ Association underscored that automation is not new and that it can transform work for the better:

automation is an old phenomenon, entailing the ability of machines to perform jobs typically performed by humans – such as the advent of the car displacing thousands of jobs in shovelling manure from streets in the age of the horse and carriage.[3]

5.5 Research by the World Economic Forum anticipates that ‘AI could displace up to 75 million jobs and create 133 million new jobs, leading to a net increase of around 58 million new jobs in the global economy’.[4] As job displacement will occur, policy interventions that focus on retraining and upskilling will be important to capitalise on new job opportunities.[5]

5.6 The UNSW-UTS Trustworthy Digital Society noted that displacement will more likely affect disenfranchised and low-skilled workforces, including those from multicultural communities, women, and low socio-economic groups. These workers will be further exposed to a growing income gap and reduced opportunities for social mobility.[6]

5.7 AI and ADM may also affect people seeking entry-level roles. A lack of access to these roles can result in young workers not being exposed to fundamental learning and training. Outsourcing entry-level roles and tasks to AI and ADM technologies could mean ‘an entire generation of professionals may not enter the workforce, and therefore may not have the opportunity to advance to senior positions’.[7]

5.8 Research cited by the Multicultural Professional Network also found that 75 per cent of Australian businesses believe their employees do not possess the skills needed to contribute to the digital economy.[8] In the finance sector, some organisations that have deployed AI and ADM are favouring younger workers who already possess relevant digital skills, and task automation is increasing. This leaves mid-career and senior professionals without the support they need to upskill.[9] This job polarisation is likely to worsen as demand for advanced tech-literacy grows.[10]

Job quality

5.9 Whether AI and ADM improve or reduce job quality depends largely on how an organisation uses them. While these technologies can free up employees from menial work and administrative burdens, loss of autonomy due to AI and ADM ‘will contribute to reducing job quality and satisfaction’.[11] Overreliance on AI and ADM will make workers feel that work has become ‘dehumanised’.[12]

5.10 Mr Bernie Smith, NSW Branch Secretary and Treasurer, Shop, Distributive and Allied Employees’ Association, told the Committee that workers in the retail sector, at the frontline of the AI rollout, have seen a decline in job quality. Many retail workers have to navigate inconsistent policies for AI rollouts.

5.11 Using supermarket ‘smart’ self-checkout registers as an example, Mr Smith shared that retail workers are being told not to engage abusive or disrespectful customers, while simultaneously being told to engage customers whom the system suspects of not scanning items correctly. Caught between these conflicting directives, workers are finding themselves exposed to increasingly volatile interactions with frustrated customers.[13]

5.12 Research cited by the Victorian Trades Hall Council found that reduced job quality from using AI and ADM can harm businesses and workers:

In a meta-analysis of over 20,000 European managers involved in firms using algorithmic management, the loss of personal autonomy and task variety caused by AI was found to have such a negative impact on worker wellbeing that it possibly outweighed net efficiency gains from these technologies.[14]

Consultation and engagement

Worker consultation

5.13 Whether the introduction of AI or ADM constitutes a major change that would trigger consultation obligations under the Fair Work Act remains unsettled.[15] In the first instance, employers decide whether these technologies have a significant effect on workers. Where workers dispute this, they can raise the matter with the Fair Work Commission (FWC), relying on dispute resolution clauses in modern awards.[16] The FWC is the ultimate arbiter of this question and can also determine whether any consultation undertaken was adequate.[17]

5.14 Allowing employers to determine what constitutes a major change reflects managerial prerogative. Under this principle, an organisation has the right to carry out its business as it sees fit, provided workers are not forced to do anything unjust or unreasonable. The Australian Chamber of Commerce and Industry asserted that ‘it is not the place of the employment commission to interfere with the right of an employer to manage its own business’.[18]

5.15 Even where consultation obligations have been triggered, AI systems can inadvertently undermine compliance. Ms Nicole McPherson, National Assistant Secretary, Finance Sector Union, shared an example of an insurer that used AI to make roster changes and was unable to consult with workers effectively. Because it could not understand how the AI arrived at the changes, the insurer was unable to explain to union members what data informed them or why they were necessary.[19]

5.16 Instead of meaningful and effective consultation, workers are being ‘consul-told’.[20] Associate Professor Alysia Blackham told the Committee that many workers are unaware that ADM or surveillance is in use until they have been subjected to it or terminated.[21]

5.17 Mr Steven McGibbony, Section Secretary, Bureau of Meteorology, Community and Public Sector Union, asserted that when consultation does occur, it is often ineffective and tokenistic.[22] The ‘Invisible Bystanders’: How Australian Workers Experience the Uptake of AI and Automation report highlighted that workers are sceptical of and disengaged from consultation. A surveyed worker said:

they don’t consult with the people that use the technology. They make the decision for us. And then eventually we’re expected to just adapt and take on the new technology. Based on previous experience, I’d have to say no, they’re not going to consult. It doesn’t actually happen in practice.[23]

5.18 Consultation with workers on AI and ADM can counteract implementation risks, including where these technologies may:

  • be introduced too fast without appreciating existing culture, settings and training needs
  • be poorly designed or integrated, impeding existing processes and systems
  • disrupt work patterns by requiring time for calibration, servicing, and maintenance
  • require workers to acquire additional knowledge of computer programs
  • cause unnecessary expenditure of capital.[24]
5.19 Outcomes are generally positive where workers are consulted.[25] Consulting workers prior to introducing AI and ADM is crucial to the success of these technologies. The Australian Services Union noted:

Workers are the experts in the industries and occupations in which they work and must be given comprehensive information to make informed decisions and provide valuable insights on how these technologies will impact their workplace.[26]

5.20 Representatives from the Community and Public Sector Union told the Committee that Robodebt demonstrates why consultation with workers is necessary to counteract harm. Robodebt used a ‘very simple automated process’ which incorrectly calculated debt owed by Centrelink payment recipients. Because management at Services Australia considered it ‘not a big deal’, no consultation was undertaken when the automation system was rolled out. Even when workers expressed concerns with the system, these were repeatedly dismissed. As Emma White, Section Secretary, Services Australia, Community and Public Sector Union, asserted:

what [robodebt] told us is that there is still a real veil of secrecy around these types of technologies and the point at which it’s appropriate for staff to be engaged. I guess what we would say is: if there is something to be learned from robodebt it’s that it is never too early to engage and consult with workers.[27]

Imposing versus integrating

5.21 Even if workers have not been consulted, they are often required to shoulder the burden of operating, using, and overseeing these technologies. AI and ADM require a ‘skilled workforce’ to design these systems and ‘ensure appropriate data is inputted and verify its outcomes’.[28] Despite this, Dr Liam Byrne, Project Coordinator, Future Work and Unionism, Australian Council of Trade Unions, explained that many workers are not told about the technologies before they are introduced and do not receive sufficient training to maximise their potential.[29]

5.22 As the UNSW-UTS Trustworthy Digital Society noted, while chatbots and other AI-informed customer triaging technologies may produce immediate productivity gains, they may not benefit a business in the long term. This is especially the case where customer satisfaction and loyalty depend on building rapport and trust. Short-term spikes in productivity will not offset the long-term erosion of customer relationships.[30]

5.23 The Business Council of Australia also asserted that failure to consult workers will ‘naturally’ result in consequences for organisational performance, public perception, and reputation.[31]

5.24 Organisations should ensure ‘change is not simply pursued for the sake of change’.[32] Consultation should occur to help determine whether change is necessary, and to overcome poor design, implementation, and use of AI and ADM in workplaces.

Transparency and accountability

Training and development

Algorithmic bias

5.25 The quality of AI and ADM is largely informed by the quality of the training data. These technologies can produce high quality outputs if training data is regularly updated, and the systems have ‘rigorous processes built into the back end to refine the model over time’.[33]

5.26 The more common alternative is ‘garbage in, garbage out’. This refers to the use of ‘biased, historical or out-of-date data, or data which under- or over-represents certain groups’, leading to biased outputs. This can occur when training large language models or when users input prompts that reinforce bias. In AI and ADM, this is known as algorithmic bias.[34]
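
The mechanics of ‘garbage in, garbage out’ can be shown in a few lines of code. The sketch below is purely illustrative and is not drawn from any system described in evidence; the data, group labels and threshold are invented. It shows how a screening model fitted to skewed historical hiring decisions simply reproduces, and automates, the skew.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired).
# Group B was historically hired at half the rate of group A --
# this is the 'garbage' the model learns from.
history = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 30 + [("B", False)] * 70

# A naive 'model': score each group by its historical hire rate.
hired = Counter(group for group, was_hired in history if was_hired)
total = Counter(group for group, _ in history)
score = {group: hired[group] / total[group] for group in total}

# Screening rule: advance applicants whose group score clears 0.5.
def advance(group: str) -> bool:
    return score[group] >= 0.5

for group in sorted(score):
    print(f"group {group}: historical hire rate {score[group]:.0%}, "
          f"advanced: {advance(group)}")
# group A advances; group B does not. The historical bias is
# replicated and systematised, with no prompt or intent required.
```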

5.27 The Fair Work Act provides that employers cannot take adverse action against workers or prospective workers based on protected attributes. These attributes include race, sex, gender identity, age, marital status, religion, social origin and political opinion. This applies regardless of whether AI and ADM are in use in a workplace.

5.28 Algorithmic bias is already occurring in AI and ADM. In a study shared by the Multicultural Professional Network, 86 per cent of AI leaders acknowledged that their AI systems produced biased outcomes.[35] The absence of a positive duty on employers to eliminate discrimination on all protected grounds, akin to the positive duty under s 47C of the Sex Discrimination Act 1984 (Cth), means workplace bias is reinforced and replicated through these technologies.[36] As Associate Professor Alysia Blackham explained:

That positive equality duty, in section 47C, requires employers to ‘take reasonable and proportionate measures to eliminate, as far as possible’, sex discrimination and sexual harassment. I argue that we need to extend this to all protected grounds to capture all the different types of discrimination and bias that might be affected by these tools. And we need to ensure these duties apply to all those in the complex digital technology supply chain, those who are developing, selling and ultimately using these new technologies.[37]

5.29 OpenAI’s DALL-E 2 and Stable Diffusion illustrated the severity of algorithmic bias. When these systems were used to generate images of different professions, they portrayed low-skilled workers as women and people from multicultural communities, and professionals and experts as white males.[38]

5.30 Research shows that algorithmic bias disproportionately affects First Nations communities, people with disabilities, women, young people, and people from multicultural communities. AI and ADM prejudice these cohorts by ‘replicating, multiplying and systematising existing workplace biases’.[39]

5.31 Under-representation of certain groups in data can be due to bias or may reflect actual under-representation, depending on the industry. For example, despite being responsible for the development of AI and ADM, the STEM workforce is not diverse enough. Just 12 per cent of the STEM workforce identify as having a disability, chronic illness or neurodiversity. Less than 0.5 per cent are First Nations people. This is more pronounced for industries with narrow and discriminatory data collection and research practices.[40]

5.32 In hiring and recruitment, algorithmic bias can affect marginalised cohorts. For instance, a now-defunct Amazon tool used to sort through applicants’ resumes was found to favour male over female applicants for software development and technical roles:

The tool had been trained on resumes (and, presumably, hiring outcomes) from 10 years of job applicants; men are significantly over-represented in the field, and were therefore significantly over-represented in the pool of resumes and successful applicants. The tool ‘learnt’ that male applicants were to be preferred. The tool therefore reportedly penalised applicants with the word ‘women’s’, or the names of all-women’s colleges.[41]

5.33 The APS also experienced algorithmic bias in hiring and recruitment. The Merit Protection Commissioner discovered ‘unintentional issues with AI and automated selection technologies used in large scale recruitment for senior positions at Services Australia’.[42] Several of the resulting promotions were overturned for failing to be meritorious.[43] Despite ongoing issues, a third of Australian employers use these technologies in their recruitment processes.[44]

5.34 Not only is algorithmic bias dangerous for workers, but it also results in significant losses for businesses. Research commissioned by the World Economic Forum found that where organisations implemented and relied on biased AI models, this resulted in:

  • 62 per cent losing revenue
  • 61 per cent losing customers
  • 43 per cent losing employees
  • 35 per cent being subjected to lawsuits and other actions for damages
  • 6 per cent citing reputation and brand damage.[45]
5.35 Until overarching ‘ethical and societal problems’ are addressed, data used to train AI and ADM will never be accurate or representative. Without addressing data bias, algorithmic bias will continue to infect these technologies, compromising decision making and undermining safe and ethical workplaces.[46]

Theft of intellectual property

5.36 The Committee received evidence about how AI systems, especially generative AI technologies, are often trained on stolen data. APRA AMCOS noted that much of the content used to develop generative AI has been ‘scraped, mined, listened to, trained on, or to use another word, copied’. This often occurs unlawfully, without consent, credit or compensation. Use of stolen data to train AI systems is a breach of intellectual property (IP) and copyright law.[47]

5.37 An example shared with the Committee was the IP theft experienced by a group of Australian authors, who alleged up to 18,000 books were pirated in the creation of the Books3 data set.[48] This data set was then used by Meta, Eleuther AI, and Bloomberg to train their generative AI models. Renowned author Richard Flanagan, a victim of this breach, referred to it as ‘the biggest act of copyright theft in history’.[49] A separate incident involved a group of Australian artists whose work was scraped for the training of the LAION-5B AI dataset. Archibald finalist Kim Leutwyler, a victim of this scraping, described it as a ‘violation’, noting artists were not credited or compensated.[50]

5.38 It is increasingly apparent that AI poses a serious threat to the creative sector. With many creative workers relying on freelance, gig, or contract work arrangements and earning an average income of $23,200 annually from their creative works, this sector is already deeply vulnerable.[51] As Mr Matt Byrne, Political Lead, Media, Entertainment & Arts Alliance, posed:

the cost we have to measure up is: do we sell our Australian art, our creative industries, our ability for Australians to tell their stories, to sing songs and to make art that expresses our unique culture, or do we assume that the profit motive of big tech companies takes primacy?[52]

5.39 Mr Thomas Burt, voice actor and member of the Media, Entertainment & Arts Alliance, explained how his voice had been stolen to train AI and the detrimental effects that had on him:

The first time my voice was cloned, it was a violation. The sheer rage of hearing my voice coming out of a character’s lips that I had never recorded, never consented to and was not paid for cannot be overstated. It shattered me. This was something that had been taken from me. My voice and the choice of how I used it was no longer under my control. My mental health took an immediate and severe turn for the worst, and it has been a slow process of rebuilding my life. A previous client took recordings that we had completed together, cancelled our contract under false pretences, fit those recordings into an AI model and, hey presto—synthetic Thomas.[53]

5.40 The theft of Indigenous Cultural Intellectual Property (ICIP) is also concerning. Ms Sophie Parker, Federal Vice President, Entertainment Crew and Sport, Media, Entertainment & Arts Alliance, noted ‘we’re already witnessing the emergence of this new frontier of colonial extraction, with AI generators used to imitate and appropriate Indigenous art’.[54]

5.41 ICIP is already being breached by the replication of Aboriginal and Torres Strait Islanders’ creative works by generative AI. The Media, Entertainment & Arts Alliance pointed to ‘numerous reports of AI-generated “Indigenous art” being commodified and sold online’. This is adding to the competition Aboriginal and Torres Strait Islander creatives face from the ‘fake “Indigenous art” market’.[55] The Australian Writers’ Guild also warned that it is entirely possible for AI to generate fake dreaming stories, for sale and profit by non-First Nations companies, likely without regard to culture, community, or compensation.[56]

5.42 The theft of IP to train AI is malicious and intentional. OpenAI confirmed it would be ‘impossible to train today’s leading AI models without using copyrighted materials’.[57] As the Australian Society of Authors submitted, ‘the multi-million dollar AI companies have taken a free ride on the labour and IP of Australian and international creators’.[58]

Use and outcomes

5.43 AI and ADM can be used well or misused. As Dr Fiona Macdonald, Policy Director, Centre for Future Work, asserted, ‘We don’t have a problem with AI—full stop; it’s the ways in which AI is used’.[59] The use of AI and ADM has serious implications for transparency and accountability.

Algorithmic management

5.44 Many businesses are not exercising due diligence when implementing AI and ADM, and this failure limits their ability to use these technologies safely, responsibly, and transparently. Research conducted by Dr Natalie Sheard found that businesses that used AI-based hiring systems:

  • were unfamiliar with Australia’s voluntary AI Ethics Framework
  • did not understand the risks and limitations of AI-based hiring systems
  • did not undertake comprehensive risk or impact assessments
  • did not establish monitoring and evaluation frameworks
  • failed to meaningfully advise applicants that AI was being used.[60]
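
The risk assessments and monitoring frameworks missing from the third and fourth findings above can take very simple forms. The sketch below applies the ‘four-fifths’ rule of thumb used in some jurisdictions to screen hiring outcomes for adverse impact; the figures, group labels and 0.8 threshold are illustrative assumptions, not requirements drawn from the evidence.

```python
# Hypothetical outcome counts from an AI screening tool, by group.
# All figures are invented for illustration.
outcomes = {
    "group_a": {"advanced": 120, "rejected": 180},
    "group_b": {"advanced": 45, "rejected": 155},
}

def selection_rate(counts: dict) -> float:
    """Share of a group's applicants the tool advanced."""
    return counts["advanced"] / (counts["advanced"] + counts["rejected"])

rates = {group: selection_rate(c) for group, c in outcomes.items()}
best = max(rates.values())

# Four-fifths rule of thumb: flag any group whose selection rate
# falls below 80 per cent of the most-selected group's rate.
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW: possible disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

An audit of this kind does not prove or disprove discrimination, but it gives a deployer a concrete, repeatable check to run before and during use.
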
5.45 Where organisations are not being responsible or transparent, they cannot appropriately seek their workers’ or customers’ consent to the use of these technologies. Worker consent, if sought at all, is being sought on an indefinite basis. This undermines informed consent, because workers cannot anticipate how their data will be used by employers in future.[61]

5.46 Workers subjected to algorithmic management are prevented from meaningfully exercising their right to question and rebut management decisions. These decisions are given a ‘veneer of complete objectivity’,[62] leaving workers with ‘no access to, nor power over’ the outcomes.[63] As the Victorian Trades Hall Council highlighted, ‘the company becomes immune from challenge, with no accountable human figure responsible for justifying or even explaining these changes’.[64] The Australian Council of Trade Unions highlighted how this undermines workers’ right to question and understand decisions which affect their employment:

All decision-making, including decision-making using AI, which affects workers must be open, transparent and capable of both internal and external review and challenge. The right of all workers to an explanation is a critical component of this and should ensure that workers know how, why, and by whom a decision has been made.[65]

5.47 Inhibiting workers’ ability to question management decisions that affect their rights is a procedural fairness issue. This is compounded when AI or ADM are designed without transparency or accountability measures.[66]

Human in the loop

5.48 Algorithmic management also overlooks the importance of the ‘human in the loop’. This refers to a principles-based approach that emphasises human oversight to maintain accountability and to prevent AI and ADM from producing adverse or unfair outcomes.[67]
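
In system-design terms, a human in the loop is a routing rule: automated outputs are actioned directly only when they are low risk and high confidence, and everything else goes to a person. The following Python sketch is a minimal illustration under invented assumptions; the categories, threshold and field names are hypothetical, not drawn from any system in evidence.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A hypothetical output record from an ADM system."""
    subject: str
    outcome: str
    confidence: float  # the model's self-reported confidence, 0..1
    category: str      # e.g. 'rostering', 'termination'

# Categories the organisation designates as high risk always
# require human sign-off, regardless of model confidence.
HIGH_RISK = {"termination", "debt_recovery"}
CONFIDENCE_FLOOR = 0.9

def needs_human_review(decision: Decision) -> bool:
    """Route to a human unless the decision is low risk and
    the model is highly confident."""
    return (decision.category in HIGH_RISK
            or decision.confidence < CONFIDENCE_FLOOR)

queue = [
    Decision("worker-17", "shift moved to Saturday", 0.95, "rostering"),
    Decision("worker-04", "contract not renewed", 0.99, "termination"),
    Decision("worker-22", "overpayment flagged", 0.55, "debt_recovery"),
]

for decision in queue:
    route = "HUMAN REVIEW" if needs_human_review(decision) else "auto-apply"
    print(f"{decision.subject}: {decision.outcome!r} -> {route}")
```

The design choice that matters is that the high-risk list is absolute: no confidence score, however high, lets the system terminate or raise a debt against a person without human sign-off.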

5.49 In recruitment, AI and ADM ‘embeds barriers between individuals’, undermining candidates’ ability to present their ‘humanity’ to recruiters.[68] The QUT Centre for Decent Work explained that these technologies assess candidates’ fit with an organisation based on non-verbal behaviour, facial expressions, and speech patterns and tone. Candidates with incompatible communication styles, including those with impairments that affect speech or non-verbal communication, are being disadvantaged.[69]

5.50 Algorithmic management and the removal of the human in the loop would undermine trust and confidence across industries. In the health care sector, human skills like critical thinking and empathy set nurses apart from AI and ADM.[70] Human intervention also prevents formulaic or rigid application of policies and procedures by these technologies. The Pharmacy Guild of Australia noted that these technologies cannot replicate the interactions and care between practitioners and patients that are critical to good patient outcomes.[71]

5.51 In the APS, Services Australia officers can use human interactions to ensure customers receive the payments and support they need. This exercise of human discretion and empathy is ‘something that AI cannot do’.[72]

5.52 Community and Public Sector Union research found that 80 per cent of APS employees were concerned that AI could erode public trust, which is essential for effective government. As Emma White, Section Secretary, Services Australia, Community and Public Sector Union, expressed:

Government decisions, whether on social security, migration or the NDIS, can and do have significant impact on people’s lives, and we must ensure that that accountability, but also that human involvement in those processes, is maintained.[73]

5.53 The Tech Council of Australia argued that:

fully automated ADM systems should not replace human decision-making in high risk scenarios that are likely to significantly impact individuals’ lives or rights. This human oversight, along with empathy and judgement is crucial for maintaining trust and ensuring the responsible use of technology.[74]

Accountability

5.54 A further issue with reliance on AI and ADM is the ‘black box’ obscuring accountability. This refers to the inexplicability of the algorithms that determine the outputs of AI and ADM.[75] Associate Professor Alysia Blackham noted that, ‘without actually being designed to be explicable, no one can understand how these technologies are making these decisions’.[76]

5.55 The European Union’s Artificial Intelligence Act 2024, which regulates the use of AI, requires accountability through a right to an explanation to counteract the black box. This gives workers the right to understand how data was used in an AI system to generate an output, and how that output was used to come to a particular decision.[77]
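
What a right to an explanation implies for system design can be sketched in code. The Python example below is a minimal illustration, not a statement of the EU Act’s actual technical requirements: every automated output is stored together with the inputs and per-factor contributions that produced it, so a later request for an explanation can be answered. The scoring model, field names and weights are invented.

```python
import json
from datetime import datetime, timezone

def score_with_provenance(inputs: dict, weights: dict) -> dict:
    """Score a decision and record how each input contributed,
    so the output can later be explained and challenged."""
    contributions = {k: inputs[k] * weights.get(k, 0.0) for k in inputs}
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,                # which data was used
        "contributions": contributions,  # how each factor moved the score
        "score": sum(contributions.values()),
    }

# Hypothetical rostering score: both inputs and weights are invented.
record = score_with_provenance(
    inputs={"weekend_availability": 1.0, "avg_transactions_per_hour": 0.8},
    weights={"weekend_availability": 0.7, "avg_transactions_per_hour": 0.3},
)

# The stored record is the raw material for a human-readable answer
# to 'what data was used, and how did it shape the outcome?'.
print(json.dumps(record, indent=2))
```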

5.56 Without knowing how decisions are made by AI or ADM, holding a particular party to account is difficult. Who is responsible: the developers who make the technology, the businesses that implement it, or the workers who use it? Dr Natalie Sheard asserted that ‘obligations need to apply across the whole ecosystem. There needs to be distributed responsibility’.[78] Dr Kobi Leins agreed:

The question was asked before, ‘Who should be responsible?’ I challenge the questions to be non-binary. They shouldn’t be asking, ‘Is it either legislation or companies or the systems?’ It should be all of them.[79]

5.57 It is unclear whether accessorial liability provisions under the Fair Work Act could capture developers for harms caused by their AI or ADM. Nevertheless, many stakeholders agree that developers are not taking accountability where these systems produce poor or incorrect outputs.

5.58 The healthcare sector argued that developers should be accountable when their technologies lead to malpractice.[80] The Australian Medical Association noted it would be ‘unfair’ to hold doctors accountable if they are ‘unable to work out how and why a decision was made’ by an AI system:

as AI takes on more autonomous decision making, it may be argued by some doctors that they should not be responsible for that which they cannot control. Where AI integrates into clinical practice, standards of care must require AI use, and therefore traditional common law measurement of liability may require change.[81]

5.59 Employers’ obligations continue when AI and ADM systems are introduced in workplaces, but the question of who should be held ultimately accountable for the outcomes of these technologies remains open.[82]

Marginalised cohorts

5.60 Stakeholders observed that digital changes in the workplace, including AI and ADM, have more pronounced effects on marginalised cohorts.[83] This poses ‘pressing and complex questions around how we define, create and organise inclusive and sustainable work and workplaces now and in the future’.[84]

5.61 Marginalised cohorts can include:

  • women
  • communities lacking in digital literacy
  • low-skilled workers
  • people in regional and rural areas
  • First Nations people
  • multicultural communities
  • people with disabilities
  • young and older workers.[85]
5.62 Groups of workers who already experience discrimination can be adversely affected by the digital transformation. This includes discrimination on the basis of age, race, gender identity and intersex status, sexual orientation, pregnancy and marital status, disability, class and geography.[86] These cohorts already face particular risks in the workplace, such as bias, discrimination, harassment, social and economic disadvantage, and power imbalances. AI and ADM can deepen and expand these risks.[87] Proper management and regulation are crucial.

5.63 Bias and discrimination faced by certain cohorts could increase as employers continue to adopt more technology. AI and ADM software can reinforce existing bias and discrimination because it reflects the views of its developers, its training data, and its inputs. Algorithmic bias can be caused by the under-representation of these cohorts in the data used to train AI.[88]

5.64 The former Australian Department of Industry, Innovation, and Science ‘made early predictions on the entrenchment of socio-economic disadvantages of automated technologies’.[89] The Australian Salaried Medical Officers' Federation highlighted that the effects of AI and ADM on job security and socioeconomic inequality especially affect marginalised groups, and that the implications are wide-ranging but not yet fully understood.[90]

5.65 Several stakeholders pointed out that the effects of AI and ADM on workers are even more dire when a worker falls into more than one marginalised group. For instance, the eSafety Commissioner found that generative AI can be used to create personalised harassment towards a colleague, and this is ‘more likely to be experienced by women, particularly women with disability, those who identify as LGBTIQ+, and young women’.[91]

5.66 People in marginalised groups are often in jobs that are more susceptible to automation, such as administration and retail. They are also more likely to work in places without enterprise bargaining agreements and can feel less safe speaking up about workplace matters, including technology-related ones.[92]

Gender equality

5.67 AI and ADM can enhance gender equality. As mentioned in Chapter 2, AI-powered chatbots can detect and report sexual harassment, discrimination and bullying. However, technology can also be misused in the workplace, with adverse impacts on gender equality. The same technology can have mixed uses and impacts: monitoring tools, for instance, may be used to sexually harass workers or, conversely, to identify this unacceptable behaviour.

5.68 History shows that the government needs to assist at-risk workers in affected industries.[93] This is partly due to systemic and structural barriers faced by women, including limited access to upskilling and retraining opportunities and to job mobility. The Victorian Trades Hall Council stated:

If AI and automation continues to be rolled out without government intervention, it is likely there will be significant unequal outcome[s] for women, exacerbating pre-existing gender segregation of the workforce. For example, one of the professions most likely to be impacted by AI, clerical and administrative work, has the highest concentration of women workers in Australia, where women make up 72% of the roles. The third highest concentration of women workers is sales workers, being 59%, and professionals, being 55%, both of which are roles likely to be impacted by AI and automation.[94]

5.69 The underrepresentation of women in STEM affects how technologies like AI are developed and deployed, and how those technologies impact women’s lives. This lack of representation is linked to embedded gender norms that can start in childhood. In Australia, women account for only 36 per cent of university STEM course enrolments and 16 per cent of vocational STEM course enrolments; they represent only 27 per cent of the STEM workforce and are significantly underrepresented in high-tech roles. This limits women’s influence in such roles and over the development of technologies like AI, including their capacity to help mitigate algorithmic bias.[95]

5.70 The Committee heard that another way to amplify women’s voices is to consult women on gender considerations in the development, testing and implementation of workplace technology, for example, tools to monitor employee performance. This can help bring to light issues that often affect women, such as menstrual health, menopause, domestic violence and sexual harassment.[96]

5.71 As mentioned, existing gender discrimination in human resources decisions like hiring, wage setting, promotion, and dismissal is expected to increase with algorithmic bias. Basic Rights Queensland highlighted an example of women with disabilities being disadvantaged in a recruitment process due to algorithmic bias: women with certain disabilities lost points in video interview assessments that used AI analysis of speech and facial expressions.[97]

5.72 Some possible responses to the gender impacts of ADM and AI include:

  • regulation and governance to address bias and discrimination involved in the development and deployment of technology in the workplace
  • more diverse datasets and gender inclusive teams developing and deploying emerging technologies like AI
  • more gender diversity in STEM course enrolments and labour market
  • better integration of gender and intersectionality considerations in policy development.[98]

Closing the Gap

5.73 The digital transformation of workplaces has concerning effects on Aboriginal and Torres Strait Islander people. These issues link to Closing the Gap targets that relate to:

  • youth engagement in employment or education
  • strong economic participation and development of people and their communities
  • high levels of social and emotional wellbeing
  • access to information and services enabling participation in informed decision-making regarding their own lives.[99]
5.74 A key concern is the need to protect ICIP, which can be eroded by AI. This is discussed earlier in this chapter.

5.75 Another risk is that Aboriginal and Torres Strait Islander people are particularly susceptible to automation. Where AI and ADM systems drive automation, job displacement will become more pronounced.[100] The Australian Salaried Medical Officers' Federation cited research from the Australian Bureau of Statistics, which found that Aboriginal and Torres Strait Islander people are more likely to be employed in jobs that are highly susceptible to automation, with men at higher risk because they are more likely to be engaged under casual contracts in occupations more prone to automation.[101]

5.76 Aboriginal and Torres Strait Islander people face ‘embedded discrimination in the labour market’ and are often under-employed or in insecure jobs. Existing discrimination and inequalities in workforce participation will only deepen without proper regulation of, and safeguards for, the rollout of these technologies.[102]

5.77 Emerging technologies and data-driven governance can harm Aboriginal and Torres Strait Islander people when government decisions are based on data that does not represent their interests and are made without consultation. This impairs self-determination and can entrench discriminatory and unequal policies.[103] AI reflects the biases and worldview of the people who create and use it, and these biases can exacerbate longstanding intergenerational impacts.[104]

5.78 The Framework for Governance of Indigenous Data (2024) is a welcome development. It aims to give Aboriginal and Torres Strait Islander people more agency over how their data is governed within the APS, so that government data better represents their priorities and goals.[105]

5.79 One way to help close the gap is through digital inclusion. The digital divide substantially affects Aboriginal and Torres Strait Islander people and is particularly pronounced in rural and remote areas. This divide is expected to widen as workplaces become more reliant on emerging technologies.[106]

5.80 The lack of digital literacy and limited access to services faced by many Aboriginal and Torres Strait Islander women contribute to an increased likelihood of experiencing technology-enabled abuse. Research by the Australian National University revealed that these women are disproportionately exposed to this type of abuse.[107]

Committee comment

5.81 Although automation has driven the transformation of work since the industrial revolution, the complexity and speed of change of AI and ADM systems is unprecedented. The poor design, implementation, and use of AI and ADM in the workplace can pose a wide range of risks. Government, industry, and employers must collaborate to ensure technological advancements and any job displacement are appropriately balanced.

5.82 The Committee acknowledges that task automation is increasing in many sectors, such as finance. Job displacement is expected to become more likely, in particular for disenfranchised and low-skilled workforces. There are also concerns that AI and ADM systems may affect people seeking entry-level roles. It is imperative that young workers access fundamental education and training about the use of this technology.

5.83 As more tasks traditionally performed by humans become increasingly automated, impact assessments and retraining and upskilling programs will become essential to capitalise on new job opportunities. Stakeholders cautioned that marginalised cohorts are at risk of being left behind. Clear policies must be developed to ensure these cohorts are part of the digitally transformed workforce. Employers could also make substantial investments in worker entitlements, like leave and flexible arrangements, to offer benefits to workers whose jobs are more technologically affected.

5.84 The Committee recognises that whether AI and ADM systems improve or reduce job quality largely depends on how an organisation uses them. While these technologies can free up employees from menial work and administrative burdens, harms arising from AI and ADM systems, such as loss of autonomy, need to be countered. The Committee recommends that industry and employers regularly evaluate these impacts to ensure they do not outweigh the benefits to business and workers.

5.85 Although consultation obligations exist under the Fair Work Act, it remains unclear whether the introduction of AI or ADM systems constitutes a major change that would require employers to consult workers. The Committee heard that when consultation does occur, it is rarely meaningful and effective.

5.86 The Committee recommends stronger protections for workers’ voices. Workers are experts in their respective workplaces, and they can help inform the safe and responsible deployment of technology. It is the Committee’s view that consultation should occur to help determine whether change is necessary, and to overcome the inadequate design, implementation, and use of AI and ADM systems in workplaces. While it is acknowledged that businesses bear significant costs in consulting workers, this does not remove the need for effective consultation. Moreover, consultation has direct benefits for employers. The Fair Work Commission and the Fair Work Ombudsman could assist workplaces to engage in improved consultation.

5.87 The Committee heard that AI systems, especially generative AI technologies, are often trained on stolen data, which breaches IP and copyright laws. Australia’s world-leading creative sector is at the forefront of artistic expression and cultural preservation, yet artists’ IP rights have been spurned by AI developers. One example is the theft of a voice actor’s ‘likeness’ or voice where it is not already protected through contract law. Alarmingly, ICIP is being breached by the replication of Aboriginal and Torres Strait Islanders’ creative works by generative AI.

5.88 Regulatory protections are required, as technology developers and contracting parties are exploiting workers. The Committee commends the Attorney-General’s Department’s (AGD) work on copyright and AI in consultation with the Copyright and AI Reference Group. Reform must advance rapidly to curtail the sweeping theft of IP and ICIP, and the proliferation of deepfakes.

5.89 Algorithmic bias is a prevalent challenge in AI and ADM systems. The quality of AI and ADM systems is largely informed by the quality of the training data and inputs, and algorithmic bias in data can lead to biased outputs. This is even more likely to affect already disadvantaged cohorts.

5.90 Developers and deployers of AI and ADM systems can help mitigate algorithmic bias and compromised decision-making by ensuring that datasets, and the teams developing these systems, are diverse and representative. It is important to place requirements like this on technology developers so that AI training data and models do not perpetuate social biases.

5.91 The Committee supports the extension of positive equality duties to all protected attributes under the Fair Work Act. The positive equality duty under section 47C of the Sex Discrimination Act 1984 (Cth) presents a model for obliging employers to mitigate bias and discrimination. This duty requires organisations and businesses to eliminate, as far as possible, certain unlawful behaviour from occurring in the workplace or in connection with work. In the context of employers using AI and ADM systems, this would require them to take active measures to mitigate bias and discrimination based on protected attributes.

5.92 The use of AI and ADM systems has serious implications for transparency and accountability. AI and ADM systems are often designed without transparency or accountability measures, eroding workers’ ability to question employer decisions that may have a significant impact on their work and lives. Where organisations are neither responsible nor transparent, they cannot appropriately seek their workers’ or customers’ consent to the use of these technologies.

5.93 Algorithmic management overlooks the importance of the ‘human in the loop’ to maintain accountability and prevent AI and ADM systems from producing adverse outcomes. Increasing transparency and human oversight is fundamental for workers to understand and challenge technology outputs, and to improve accountability for decision making. This can also increase public trust in emerging technologies.

5.94 The report of the Royal Commission into the Robodebt Scheme illustrated that the success of ADM systems relies on awareness of their use and on human oversight. Without this, accountability remains uncertain, outcomes remain opaque, and those subjected to unfair automated decisions are deprived of procedural fairness.

5.95 The Committee supports a right to an explanation in Australian legislation. The European Union’s Artificial Intelligence Act 2024 requires accountability through a right to an explanation to counteract the black box. This gives workers the right to understand how data was used in an AI system to generate an output, and how that output was used to come to a particular decision.

5.96 Stakeholders observed that digital changes in the workplace, including AI and ADM systems, have more pronounced effects on marginalised cohorts, such as women, Aboriginal and Torres Strait Islander people, and young people. These cohorts already face certain risks in the workplace, and AI and ADM systems can deepen and expand them. The effects of AI and ADM systems are even more dire when a worker is part of more than one marginalised group.

5.97 The Committee recommends strategies to mitigate impacts on marginalised cohorts. Developing targeted training programs for marginalised cohorts to improve their digital access, literacy, and upskilling regarding ADM and AI technologies is important. Experienced and trusted not-for-profit and community-based organisations may be best placed as delivery partners for these cohorts.

Recommendation 14

5.98 The Committee recommends that the Australian Government encourage employers and peak employer bodies to address job displacement as a result of automation in their sectors by:

  • prioritising job creation and augmentation through comprehensive training and retraining programs
  • implementing impact assessments on the introduction of technologies to evaluate the effects on workers.

Recommendation 15

5.99 The Committee recommends that the Australian Government amend the Fair Work Act 2009 (Cth) to improve transparency, accountability and procedural fairness regarding the use of AI and ADM systems in the workplace by:

  • requiring all organisations that use AI or ADM systems to disclose this to existing and prospective workers and customers
  • developing a legislative right to an explanation, based on the European model
  • banning the use of technologies like AI and ADM systems for final decision-making without any human oversight, especially for human resourcing decisions.

Recommendation 16

5.100 The Committee recommends that the Australian Government consult with industry experts to ensure that technologies, especially those used in high-risk sectors, are:

  • developed with traceability and oversight functions to enhance accuracy and security, and enable human intervention
  • integrated across sectors in a safe and responsible way
  • accompanied by technical documentation to support the understanding of system function and outputs.

Recommendation 17

5.101 The Committee recommends that the Australian Government:

  • require developers to demonstrate that AI systems have been developed using lawfully obtained data that does not breach Australian intellectual property or copyright laws
  • create explicit protections for Indigenous Cultural Intellectual Property
  • support the continued creation of Australian-owned intellectual property for technology platforms by establishing an AI Fund for businesses engaged in that work.

Recommendation 18

5.102 The Committee recommends that the Australian Government strengthen obligations on employers to consult workers on major workplace changes before, during, and after the introduction of new technology. This should include consideration of whether the introduction of a technology is fit for purpose and does not unduly disadvantage workers.

Recommendation 19

5.103 The Committee recommends that the Australian Government consider the resourcing and capacity of the Fair Work Commission and Fair Work Ombudsman to assist employers and employees in undertaking enhanced consultation.

Recommendation 20

5.104 The Committee recommends that the Australian Government require developers and deployers (employers) to implement measures against algorithmic bias, including:

  • using more diverse training datasets and managing rules around user prompts
  • conducting regular mandatory independent audits to assess the extent and impacts of algorithmic bias.

Recommendation 21

5.105 The Committee recommends that the Australian Government:

  • extend positive equality duties to all protected attributes under the Fair Work Act 2009 (Cth), modelled on the positive duty in the Sex Discrimination Act 1984 (Cth)
  • develop targeted training programs for marginalised cohorts to improve their digital access, literacy, and upskilling regarding ADM and AI technologies
  • improve diversity across STEM industries to better reflect Australian society, and ensure equity and accuracy in the development and deployment of technology and mitigation of impacts such as gendered and cultural bias.

Ms Lisa Chesters MP
Chair
29 January 2025

Footnotes

[1]SDA, Submission 48, p. 6.

[2]ACCI, Submission 48, p. 2.

[3]SDA, Submission 48, p. 6.

[4]UNSW Sydney, Submission 18, p. 5.

[5]Accenture, Submission 21, p. 7.

[6]UNSW-UTS Trustworthy Digital Society, Submission 12, p. 11.

[7]ACTU, Submission 60, p. 9.

[8]Multicultural Professional Network, Submission 2, p. 2.

[9]Mr Robert Potter, National Secretary, ASU, Committee Hansard, 9 August 2024, p. 2.

[10]ACTU, Submission 60, p. 9.

[11]QNMU, Submission 11, p. 7.

[12]UNSW-UTS Trustworthy Digital Society, Submission 12, p. 7.

[13]Mr Bernie Smith, NSW Branch Secretary and Treasurer, SDA, Committee Hansard, 2 September 2024, p. 20.

[14]VTHC, Submission 26, pp. 15–16.

[15]ACCI, Submission 54, p. 13.

[16]Ms Jessica Tinsley, General Counsel and Director, ACCI, Committee Hansard, 9 July 2024, p. 13.

[17]Mr Stuart Kerr, Assistant Secretary and Senior Executive Lawyer, Bargaining and Coverage Branch, DEWR, Committee Hansard, 20 September 2024, p. 2.

[18]ACCI, Submission 54, pp. 16–17.

[19]Ms Nicole McPherson, National Assistant Secretary, FSU, Committee Hansard, 19 July 2024, p. 9.

[20]Ms Danae Bosler, Assistant Secretary, VTHC, Committee Hansard, 2 September 2024, p. 2.

[21]Mr Steven McGibbony, Section Secretary, Bureau of Meteorology, Community and Public Sector Union (CPSU), Committee Hansard, 2 September 2024, p. 8.

[22]Mr Steven McGibbony, Section Secretary, Bureau of Meteorology, CPSU, Committee Hansard, 2 September 2024, p. 8.

[23]UTS Human Technology Institute, Submission 27, p. 25.

[24]AMA, Submission 14, p. 3.

[25]Mr Robert Potter, National Secretary, ASU, Committee Hansard, 9 August 2024, p. 2.

[26]ASU, Submission 5, p. 1.

[27]Emma White, Section Secretary, Services Australia, CPSU, Committee Hansard, 2 September 2024, p. 9.

[28]Dr Alex Veen, Dr Tom Barratt, Dr Caleb Goods, Dr Brett Smith, Submission 9, p. 3.

[29]Dr Liam Byrne, Project Coordinator, Future Work and Unionism, ACTU, Committee Hansard, 9 August 2024, p. 14.

[30]UNSW-UTS Trustworthy Digital Society, Submission 12, p. 5.

[31]BCA, Submission 52, p. 11.

[32]AMA, Submission 14, p. 2.

[33]Mr Ben Leech, Branch Manager, Digital Prioritisation and Planning, DTA, Committee Hansard, 12 July 2024, p. 5.

[34]Associate Professor Alysia Blackham, Submission 8, p. 4.

[35]Multicultural Professional Network, Submission 2, p. 2.

[36]Associate Professor Alysia Blackham, Submission 8, pp. 4–5.

[37]Associate Professor Alysia Blackham, private capacity, Committee Hansard, 26 July 2024, pp. 1–2.

[38]UNSW-UTS Trustworthy Digital Society, Submission 12, p. 10.

[39]DEWR, Submission 3, p. 20.

[40]Science and Technology Australia, Submission 46, p. 4.

[41]Associate Professor Alysia Blackham, Submission 8, pp. 1–2.

[42]CPSU, Submission 50, p. 4.

[43]Emma White, Section Secretary, Services Australia, CPSU, Committee Hansard, 2 September 2024, p. 7.

[44]QUT Centre for Decent Work and Industry, Submission 17, p. 2.

[45]UNSW-UTS Trustworthy Digital Society, Submission 12, p. 10.

[46]Dr Mehdi Rajaeian, Peter Faber Business School, Australian Catholic University, Professor Jo Ingold, Peter Faber Business School, Australian Catholic University, Professor Mohamed Abdelrazek, Deakin University, Submission 28, p. 2.

[47]APRA AMCOS, Submission 22, p. 2.

[48]ACTU, Submission 60, p. 21.

[49]Australian Writers’ Guild and Australian Writers’ Guild Authorship Collecting Society, Submission 42, p. 6.

[50]ACTU, Submission 60, p. 22.

[51]Mr Matt Byrne, Political Lead, Media, Entertainment & Arts Alliance (MEAA), Committee Hansard, 1 November 2024, pp. 1–2.

[52]Mr Matt Byrne, Political Lead, MEAA, Committee Hansard, 1 November 2024, pp. 2, 5.

[53]Mr Thomas Burt, Member and Voice Actor, MEAA, Committee Hansard, 1 November 2024, pp. 2–3.

[54]Ms Sophie Parker, Federal Vice President, Entertainment Crew and Sport, MEAA, Committee Hansard, 1 November 2024, p. 2.

[55]MEAA, Submission 66, p. 8.

[56]Australian Writers’ Guild and Australian Writers’ Guild Authorship Collecting Society, Submission 42, p. 7.

[57]Australian Writers’ Guild and Australian Writers’ Guild Authorship Collecting Society, Submission 42, p. 6.

[58]Australian Society of Authors, Submission 51, p. 2.

[59]Dr Fiona Macdonald, Policy Director, Centre for Future Work, Committee Hansard, 2 September 2024, p. 29.

[60]Dr Natalie Sheard, Submission 55, Attachment 1, p. 12.

[61]Professor Leah Ruppanner, Founding Director, WFHRI, Committee Hansard, 26 July 2024, p. 8.

[62]Mr Oscar Kaspi-Cruchett, Researcher, Politics, VTHC, Committee Hansard, 2 September 2024, p. 5.

[63]DEWR, Submission 3, p. 18.

[64]VTHC, Submission 26, p. 9.

[65]ACTU, Submission 60, p. 10.

[66]ASU, Submission 5, p. 2.

[67]Ms Lauren Mills, Branch Manager, Artificial Intelligence, DTA, Committee Hansard, 12 July 2024, p. 3.

[68]ASMOF, Submission 47, p. 6.

[69]QUT Centre for Decent Work and Industry, Submission 17, pp. 8–9.

[70]Mr Simon Mitchell, QNMU, Councillor, Queensland Branch, ANMF, Committee Hansard, 2 September 2024, p. 40.

[71]Pharmacy Guild of Australia, Submission 39, p. 4.

[72]CPSU, Submission 50, p. 2.

[73]Emma White, Section Secretary, Services Australia, CPSU, Committee Hansard, 2 September 2024, p. 7.

[74]Tech Council of Australia, Submission 62, p. 6.

[75]QNMU, Submission 11, p. 9.

[76]Associate Professor Alysia Blackham, private capacity, Committee Hansard, 26 July 2024, p. 2.

[77]Ms Nicole McPherson, National Assistant Secretary, FSU, Committee Hansard, 19 July 2024, p. 10.

[78]Dr Natalie Sheard, private capacity, Committee Hansard, 2 September 2024, p. 35.

[79]Dr Kobi Leins, private capacity, Committee Hansard, 2 September 2024, p. 33.

[80]AMA, Submission 14, p. 5.

[81]AMA, Submission 14, p. 5.

[82]ACCI, Submission 54, p. 19.

[83]Ms Danae Bosler, Assistant Secretary, VTHC, Committee Hansard, 2 September 2024, p. 2.

[84]QUT Centre for Decent Work and Industry, Submission 17, p. 1.

[85]ASMOF, Submission 47, pp. 8–9; VTHC, Submission 26, p. 6.

[86]UNSW-UTS Trustworthy Digital Society, Submission 12, p. 3.

[87]DEWR, Submission 3, p. 20.

[88]Associate Professor Alysia Blackham, Submission 8, p. 1.

[89]ASMOF, Submission 47, p. 9.

[90]ASMOF, Submission 47, pp. 7–8.

[91]eSafety Commissioner, Submission 36, p. 3.

[92]Ms Danae Bosler, Assistant Secretary, VTHC, Committee Hansard, 2 September 2024, p. 5.

[93]ACTU, Submission 60, p. 30.

[94]VTHC, Submission 26, p. 6.

[95]WFHRI, Submission 37, pp. 7–8.

[96]Basic Rights Queensland, Submission 63, p. 16.

[97]Basic Rights Queensland, Submission 63, p. 19.

[98]Basic Rights Queensland, Submission 63, pp. 3–4, 7–8.

[99]UNSW-UTS Trustworthy Digital Society, Submission 12, pp. 2–3, 10–11.

[100]ASMOF, Submission 47, p. 9.

[101]ASMOF, Submission 47, p. 9.

[102]ACTU, Submission 60, p. 30; ASMOF, Submission 47, p. 9.

[103]UNSW-UTS Trustworthy Digital Society, Submission 12, p. 10.

[104]Basic Rights Queensland, Submission 63, p. 19.

[105]National Indigenous Australians Agency, Framework for Governance of Indigenous Data, 30 May 2024, accessed 10 December 2024.

[106]Basic Rights Queensland, Submission 63, p. 19.

[107]Basic Rights Queensland, Submission 63, p. 20.