Chapter 3 - Developing the AI industry in Australia


3.1 Chapter 2 considered the general risks of Artificial Intelligence (AI) and potential regulatory and policy approaches to managing those risks.

3.2 This chapter considers the evidence received by the inquiry regarding the potential opportunities and benefits of AI technology and proposals for developing the AI industry in Australia.

Snapshot of the Australian AI industry

AI companies in Australia

3.3 The Australia’s artificial intelligence ecosystem: Catalysing an AI industry report (the AI ecosystem report) is produced by the National Artificial Intelligence Centre (NAIC). The purpose of the report is to provide a snapshot of the current state of Australia’s AI ecosystem to inform future strategy and policy decisions about its growth and development.[1]

3.4 The AI ecosystem report defines Australia’s AI ecosystem as comprising startups; small-to-large-sized companies; universities; education, training and research institutes; industry organisations; and public sector agencies engaged in developing and applying AI technologies. In 2023, the AI ecosystem report identified ‘544 companies in Australia whose main business activity is developing and selling AI products and services’. These companies sit within a ‘broader ecosystem’ of more than 336,000 technology companies in the professional, scientific and technical services industry and close to 25,000 firms in the information, media and telecommunications industries.[2]

3.5 The AI ecosystem report found that AI companies in Australia provide a range of products and services, with data services, finished solutions and consulting being the most common. Australian AI companies provide their services to a wide range of customer industry groups, with the most common being software; data and analytics; science and engineering; and professional services.[3]

3.6 About 85 per cent of Australian AI companies have 50 or fewer employees. However, there are 13 Australian publicly listed AI companies trading on the Australian Securities Exchange and internationally (UK and USA). The combined market capitalisation of these companies was approximately $73 billion in 2023.[4]

AI skills and jobs in Australia

3.7 The AI ecosystem report indicates that demand for AI-related skills is growing in Australia and internationally. For example, in 2022, 2.1 per cent of all job postings in the USA were AI-related. Australia ranked third in AI job postings, with 1.2 per cent of all job postings in 2022 being AI-related. Demand for AI jobs has been growing faster in Australia relative to international comparisons, with the share of AI-related job postings increasing by more than seven times between 2014 and 2022, roughly double the growth of peer nations.[5]

Developing Australia’s AI industry

3.8 The essential challenge for Australia is to develop its AI industry through policies that maximise the widespread opportunities afforded by AI technologies, while ensuring appropriate protections are in place. Such policies could comprise a mix of, for example, direct regulation, such as AI-specific laws, regulations and codes, and other support measures, such as targeted funding, infrastructure development and capability and skills building.

Use of AI in Australia to date

3.9 The committee’s inquiry occurs in a context of heightened public interest in AI technology, much of which followed the release of ChatGPT in November 2022. However, despite the relatively recent interest in more widely accessible generative AI models, AI has been employed in recent years across various aspects of Australian society and the economy to deliver significant benefits. This includes, for example:

using AI to consolidate large amounts of patient data to support diagnosis and early detection of health conditions;

using AI tools to evaluate and optimise engineering designs to improve building safety;

using AI to expedite travel at airports through the use of SmartGates;

using AI to support personalised learning and teaching in remote areas; and

using AI to enable improvements and cost savings in the provision of legal services.[6]

3.10 The submission from the Department of Home Affairs (DHA) noted that ‘products and services that utilise AI are already broadly in use across the Australian economy’, and summarised these as generally falling into three categories:

automated decision making (ADM): machine-based systems that make predictions, recommendations or decisions based on a given set of human-defined objectives;

content curation or recommendations: systems that prioritise content or make personalised content suggestions to users of online services; and

generative AI: sophisticated machine learning algorithms used to predict an output, such as images or words, based on a prompt.[7]

3.11 The Digital Transformation Agency (DTA) observed that past uses of AI by government have typically been ‘in the form of narrow applications that perform specific tasks within defined domains’, with the technical expertise and costs of deploying and operating AI forming a ‘natural barrier to adoption for many agencies.’[8] More recently, however, there has been rapid development driven by generative and general-purpose AI:

Generative AI has changed this and brought AI to the masses with large language models such as ChatGPT being widely accessible, easy to use and interact with, while also delivering outputs that often require no technical expertise.[9]

3.12 The DHA submission noted that the development of AI products and services in Australia is ‘rapidly accelerating’ and that ‘significant investment by industry and governments is driving unprecedented advancements in AI’.[10]

3.13 However, the Australian government’s June 2023 Safe and responsible AI in Australia discussion paper (the 2023 AI discussion paper) observed that, relative to other countries, ‘adoption rates of AI across Australia remain relatively low’, due in part to low levels of public trust and confidence of Australians in AI technologies and systems.[11]

Transformative nature of AI

3.14 The submissions provided to the inquiry reflect the understanding, both in Australia and globally, that the current state of AI technology brings with it profound opportunities across a broad and rapidly increasing range of uses.

3.15 The Tech Council of Australia, for example, observed that ‘AI is one of the most transformative technologies of our time, offering significant economic, social, environmental, and strategic opportunities’.[12] This view was also expressed in the 2023 AI discussion paper:

…AI presents significant opportunities for Australia to improve economic and social outcomes. AI has been identified as a critical technology in Australia’s national interest. In its recent 5-year Productivity Inquiry report, the Productivity Commission (PC) identified AI as one of the transformative digital technologies that can help to drive productivity growth in Australia including through the support it provides for the production and adoption of robotics. McKinsey has estimated that automation, including AI, could cumulatively add between $1.1 trillion and $4 trillion to the Australian economy by the early 2030s.[13]

3.16 The opportunities arising from potential applications of AI apply broadly across virtually all areas of government, society and the economy. The DHA submission, for example, noted the potential breadth of application for AI in the delivery of government services:

As [AI] technologies mature, government will increase automation and machine learning into core business roles such as risk, strategy, resource allocation and delivery. AI is an attractive and scalable solution to improving our service offerings to the public with increased sophistication.[14]

3.17 In relation to the economy and society, the Tech Council of Australia submitted that AI offered ‘major benefits across Australia’s key industry and service sectors’, including education, manufacturing, agriculture, finance, professional services, telecommunications and public transport. It also noted the use of AI to solve ‘pressing societal and global challenges’ through ‘research and scientific discovery at the frontier, as well [as] supporting ground-breaking applications’ in fields such as astronomy, biology and ecology.[15]

3.18 The Department of Industry, Science and Resources (DISR) identified the following general areas of opportunity for increased use of AI and AI innovation:

Automation: automation of simple or repetitive tasks like writing emails or summarising documents, thereby freeing up human workers to focus on tasks that require human input like care, leadership and creative problem-solving. For example:

healthcare: use of AI to assist with analysis of medical imagery, diagnosis, predicting patient outcomes and administrative tasks;

education: use of AI classroom assistants for individualised, step-by-step math and reading instruction; and

manufacturing: use of AI to assist with quality control, such as inspecting for and identifying the root cause of defects; supply chain optimisation; demand forecasting and balancing inventory levels.

Skills shortages: addressing skills shortages in areas of need. For example:

agriculture: use AI to monitor crops; conduct quality checks; assist in planting and harvesting; and improve yields;

recruitment: use AI to identify broader talent pools and reduce bias in selection processes;

science, technology, engineering and maths: use generative AI, which has been shown to perform well at such tasks, to alleviate skills shortages; and

aged care: use of AI-connected motion sensors to detect deviations in the regular movements of people living with dementia and alert carers.

AI-driven innovation: facilitating innovation across industries by using AI to develop new products, services and business models. For example:

pharmaceuticals: use of AI to screen existing medicines for new applications and predict which molecules can treat different illnesses;

medicine: use of AI to circumvent medical professionals' assumptions or knowledge limitations and reduce disparity in health outcomes for women and culturally and linguistically diverse groups; and

environment: use of AI to empower traditional owners to respond to environmental challenges and support caring for country.

Decision-making: use of AI to improve the quality of decisions. For example:

renewable energy and emissions reduction: use of AI for forecasting renewable energy production; and analysing weather patterns, historical data and grid conditions to predict the output of renewable energy sources like solar or wind power; and

mining: use of AI to provide real-time information to support quicker and more precise decisions about hazardous conditions, thereby lowering the chance of injuries.

User experience and accessibility: creating tailored or personalised approaches to improve user experience or accessibility through use of AI-powered systems like chatbots. For example:

physical/cognitive impairment: use of AI for converting text to speech and vice versa; summarising; and translating or interpreting different languages, accents or speech disorders;

personal security: 24/7 chatbots for domestic violence survivors that leave no digital trail and require no download or registration, to provide secure and fully encrypted services (including secure evidence collection); and

hospitality industry: use of AI virtual assistants accessed through a hotel website or app to assist guests with things like check-ins, ordering room service and planning holiday activities.[16]

Developing Australia’s sovereign AI capability

3.19 Many submissions identified the development of Australia’s ‘sovereign capability’ in respect of AI technology as a key strategy for realising its many opportunities.[17]

3.20 The Commonwealth Scientific and Industrial Research Organisation (CSIRO) described sovereign capability as follows:

Sovereign capability relates to the ability of [a] country’s governments, industry and society to use technologies productively and effectively to meet their needs in the absence of any inputs sourced from other countries.

With respect to AI, sovereign capability includes the availability (and scalability) of high performance computing infrastructure, secure data storage, skilled technical workers, datasets for training/adapting AI models and the ability to manage/regulate AI model use in Australia.[18]

3.21 The CSIRO noted that sovereign capability also includes the notion of data security:

AI systems vacuum-up vast quantities of data. They need this data to work, but if the data is sensitive, private or confidential that can cause concerns for citizens or governments about whether the data is secure. If the models are built (trained) and operated within Australia then data sovereignty is improved. It is important to address the need for data sovereignty and ensuring…Australian data is stored and processed within national borders when necessary.[19]

3.22 Sovereign capability can therefore refer broadly to the development and regulation of all the elements comprising an Australian AI industry, across both the government and private sectors.

3.23 The Australian Chamber of Commerce and Industry submitted that fostering the Australian AI industry involves more than managing innovation through rules and regulations, arguing that government needs to provide the right conditions and incentives to support new operating models and business ideas; realise efficiency gains; and increase productivity and competitiveness. This could involve measures like regulatory sandboxes; industry codes of conduct; voluntary technical standards; ethical principles; skills investment; and research and development.[20]

3.24 The ANU Integrated Artificial Intelligence Network submitted that fostering the development of the AI industry in Australia also requires measures to promote professionalism in the supply and use of AI services more generally. This could include measures relating to, for example, industry codes of conduct; recognition of qualifications; non-discriminatory accreditation and registration; insurance, including public liability, product liability and professional indemnity schemes; and educational support and services relevant to all phases of AI development.[21]

3.25 A number of submissions commented on measures required to develop a responsible and ethical AI industry. The University of Sydney Faculty of Engineering and Information Technology, for example, identified transparency and trustworthiness of AI systems; preserving personal privacy and cybersecurity; and ethical guidelines for the adoption of AI as key aims for supporting the AI industry in Australia.[22] In relation to small and medium enterprises (SMEs), these aims could be pursued by:

identification of risks and harms arising from the adoption of AI by SMEs;

development of solutions to mitigate risks and harms particular to SMEs;

development of AI ethics assurance approaches focused on the use and development of AI by SMEs;

development of guidelines for the adoption of generative AI by SMEs;

providing training for and consultation with SMEs in relation to the responsible use of AI; and

engaging SMEs in the co-design of AI solutions with various stakeholders.[23]

Benefits of developing sovereign capability

3.26 Many submissions to the inquiry noted the importance of Australia developing its sovereign capability in relation to AI technology.

3.27 Science & Technology Australia, for example, noted that Australia would be letting ‘key opportunities slip past’ if it were to rely on ‘foreign off-the-shelf capabilities rather than invest in our own AI industry and...workforce’. A ‘deep sovereign capability’ would allow Australia to develop AI technologies that are ‘relevant, appropriate and safe’ for Australian society, avoid dependency on other countries, and realise sustained economic benefits.[24]

3.28 The Pawsey Supercomputing Research Centre and the Curtin University Institute for Data Science noted that, while many countries are making significant investments in national AI programs, the level of investment in Australia is relatively low. They called for Australia to invest significantly in the establishment of sovereign AI capability to improve the pace of AI development and its adoption into practical applications, develop its AI workforce and maintain its international competitiveness in research and industry.[25]

3.29 The Deloitte submission observed that sovereign AI capabilities would allow Australia to meet forecast workload requirements for government and defence through domestic rather than international solutions; and provide an opportunity for ‘the public and private sectors to work together to provide infrastructure for these needs’.[26]

3.30 The Kingston AI Group identified the development of an Australian AI industry as an opportunity to reinforce Australia’s ‘existing brand of being a provider of premium, responsible and safe products’. It considered that fostering a responsible AI industry can be supported by:

targeting funding to deliver quality training to current and prospective Australian AI professionals;

defining what responsible AI is so that Australian companies can understand and develop it;

prioritising funding to development of responsible AI generally rather than targeting particular sectors;

supporting whistleblowing and penalising irresponsible AI; and

government becoming an early adopter of responsible AI.[27]

3.31 In relation to parliament and the public sector, the Community and Public Sector Union (CPSU) identified four areas of reform to foster the responsible development and use of AI: privacy reform and ethical sourcing of data; inclusive AI development; public education and awareness; and establishing a federal Parliamentary Science and Technology Office. Elaborating on its proposal for a Parliamentary Science and Technology Office, the CPSU stated:

As the use of AI increases, we must continue to build and improve relevant expertise within both Parliament and the public sector.

Decision-makers will need greater assistance navigating the issues associated with emerging technologies such as AI. The overload of information available and the need for greater science literacy means that science advisory mechanisms that can help our parliamentarians grapple with these challenges are more important than ever.

A dedicated technology assessment office, known as the Parliamentary Science and Technology Office should be established, providing a similar non-partisan and independent function to the Parliamentary Library and Parliamentary Budget Office to inform parliamentary debate.[28]

Development of an Australian foundation AI model

3.32 In addition to its broader meaning, ‘sovereign capability’, and similar phrases such as ‘sovereign AI’ and ‘sovereign LLM’, can refer more specifically to control or ownership of a specific AI model or system. Professor Nicholas Davis noted that sovereign capability in this sense can refer to an AI model or system that possesses one or more of the following three characteristics:

an AI system controlled by a government for its own secure use…(for example, a large language model (LLM) that can be used exclusively and with complete independence by a state);

an AI system developed or trained using national datasets, values and languages for specific, nationally-oriented purposes (for example, an LLM optimised for sovereign purposes or that reflects national characteristics); and

an AI model trained ‘from scratch’ by or for a specific country that gives government knowledge and oversight of the training data and process used for developing the model and surrounding system components (for example, an LLM that is entirely the product of Australian efforts).[29]

3.33 At a general level, Professor Nicholas Davis observed that the development of sovereign AI models in Australia would require government to establish ‘clear laws and a correspondingly effective regulatory environment’.[30]

3.34 The 2024 CSIRO report titled Artificial intelligence foundation models: Industry enablement, productivity growth, policy levers and sovereign capability considerations for Australia (the 2024 AI report) identified a number of ‘generic policy levers’ available to government to address ‘sovereign capability issues relating to foundation models’. These include:

building high performance computing (HPC) infrastructure and democratising and prioritising access;

negotiating bilateral or multi-lateral international collaborations to share AI expertise and resources;

increasing workforce skills via training, education and improved access to national and global talent pools;

developing resources, policies, regulations and testing systems to ensure that the development of AI foundation models is productive, safe and ethical;

identifying, validating and making available datasets that could be used to train AI foundation models; and

investing in building, adapting (fine-tuning) and applying AI foundation models to improve government functions.[31]

3.35 The concept of developing sovereign capability in this regard thus refers specifically to the development of foundation AI models, which encompass ‘[LLMs]…and other AI systems that rely on foundation model architectures’.[32] A number of participants in the inquiry offered views on whether Australia should pursue the development of its own foundation AI model.

Benefits of developing a foundation AI model

3.36 Foundation AI models in themselves offer benefits through the wide range of applications in which they can be employed for significant productivity gains. In the case of LLMs, for example, this could include ‘analysing documents and other data and holding conversations in natural language’.[33] However, Professor Nicholas Davis observed:

Intensifying the safe and responsible use of foundation models – and deepening Australian understanding of how they work – may be beneficial for reasons other than their application in text analysis and generation. The mathematical and data science techniques that power LLMs can also be turned to myriad other positive uses, such as drug discovery.[34]

3.37 Professor Davis said that, of a number of potential benefits that could flow from developing a sovereign AI capability, two were of particular importance given the ‘current state of AI awareness, adoption, and governance in Australia, and the fast-moving nature of foundation model technologies’.[35] These were:

the potential for spurring specialised job creation and innovation; and

advancing research and knowledge in the technical, strategic and organisational aspects of foundation model development and use.[36]

3.38 Professor Davis advised that a locally developed or deployed foundation AI model could also provide benefits relating to:

system and data privacy and security: the model could be designed with increased security and system and data privacy;

data and system control: the model could be operated with greater levels of control over, for example, data inputs, computational and power usage and environmental impacts;

reduced dependence on foreign providers: the model would be less exposed to the risk of external influence or disruption by foreign providers;

nationally determined purposes: the model could be designed or adapted to address specific national challenges and needs;

Australian law, culture and language: the model would be better able to understand and generate text in Australia's most used languages, and could incorporate colloquial context and Australian legal norms; and

trust and adoption of AI systems: an Australian foundation model could increase public trust and encourage the adoption of AI more generally.[37]

3.39 Similarly, Professor Anton van den Hengel argued that the key benefits of developing an Australian LLM would be much broader than the utility of the model itself:

The benefit from building a large language model is not only that we get a large language model. Having a large language model is a subsidiary benefit of the process of building a large language model. The reason to build a large language model is that it's one step on the path towards building the infrastructure for an AI industry.[38]

3.40 The CSIRO also noted that the development of sovereign AI models in Australia, using Australian data, would assist in the establishment of Australia’s AI industry:

Sovereign capability is an important consideration in the adoption of AI technology and the competitive interests of Australian workers and firms as next generation frontier/foundation AI models take hold. There is scope to build more of these models in Australia with Australian data. This will improve data security and make the models work better in Australian context. Understanding the gaps in our sovereign capability and identifying ways to fill these gaps can also help Australia’s own AI industry emerge.[39]

3.41 The Accenture submission emphasised the importance of data quality to AI model performance, calling for the Australian government to work with industry to explore ways of ‘sharing of sovereign data to help incubate Australian-based model development and give Australian businesses a competitive advantage’.[40]

3.42 Beyond the direct benefits to the Australian AI industry of developing a sovereign foundation AI model, Professor van den Hengel observed that the development of an Australian LLM could have important revenue implications. Noting that profits from foreign-owned technologies used in Australia, such as Google, Facebook and Uber, flow to overseas companies, he observed there is a very real risk that products developed using foreign-owned AI models could disrupt important Australian industries, such as mining and agriculture, and see further offshoring of profits.[41]

3.43 Per Capita’s Centre of the Public Square noted that foreign ownership of dominant companies like Google and Facebook had seen a transition of ‘public communications infrastructure from publicly managed platforms to privately owned digital products’ and warned of a similar market dominance developing in AI around a small number of foreign companies, notably Google, Microsoft and Meta. It therefore called for the development of sovereign AI capability in Australia through the building of ‘critical infrastructure and systems’ for AI in Australia.[42]

3.44 The CSIRO also cautioned about the potential risks to Australia of market domination of the AI industry by foreign companies:

Sovereign capability is important when it comes to AI because the vast bulk of AI tools Australians use are sourced from offshore. This is a great opportunity when these tools are readily available, work appropriately and are reasonably priced. However, this can rapidly change. There are concerns about market concentration and monopoly power associated with generative AI models (e.g. large language models). One firm can dominate the marketplace making it difficult for Australian firms to compete.[43]

3.45 Similarly, Xaana.AI observed that ‘monopolistic control over AI technologies’ by global tech giants ‘stifles the growth and competitiveness of local AI firms…[and] undermines the development of a sovereign AI industry in Australia’. It noted:

Heavy reliance on international AI technologies might impede Australia's economic independence…[whereas] developing local AI solutions would not only retain economic benefits within the country but also foster innovation that is uniquely tailored to Australia's requirements, offering long-term advantages.[44]

3.46 Professor Nicholas Davis considered that the availability of an Australian foundation model could increase public trust in the use of AI and spur its safe and responsible adoption.[45] The Insurance Council of Australia also commented on the potential for the development of sovereign Australian AI to foster public trust in the use and adoption of AI:

…improving Australia’s sovereign AI capabilities will also help to build Australian values into AI systems deployed here. As the Kingston AI Group states “AI systems are designed and based on programs reflective of the attitudes and value systems of their creators. This may lead to a mismatch where the importing country does not align with the value systems of the receiving country.”

Building in our values will help foster responsibility in AI supply chains and foster trust and transparency between deployers of AI including business and government and the consumers who interact with AI.[46]

3.47 La Trobe University observed that Australia’s renewable energy resources would offer a natural advantage for operating HPC facilities with ‘accelerated data and computing power for building sovereign AI capabilities’:

The natural resource advantage for Australia in building AI factories is the almost unlimited supply of renewable energy (solar, wind, hydrogen) for powering these factories, which could also incentivise leading frontier AI companies to invest in Australian infrastructure.[47]

Cost of developing a sovereign AI model in Australia

3.48 In relation to developing sovereign AI models in Australia, the CSIRO 2024 AI report noted that sovereign capability could be achieved through different approaches:

Sovereign capability doesn’t necessarily mean the whole AI model is developed and managed from within Australia; it’s about our ability to manage the way the model is used and our ability to maintain socio-economic activity if the model is made too costly, inaccessible or abruptly changed in some way. Sometimes this might mean building and operating the model from within Australia; other times it may mean having the skills, resources and optionality to manage models built offshore.[48]

3.49 The infrastructure requirements and cost of developing a sovereign AI model in Australia would differ depending on whether it was done by independently developing an LLM or foundation AI model, or by fine-tuning a model developed elsewhere.[49]

3.50 For example, the evidence received by the inquiry suggests that the cost of independently developing a sovereign LLM in Australia would be significant. In particular, the training of AI models requires HPC facilities employing multiple servers, each housing multiple graphics processing units (GPUs). The committee heard that a single Nvidia H100, a leading industry benchmark GPU, costs approximately AUD$40,000–50,000, while the Nvidia DGX server, which houses eight H100 GPUs, costs approximately AUD$450,000.[50]

3.51 Overseas initiatives to establish collaborative AI research centres that provide access to HPC facilities, such as the US National Artificial Intelligence Research Resource (NAIRR) and EU AI factories, are therefore estimated to involve investments in the multi-billion-dollar range to deliver ‘sizeable GPU systems that will house thousands to tens of thousands of GPUs, associated data centres, data storage and other ancillary equipment’.[51]

3.52 In the private sphere, foundation models such as OpenAI’s ChatGPT have demanded large investments. While not publicly disclosed, it has been estimated that a cluster comprising 10,000 GPUs was used to train recent OpenAI ChatGPT models. If such a model were developed using a HPC cluster of 1,250 Nvidia DGX servers, the hardware cost would be approximately $563 million, likely approaching $1 billion with the addition of ancillary costs.[52]

3.53 In contrast, the committee heard that an alternative approach of fine-tuning a pre-trained foundation model from commercial, open-source or international partners on sovereign datasets could be achieved with 50 Nvidia DGX servers for a cost of approximately $100 million including ancillary costs.[53]
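The arithmetic behind these two estimates can be sketched directly from the per-server figure cited in evidence. The following is an illustrative calculation only; the `cluster_cost` helper and its `ancillary_uplift` parameter are constructed for illustration and are not drawn from the evidence itself:

```python
# Illustrative cost arithmetic based on the figures cited in evidence:
# approximately AUD$450,000 per Nvidia DGX server (eight H100 GPUs each).

DGX_SERVER_COST_AUD = 450_000

def cluster_cost(servers: int, ancillary_uplift: float = 0.0) -> int:
    """Estimate the cost in AUD of an HPC training cluster, with an
    optional fractional uplift for ancillary costs (data centres,
    storage, power and other equipment)."""
    return round(servers * DGX_SERVER_COST_AUD * (1 + ancillary_uplift))

# Training from scratch: ~1,250 DGX servers (~10,000 GPUs).
# Hardware alone is ~$563 million; the evidence suggests ancillary
# costs push the total towards $1 billion.
from_scratch_hw = cluster_cost(1_250)  # 562,500,000

# Fine-tuning a pre-trained model: ~50 DGX servers. Hardware alone is
# ~$22.5 million; the evidence put the total, including ancillary
# costs, at approximately $100 million.
fine_tune_hw = cluster_cost(50)  # 22,500,000
```

On these figures, the hardware outlay for independent development is roughly 25 times that of the fine-tuning approach, which is why fine-tuning is presented in the evidence as the substantially cheaper path to a sovereign capability.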

Risks associated with developing a sovereign AI model in Australia

3.54However, some inquiry participants observed that the building of sovereign AI capability, and particularly a sovereign AI model, in Australia is not without risk.

3.55The Australian Research Data Commons, for example, noted that, while the policy rationale for developing sovereign capability is sound, a sub-par model that was not globally competitive would not be able to deliver the benefits sought, and that Australia’s relative lack of infrastructure and expertise means a wholly Australian-developed option is out of the question.[54]

3.56Monash University submitted that, with the most powerful LLMs residing in large technology companies, no large Australian LLM initiative exists that could compete with those companies; and called for Australia to advocate for the development of a global open LLM through an international consortium of public science and governments.[55]

3.57Professor van den Hengel submitted that, while government investment would be required to drive development of an Australian foundation model, that investment should be directed towards incentivising and supporting the private sector, rather than government, to develop new models based on commercial imperatives.[56]

3.58Similarly, Professor Nicholas Davis suggested that government should refrain from a ‘moonshot-style effort’ to create an Australian AI foundation model and should instead support a range of policies to promote the development and use of foundation models, and AI technology more generally, where these promise significant benefits for Australia. Professor Davis suggested that independent analysis of the technical, economic and legal aspects of developing AI would be required to ensure it is developed in a lawful, safe and responsible way, and recommended that the government:

…commission detailed, independent expert analysis on the desirability and feasibility of developing and extending Australia’s artificial intelligence capabilities, including but not limited to foundation models.[57]

3.59In a January 2024 report on the roles of government in the development of the Australian AI industry, titled Making the most of the AI opportunity, Research paper 1: AI uptake, productivity, and the role of government (AI research paper), the Productivity Commission suggested that the private sector's development and uptake of AI in Australia will largely happen ‘without direct government assistance’. However, the commission considered that, in addition to ensuring effective regulation of AI, government should ‘lead by example in AI procurement and use, and by increasing the safe sharing of data that is needed for AI applications’ and focus on policy interventions that effectively support safe AI uptake and productivity.[58]

3.60Submitters noted there are areas where Australia has an existing competitive advantage that could be highly complementary to AI. For example, the Tech Council of Australia noted Australia’s tech sector is already globally competitive in enterprise software, fintech, quantum computing and biotech.[59]

3.61The Productivity Commission’s AI research paper stated that Australia’s role in the global AI value chain needs to be driven by comparative advantage, and that this advantage does not lie in activities like the development of general purpose AI models, which require vast quantities of data and investment. Rather, Australia’s focus should be on developing smaller, more bespoke AI models, or on the applications and downstream value-adds built on top of externally developed general purpose AI models.[60]

Committee view

Developing sovereign AI capacity in Australia

3.62AI technologies have been in widespread use throughout the Australian economy for many years. However, with the advent of publicly accessible generative AI that can produce natural language, text, image and audio outputs, the range of potential uses of AI has burgeoned dramatically, and inquiry participants broadly recognised that AI is a transformative technology that offers vast potential for productivity gains and economic growth across all facets of Australian business and industry.

3.63As set out in Chapter 2, separate to the question of regulating to address the risks of AI technology, the Australian government has implemented a range of policy initiatives in recent years designed to promote the responsible and ethical use of AI by government, business and industry. However, many inquiry participants identified the need for government to develop and implement policies to foster the development and growth of the Australian AI industry—referred to generally as Australia’s ‘sovereign AI capability’.

3.64There were mixed views about whether the Australian government’s support for sovereign AI capability should extend to the development of sovereign AI models, including a model developed directly by the Australian government. The committee heard that the development and training of foundation AI models on Australian datasets would produce AI systems that better reflect the Australian population and culture and thereby reduce the potential for bias and discrimination in applications based on those systems. Some submitters also noted the development of sovereign AI foundation models in Australia would support the development of the AI industry more generally, and particularly the high-performance computing facilities and skilled workforce that are required as the foundation of sovereign AI capability.

3.65However, inquiry participants noted that, while the costs of developing a sovereign AI model depend on the approach taken, the development of foundation AI models carries significant risks due to their extremely high cost, the difficulty of competing with the global technology firms at the forefront of LLM development, and the real potential for commercial failure. In this regard, the evidence received by the inquiry suggested that government should seek to incentivise and support private sector development of sovereign AI in Australia generally, rather than pursue direct government involvement in developing a sovereign LLM.

3.66In this regard, the committee notes that there is a range of policy options that government could consider, including the establishment of high-performance computing facilities and making significant Australian datasets available to support sovereign AI development. However, with respect to the latter, it is critical that the Australian Government maintain confidence in the privacy and integrity of Government-held datasets. More generally, the committee considers that the government has a key role to play through measures aimed at increasing AI workforce training and skills.

3.67The committee agrees with the conclusions of the Productivity Commission, among other submitters to this inquiry, that the Australian Government should focus its resources on existing areas of comparative advantage, including bespoke AI models and downstream applications of general purpose models, rather than the development of an Australian Government-backed general-purpose model.

Recommendation 4

3.68That the Australian Government continue to increase the financial and non-financial support it provides for sovereign AI capability in Australia, focusing on Australia’s existing areas of comparative advantage and unique First Nations perspectives.

Footnotes

[1]National Artificial Intelligence Centre (NAIC), Australia’s artificial intelligence ecosystem: Catalysing an AI industry, December 2023, p. 6.

[2]NAIC, Australia’s artificial intelligence ecosystem: Catalysing an AI industry, December 2023, p. 6.

[3]NAIC, Australia’s artificial intelligence ecosystem: Catalysing an AI industry, December 2023, pp 2 and 14.

[4]NAIC, Australia’s artificial intelligence ecosystem: Catalysing an AI industry, December 2023, p. 23.

[5]NAIC, Australia’s artificial intelligence ecosystem: Catalysing an AI industry, December 2023, p. 24.

[6]Department of Industry, Science and Resources (DISR), Safe and responsible AI in Australia, Discussion Paper, June 2023, pp 3 and 7.

[7]Department of Home Affairs (DHA), Submission 55, p. 2.

[8]Digital Transformation Agency (DTA), Submission 53, p. 2.

[9]DTA, Submission 53, p. 3.

[10]DHA, Submission 55, p. 3.

[11]DISR, Safe and responsible AI in Australia, Discussion Paper, June 2023, p. 3.

[12]Tech Council of Australia (TCA), Submission 37, p. 2.

[13]DISR, Safe and responsible AI in Australia, Discussion Paper, June 2023, p. 7.

[14]DHA, Submission 55, p. 3.

[15]TCA, Submission 37, pp 2-3.

[16]DISR, Submission 160, pp 4-6.

[17]See, for example: Science & Technology Australia, Submission 161, p. 1; Kingston AI Group, Submission 122, p. 2; La Trobe University, Submission 186, p. 1; Deloitte, Submission 106, p. 3; Accenture, Submission 97, p. 6; Australian Alliance for AI in Healthcare, Submission 234, p. 5; and Insurance Council of Australia, Submission 46, p. 2.

[18]Commonwealth Scientific and Industrial Research Organisation (CSIRO), Submission 63, p. 5.

[19]CSIRO, Submission 63, p. 6.

[20]Australian Chamber of Commerce and Industry, Submission 37, pp 9-10.

[21]ANU Integrated Artificial Intelligence Network, Submission 66, p. 5.

[22]University of Sydney Faculty of Engineering and Information Technology, Submission 62, p. 5.

[23]University of Sydney Faculty of Engineering and Information Technology, Submission 62, p. 5.

[24]See, for example: Science & Technology Australia, Submission 161, pp 1-2.

[25]Pawsey Super Computing Research Centre and the Curtin University Institute for Data Science, Submission 130, pp 1-2.

[26]Deloitte, Submission 106, p. 5.

[27]Kingston AI Group, Submission 122, p. 4.

[28]Community and Public Sector Union, Submission 219, p. 6.

[29]Professor Nicholas Davis, Human Technology Institute (HTI), Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), p. 1.

[30]Professor Nicholas Davis, Co-Director, HTI Industry; and Professor, Emerging Technology, University of Technology Sydney, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024).

[31]CSIRO, Artificial Intelligence foundation models: Industry enablement, productivity growth, policy levers and sovereign capability considerations for Australia, March 2024, pp 22-25.

[32]Professor Nicholas Davis, HTI, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), p. 1. As noted in Chapter 1, a ‘foundation model’ is a general purpose AI model that forms the basis of more specialised AI systems. Examples of such systems include GPT-4, AI21 Jurassic and BLOOM.

[33]Professor Nicholas Davis, HTI, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), p. 3.

[34]Professor Nicholas Davis, HTI, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), pp 3-4.

[35]Professor Nicholas Davis, HTI, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), p. 5.

[36]Professor Nicholas Davis, HTI, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), p. 4.

[37]Professor Nicholas Davis, HTI, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), p. 4.

[38]Professor Anton van den Hengel, Director, Centre for Augmented Reasoning, Australian Institute for Machine Learning, University of Adelaide; and Chair, Kingston AI Group, Committee Hansard, 16 July 2024, p. 3.

[39]CSIRO, Submission 63, p. 1.

[40]Accenture, Submission 97, p. 3.

[41]Professor Anton van den Hengel, Director, Centre for Augmented Reasoning, Australian Institute for Machine Learning, University of Adelaide; and Chair, Kingston AI Group, Committee Hansard, 16 July 2024, p. 2.

[42]Mr Jordan Guiao, Director of Responsible Technology, Centre of the Public Square, Per Capita, Committee Hansard, 21 May 2024, p. 13.

[43]CSIRO, Submission 63, p. 1.

[44]Xaana.Ai, Submission 167, p. 2.

[45]ANU Integrated Artificial Intelligence Network, Submission 66, p. 5.

[46]The Insurance Council of Australia, Submission 46, p. 2.

[47]La Trobe University, Submission 186, p. 1.

[48]CSIRO, Artificial Intelligence foundation models: Industry enablement, productivity growth, policy levers and sovereign capability considerations for Australia, March 2024, p. 19.

[49]CSIRO, Answers to questions on notice (10), 20 May 2024 (received 17 June 2024), p. [4].

[50]CSIRO, Answers to questions on notice (10), 20 May 2024 (received 17 June 2024), p. [4].

[51]CSIRO, Answers to questions on notice (10), 20 May 2024 (received 17 June 2024), p. [4].

[52]CSIRO, Answers to questions on notice (10), 20 May 2024 (received 17 June 2024), p. [4].

[53]CSIRO, Answers to questions on notice (10), 20 May 2024 (received 17 June 2024), p. [4].

[54]Australian Research Data Commons, Submission 217, p. 3.

[55]Monash University, Submission 180, p. 2.

[56]Professor Anton van den Hengel, Director, Centre for Augmented Reasoning, Australian Institute for Machine Learning, University of Adelaide; and Chair, Kingston AI Group, Committee Hansard, 16 July 2024, p. 3.

[57]Professor Nicholas Davis, Co-Director, HTI Industry; and Professor, Emerging Technology, University of Technology Sydney, Answers to questions on notice (3), 21 May 2024 (received 26 June 2024), p. 16.

[58]Productivity Commission, Making the most of the AI opportunity, Research paper 1: AI uptake, productivity, and the role of government, January 2024, p. 1.

[59]TCA, Submission 74, p. 3.

[60]Productivity Commission, Making the most of the AI opportunity, Research paper 1: AI uptake, productivity, and the role of government, January 2024, pp 5-7.