Issues and Insights Article, 48th Parliament

The digital transformation challenge

Developments in artificial intelligence and the continued influence of online platforms present significant regulatory challenges for governments. Different countries are responding to these challenges in a variety of ways. What opportunities exist for international cooperation and coordination of responses, and what progress has Australia made in regulating these digital technologies?

Key issues

  • Policymakers face multiple digital transformation challenges requiring complex regulatory choices, including those raised by the development and use of artificial intelligence (AI) and the prevalence of online platforms.
  • Overseas jurisdictions are taking a variety of regulatory and legislative approaches to AI depending on their risk management posture.
  • Policymakers are increasingly pressing online platforms to take responsibility for user welfare. However, it is unclear how the US (where many online platforms are based) will respond to such measures.
  • Australia has led the world in a number of areas of online platform regulation, including age assurance requirements for social media, but significant questions remain regarding the effective reach of such regulation.
  • Given this dynamic landscape, reforms regarding the eSafety Commissioner’s role, powers and structure have been proposed.

Introduction

Developments in artificial intelligence (AI) and the continued influence of online platforms present significant regulatory challenges. Around the world, government regulatory approaches to AI range from voluntary guidelines to complex legislative frameworks. Despite international coordination efforts, there is no consensus on how to balance AI risk management with promoting innovation.

Concurrently, safety concerns are prompting some jurisdictions to impose user welfare obligations on online platforms. In 2024, the Australian Government committed to introducing a ‘digital duty of care’ for online platforms and introduced age assurance requirements for some social media platforms. However, questions remain regarding the capacity of online platform regulation to address a range of continuing policy challenges.

Artificial intelligence

How is AI being regulated overseas?

Regulatory attempts to address AI developments are rapidly evolving. The EU’s AI Act (considered the first comprehensive AI regulatory framework) came into force in August 2024. Its key features include:

  • a risk-based classification of AI systems, with obligations scaled to the level of risk
  • prohibitions on certain ‘unacceptable risk’ practices, such as social scoring
  • transparency and conformity assessment requirements for high-risk systems, with additional obligations for general-purpose AI models
  • substantial financial penalties for non-compliance.

However, former European Central Bank president Mario Draghi has urged a shift ‘from trying to restrain [AI] to understanding how to benefit from it’ (p. 4). His report further highlighted that onerous regulatory barriers risked excluding European companies from early AI innovations and burdening researchers (p. 79). Some legal experts have also identified limitations and loopholes in the AI Act.

The US has taken a ‘lighter touch’ approach. In 2023, then President Joe Biden issued an Executive Order (EO) on AI development and use, which included establishing a US AI Safety Institute. His administration also obtained voluntary commitments from leading AI developers concerning testing, transparency and information-sharing. However, in January 2025, President Donald Trump repealed the EO and ordered that a new AI Action Plan be established within 180 days. The US Congress has not passed legislation on AI, but some states have introduced specific measures, such as California’s AI Transparency Act (commencing 1 January 2026).


Can we coordinate?

Overlaying this international patchwork of AI policies and legislation are efforts to coordinate and align regulatory approaches. For example, Australia is part of the OECD’s Global Partnership on AI, which builds on the OECD’s recommendation that governments ‘actively co-operate’ to advance progress on ‘responsible stewardship of trustworthy AI’. In 2023, Australia also participated in the first AI Safety Summit and signed the Bletchley Declaration, which focuses on managing ‘frontier AI’ risks through collaborative research and policy development.

In September 2024, the Council of Europe published the Framework Convention on AI. Its objects include ensuring ‘that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law’ (Article 1). Signatories (which can include non-European states) include the US, the EU, the UK, Canada, Japan and Israel. If Australia were to become a party to the Framework Convention (or a similar binding treaty), this could support broader Commonwealth AI legislation under the external affairs power.

However, the Paris AI Action Summit in February 2025 highlighted divisions in the approaches to AI regulation. Unlike the majority of participants (including Australia), the US and UK declined to sign the summit’s statement recognising the need for ‘cooperation on AI governance’.

Where are we up to?

In August 2024, the Australian Government released a voluntary AI safety standard containing 10 guardrails. These include testing, transparency and accountability requirements, such as measures to ‘achieve meaningful human oversight’. These voluntary guardrails largely mirror the government’s proposed mandatory guardrails for AI in high-risk settings, released for consultation the following month. A key difference is the proposed obligation for developers to undertake conformity assessments before an AI system is deployed.

In November 2024, a Senate Committee recommended:

  • the Australian Government ‘introduce new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI’, including general-purpose AI models, such as large language models (LLMs)
  • financial and non-financial support for ‘sovereign AI capability’ continue to increase.

The Australian Government subsequently announced it would develop a National AI Capability Plan to grow investment, strengthen capability, boost skills and secure economic resilience. Some stakeholders have advocated that this be fast-tracked, while others have highlighted that a lack of regulatory certainty is constraining AI adoption.

Online platforms

Forced to care?

According to Australian Communications and Media Authority (ACMA) research, Australians’ online activity peaked in 2021, with ongoing high levels of use and ‘near universal’ internet access (p. 5). This activity creates multiple policy challenges, as online platforms can serve as conduits for obscene material, scams, hate speech, disinformation, harassment and cyber-bullying, with associated mental health risks. Public concern regarding harmful online content is evident in the number of complaints made to the eSafety Commissioner (Figure 1). In response, policymakers are seeking to make large online platforms responsible for addressing user welfare risks.

Figure 1: Complaints made to the eSafety Commissioner

Source: ACMA annual reports

In November 2024, a Joint Select Committee investigating the impacts of social media expressed bipartisan support for imposing a ‘duty of care’ on social media and other online platforms. A review of the Online Safety Act 2021 (published the previous month) also recommended this as a priority reform. Accordingly, the government committed to legislating a ‘digital duty of care’ for digital platforms. This approach aligns with the eSafety Commissioner’s Safety by Design principles (which encourage technology companies to anticipate and mitigate online threats), as well as approaches in other jurisdictions such as:

  • the UK’s Online Safety Act 2023, which imposes duties of care on regulated online services
  • the EU’s Digital Services Act, which requires very large online platforms to assess and mitigate systemic risks.

However, the potential effectiveness of such regulation remains unclear, especially if countries where online platforms are based resist these measures. For example, the Computer & Communications Industry Association (whose members include major US online platforms) has highlighted the impact and cost of complying with overseas regulation. In February 2025, President Trump also signed a memorandum warning against ‘disproportionate’ foreign regulation of US technology companies.

Age assurance

In 2024, the Australian Parliament legislated to require (by December 2025) that certain social media platforms take ‘reasonable steps’ to ensure users are aged 16 or over. However, some experts have criticised the legislation as a ‘blunt’ and inappropriate policy response and noted a lack of clarity regarding which technologies would meet the approved standard. An ongoing Age Assurance Technology Trial is expected to inform the eSafety Commissioner’s guidelines for platforms on the eventual compliance requirements.

A number of jurisdictions (including the EU) are also considering or commencing age-related restrictions as a means of protecting children online. Under the UK’s Online Safety Act, the regulator (Ofcom) will require online services to ‘have highly effective age assurance processes’ by July 2025 to prevent children from accessing pornography. This complements requirements for certain platforms to undertake children’s risk assessments and address the risks identified.

Elsewhere, the US Supreme Court is considering a challenge to a Texas state law that requires age verification where at least one-third of a website’s content is deemed ‘harmful to minors’. The outcome of this challenge may also affect other state laws relating to online age verification.

Misinformation and disinformation

Many countries have recognised the capacity of online platforms to spread and amplify harmful misinformation and disinformation, and have adopted various measures in response. For example, in 2019, Singapore passed legislation ‘to prevent the electronic communication in Singapore of false statements of fact’, which allows government authorities to issue ‘correction directions’. In the UK, Ofcom is establishing an advisory committee on misinformation and disinformation, while the EU’s Code of Conduct on Disinformation will begin to apply to ‘very large’ online platforms and search engines under the Digital Services Act framework in 2025.

Currently, ACMA oversees the voluntary Australian Code of Practice on Disinformation and Misinformation, administered by the Digital Industry Group Inc. (DIGI). However, the scheme has acknowledged shortcomings, including that platforms can simply cease to comply. Last year, proposed legislation to give ACMA stronger enforcement powers against misinformation and disinformation failed to pass the Parliament due to freedom of expression concerns.

Foreign-controlled applications

In 2024, the US Congress legislated to ban ‘foreign adversary controlled applications’ based on perceived security threats. A key driver of this was TikTok, a popular social media app owned by Chinese firm ByteDance. However, President Trump has delayed regulatory action on TikTok to allow his administration the ‘opportunity to determine the appropriate course forward’. India previously banned TikTok (among other apps) due to concerns over links with Chinese authorities, and a possible ban in Australia has also been discussed. However, the effectiveness of app bans has been questioned, as users can simply migrate to other apps with the same security issues. Similar issues are likely to arise in relation to AI systems: Australia has joined other countries in banning the use of apps linked with China-based AI developer DeepSeek on government devices.

A super regulator?

Since being established in 2015, the eSafety Commissioner has become a prominent online platform regulator. Its role spans administering the complaints and objections schemes provided for under the Online Safety Act 2021, issuing takedown notices for harmful content, and registering industry codes and standards. The commissioner has also used its enforcement powers, including issuing a nearly $1 million infringement notice to messaging app Telegram in February 2025 for non-compliance with transparency requirements associated with the Basic Online Safety Expectations.

Despite these enforcement powers, it is unclear to what extent an ‘Australian’ online environment can be regulated in isolation. For example, survey data suggests 27% of Australians use a virtual private network (VPN), which allows them to access the internet while obscuring their location. Following the 2024 Wakeley Church stabbing, the eSafety Commissioner sought to prevent Australian users, including those using VPNs, from accessing footage of the incident on X (formerly Twitter). However, the Federal Court (at paras 49–51) highlighted that the commissioner’s removal notice would effectively ‘be deciding what users of social media services throughout the world were allowed to see’. It considered the notice was likely to be ‘ignored or disparaged in other countries’ and would clash with the ‘comity of nations’.
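
The Federal Court’s point reflects a technical limitation: platforms generally infer a user’s location from their IP address, and a VPN replaces that address with one belonging to the VPN provider’s exit server. The sketch below (in Python, using a hypothetical lookup table and illustrative TEST-NET addresses, not any platform’s actual implementation) shows why IP-based geo-blocking cannot reach a user whose traffic exits a VPN overseas:

    # Hypothetical illustration of IP-based geo-blocking. Real platforms use
    # commercial GeoIP databases; this toy lookup table stands in for one.
    GEOIP_TABLE = {
        "203.0.113.7": "AU",   # illustrative Australian address (TEST-NET range)
        "198.51.100.9": "US",  # illustrative US address (TEST-NET range)
    }

    BLOCKED_COUNTRIES = {"AU"}  # hypothetical: content to be hidden from Australia

    def is_content_visible(ip_address: str) -> bool:
        """Return True unless the request appears to come from a blocked country."""
        country = GEOIP_TABLE.get(ip_address, "UNKNOWN")
        return country not in BLOCKED_COUNTRIES

    # A user connecting directly from Australia is blocked ...
    assert is_content_visible("203.0.113.7") is False
    # ... but the same user routed through a US VPN exit server is not,
    # because the platform only ever sees the VPN's IP address.
    assert is_content_visible("198.51.100.9") is True

In other words, geo-restriction acts on a user’s apparent location rather than their actual one, which is why a domestic removal notice can only ever be complied with imperfectly.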

The 2024 statutory review of the Online Safety Act recommended reforms be considered to further strengthen the eSafety Commissioner’s powers. These included requiring major platforms to have a local presence to better facilitate enforcement action, and introducing a licensing scheme for major services (recommendations 43 and 45).

The review also recommended structural reforms to the Office of the eSafety Commissioner, including transitioning to a ‘commission model’ of governance. This would involve establishing a board to make substantive regulatory decisions collectively, similar to ACMA (recommendation 58), as well as a cost recovery mechanism (recommendation 64).

Additionally, the review suggested consideration of ‘a central online harms regulator’, or Digital Services Commission (recommendation 67). This echoes a previous recommendation by the Joint Select Committee for a Digital Affairs Ministry with ‘overarching responsibility for the coordination of regulation … [of] … digital platforms’ (recommendation 1). Such proposals reflect the recognition that online safety, privacy, scams and competition issues are interconnected.

Conclusion

The 48th Parliament will face a range of digital transformation challenges. Technological developments in AI are leading policymakers to confront competing pressures to enhance productivity while managing risk. Another key agenda item is the adoption of harm minimisation measures to protect users of online platforms; significant changes to the eSafety Commissioner’s role may be needed to facilitate this reform. Australia’s initial regulatory measures have sought to address these challenges, but the dynamic nature of technological change is likely to present a continuing test for policymakers.

Further reading