Chapter 2
Issues regarding the implementation of age assurance measures
2.1 This chapter considers the key issues raised by inquiry participants on the implementation of age assurance measures under the Internet Search Engine Services Online Safety Code (Class 1C and Class 2 Material) (the Search Engine Services Code) and the Social Media Minimum Age (SMMA) obligation established under Part 4A of the Online Safety Act 2021 (the Online Safety Act).
2.2 While inquiry participants often expressed support for improving children’s and young people’s safety online, concerns were raised regarding the implementation of age assurance measures. These concerns centred on:
the privacy and data implications of age assurance measures;
the efficacy and technical limitations of age assurance measures; and
the adequacy of oversight mechanisms for age assurance measures.
Online safety for children and young people
2.3 As outlined in this section, inquiry participants’ evidence highlighted the need to support children’s and young people’s safety online, including by minimising exposure to age-inappropriate material. In particular, evidence was received on:
the extensive use of technology by children and young people;
the harms associated with age-inappropriate material; and
the importance of online access for wellbeing and development.
Children and young people use technology extensively
2.4 Australian children and young people use technology extensively, including online search engines and social media. Indeed, the committee heard that children’s use of technology is ‘almost ubiquitous’ and begins from an age when children cannot fully understand the risks involved.[1]
2.5 For example, the Alannah & Madeline Foundation’s submission cited data indicating that 18 per cent of Australian preschool children aged 2–5 have their own laptop, tablet or personal computer and 16 per cent have access to someone else's device. The Alannah & Madeline Foundation also cited data from the United Kingdom (UK) which indicates 96 per cent of children aged 8–14 have used a search engine and, on average, a child user of Google ‘visits the service 152 times a month’.
2.6 Moreover, data cited by UNICEF indicates ‘84% of children will have a social media presence by the age of two, and by age 12 every single child in Australia will be online’.
2.7 Yet, for many children and young people, being online means exposure to age-inappropriate material. The eSafety Commissioner’s recent Keeping Kids Safe Online survey of 3454 Australian children aged 10–17 found that 74 per cent had encountered content associated with harm online. Of the children surveyed:
47 per cent had seen fight videos online;
32 per cent had seen sexual images or videos online;
27 per cent had seen material showing or encouraging illegal drug use;
22 per cent had seen extreme real-life violence online;
19 per cent had seen material describing how a person can take their own life or self-harm; and
12 per cent had seen violent sexual images or videos online.
2.8 In addition to other concerning findings, the eSafety Commissioner said that more than half of children surveyed (53 per cent) had experienced cyberbullying and more than a quarter (27 per cent) had personally experienced online hate.
Harms associated with age-inappropriate material
2.9 The committee heard that children and young people face a range of risks associated with exposure to age-inappropriate material.
2.10 The eSafety Commissioner told the committee that children and young people ‘may be at greater risk than adults of experiencing a range of adverse impacts, including to their mental health, as a result of exposure to online content associated with harm’. Further, the eSafety Commissioner submitted that children from certain cohorts are at greater risk of experiencing harm online:
eSafety research shows that certain cohorts of children, including Aboriginal and Torres Strait Islander children, children with disability and LGBTIQ+ teens, are at greater risk of harm online, including being more likely to encounter content associated with harm online.
2.11 In its submission, the Australian Human Rights Commission (AHRC) outlined the harms associated with children’s and young people’s exposure to online pornography as follows:
Reports indicate that nearly half of children between 9–16 experience regular exposure to sexual images. Studies have found that ‘pornography both contributes to and reinforces the kinds of social norms and attitudes that have been identified as drivers of violence against women’, and that viewing pornography is ‘associated with unsafe sexual health practice’.
2.12 The AHRC also noted data from 2022 which showed 23 per cent of 14 to 17-year-olds had ‘encountered violent sexual material online’. The AHRC considered young people’s exposure to such content ‘may be associated with harmful sexual practices, sexual violence, stronger beliefs in gender stereotypes and sexually objectifying views of women’.
2.13 Further, the committee received evidence of the harms experienced by children and young people from other forms of age-inappropriate material. For instance, the Alannah & Madeline Foundation submitted that self-harm material can be ‘an immersive, destructive 'cycle' for some teens’ and is ‘especially troubling given the rise in self-harm among young adolescent girls since the late 2000s’.
Harms associated with social media
2.14 Inquiry participants’ evidence emphasised that social media is a vector of age-inappropriate content and can expose children and young people to anti-social behaviour and unlawful conduct, such as sexual harassment and cyberbullying.
2.15 For instance, Collective Shout, a campaign organisation against the objectification of women and the sexualisation of girls, submitted that social media platforms have ‘become tools of sexual harassment’ which are routinely used to sexually harass girls in school. Further, Collective Shout submitted that research it undertook in 2024 indicates a connection between students’ social media use and ‘increased sexual behaviours in schools’, including by having a ‘major influence’ on shaping ‘inappropriate sexual norms’ among students.
2.16 Inquiry participants also considered that social media use can adversely impact children’s and young people’s mental health. While the AHRC acknowledged social media is ‘important for some children and young people who already face barriers to inclusion, safety and wellbeing’, the AHRC submitted that:
… social media can be harmful for children and young people due to the ease of access to age-inappropriate content. It can also negatively impact mental health through exposure to cyberbullying and addictive design features that encourage excessive use. Inadequate content moderation means children and young people often encounter harmful material without adequate safeguards or support. These risks are further amplified by algorithmic systems that prioritise engagement, making it more likely that vulnerable users are exposed to sensational or damaging content.
2.17 The eSafety Commissioner’s Keeping Kids Safe Online survey provides insight into the ways in which Australian children experience cyberbullying. For instance, of the children surveyed:
38 per cent had someone say hurtful things to them;
25 per cent had humiliating or hurtful things said about them;
16 per cent had been sent or tagged in offensive or upsetting videos/photos;
13 per cent had been told to hurt or kill themselves, or that they should die; and
7 per cent had humiliating or hurtful fake photos or videos of them shared online.
2.18 The committee heard that, at its worst, social media has contributed to the suicide deaths of children and young people around the world. In one Australian case, Collective Shout outlined the tragic circumstances of a 15-year-old from a regional town in New South Wales who was ‘bullied to death’ after a ‘fake nude photo’ of her was circulated extensively on social media. Further, Collective Shout submitted that ‘[a]t least five Australian boys to date (that we know of) have ended their lives due to being tricked by sextortion scammers’, following a significant increase in reports of ‘financial sextortion targeting minors’.
Balancing harms with online access
2.19 Alongside the risks of age-inappropriate content, the committee received evidence on the importance of regulation that supports children’s and young people’s online safety and promotes their development and wellbeing.
2.20 Indeed, the committee heard that young people’s access to social media is important for social participation. As the Youth Affairs Council Victoria said:
We live in an increasingly digitised world, and social media is an important third space for young people. It's often where they connect, build community, seek support and access information about the world around them, as well as being a crucial space for collective advocacy. This is especially true for marginalised young people, including LGBTQIA+ young people, disabled young people and young people living in regional and rural areas.
2.21 UNICEF submitted that as young people ‘disproportionately occupy online spaces more than any other group, the design and regulation of those spaces will have a greater impact on them and for longer than any other generation before them’. UNICEF noted that young people consider being online ‘critical to their healthy development and wellbeing, and that being online is fundamental to their lives’. UNICEF added:
In fact, UNICEF Australia’s recent research found that 81% of Aussie teens who use social media say it has a positive influence on their lives. In the online world, children and young people access important information and vital support, and it is also where they connect, socialise and express themselves.
We know that children face risks online, be it from bullying or exposure to harmful content, but we need to protect children within the digital world, not prohibit them from using it.
2.22 Similarly, Ms Elizabeth Thomas, Senior Director, Public Policy, Digital Safety at Microsoft, gave evidence to the committee that emphasised the need for children to safely access online spaces to support their development and social participation:
Empowering children to engage safely online is critical to enable them to make the most of the digital environment, including through access to educational resources, connecting with others, and developing important digital literacy and citizenship skills.
2.23 While some inquiry participants supported aspects of the Search Engine Services Code and the SMMA obligation as measures likely to reduce children’s exposure to age-inappropriate material, many inquiry participants questioned the efficacy of the associated age assurance measures in achieving a safer online experience.
2.24 For instance, despite the Social Media Minimum Age Bill being passed in December 2024, the committee received evidence indicating ongoing concerns about the impact of the restrictions. The AHRC submitted that it:
… continues to hold serious reservations about the Social Media Ban due to the disproportionate impact it can have on the right to access information (particularly for vulnerable or marginalised groups) and concerns about age assurance.
2.25 Indeed, the AHRC considered that the eSafety Commissioner ‘should conduct further consultation and human rights analysis of the impact and implementation of the Social Media Ban, with a larger and more diverse group of children and young people’.
Concerns regarding the privacy risks of age assurance measures
2.26 As outlined in Chapter 1, age assurance measures under the Search Engine Services Code and the SMMA obligation will come into effect in December 2025. Among other things, the Search Engine Services Code will require search engine providers to:
(a) implement appropriate age assurance measures for account holders; and
(b) apply tools and/or settings, like ‘safe search’ functionality, at the highest safety setting by default for an account holder its age assurance systems indicate is likely to be an Australian child…
2.27 Similarly, the SMMA obligation will require social media platforms to apply age assurance measures to ensure account holders are over the age of 16. Neither the Search Engine Services Code nor the Online Safety Amendment (Social Media Minimum Age) Act 2024 (SMMA Act), which establishes the SMMA obligation, stipulates that a platform provider must use a specific age assurance technology.
2.28 As outlined in this section, inquiry participants raised concerns about the implementation of age assurance measures.
Privacy implications of age assurance measures
2.29 The committee heard that, under the Search Engine Services Code and the SMMA obligation, Australians will likely need to upload significant personal data, such as identification documents or biometric information, to verify their age online.
2.30 Yet many inquiry participants expressed deep reservations about the privacy implications of requiring Australians to provide sensitive personal data to search engine services or social media companies. For instance, Digital Rights Watch submitted:
The introduction of age verification for online content raises profound concerns about privacy, data protection, and proportionality. Invasion of privacy is inherent in any system that requires individuals to prove their age before accessing certain material.
2.31 The committee heard that age assurance puts identity data at risk for people of all ages. As Bloom-Ed told the committee:
… the data collection planned has grave implications for young people's privacy and puts their identity data at risk. Additionally, the data collection planned for teens and young people will also impact Australian adults and their privacy, as sites will be assessing information about age from user statements and inference technology.
2.32 The committee also heard that for many Australians, and particularly for younger people, maintaining their online privacy is a major concern. Data submitted by the NSW Advocate for Children and Young People (NSWACY) suggests young people are concerned data breaches ‘are becoming more common and invasive age verification methods that collect, and store data are a significant privacy concern and may increase vulnerability’. NSWACY submitted that such concerns have led most young people to use a range of tools to help protect their online privacy, including ‘using incognito mode or VPNs and providing false information’.
2.33 Other inquiry participants also highlighted that platform users face substantial risks to their privacy from the storage of their personal information for age verification. For instance, Away from Keyboard cautioned that, without appropriate regulation, the Search Engine Services Code could:
… unintentionally entrench surveillance-based business models. Without tight regulation, age-assurance data and behavioural analytics may be collected far beyond what is necessary to establish age, creating permanent profiles of children, carers and older Australians. These risks are acute for regional and low-literacy communities, where users may be less able to scrutinise privacy policies or exercise their rights.
2.34 The Australian Research Alliance for Children and Youth (ARACY) warned that age verification measures must protect young people’s privacy or risk their digital disengagement:
Verification regimes that don’t protect privacy will drive disengagement. Any age-verification scheme must prioritise privacy safeguards and transparent data handling, or risk driving young people away from accessing digital spaces altogether. This would negatively affect their mental health as online spaces are where young people find connection to and affirmation from peers.
2.35 In considering the privacy implications of age assurance, Cybercy, a cyber literacy and behavioural change consultancy, cautioned that online protections for children and young people are threatened, in part, because the global cybercrime market outstrips the cybersecurity market in size. As Cybercy submitted:
The global cybercrime market is now USD $13.8 trillion, growing at 15% annually. By contrast, the cybersecurity products and services market is USD $432 billion, growing at 12.5% annually.
This imbalance tells a clear story: despite record spending on technology, the economics still favour attackers. For children and young people, this means technical protections will always lag behind the ingenuity of those who exploit them.
Concerns about corporate data collection
2.36 Many inquiry participants considered that age verification measures will exacerbate existing concerns about corporate data collection and will, ultimately, increase the risk of misuse of users’ data.
2.37 The committee heard that, while statistics are currently limited, the collection of children’s and young people’s online data appears to be prolific. Indeed, data cited by UNICEF indicates that before a child turns 13 an estimated 72 million points of data will have been collected about them. Further, UNICEF argued that it is often unclear to users (of all ages) how their data is being used:
The digital ecosystem is so complex and seamless that often neither children or their adult guardians are fully aware of how their data is being captured and used, nor what the potential benefits and risks are. And while an individual’s data tends to be treated the same way regardless of who they are, children’s data is different - children are less able to understand the long-term implications of consenting to their data being collected.
2.38 Further, inquiry participants raised concerns that technology companies lack the ability to adequately protect sensitive user data.
2.39 Digital Rights Watch, for example, criticised the requirement for Australians to provide personal data to ‘privacy-invading companies’:
… [The] requirement to age-gate Australian users will provide some of the world’s largest privacy-invading companies with direct access to yet more private data about Australians - whether that’s captured with ID documents or inferred with one of the other age-assurance methods.
2.40 Collective Shout’s submission contended that several large social media platforms and video streaming companies, including those subject to the SMMA obligation, have already been ‘found by the US Federal Trade Commission (FTC) to be engaging in vast surveillance of users, with few privacy controls, and inadequate safeguards for kids and teens’. Collective Shout further noted the FTC had sued certain social media companies for ‘collecting and using children’s information without consent’.
2.41 The committee questioned TikTok about the scope of the company’s data collection practices, noting that TikTok gathers a broad range of information, including contact lists, device information, location data, what children watch and for how long, and scrolling and keystroke patterns, in addition to algorithmic profiling.
2.42 Ms Ella Woods-Joyce, Public Policy Lead, Content and Safety at TikTok Australia, advised the committee that the company’s ‘data and privacy practices are actually broadly consistent with our peers’. Ms Woods-Joyce added that TikTok is:
… very transparent about the data that we collect. In fact, we take a data privacy minimisation approach to things. We don't want more data than we need to make sure that the app is running safely and securely and that it's working as it's intended.
Adequacy of data protection regulations
2.43 A number of inquiry participants addressed the adequacy of Australia’s data protection regulations, in light of the likely increase in personal data being uploaded under the age assurance measures. For instance, Digital Rights Watch argued that Australia’s data protection laws are ‘not strong enough to accommodate the mass uptake’ of sensitive information associated with facial recognition technology.
2.44 Commenting on the SMMA Act, which establishes the SMMA obligation, the Internet Association of Australia (IAA) observed that:
… the Act explicitly prohibits entities from the collection of government issued identification as the sole means of fulfilling its obligations under the Act. However, we note that platforms are permitted to collect such information if it is being offered alongside other measures. The Act is then vague as to the retention periods for such information that have been collected. We are thus not convinced that the provisions relating to privacy are sufficient and believe that as it pertains to age verification measures, there should be no collection or retention of any identification material by the entities themselves.
2.45 The IAA added that its concerns regarding data retention for the SMMA were ‘exacerbated’ by Australia’s ‘overly complex and convoluted data retention regime’ which resulted in entities tending to ‘over-collect and retain data longer than is necessary, often due to confusion and fear of non-compliance’. The Queensland Council for Civil Liberties raised a similar concern and observed that unnecessary data retention was identified as an issue in an Australian Government commissioned report in August 2025 on the Age Assurance Technology Trial (AATT). That report, commissioned by the Australian Government through the Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts (but conducted independently of the department and its regulators), noted:
We found some concerning evidence that in the absence of specific guidance, service providers were apparently over-anticipating the eventual needs of regulators about providing personal information for future investigations. Some providers were found to be building tools to enable regulators, law enforcement or Coroners to retrace the actions taken by individuals to verify their age which could lead to increased risk of privacy breaches due to unnecessary and disproportionate collection and retention of data.
2.46 The eSafety Commissioner is responsible for monitoring and enforcing ‘compliance with the requirement to not collect government issued ID or use an accredited service under the Digital ID Act 2024, without providing reasonable alternative means’. In September 2025, the eSafety Commissioner released guidance on the SMMA obligation which, among other things, outlined the Commissioner’s expectations regarding privacy-preserving practices and data minimisation. In particular, the eSafety Commissioner stated:
that social media companies’ compliance with the SMMA obligation will not be considered reasonable unless they meet their information and privacy obligations under Part 4A of the Online Safety Act;
that providers ‘should assess the minimum information and data needed to make decisions appropriate for their service and circumstances’ and ‘avoid handling of sensitive personal information’ where possible; and
that there is no expectation for ‘providers to retain personal information as a record of individual age checks’.
2.47 While the IAA acknowledged regulatory guidance from the eSafety Commissioner, the IAA called for clearer examples of what the eSafety Commissioner ‘may request of providers in order to prove compliance so as to reduce confusion and uncertainty’.
2.48 Given the concerns raised in relation to data collection, several inquiry participants made recommendations on how regulations for age verification can be amended to protect users’ data and privacy. For example, in relation to the Online Safety Code, the Alannah & Madeline Foundation advocated for a ‘safety-by-default’ approach:
It would be our preference to see codes for industry require a 'safety-by-default' approach, with the highest safety standards in place for all users by default and age assurance employed only as a 'next step' for individuals who seek to access adult materials. We believe this would reduce data harvesting and 'friction' for children who use search engines for appropriate purposes. At present, it is unclear to us whether the code allows for this approach; unfortunately, it does not appear to treat this approach as a preference. We speculate that an approach which prioritises safety and privacy by default is only likely if codes are developed by a regulator answerable to the public, rather than being drafted by industry as is currently the case.
2.49 Yet, noting that age assurance measures are likely to eventuate in Australia, the Alannah & Madeline Foundation called for regulatory changes to help ensure the protection of children’s rights, including ‘implementation of 'tranche 2' of the Privacy Act reforms and creation of a strong, comprehensive Children's Online Privacy Code to place appropriate limits around companies' handling of individuals' personal data.’
2.50 The committee received many further examples from inquiry participants on prospective measures to minimise the privacy risks associated with data collection by corporations for age assurance purposes. In one key example, the Age Verification Providers Association (AVPA) proposed the use of third-party age verification providers, and related privacy practices, for age assurance purposes. Under this model, a user would provide their data to a third party, which would independently verify the user’s age and then report the user’s age status to a platform operator without disclosing any further identifying information. Mr Iain Corby, Executive Director, summarised this process as follows:
Regulations requiring online age assurance should mandate privacy by design using independent third-party checks. Users should have the option of a double-blind architecture, which means the platform can never discover the identity of the user and the age assurance provider cannot tell which platform the user is accessing. They should take advantage of zero-knowledge proof tokens, so platforms don't get any extra data about a person from the age check.
2.51 Mr Corby emphasised that third-party providers need to be audited, certified and monitored by data protection authorities to provide confidence to consumers about the safety of their data. Mr Corby added that ‘[h]aving established your age, then the third-party provider deletes all that data—any data they used for that purpose.’
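The double-blind, data-minimising flow Mr Corby described can be pictured with a short sketch. The following Python fragment is illustrative only: the token fields, the shared-key HMAC signing and the function names are assumptions made for this example, not any provider’s actual scheme, which would rely on audited public-key cryptography and zero-knowledge proofs.

```python
# Illustrative sketch of a 'double-blind' age token flow of the kind AVPA
# describes. All names and the HMAC signing are hypothetical simplifications;
# real schemes would use audited public-key crypto and zero-knowledge proofs.
import hashlib
import hmac
import json
import secrets
import time

SHARED_KEY = secrets.token_bytes(32)  # stand-in for the verifier's signing key

def issue_age_token(minimum_age_met: bool) -> dict:
    """Third-party verifier: after checking the user's age, emit a token that
    asserts ONLY the result. Source documents are then deleted, and the token
    names neither the user nor the destination platform."""
    claims = {
        "over_minimum_age": bool(minimum_age_met),
        "issued_at": int(time.time()),
        "nonce": secrets.token_hex(8),  # one-time use, prevents linkage
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def platform_accepts(token: dict, max_age_seconds: int = 300) -> bool:
    """Platform side: check the signature and freshness. The platform learns
    nothing about the user beyond a single boolean claim."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False
    if time.time() - token["claims"]["issued_at"] > max_age_seconds:
        return False
    return token["claims"]["over_minimum_age"]

token = issue_age_token(minimum_age_met=True)
print(platform_accepts(token))  # True: age asserted, identity never disclosed
```

Because the token carries no identifier and the verifier never learns which platform redeems it, neither party can reconstruct the user’s full activity, which is the property the ‘double-blind’ label refers to.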
2.52 However, the committee also heard concerns about third-party age verification providers’ access to sensitive user data. For instance, the Australian Research Council Centre of Excellence for the Digital Child submitted:
Previous research has explored the significant risk this poses to privacy and interests of consensual and legal adult consumers too. This research cites scepticism over the reliability and efficacy of the proposed arrangements around third-party age verification services or governments preserving privacy and anonymity of its users when storing personal data securely. In an age where data is monetised and considered a valuable commodity; allowing third-parties to host such intimate and personal data raises justified security and privacy concerns for all Australian users.
2.53 Broader concerns in relation to enhancing Australia’s privacy regulations are further discussed in Chapter 3 of this report.
Concerns regarding the limitations of age assurance technologies
2.54 In addition to concerns around data collection and the privacy implications of age assurance by digital platforms, inquiry participants also raised concerns about the accuracy, suitability and reliability of the most prevalent age assurance mechanisms, including verification of an exact age, estimation of a likely age or age range, and age inference.
2.55 For example, the QUT Digital Media Research Centre highlighted that ‘many of the “best” age-estimation technologies still have unacceptably high error rates’. Similarly, the AHRC advised ‘[c]urrent age assurance technologies are not yet capable of implementing the Australian social media ban in a way that avoids significant human rights risks.’ Further, the Scarlet Alliance submitted that ‘[w]hile the recent Age Assurance Technology Trial claimed success, it did not find a single ubiquitous solution that would suit all use cases’.[68] Indeed, the Scarlet Alliance considered that several findings of the AATT suggest ‘flawed technologies’ may be implemented under the Phase 2 Codes.[69]
2.56 Further, one of the most common criticisms from inquiry participants was that age-verification measures could be easily circumvented. These concerns are explored further below.
Facial age estimation
2.57 Many inquiry participants expressed concern about the accuracy of facial age estimation technology, particularly for women, people of colour, and young people.
2.58 The AHRC, for example, explained that there are documented inaccuracies of facial recognition technology for certain demographic groups. It stated:
The Age Assurance Technology Trial found that facial age estimation systems perform less reliably for individuals with darker skin tones and for those aged 16–20, raising serious concerns about equality and discrimination.
2.59 Some participants also questioned the ability of facial age estimation technology to detect the use of masks and other circumvention methods. Mr Leo Puglisi, for example, noted that in the United Kingdom users have bypassed age estimation with the use of computer-generated images.
2.60 In contrast, AVPA advised that standard testing of age estimation technology included the ability to identify the use of masks, deepfakes or AI.
2.61 The AATT similarly reported that, of the systems tested:
Biometric liveness checks were commonly implemented and aligned with ISO/IEC 30107 (presentation attack detection) standards, helping to guard against spoofing and deepfake risks. Systems were also generally effective at identifying document forgeries, including AI-generated fakes.
2.62 Recognising the limitations of facial age estimation, a number of inquiry participants supported its use only as a first-pass estimation process. Mr Corby, for example, proposed that facial age estimation is a low-friction method that serves as an adequate first gateway in a series of successive validation techniques. Users closer to the legal minimum may have to undertake further checks to verify their age, such as checks involving an email address or mobile phone number. He explained:
There has been a lot of noise about facial age estimation and how you can't be more accurate than to within, say, 1.3 to 1.5 years for the top three, which was shown in the trial. We've never argued that it would be possible to implement an exact 16-plus or 18-plus minimum age using the estimation techniques. That's really not why they're there. They're there to help people who are well over those ages.
2.63 Mr Corby further stressed that facial age estimation would not be useful for establishing an actual birthdate for the purposes of young people opening a new social media account, which would require age verification with access to an official source confirming actual date of birth.
2.64 In giving evidence to the committee, some platforms confirmed that facial age estimation is already being used to some extent. For example, Ms Mia Garlick, Regional Director of Policy with Meta, advised that Meta uses a third-party provider for its age assurance process and users are given the option of providing a video selfie or government ID.
Age inferencing
2.65 Most platforms acknowledged that initial implementation of the SMMA obligation would draw on existing data held by the platforms, including where users have self-identified that they are under 16, and from usage patterns that identify users who are likely to be under 16.
2.66 However, the committee was also advised that more fine-grained inference across age brackets, such as distinguishing 13 from 16, was ‘inherently less reliable’, as identified in the AATT, because ‘adolescents often have limited public records, payment credentials or distinct online habits.’
2.67 Mr Corby noted the high likelihood that platforms would utilise age inferencing, stating:
The reality is that most of the social media platforms will be using all the data that they have on their users at present to assess their age initially, and then it will only be those who are pretty close to the age of 16 who would need to appeal that. Those appeals are never going to be handled with estimation anyway; they can only be handled with a real date of birth.
2.68 Mr Corby also highlighted that alternative age inference techniques are available, such as hand gesture analysis, which has a very high level of accuracy and avoids the bias issues associated with facial age estimation.
2.69 The AHRC advised that age inferencing from behavioural patterns, contextual data and metadata already held by platforms ‘avoids users having to submit additional personal information and reduces barriers to access’. However, it noted the process is intrusive and risks normalising routine analysis of users’ personal content and interactions.
2.70 Given the limitations of age estimation mechanisms, and the definitive 16-year age setting of the SMMA restrictions, a number of inquiry participants concluded that a staged or ‘waterfall’ approach to age assurance was the most effective approach for digital platforms.
2.71 The Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts (DITRDCSA), for example, cited the AATT findings that a ‘waterfall approach – where different age assurance approaches are combined – will boost confidence in age estimates’.
2.72 The Australian Information Industry Association (AIIA) reiterated the findings of the AATT that ‘there is no one-size-fits-all solution’ to age assurance. It explained that:
Just as online services vary greatly in how they operate and the risks they pose to children, so too should age assurance measures be tailored to fit those differences. A social media platform with extensive user-generated content and high interaction among strangers presents very different risks compared to, say, an educational website or a search engine.
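The staged logic these participants describe can be sketched in a few lines of Python. The thresholds, method names and confidence buffer below are hypothetical illustrations of the ‘waterfall’ concept, not figures drawn from the AATT or any platform.

```python
# Illustrative 'waterfall' age assurance sketch: low-friction signals run
# first; only users near the age boundary are escalated to stronger checks.
# All thresholds and field names are hypothetical.
from typing import Optional

def inferred_age(user: dict) -> Optional[float]:
    """Step 1: inference from data the platform already holds
    (self-declared age, usage patterns)."""
    return user.get("inferred_age")

def facial_estimate(user: dict) -> Optional[float]:
    """Step 2: low-friction facial age estimation; useful for users well
    clear of the boundary, not for pinpointing a birthdate."""
    return user.get("facial_estimate")

def verified_age(user: dict) -> Optional[int]:
    """Step 3: highest-friction fallback, e.g. a document-backed check by a
    third-party provider confirming an actual date of birth."""
    return user.get("verified_age")

def waterfall_check(user: dict, minimum_age: int = 16,
                    buffer_years: float = 2.0) -> bool:
    """Accept the first estimate decisively clear of the boundary;
    otherwise escalate to the next, stronger method."""
    for method in (inferred_age, facial_estimate):
        estimate = method(user)
        if estimate is not None and abs(estimate - minimum_age) >= buffer_years:
            return estimate >= minimum_age  # decisive, stop here
    exact = verified_age(user)  # near the boundary: verification required
    return exact is not None and exact >= minimum_age

print(waterfall_check({"inferred_age": 34.0}))                      # True at step 1
print(waterfall_check({"inferred_age": 16.5, "verified_age": 15}))  # escalated, False
```

The design choice the waterfall reflects is proportionality: most users are resolved by cheap, privacy-light signals, and only the small cohort near the statutory boundary faces higher-friction verification.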
Age verification
2.73 Where a date of birth is required to confirm a user’s exact age, age verification would be necessary.
2.74 In addition to the data and security issues raised above, some inquiry participants raised technical concerns about the use of individual identity documentation for age verification.
2.75 If users were required to submit photo ID to platforms or to third-party verification providers, Digital Rights Watch noted there are ways to circumvent the verification, including ‘the ability of children to procure photo ID, perhaps by borrowing that of a parent or older sibling.’ It further noted that ‘there are a number of channels through which a person may purchase a fake ID, either Australian or foreign.’
2.76 DITRDCSA noted that the AATT ‘did not reveal any substantial technological limitations to the implementation of age verification technologies in Australia.’ Addressing security and fraud concerns, the AATT report highlighted that, of the age verification systems trialled:
Systems were also generally effective at identifying document forgeries, including AI-generated fakes. However, several providers lacked the ability to check documents against live government databases to determine whether a document had been reported lost or stolen. The evaluation found that security against injection attacks – where malicious code or media bypasses the biometric capture process – is improving but still emerging.
2.77 The Australian Research Council (ARC) Centre of Excellence for the Digital Child explained that ‘ultimately the only way age can be clearly and efficiently determined by technology companies is to use government issued ID.’
2.78 As noted earlier, the use of government-issued identification documentation for the purpose of complying with the SMMA obligation is permitted, with limitations, under the Online Safety Act. The eSafety Commissioner explained that a provider must offer a reasonable alternative method of age assurance, and only where that method is not suitable can government-issued identification material be collected. Additionally, providers are ‘restricted from collecting information that is of a kind specified in legislative rules made by the Minister.’
2.79 In terms of technical implementation, AVPA emphasised that, if proof of an exact minimum age is required, federal and state governments may need to facilitate privacy-preserving one-way blind checks against their own datasets for young people, as opposed to individuals submitting their own identity documents.
2.80 This sentiment echoed concerns expressed in the AATT report that age verification for young people may face constraints. The AATT stated that ‘while technically feasible, exact age verification for children is constrained by limited access to hard data’. Further, it outlined that ‘government-backed blind-access APIs [application programming interfaces] to records (e.g., schools, healthcare) may be needed to improve precision.’
2.81 However, some participants raised concerns that any reliance on government ID may create barriers for those who have difficulty accessing it. One submission noted ‘[s]pecial consideration should be given to vulnerable children, such as those in care, to ensure they aren't excluded from online access.’
2.82 The QUT Digital Media Research Centre similarly noted that age verification mechanisms are likely to result in ‘uneven burdens and exclusions for marginalised communities’.
2.83 In recognition of these limitations, Mr Corby of AVPA advised the committee that ‘there should also be a manual process of professionals in the community attesting to your age if you just can't get access to any alternative’, a system that is already available in the UK under a government-endorsed scheme.
2.84 The AHRC similarly noted that accessible review pathways are required for users to challenge an age assurance outcome, recommending that:
… eSafety amends the Social Media Minimum Age Regulatory Guidance to mandate that an informed human in the loop be present and engaged in any challenge to an age assurance outcome.
2.85 Review mechanisms for an incorrect age assurance assessment are discussed further in this chapter under ‘Accountability, oversight and transparency’.
2.86 Inquiry participants made additional recommendations to strengthen the implementation of age verification. The committee was advised, for example, that ‘[c]hildren who find a way around one-off age verification methods will no longer be protected from such content’; age verification processes should therefore be ongoing rather than one-off.
2.87 The committee was also advised that age-verification providers should be certified against IEEE and ISO standards and be subject to regular auditing.
2.88 Additionally, the committee heard that regulators should promote interoperability and the reuse of age verification tokens across services ‘to cut friction to the user experience and minimise the cost to platforms’.
Circumvention
Virtual Private Networks
2.89 Regardless of the chosen age assurance mechanism, many inquiry participants argued that tech-savvy children and young people will bypass age gateways using Virtual Private Networks (VPNs), which can be used to disguise a user’s location.
2.90 Digital rights organisation Electronic Frontiers Australia advised that age estimation technology and age-gating obligations can be easily circumvented, noting ‘[d]etermined youth can, and will, use VPNs, borrowed adult accounts, trade credentials or use technology based hacks.’
2.91 Some inquiry participants highlighted that, in the United Kingdom’s experience, age assurance requirements led to a ‘significant surge’ in the use of VPNs, as a ‘swift backlash’ to the age gateways. The QUT Digital Media Research Centre noted that ‘VPN apps quickly became the most downloaded free tools in the Apple apps store’.
2.92 Similarly, Bloom-Ed warned that Australia must learn from the experiences of the United Kingdom, United States and France ‘where age verification laws led to increased VPN use and access to less regulated platforms.’
2.93 In contrast, the committee heard from the age verification industry that, when age assurance measures are implemented correctly, VPNs cannot be used to bypass them.
2.94 AVPA, for example, argued that breaches experienced in the United Kingdom were in part due to lax assurance protocols on the part of platforms, and that users’ ages can still be determined despite the use of VPNs. Mr Corby explained:
… the platforms, particularly social media, need to look at the nature of the traffic they're getting from VPNs. You can always spot VPN traffic and see whether it looks as if it's likely to be from a user who is under age in Australia. If it turns out that they're never using social media platforms during school hours in Australia, the currency on their browser is set to the Australian dollar and they're using AEDT as their time zone, then you would ask them to prove that they're not in Australia or to do an age check.
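The kind of soft-signal analysis Mr Corby describes can be illustrated with a brief sketch. The signal names, weights and threshold below are hypothetical examples of the approach, not any platform’s actual detection logic.

```python
# Illustrative sketch of behavioural signals for flagging likely-Australian
# users connecting via VPN, per the approach described in evidence. Signals,
# weights and the threshold are hypothetical.
def likely_australian_despite_vpn(session: dict) -> bool:
    """Score soft locale signals; a high score would trigger a request to
    prove non-Australian location or to complete an age check."""
    score = 0
    if session.get("currency") == "AUD":
        score += 1
    if str(session.get("timezone", "")).startswith("Australia/"):
        score += 1
    if session.get("inactive_during_au_school_hours"):
        score += 1  # pattern consistent with an Australian school student
    return score >= 2  # hypothetical escalation threshold

session = {"currency": "AUD", "timezone": "Australia/Sydney",
           "inactive_during_au_school_hours": True}
print(likely_australian_despite_vpn(session))  # True -> escalate to age check
```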
2.95 DITRDCSA’s submission also highlighted that the AATT ‘debunks the idea that a virtual private network, or VPN, can bypass well-designed age assurance systems’, and that geolocation and VPN detection services ‘can support enforcement by identifying circumvention attempts.’
2.96 However, the committee was advised by X Corp that there are no effective means to prevent VPN use as a potential circumvention tool for age restrictions ‘short of a blanket prohibition or the adoption of disproportionate, invasive, and costly technical measures.’
Logged-out usage
2.97 While age assurance will be required for logged-in users, logged-out browsing will not be subject to age assurance measures. Inquiry participants discussed the risks of users moving to a logged-out state to avoid age assurance requirements.
2.98 Some social media platforms, for example, raised concerns that the new SMMA obligation, requiring the accounts of users under 16 to be disabled and preventing the creation of new accounts, would remove a range of account-level safeguards and parental choice protections that the services have developed to strengthen child safety.
2.99 Away from Keyboard also noted that loopholes exist for logged-out users: ‘[f]ilters typically apply to logged-in accounts; search results, link previews and “incognito” browsing remain largely unfiltered.’
2.100 However, the committee was advised that some safeguards do exist in logged-out browsing to reduce unintentional exposure to age-inappropriate material. Most significantly, by default, search engine tools and settings must be set to blur online pornographic and high-impact violence material for logged-out users. The Digital Industry Group Inc. (DIGI) further explained:
The code also requires providers to apply additional protections for all users which are automatically applied without the user needing to opt-in. These include requirements to prevent, for all users, pornography and violence from appearing in search results for search queries that do not intend to solicit the material and autocomplete predictions that are sexually explicit or violent. The Internet Search Engine Services code also requires services to promote trustworthy content over self-harm material, prevent autocomplete predictions seeking self-harm material, and provide crisis information for all users.
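The default-on protections that DIGI outlines can be pictured as a simple settings model. The field names and the relaxation rule below are hypothetical; they illustrate the ‘highest safety setting by default’ principle in the code rather than any provider’s implementation.

```python
# Illustrative 'safe by default' settings sketch: every session starts at the
# highest safety setting; only a verified adult account may relax it.
# Field names and the relaxation rule are hypothetical.
from dataclasses import dataclass

@dataclass
class SearchSafetySettings:
    blur_pornography: bool = True            # on by default, incl. logged out
    blur_high_impact_violence: bool = True
    filter_explicit_autocomplete: bool = True
    promote_crisis_resources: bool = True

def settings_for(session: dict) -> SearchSafetySettings:
    """Logged-out or unverified sessions always receive the defaults; a
    verified adult may opt out of blurring, and nothing else is relaxed."""
    settings = SearchSafetySettings()
    if session.get("logged_in") and session.get("verified_adult"):
        opted_out = session.get("opted_out", False)
        settings.blur_pornography = not opted_out
        settings.blur_high_impact_violence = not opted_out
    return settings

print(settings_for({"logged_in": False}))  # all protections remain on
```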
2.101 Inquiry participants articulated how some of the existing measures to protect children from inadvertently seeing age-inappropriate content operate in practice. YouTube representatives, for example, confirmed that age-restricted content is blocked for all logged-out users of YouTube as a baseline protection, and that participation features such as commenting and uploading videos are disabled.
2.102 Yahoo similarly advised that it turns:
… SafeSearch settings on by default for all users, whether they're logged in or logged out, and they have to manually change those. For any child, for example, who was to search on Yahoo for adult content on purpose or by accident, that content would not be displayed unless they were to go in and manually change those search settings.
2.103 DIGI explained that these measures are bolstered by the requirements of the Designated Internet Services Code, requiring pornography sites to ‘do their part’ to ensure such material is age-gated.
Inadvertent censorship of health information and lawful content
2.104 During the inquiry, the committee received evidence of potential unintended consequences from implementing content filtering associated with Australia’s age assurance measures. In particular, inquiry participants expressed concerns that automated content filters may inadvertently block access to sexual health information and other lawful content.
Restricting access to essential health information
2.105 Under the Search Engine Services Code, internet search engine services must, among other things, implement measures to:
prevent Australian children from accessing or being exposed to pornography and high-impact violence material in search results; and
reduce end-users’ unintentional exposure to online pornography, high-impact violence material and self-harm material.
2.106 In particular, internet search engine services will be required to ‘implement ranking systems and algorithms designed to reduce the risk of online pornography and high-impact violence material appearing in search results’.
2.107 However, several inquiry participants raised concerns that, as the Phase 2 codes lack safeguards to protect access to sexual health information, such information is likely to be blocked by algorithmic technologies targeting Class 2 material such as pornography.
2.108 For example, the AHRC submitted that content moderation systems ‘have a history of over censorship’, including by major platforms that have misclassified LGBTQIA+ content as ‘sexually explicit or inappropriate’, to the detriment of young people ‘who already face significant barriers to accessing inclusive health education’.
2.109 Similarly, the Scarlet Alliance warned that the Phase 2 codes mandate approaches that are ‘likely to over-capture and restrict access to consent and relationships education material, sexual assault information, and sexual health, family planning and abortion information, for both young people and adults’.
2.110 Furthermore, the committee heard that blocking access to evidence-based sexual health information risks public health and people’s sexual rights. Bloom-Ed, a peak body for evidence-based relationships and sexuality education, submitted that access to evidence-based sexual health information is a ‘vital component of preventive health’. Limiting access to such information, it argued:
… risks worsening existing health inequities, particularly for those in regional and remote communities where online resources often represent the primary or only means of accessing sexual and reproductive healthcare information. Limiting or blocking access to such information may also inhibit pathways to essential services, including abortion care, HIV prevention and treatment, and other forms of reproductive health support. Thus, the indiscriminate filtering of sexual content has the potential to undermine the health and rights of young people, while creating, or further bolstering, barriers for those who already experience structural inequities.
2.111 While the AHRC considered that the Search Engine Services Code ‘contributes positively to the fulfilment of several human rights’, the commission also considered it ‘important to ensure that such measures do not inadvertently limit access to safe and inclusive information - particularly for LGBTQIA+ young people.’ As such, the AHRC considered that ‘[p]rotective frameworks should therefore be designed in ways that uphold the rights of all young people to access developmentally appropriate and non-exploitative resources.’
2.112 To help achieve this, the AHRC proposed further definitional clarity, recommending: ‘The Australian Government and eSafety clarify that Class 1C and Class 2 materials excludes legitimate sexual health and educational content.’ The AHRC additionally recommended that:
eSafety works with industry to create safeguards within Schedule 3 - Internet Search Engine Services Online Safety Code (Class 1C and Class 2 Material) to ensure that measures to restrict access to pornography do not inadvertently block access to inclusive, evidence-based sexual health and relationship information, particularly for LGBTQIA+ young people.
Blocking of lawful content
2.113 Some inquiry participants raised concerns that automated tools for filtering content risk inadvertently blocking lawful content from adults. For instance, the Eros Association submitted that:
Automated tools that detect “nudity” or “pornography” risk wrongly classifying lawful content, including R18+ and X18+ material. This could result in adults being denied access to entertainment and information they are legally entitled to view.
2.114 Submitters also raised concerns that the Search Engine Services Code or the SMMA obligation will enable operators of search engine or social media platforms to make censorship decisions regarding information that is lawful.
Accountability, oversight and transparency
2.115 This part of the report outlines inquiry participants’ evidence on the accountability, oversight and transparency standards needed to support the effective implementation and operation of the online safety codes and the SMMA obligation.
2.116 The committee heard that the ‘[o]versight of online safety codes must be independent, transparent, and accountable’, and some inquiry participants called for stronger accountability and oversight standards. In particular, inquiry participants emphasised the importance of independent oversight, monitoring and review mechanisms.
The importance of independent oversight and monitoring
2.117 In giving evidence to the committee, DIGI, which led the development of the Search Engine Services Code, stated that there are a ‘number of ways in which the Online Safety Act will work to ensure that the codes are operating correctly’. DIGI explained:
… the commissioner has oversight of the codes and has powers of enforcement. Under the act, the commissioner also has extensive powers to gather information, including those under the basic online safety expectation processes. Those also cover the expectation on services to take action to protect under-18s from being exposed to this sort of material. Those processes are built into the act. In addition, the codes also provide a range of additional transparency measures. Platforms need to report to the commissioner about the measures that they're implementing, and they also have to explain why those measures are appropriate in accordance with the terms of the code. Appropriateness in the codes is also judged in relation to a range of human rights. Platforms have to consider human rights, including freedom of speech, privacy and children's digital rights. That is an avenue for the commissioner to get sight of how those judgements are being made.
2.118 Despite these measures, some inquiry participants raised concerns that industry-led development and implementation of the codes is flawed and will lead to poor outcomes. For instance, Dr Rys Farthing, a policy expert on child rights, explained to the committee the pitfalls of co-regulation:
When you hand industry the pen to write their own codes, you don't necessarily get the strongest safety outcomes, the strongest privacy outcomes or the best outcomes for Australian users.
2.119 By way of contrast, Dr Farthing highlighted that legislation for the online privacy code for children includes powers for the Privacy Commissioner to draft the code directly. Dr Farthing noted this was due to it being ‘widely understood that co-regulatory process, where you get a tech lobby group to draft a code, isn't going to produce the best outcomes.’
2.120 The QUT Digital Media Research Centre similarly considered that ‘[i]ndustry self-regulation has repeatedly failed, with platforms setting standards that suit their commercial interests rather than the public good’. Accordingly, the QUT Digital Media Research Centre considered that leaving implementation and evaluation of the codes to industry ‘risks regulatory capture, mission creep, and weak enforcement’.
2.121 Moreover, the committee heard that oversight of the online safety codes should engage a broad range of independent stakeholders. For instance, the QUT Digital Media Research Centre submitted that:
A sustainable oversight model should be multi-stakeholder, drawing on the expertise of independent academics, civil society organisations, educators, child welfare experts, and privacy advocates. Oversight should not be dominated by industry actors or confined to centralised government agencies alone. Effective accountability requires distributed models of evaluation and consultation, rooted in community needs and local contexts.
2.122 Further, youth services provider yourtown recommended that ‘oversight mechanisms include independent child rights experts, youth representatives and civil society organisations to ensure the Code reflects community values and the lived experiences of children and young people’. The Australian Research Alliance for Children and Youth submitted that ‘[o]versight of online safety codes must embed direct youth representation’.
2.123 To monitor the impact of the Phase 2 Codes and the SMMA obligation, Scarlet Alliance argued that an independent oversight body should be established with representatives from ‘public health and sexuality education organisations, LGBTQI+ organisations, peer and harm reduction organisations and other human rights stakeholders’. Scarlet Alliance considered such a body is ‘essential to minimise the risks of overcapture and restriction of sexuality, LGBTQI+, health promotion, harm reduction and other public interest content for internet users of all ages’.
2.124 Digital Rights Watch submitted that the AHRC and the Office of the Australian Information Commissioner (OAIC) should have an increased role in overseeing the Phase 2 Codes:
It is insufficient for the eSafety Commissioner and industry participants to be the ultimate arbiter of the success of the Codes. To ensure that human rights and privacy are respected during the implementation and review of the Code, they must also be overseen by The Australian Human Rights Commission and the OAIC.
Assessing impact
2.125 Some inquiry participants considered that current oversight mechanisms are overly focussed on regulatory compliance rather than assessing whether the codes and the SMMA are meeting their intended outcomes.
2.126 For instance, the QUT Digital Media Research Centre submitted that to achieve effective regulatory outcomes ‘oversight must shift away from purely technical box-ticking … and instead ask whether interventions are improving’. Such a shift requires ‘long-term investment in independent, public interest research with guaranteed access to platform data’.
2.127 Away from Keyboard similarly submitted:
Oversight of online safety codes in Australia currently emphasises process compliance rather than outcome effectiveness. Platforms can self-report on their adherence to codes, but there is no systematic way to verify whether children are actually safer, harmful content is actually reduced, or carers have become more digitally literate. Without rigorous oversight, regulation risks becoming symbolic rather than transformative.
2.128 Further, Away from Keyboard made several recommendations to improve the oversight and transparency of the impact of the codes, including mandating public dashboards showing information on key harm reduction metrics and commissioning independent expert evaluations of the codes every two years.
2.129 To increase transparency, the AHRC similarly considered that the eSafety Commissioner should update its SMMA obligation guidance to ‘require age-restricted social media platforms to publish annual transparency reports’. The AHRC recommended that the reports ‘include anonymised, aggregated data’ on:
account removals;
age assurance outcomes;
review processes; and
the number of successful challenges.
Review mechanisms
2.130 Some inquiry participants considered that the oversight arrangements for the Search Engine Services Code and the SMMA obligation should include robust mechanisms that ensure age assurance related decisions are subject to review.
2.131 For example, the AHRC expressed concern that the Search Engine Services Code ‘does not prescribe a mechanism for users to challenge the outcome of an incorrect age assurance process.’ The AHRC submitted that this omission was problematic given the automation involved in age assurance technologies that carry ‘known risks of error, demographic bias and limited transparency’. As such, the AHRC recommended that an additional compliance measure be added to the Search Engine Services Code to require ‘an accessible review pathway for users to challenge an age assurance outcome’.
2.132 Further, Mr Joel Canham, an owner of a small online creative community platform, considered that oversight measures should specify ‘what review mechanism exists for wrongful blocking or content removal’. In addition, Bloom-Ed recommended that:
protections for educational content be embedded in the safety codes to prevent overreach; and
clear mechanisms be established to ‘prevent industry-led censorship of health and education platforms’.
2.133 Unlike the Search Engine Services Code, the eSafety Commissioner’s regulatory guidance on the SMMA obligation states that providers ‘should offer accessible, fair, and timely complaints or review mechanisms for end-users’. Such mechanisms would address:
adverse outcomes resulting from any age assurance processes;
adverse outcomes resulting from reports of underage accounts; and
account deactivation or removal decisions.
2.134 While acknowledging the eSafety Commissioner’s SMMA regulatory guidance, the AHRC contended that the guidance ‘does not go far enough in safeguarding the integrity of these review processes’. In particular, the AHRC recommended that the SMMA regulatory guidance be updated to strengthen provisions for human involvement in reviewable decisions.
2.135 Additionally, the AHRC recommended that the independent statutory review of the SMMA obligation—required to commence within two years of the obligation taking effect—be supported by the eSafety Commissioner ‘immediately’ establishing baseline parameters and data collection about the use of social media by under-16s.
Minister’s rule-making power
2.136 In addition to issues outlined above, the AHRC noted that under the Online Safety Act the Minister for Communications has ‘sole discretion to determine what social media platforms must comply with the Guidance via disallowance instruments’. This approach ‘gives the Minister broad powers to decide’ which social media platforms must comply with the SMMA obligation.
2.137 While acknowledging the importance of flexibility in the regulation of the digital environment, the AHRC considered that the ‘absence of clear decision-making standards or safeguards around the exercise of discretion increases the risk of arbitrary or politically motivated decisions’.
2.138 The AHRC recommended that the Online Safety Act be amended to establish clear decision-making criteria that are ‘evidence-based, transparent and consistent with the best interests of children’.
Next chapter
2.139 The following chapter considers evidence from inquiry participants in relation to complementary and alternative approaches to enhance children’s safety online and concludes with the committee’s view.