Chapter 5 - Regulation of social media platforms


The regulation of social media

5.1This chapter considers some of the key elements associated with regulation of social media. These include age verification, the privacy and data issues inherent in using social media, and the role of algorithms.

5.2The chapter concludes with the committee's view of the issues raised during the inquiry and includes recommendations for the Australian Government (government) to consider.

Age verification or age assurance

5.3The committee received substantial evidence on proposals to verify a user's age online. The complexity of the issue is evident in the number of submissions received and the range of issues they traverse.

5.4While the term age verification is explicitly one of the committee's terms of reference, many submitters used the term interchangeably with the broader concept of age assurance.

5.5The committee heard that age assurance is the broad term encompassing a variety of methods that could be used to estimate a user's age, whereas age verification refers to specific actions or technologies that a user could employ to prove their age, and is regarded as a higher bar than age assurance.[1]
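
By way of illustration only, the distinction can be sketched in code. The following example (Python) uses hypothetical class and function names that are not drawn from any vendor, platform or regulator: age assurance yields an estimate with a confidence level, while age verification proves a date of birth, for example against an identity document.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AgeAssuranceResult:
        """An estimate, e.g. from facial age-estimation technology."""
        estimated_age: int
        confidence: float  # 0.0 to 1.0

    @dataclass
    class AgeVerificationResult:
        """A proven date of birth, e.g. checked against a government ID."""
        date_of_birth: date
        document_checked: bool

    def meets_minimum_age(result, minimum_age: int) -> bool:
        """Apply a minimum-age rule to either kind of signal."""
        if isinstance(result, AgeVerificationResult):
            if not result.document_checked:
                return False
            today = date.today()
            had_birthday = (today.month, today.day) >= (result.date_of_birth.month, result.date_of_birth.day)
            age = today.year - result.date_of_birth.year - (0 if had_birthday else 1)
            return age >= minimum_age
        # Assurance: accept the estimate only if it clears the bar with high confidence.
        return result.estimated_age >= minimum_age and result.confidence >= 0.9

    # An estimate can be good enough for age assurance, but it is not proof of age.
    print(meets_minimum_age(AgeAssuranceResult(estimated_age=17, confidence=0.95), 16))
    print(meets_minimum_age(AgeVerificationResult(date(2012, 5, 1), document_checked=True), 16))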

5.6To understand the technological options available to either government or the platforms, the government is currently undertaking a trial of age assurance technologies to assess whether they are as robust and effective as claimed. The trial will also assess their performance in protecting the privacy and security of users' data, and whether they might be adopted by users.[2]

5.7The Department of Infrastructure, Transport, Regional Development, Communications and the Arts (department), which has oversight responsibility for the trial, told the committee that while it does not have statutory powers to compel platforms to take part in the trial, it has been 'encouraged by the number of companies approaching us expressing interest in participating in the trial and looking to have their technology as part of the trial'.[3]

5.8The importance of holding the trial as an independent assessment was emphasised by the department, which provided an example from the United Kingdom (UK) of an age verification technology company disputing the findings of the regulator on the effectiveness of the technology:

… in the UK we've seen the head of Ofcom come out and say that they've targeted 18 for their age assurance because they don't believe that facial estimation technologies work well for children at lower ages. Almost immediately we saw one of the providers, Yoti, come out and contest that. That really highlights the importance of getting an independent assessment rather than relying on the word of any one company or another.[4]

5.9The department was questioned on how different jurisdictions such as the UK or the United States (US) were tackling the problem. In response, Ms Gannon from the department's Online Safety Branch told the committee that Australia was in a very similar situation to the UK in terms of trialling different technologies, and the conclusions they are coming to:

… the UK is talking about age assurance now rather than age verification because of concerns that verification is too high a threshold that raises problems with security and privacy. Similarly that's where Australia has moved as well.[5]

5.10However, the department was quick to assure the committee that this is not necessarily a lower threshold, as there had been 'vast improvements' in age assurance technology, without the privacy and security concerns of age verification.[6]

5.11The situation in the US is significantly different and officials reported that several states had also moved away from age verification, as they had encountered complications around free-speech considerations. Instead, some are focusing on content, with New York introducing a 'safer-kids act that focuses on the algorithms and the feeds provided to children rather than access as a whole'.[7]

5.12While there is much debate about the effectiveness of age verification technology, the eSafety Commissioner told the committee that it is difficult but not insurmountable, and that platforms are capable of ascertaining the age of their users if required to do so:

I will say that age verification is very, very, very difficult, but it can be done. I don't believe it is insurmountable. I think that if we asked the major tech platforms right now, 'How many under-13s do you have?'—and we plan to, thanks to the new BOSE determination—or, 'How many under-16s do you have?' they may not know today, because there's probably a degree of wilful blindness because they haven't had to look at these numbers.[8]

5.13The eSafety Commissioner was also optimistic that advances in technologies will be implemented in various jurisdictions around the world, as is required for a global industry:

… there have certainly been advances in age verification technologies. The US NIST has just done a very comprehensive set of testing of a range of age verification technologies, their efficacy and where there might be weaknesses. I think we're actually coming to a place where we also have other regulators around the world. We started a group called the Global Online Safety Regulators Network and we chaired the first year. This year, Ofcom, from the UK, is chairing. They are, of course, looking at age verification. Irish Online Safety is looking at that. Now the DSA is looking at different forms of regulation around some of the major online porn providers. I think you're seeing a real maturing.[9]

5.14Meta's Global Head of Safety, Ms Antigone Davis, explained to the committee why Meta considers the ability to identify its users' ages important to its service provision:

When you're trying to provide a service and you want to try to provide age-appropriate experiences, that can vary, say, for someone who's under 18 versus someone who's over 18. Having that information enables us to tailor an experience in that way. The other thing that that information helps us to do is determine what the appropriate defaults are to put in place—privacy defaults, security defaults, safety defaults, content restrictions. It also helps us, if we can, to try to establish a connection to a parent or a guardian. That can be very useful in helping to provide a positive experience online.[10]

5.15The committee also heard that TikTok applies age restrictions to various aspects of its platform. In its submission and its appearance before the committee, TikTok listed the steps it takes to provide age-appropriate content:

For example, accounts registered to users under 16 years of age are set to private by default. They are also prevented from sending direct messages, and we don't allow content posted by people under the age of 16 to be recommended to people they don't know in the [For You Feed] FYF. This means content posted by TikTok users aged 13 to 15 will not be seen by the wider TikTok audience on the For You feed, which is the way the vast majority of content is consumed on our app.

These measures represent a high-water mark in terms of industry standards. We also prevent young people from receiving late-night push notifications and give parents and guardians the ability to create further restrictions in relation to these notifications.

Additionally, only accounts registered to people 18 or older can host a livestream on the platform. All TikTok LIVE hosts in Australia are also required to have a minimum number of followers. Every account registered to a person under 18 years of age has a 60-minute daily screen time limit by default.[11]
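
The age-tiered defaults described above can be summarised in a short sketch. The code below (Python) is illustrative only; the setting names are hypothetical, but the thresholds mirror the defaults TikTok listed: private accounts and no direct messages under 16, livestream hosting at 18, and a 60-minute daily screen time limit for minors.

    def default_settings(age: int) -> dict:
        """Return illustrative default settings for a new account of a given age."""
        if age < 13:
            raise ValueError("Accounts are not permitted for users under 13")
        return {
            "private_account": age < 16,             # under-16 accounts are private by default
            "can_send_direct_messages": age >= 16,   # under-16s cannot send direct messages
            "recommended_to_strangers": age >= 16,   # under-16 content not surfaced in the For You feed
            "late_night_push_notifications": age >= 16,
            "can_host_livestream": age >= 18,        # LIVE hosting is limited to adults
            "daily_screen_time_limit_minutes": 60 if age < 18 else None,
        }

    print(default_settings(14))
    print(default_settings(18))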

5.16However, it was unclear to the committee how TikTok enforces the age restrictions on its various products, other than through a specialist team in its 'moderation and trust and safety network' finding accounts it suspects are run by users under 13:

There is a specialist team in our moderation and trust and safety network that looks for cues and indications that an account is run by a person who is below the age of 13. When we find them, we suspend them. We are also very transparent about the scale at which we're doing that. We report quarterly on the number of removals, and that's where you're getting this 76 million number from for last year's take-downs.[12]

5.17In line with the other platforms, Snap Inc (Snap), the owner of Snapchat, told the committee its platform is 'not intended for people under the age of 13'. It also differentiates between users aged 13–17 and those over 18, with the younger age group being subject to more 'restrictive content privacy settings'. Snap does not allow users to change their age after they have opened their account.[13]

5.18Snap is not of the view that there is a clear solution to providing robust age verification or assurance:

We continue to explore options for age verification, and have been engaging closely with authorities around the world, including the eSafety Commissioner, for many years. Providing effective age verification or assurance that balances user safety, data privacy and security, fairness, accessibility and equity, is a persistent policy challenge with no clear solution.[14]

How should age assurance technology be applied?

5.19The committee also heard evidence from various submitters and witnesses on what would be the most effective application of age assurance technology. The options described to the committee were whether age assurance should be applied on the device itself, at the 'app store' level, or on the platforms or individual websites.

5.20The department described those options as 'different points in the tech stack to provide different types of protections', and that these different points 'would also allow differential protections for security and privacy as well'.[15]

5.21The eSafety Commissioner also emphasised the importance of looking at the entire ecosystem when considering all safety elements:

… if you're not thinking about the role of the app stores, the phone manufacturers, the search engines and all of the other players in this space, it's going to be challenging to come up with an efficacious solution.[16]

5.22At its first appearance before the committee in June, Meta said that in its view age verification 'would be a very important mechanism for enhancing the safety and security of services', and that its strong preference as a company is for age verification to take place at the app store or operating system (OS) level.[17]

5.23Meta explained that the rationale for its position is based on privacy, protection of data and usability:

… we are supportive of having age verification done at the app store or OS level. That level within the ecosystem is the gatekeeping level for the entire ecosystem, and it provides a way to ensure the least privacy-intrusive approach to collecting that information and then utilising that across the ecosystem. The challenges with doing age verification at an individual app level, for example, are that it creates additional privacy intrusions and it will move people from one place that asks for age to other apps that don't ask for age.[18]

5.24In response to questions on whether having age assurance at the OS level would absolve platforms of their responsibility to prevent children from accessing their services, Meta responded that:

We aren't trying to pass responsibility to others … The truth is that you really do need a multilayered system to do this well.[19]

5.25Snap submitted that its preferred option for where age assurance should be applied is at the device level. Its submission discussed the two primary options of ID-based solutions and age-estimation technology, as set out in the eSafety Commissioner's 2023 Roadmap for age verification (Roadmap). In Snap's view, ID-based solutions would require highly sensitive data to be held by online platforms, which would present 'unacceptably high' privacy risks.[20]

5.26With regard to the second option of age-estimation technology, Snap quoted the Roadmap's observation that the technology is still in its infancy and 'may not perform well for some skin tones, genders, or those with physical differences'.[21]

5.27Snap outlined the advantages, as it sees them, of age verification being applied at the device level:

Adding a level of age verification to this step, and then making this verified age available to all services, would simplify the process for users, reduce the risk of repeatedly providing sensitive ID data to a wide range of apps, and avoid consent fatigue. Users would only need to confirm their age once, which also increases the odds that the information will be accurate. If age is collected and checked at the device level, then that information could be used within the app store to show apps appropriate for the user’s age (meaning that age-inappropriate apps couldn’t be accessed or downloaded; users under 13 would be prevented from viewing or downloading apps that are designated 13+).[22]
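
The device-level model Snap describes can be sketched as follows. This is an illustrative example only (Python); the function and field names are hypothetical and do not correspond to any real operating system or app store interface. The idea is that an age confirmed once on the device is reused by the app store to hide or block apps whose age rating the user does not meet.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class App:
        name: str
        minimum_age: int  # the app's age rating

    def device_confirmed_age() -> Optional[int]:
        """Stand-in for an operating-system signal returning the age confirmed
        on this device, or None if no age has been confirmed."""
        return 12  # hypothetical value for the example

    def visible_apps(catalogue: list[App]) -> list[App]:
        """Filter the app store catalogue using the device-level age signal."""
        age = device_confirmed_age()
        if age is None:
            # No confirmed age: show only apps rated for all ages.
            return [app for app in catalogue if app.minimum_age == 0]
        return [app for app in catalogue if age >= app.minimum_age]

    catalogue = [App("weather", 0), App("photo_chat", 13), App("dating", 18)]
    print([app.name for app in visible_apps(catalogue)])  # only 'weather' is shown to a 12-year-old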

What is a social media platform?

5.28The question of what constitutes social media is a crucial element of formulating any regulatory response. The European Union (EU) designates 23 platforms as Very Large Online Platforms (VLOPs) for the purposes of its Digital Services Act. While Australia does not have a similar list, youth advocates argued that deciding what is and is not a social media platform for the purposes of regulation, such as age restrictions, is crucial:

There is a key distinction to be made between sharing focused platforms, like Facebook, Instagram and Twitter, which allow content to be accessed by anyone, and communication focused platforms, like Messenger, WhatsApp and Discord, in which content has a direct known and intended recipient or recipients. Communication platforms tend to create more intimate spaces for socialising online. Understanding this difference is crucial for developing effective policies and protections.[23]

5.29The committee heard that regulation must take into account the continuously evolving nature of online services, which are constantly adding new features and services that may fall in and out of regulatory definitions:

… what do we define as social media? Obviously, off the top of my head I can think of Facebook, Instagram and TikTok. But the term 'social media' is quite vague in nature. Do we count something like WhatsApp also as social media? It's a more message based platform that a lot of young people are using to communicate with their friends or their family members […] Recently, I had a conversation with someone about how Spotify, for example, is gearing towards becoming a social media platform as well. At the moment Spotify is mostly used for people to consume different kinds of audio media things—think of music, podcasts, da da da. At the same time, there are some features where people can have profiles and they can interact with other profiles. They can comment on certain media and podcast features as well.[24]

Should children be banned from social media?

5.30The overarching issue around verifying someone's age online is whether it can lead to the online environment being safer for those deemed to be too young to make free and informed choices about how they manage their online experience.

5.31The committee heard contrasting views on whether making it safer for children means preventing them from accessing social media until they reach a certain age. However, all inquiry participants were strongly of the view that it should not be seen as a single solution to all of the harms that can occur online.

5.32A significant number of contributors to the inquiry favoured children being prevented from accessing any social media platform. Collective Shout, an advocacy group against the objectification of women and the sexualisation of girls, was strongly of the view that while it may not be absolute in its effectiveness, it is better than no action at all to prevent young people's exposure to the worst of the online environment:

We believe that if we implement age assurance technologies, and if we can delay access to these social media platforms, you'll have fewer children being exposed to porn, to predators, to harmful online content, to bullying…

Even if it's not foolproof, it's still better to go from zero to maybe 70 per cent or 85 per cent because right now there's nothing. There are no protections at all, and we're seeing the results of that in the declining mental health of our young people, especially girls—rising anxiety, self-harm, suicidal ideation et cetera.[25]

5.33Eating Disorder Families Australia (EDFA) was also supportive of restricting the age at which people can access social media, on the grounds that parents are not equipped to manage their children's access, or to put in place any protections that may be available on the platforms themselves:

Parents feel overwhelmed and powerless against the influence of the social media giants. They are no match for the advanced algorithms and targeted messages meant to reach their children, who have not yet developed the maturity and skills necessary to navigate this online world.

EDFA calls for the age limit for social media access to be raised to 16, with effective age verification measures by the social media platforms put in place. Lifting the age limit would be a powerful tool in the parental toolbox.[26]

5.34The Daniel Morcombe Foundation and others[27] concurred with EDFA that young teenagers should be prevented from accessing social media, and age verification technology and processes should be introduced to enforce that.[28]

5.35Professor Selena Bartlett recommended, in both her submission and her appearance before the committee, that the age of access be raised to protect younger children. In Professor Bartlett's view, the balance has tipped over to the point that the 'harms outweigh the benefits'.[29]

5.36Professor Bartlett set out her view clearly in her submission that the content currently available on platforms, and accessible through smartphones, leaves children vulnerable to 'inappropriate content, online predators and exploitation and mental health disorders'. Professor Bartlett recommended that restrictions be introduced until safety mechanisms are in place to protect children:

Raise the Age of Access: Increase the minimum age for access to smartphones and social media accounts that contain adult content and messaging apps that are more than PG-rated until robust online safety, protection and security measures are implemented.[30]

5.37The ability of young people to develop the necessary skills and resilience to handle some facets of the online world was cited as an argument for restricting their access. The Kokoda Youth Foundation submitted its support for restricting access to social media until the age of 16, in order to provide young people 'more time to mature and develop the necessary skills to handle the challenges and pressures that come with social media use'.[31]

5.38The Australia and New Zealand Academy for Eating Disorders (ANZAED) similarly cited the need for children to be given time to develop and mature, particularly as the early teenage years are the most common time for a young person to experience the onset of an eating disorder:

Increase the minimum age of social media use to 16 years. Currently the minimum age is 13 years. Extending the minimum age will afford children the additional time to develop without the influence of social media. The teenage years are the most prevalent age of onset for an eating disorder in Australia.[32]

5.39In its appearance before the committee, the Australian Parents Council expressed its strong support for increasing the minimum age for access to social media, preferably to the age of 18, on similar grounds:

Children lack the developmental maturity to navigate the complexities and risks posed by social media. Platforms are designed to exploit their natural vulnerabilities, making them particularly susceptible to harmful influences. Just as we protect children from alcohol, gambling and driving until they reach an appropriate age, so too must we protect them from the harmful effects of social media until they are developmentally ready.[33]

5.40Catholic Schools Parents WA (CSPWA) similarly reported views expressed by parents at its annual general meeting (AGM) that younger teenagers were not neurologically able to cope healthily with the impacts of social media:

Parents at the CSPWA AGM were very much in support of raising the social media minimum age limit to 16. There is particular concern around the sense that 'Excessive social media use is rewiring young brains within a critical window of psychological development, causing an epidemic of mental illness'.[34]

5.41Conversely, there were several contributors to the inquiry who were not supportive of banning younger teenagers from social media—or at least did not see it as effective in preventing young people from being exposed to harmful content.

5.42Dr Jessie Mitchell from the Alannah and Madeline Foundation suggested that any regime introduced ought to place the best interests of children, and the potential impacts on them, at the heart of any decisions affecting them:

… we support an enforceable requirement for digital platforms, including social media platforms, to treat the best interests of the child as a primary consideration in any decisions affecting children and to demonstrate this approach through the use of child rights impact assessments in order to identify actual or potential adverse impacts on children and to take appropriate steps to prevent, address and mitigate these.[35]

5.43On the specific question of age assurance, Dr Mitchell was circumspect about whether it would have the desired effect of making the online environment safer, noting that it could also introduce privacy concerns around children's personal data:

Age assurance can play a positive role in children's online safety, for example, by age-gating products and services that are illegal for under-18s and by adjusting some other digital environments to make them safer and more appropriate for children. However, there are risks associated with age assurance. It can give people the impression that a platform is safe for users as long as they're over the age limit, which may not be the case. Age assurance can also lead to interference in children's privacy through inappropriate handling of their personal data, potentially including in sensitive areas like biometrics.[36]

5.44Ms Genevieve Fraser from Dolly's Dream expressed her concern that implementing an age limit would not provide the kind of safe environment that everyone hopes to achieve:

I think one of the greatest dangers of this legislation and the changing of the age limit is that we will go backwards on the safety of children and parents because we have created a false sense of security by changing the age limit.[37]

5.45The Alannah and Madeline Foundation's Chief Executive, Ms Sarah Davies, emphasised the importance of the platforms being safe for any age group, and highlighted the potential dangers of focusing on limiting access on the basis of age as a solution to that challenge:

I think the issue with raising the age of access to social media is that it's the wrong question. I think the danger is that we all rush off to a red herring and try to find a solution where there isn't going to be one. I think we could have social media for any age as long as there was age appropriate design, safety by design and the overriding principle of the best interests of the child, according to the age of access…[38]

5.46Reset.Tech acknowledged that being able to verify a user's age is an important element of how the online environment is managed and designed. However, while important, in itself it does not tackle unsafe and harmful material on a platform, and hence does not provide any assurance that users will not be subject to harmful material, whatever their age:

Age verification—or, more likely, let's be honest, age assurance—will help, potentially, fix two specific problems online. These are big problems, don't get me wrong: under-18-year-olds accessing restricted services and then young people who are under the required age of a particular platform accessing that digital platform.

It can help address those two problems but it doesn't address anything else. The platforms will still be unsafe when someone turns 13 or 16 and they're able to legitimately use a platform, and platforms will still carry illegal content or content that violates the guidelines even if we verify that someone who's 17 shouldn't see that on this platform.[39]

5.47Designing the platforms to be safe regardless of who is on them was a point made repeatedly by young people. Safety by design is a concept promoted broadly and supported very strongly by the youth advocates the committee heard from:

… we advocate for a safety-by-design approach, which focuses on anticipating and preventing threats before they occur through the literal design of the platforms—proactivity over reactivity. Responsibility for these platforms should not only solely rest with the users but also with the creators. It is pertinent to hold big tech accountable for the negative impacts of their platforms by legislating these safety features into law to ensure that our protection is foundational rather than an afterthought. We require a comprehensive approach: proactive safety measures from platforms…[40]

5.48Mr Nhat Minh Hoang from the eSafety Youth Council expressed a view, shared by all the young people who appeared before the committee, that the focus should be on making the platforms safe rather than on restricting who should be on them:

I feel […] that the age restriction, if anything, is a distraction. I feel like if we really want to address the root cause of the problem we need to embed safety by design.[41]

5.49The young people also pointed out that the priorities of the platforms and the safety of the users are not aligned and are often in conflict:

… there is an inherent conflict of interest between what young people want and what the platforms want. The platforms make profit literally by keeping us online as much as possible and by making us watch as many ads as possible. It's not just about us being safe. If we spent too much time on it, that's directly bad for us.[42]

5.50While the imperative of the platforms to keep people online for as long as possible has the capacity to do much harm, so does the content that often generates the most engagement:

There's been a lot of research looking into how a post or any piece of information on social media that's promoting misinformation or disinformation or is radicalising people tends to have higher engagement rates. If you think about it, most of the time these posts are being created in a kind of hyperbolic way to engage people. And they touch upon a lot of different controversial topics. The content creators themselves know this, and they tap into this flaw of the system to make the most out of it. And the systems/the platforms themselves also benefit from it. So now you have this kind of feeding loop of both the harmful-content creators flourishing and the platforms themselves not really addressing the problems because it's actually benefiting their bottom line.[43]
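
The feedback loop described above can be shown with a deliberately simplified example. The sketch below (Python) is a toy illustration only; the weights and post data are hypothetical and do not represent any platform's actual ranking system. When a feed is ordered purely by predicted engagement, the content that provokes the strongest reaction is surfaced first, which in turn earns it still more engagement.

    posts = [
        {"id": "measured_explainer", "likes": 120, "comments": 15, "shares": 5},
        {"id": "hyperbolic_claim", "likes": 300, "comments": 240, "shares": 180},
    ]

    def engagement_score(post: dict) -> int:
        # Arbitrary weights: comments and shares often signal stronger reactions than likes.
        return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

    # Ranking purely by engagement puts the provocative post at the top of the feed.
    ranked = sorted(posts, key=engagement_score, reverse=True)
    print([post["id"] for post in ranked])  # ['hyperbolic_claim', 'measured_explainer']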

5.51A key element of safety by design, and giving some degree of control and power to the user, is having content moderation and active oversight that users can access in real time:

I think the one thing that's missing is a lot of platform oversight, and I think this is where it really comes to things like content moderation. You will get people who might bully others online or might send inappropriate things. When you are a young person in those online spaces—I know that I, especially, felt like this—you can feel like it's only you there and there's no-one there for you to reach out to. If you reach out your parents, they've got no idea what you mean. You've got no power, either. At school you've got teachers, and you can go to your teachers and say, 'This kid did such and such,' and then you can sort it out, whereas you might email a social platform and they'll send some pre-generated automatic message or they won't reply at all. They'll say, 'Too bad,' or, 'Don't come back,' or that kind of thing. So it's really missing that oversight, especially for young people. I think as we are growing up we need those role models and that support to develop how we interact with the world, and I think that's really something missing from the social media platforms.[44]

5.52Digital Rights Watch also argued that age verification is not a solution to the problems on social media and, given the privacy implications of age assurance technology, could actually cause further issues rather than resolve them:

There are serious challenges with age assurance, let alone age verification, including poor quality tech products that claim to do it. The reality is that VPN capabilities allow people to circumvent it. The widespread privacy violation that comes with it and the lack of any voluntariness for anybody else who wants to get onto the internet—in fact their capacity to be on the internet, depending on where it's used—means social media platforms are circumscribed. That's a problem. It is a problem the way age verification is being proposed as a solution to these problems. Even if it is part of a suite of proposals, there are not only better reforms that can be made in the here and now that have better evidence, better support, better utility and better impact as a policy proposal, which also don't do harm.[45]

5.53Meta and others[46] similarly argued that preventing access for young people would simply move them on to unregulated spaces on the internet:

… if teens want to access services, hard bans can actually just move them to different services, services that are outside of the awareness of regulators, outside of the awareness of parents, for example. Teens use about 40 different apps, on average. That's a US statistic. I don't know the exact equivalent in Australia, but my guess is it's quite close. Their parents probably are very unaware of most of those apps.[47]

5.54The eSafety Youth Council also suggested that young people will find workarounds to any proposed restrictions in access to social media:

Young people are strong-willed, determined and tech-savvy, and we feel that young people may still find a way around these restrictions. For example, young people could use VPNs or generative AI tools to simply 'beat' the age verification system. We question if restricting young people's access will truly address the root cause of the negative influences and impacts of social media.[48]

5.55The Butterfly Foundation voiced its opposition to the introduction of bans for people under 18 on the grounds that there is no evidence that supports a ban, or that suggests a ban would be effective:

While Butterfly supports better age assurance by social media companies, we do not support recent calls for a blanket social media ban restricting access to people under the age of 18. In relation to eating disorders and body image issues there is no evidence that supports the introduction of this type of restriction.[49]

5.56Instead, the Butterfly Foundation and others[50] recommended multilayered interventions which examine how young people use social media, and look at how education for both young people and their parents and carers can address and improve safety online:

Consideration of multiple levels of intervention is required to improve safety for people under 18. These include: understanding the experiences of children and young people online; understanding the ways in which social media interacts with other online environments accessible to people under 18; supporting digital mental health literacy among children and young people; and educating and supporting parents and carers with the knowledge and skills to apply safety technology and to communicate effectively with their children about online content.[51]

5.57Education was also seen as a crucial element in addressing many of the problems on social media. All of the youth advocates who spoke to the committee supported significant effort and investment in preparing young people before they engage with social media, and in equipping parents and carers to provide effective oversight of a young person's engagement:

I believe that the No. 1 key thing we can do right now is focus on education. The topic of social media has been compared to a car several times before […] When young people in Australia turn 16, one of the first things they do is go and get their learner's permit. Picture social media like a car. Before they are ready to take it out on the streets, they have to learn. They have to go through a process of learning how to operate the car in a safe manner where they won't harm themselves. We imagine something similar being done for social media—an educational period where you learn more about the dangers, about how to keep yourself and others safe on this platform.[52]

This is really where we come into that conversation of education being a fundamental pillar in actually changing people's experiences on these platforms. We are currently not really educating people on them, but we are going on about banning things and age verification and things like that […] I think the biggest fundamental tool and weapon that we have is education.[53]

5.58According to the young people, education also has to be relevant and evolve with the platforms as they develop:

So it's education, but it's also ensuring that education is both embedded from a young age and consistently updated to remain relevant. What's the point of teaching young people about a platform from five years ago when they're on a completely different landscape now?[54]

5.59Eating Disorders Queensland's (EDQ) submission was framed around the principles underpinning the various measures and protections that have been introduced in the EU under the Digital Services Act (DSA). These are:

Better protecting users by safeguarding fundamental rights online.

Establishing a robust transparency and accountability framework for online platforms, particularly around algorithms.

Providing a uniform framework across Australia.

Collaborating with global partners to learn from them and participate in global research.[55]

5.60In terms of how these principles should support measures to protect young people online, EDQ pointed out that the DSA does not ban minors from online platforms, but instead has put in place several rules to protect and empower users:

Ban on Targeted Advertising to Minors:

The new rules prohibit targeted advertising to minors based on user profiling. If a service can reasonably determine that a user is a minor, it cannot use their personal data for targeted ads.

Enhanced Transparency Measures:

Online platforms must now provide clear information on their terms and conditions and offer transparency regarding the algorithms used to recommend content.

Prohibition of 'Dark Patterns':

The DSA bans the use of 'dark patterns': deceptive design tricks that manipulate users into making unintended choices on online platforms.

Data Access for Researchers:

New provisions grant researchers access to data from key platforms, enabling them to study platform operations and the evolution of online risks.

Expanded User Rights:

Users gain new rights, including:

  • The right to file complaints directly to the platform.
  • The ability to seek out-of-court settlements.
  • The option to lodge complaints with their national authority in their own language.
  • The right to seek compensation for rule breaches.[56]
5.61Some of the opposition to banning young people from social media is on the grounds that it places the onus on the individual instead of the platforms to manage the harmful impacts of using the services. Childlight, a research hub at UNSW focused on child sexual abuse and exploitation, argued that a ban would penalise the young person for the failures of the tech industry:

… such a ban unfairly penalises children for the negligence of the technology sector, and does not incentivise social media companies to embed child protection into their products and services.[57]

5.62The Australian Medical Association (AMA) concurred that it is how the platforms operate that needs to be addressed rather than moving that emphasis to the young person:

The AMA is concerned increasing age-verification moves the emphasis back on the individual as a consumer, rather than the social media platforms that have control over the content children are being exposed to, using algorithms and commercial tactics to push harmful content at consumers.[58]

5.63The Australian Christian Lobby (ACL) was also not in favour of a prohibition on young people accessing social media. While it acknowledged the pressure on parents to make decisions in this complex area, it was still of the view that the decision should be for parents to make and not up to the government:

Recognising the potential benefits of limiting access to social media for younger teens, the ACL would not recommend a government-level control as the answer.[59]

A young person-centred approach

5.64Many of those who opposed banning young people from social media expressed that view on the basis that there were more effective mechanisms that could enhance their safety in the online environment. Others emphasised that working with young people is essential to protecting them, and to ensuring that social media platforms are designed with safety at their core.

5.65ReachOut appeared before the committee with representatives from several youth advocacy and mental health bodies[60] to argue for 'nuanced, evidence based and co-designed policy responses' to the 'necessary' reform of social media. One of their recommendations was to put young people at the heart of reforming safety features to be effective and 'fit for purpose'.[61]

5.66Prevention United spoke of how crucial it is to work with young people to generate solutions to the problems on social media, given the generational divide on how young and older people use social media:

Ultimately, young people want to continue to be able to access social media, but they want these platforms to be safe. They also want information, education, and resources that are co-designed by young people and respect their ability to make informed decisions. Young people need to be involved in generating the solutions to the problems on social media, since several sources of data show a generational divide in the way that young people and older generations think about the online world.[62]

5.67This view was strongly supported by the Y Australia, which recommended genuine consultation with young people to ensure policies are effective and do not disempower the people they are trying to protect:

Australian governments ensure they genuinely consult with young people about social media age restrictions to avoid the risk of creating ineffective and disempowering policy. This work should be led by the eSafety Youth Council and the eSafety Commissioner, complemented by a broader, age-appropriate, youth-led consultation with young people ages 17 and younger.[63]

5.68Education as a crucial step in informing users, parents and carers how to navigate the complexities of the online environment was cited by the majority of submitters. The engagement of young people in the design of education programs to minimise harm and maximise the positive outcomes of social media was also broadly commended.

5.69The National Mental Health Commission highlighted the considerable skills and talents of young people that make their contribution to proposed reforms invaluable:

In our consultation, there was also a recognition that many young people are already demonstrating considerable skill in navigating digital technology, for example, by thoughtfully curating feeds to suit their interests and preferences or putting self-imposed limits on their use. The importance of co-designing and developing education programs with young people was considered crucial to ensure these were effective and appropriately targeted. This relates both to learning how to mitigate the potential negative effects of digital technology as well as how to maximise the positive effects.[64]

5.70In defending the right of young people, and of families and carers, to have agency in the online space, the eSafety Youth Council promoted education as a potentially much more effective way of keeping young people safe online than introducing age restrictions:

Prioritise education for children and young people, as well as their families. It should be the responsibility of families, parents, and young people themselves to regulate what media they can and cannot consume, but they need to be afforded the appropriate knowledge and tools to do so. Relevant and comprehensive education should be readily accessible and focus on supporting young people and families to encourage open dialogue to develop digital literacy, online safety skills, and help-seeking behaviours suited to each young person. Age verification tools cannot account for evolving maturity levels and differing capabilities among young people the way education can.[65]

5.71The right of children to have their rights and safety placed at the heart of any policies was also the foundation of some submitters' opposition to a social media ban.

5.72As noted in Chapter 3, the Australian Human Rights Commission (AHRC) and others cited the Convention on the Rights of the Child (CRC) (which Australia ratified in 1990), and its four Guiding Principles as a framework for how policies should be developed for children online:

children's right to non-discrimination (article 2);

the best interests of the child as a primary consideration (article 3);

children's right to life, survival and development (article 6); and

children's right to be heard and have views taken into account (article 12).[66]

5.73The AHRC pointed out that where policies to protect children from online harm are put in place that limit their rights, these must be 'lawful, necessary and proportionate'.[67]

5.74In a similar vein, the Human Technology Institute (HTI) from UTS,[68] and the NSW Council for Civil Liberties also emphasised the need for any policies that impact children to take their rights into account.[69]

5.75UNICEF Australia argued that the role of government is to protect young people online, not prohibit them from being there. In its submission to the inquiry, UNICEF Australia emphasised the need to enact proportionate safeguards for young people that consider the benefits as well as the dangers of digital participation:

… we need to protect children within the digital world, not necessarily prohibit them from using it. We can be alert to dangers without being disproportionately alarmed, and we can reduce risk while ensuring that the benefits of digital participation are maintained. Solutions like age-gating (preventing or allowing access to platforms and content based on age) have a role to play as one of several measures which can be used to better protect children, but this needs to be proportionate to risk and balanced with the other rights that children have, like being able to access important information and express themselves.[70]

Privacy and data

5.76There is a substantial amount of work being undertaken to reform privacy protections in Australia. Many of the proposed reforms will bear on an underlying theme of discussions held during the inquiry: the privacy of, and access to, the data of social media users. This section sets out the government's reform proposals and discusses some of the concerns raised by submitters and witnesses.

5.77One of the strong sentiments voiced by submitters to the inquiry is that social media platforms do not respect the rights of citizens in terms of privacy, the use of their data, the protection of children, and their rights as consumers. The business model, the operational model, and the lack of accountability and transparency conspire to create an online environment that is controlled by huge overseas platforms with little or no regard for Australian sovereignty, laws, or the health and wellbeing of citizens.

5.78Concerns about how data is collected, whether individuals are aware of and actively consent to their data being collected (and often sold and disseminated), and whether current privacy protections are fit for purpose were common themes raised by submitters.

5.79The committee heard from various contributors that privacy regulation and legislation has failed to keep up with the rise of social media. According to Digital Rights Watch (DRW) this has had a significant impact on Australian society:

The Privacy Act has been the subject of a years-long review process, and it is quite clear now that our laws are four decades out of date. The result is that the data-extractive business models that have been allowed to proliferate online have come at significant expense to Australian society. Excessive data collection facilitates targeting by predatory industries, including gambling, alcohol, diet and junk food products, essentially allowing for the exploitation of vulnerability for profit. Microtargeting makes online life dangerous for many Australians.[71]

5.80While DRW did not suggest that reform of privacy regulations would solve all of the issues around social media, it did suggest 'it represents a significant and critical first step'.[72]

5.81As with the issue of accountability, the question of who bears the onus for maintaining an individual's privacy is central to the operation of social media. DRW cited research conducted in the US which indicates that the platforms have a strong preference on where that onus lies:

Research out of the US done by the Markup indicates that the two key industry demands when it comes to privacy law are (1) companies do not want to be able to be sued for violations of privacy and (2) industry wants a situation where consumers have to opt out of rather than opt into digital tracking. At present, those two key demands are satisfied in Australian law, but this is something that could be easily fixed and ought to be changed immediately.[73]

5.82According to the Consumer Policy Research Centre (CPRC), understanding the privacy policies that users are subject to online in Australia would take, 'in just one day on average…14 hours for you to consume all that information'.[74] This also presupposes that a user in Australia is able to opt out of advertising on the platforms, which they are currently unable to do, unlike users in some overseas jurisdictions.

5.83Reset.Tech Australia discussed the reaction from platforms to proposals to allow Australians to opt out of targeted advertising, as is the case in some overseas jurisdictions:

We saw digital platforms turn up in Australia en masse and say, 'We couldn't possibly give Australian adults the right to opt out of targeted advertising. We would need to charge users. This would be the end of our service in Australia.' Actually, users have the right to opt out of targeted advertising in Europe, and 20 per cent of American users have the right to opt out of targeted advertising in state-by-state reforms that give them that choice in America. This is something the platforms can do, but they turn up in Australia saying it's not possible.[75]

5.84The Privacy Act 1988 (Privacy Act) was the subject of a lengthy review, with a view to strengthening privacy protections to 'ensure the collection, use and disclosure of people's personal information is reasonable, reflects community expectations and is adequately protected from unauthorised access and misuse.'[76]

5.85The committee heard that reform of the Privacy Act is fundamental for government and users alike in the effort to regulate the online experience. Submitters highlighted the transformational potential that fit-for-purpose privacy regulation has in alleviating some of the harms that can occur in an unregulated online space. Per Capita told the committee that:

… if we get our privacy laws fit for purpose—and we haven't had a decent update of privacy in 40 years—that is a building block to start to undermine the business models that are driving these problems, while also opening up new possibilities.[77]

5.86DRW agreed that privacy reform is one of the critical tools in creating an environment that meets the expectations of the user and society:

Privacy reform is the obvious contender. In part, it's because it will assist not only to deal with harm experienced by people who are targeted by predatory industries but also to elevate the quality of information ecosystem by restricting the capability or the incentive to monetise misleading content, mis- and disinformation, and it's not an interference with people's rights. In fact, it's empowering. It gives people the opportunity to take back power and make online spaces in the shape that suits them, rather than what suits the business model of large American corporations.[78]

5.87The Privacy Act review was undertaken by the Attorney-General's Department (AGD), and was born out of the recommendations of the Australian Competition and Consumer Commission's (ACCC) Digital Platforms Inquiry in 2019.[79] The government responded[80] to the review in September 2023, and set out five key focus areas:

Bring the Privacy Act into the digital age

Bring the scope and application of the Privacy Act into the digital age by recognising the public interest in protecting privacy and exploring further how best to apply the Act to a broader range of information and entities which handle this personal information.

Uplift protections

Uplift the protections afforded by the Privacy Act by requiring entities to be accountable for handling individuals’ information within community expectations, and enhancing requirements to keep information secure and destroying it when it is no longer needed. Reforms to the Notifiable Data Breaches (NDB) scheme will assist with reducing harms which may result from data breaches and new organisational accountability requirements will encourage entities to incorporate privacy-by-design into their operating processes. New specific protections will also apply to high privacy risk activities and more vulnerable groups including children, especially online.

Increase clarity and simplicity for entities and individuals

Provide entities with greater clarity on how to protect individuals’ privacy, and simplify the obligations that apply to entities which handle personal information on behalf of another entity. The reforms will increase the flexibility of code-making under the Act, reduce inconsistency and improve coherence across different legal frameworks with privacy protections, and simplify requirements for transferring personal information overseas, particularly to those countries with substantially similar privacy laws.

Improve control and transparency for individuals over their personal information

Provide individuals with greater transparency and control over their information through improved notice and consent mechanisms. We will also explore the scope and application of new rights in relation to personal information and increased avenues to seek redress for interferences with privacy, through a direct right of action permitting individuals to apply to the courts for relief for interferences with privacy under the Privacy Act and a new statutory tort for serious invasions of privacy.

Strengthen enforcement

Increase enforcement powers for the OAIC, expand the scope of orders the court may make in civil penalty proceedings and empower the courts to consider applications for relief made directly by individuals. A strategic assessment of the OAIC and further consideration of its resourcing requirements, including investigating the effectiveness of an industry funding model and establishing litigation funds, will enhance the effectiveness of Australia's privacy regulator.

5.88Of the review's 116 recommendations, the government agreed to 38, agreed in principle to 68, and noted the remaining 10.[81] The government's response set out the next steps for the process, which will be led by the AGD and include:

development of legislative proposals which are 'agreed', with further targeted consultation to follow;

engagement with entities on proposals which are 'agreed in-principle' to explore whether and how they could be implemented so as to proportionately balance privacy safeguards with potential other consequences and additional regulatory burden;

development of a detailed impact analysis, to determine potential compliance costs for regulated entities and other potential economic costs or benefits (including for consumers); and

progressing further advice to Government in 2024, including outcomes of further consultation and legislative proposals.[82]

5.89The government introduced its first tranche of legislative reforms in the Privacy and Other Legislation Amendment Bill 2024 (bill) in September 2024. The bill would implement 23 of the 25 legislative proposals that were agreed in the government response to the Privacy Act review.[83]

5.90The bill comprises three schedules: privacy reforms; serious invasions of privacy; and new criminal offences for the menacing or harassing release of personal data using a carriage service (also known as doxxing):

Schedule 1 of the bill will amend the Privacy Act to enhance its effectiveness, strengthen the enforcement tools available to the privacy regulator and better facilitate safe overseas data flows. It will require the development of a children's online privacy code, streamline information-sharing in emergencies and following eligible data breaches, and increase transparency when entities are automating significant decisions which use personal information.

Schedule 2 of the bill will introduce a new statutory tort to provide redress for serious invasions of privacy.

Schedule 3 of the bill will amend the Criminal Code Act 1995 to introduce new criminal offences to target the harmful practice of doxxing.[84]

5.91The bill was referred to the Senate Legal and Constitutional Affairs Legislation Committee for inquiry and report by 14 November 2024.

5.92The key proposals in the bill include:

establishing a Children's Online Privacy Code (COPC);

creating a statutory tort for serious invasions of privacy;

new powers for the Minister to direct the Commissioner to develop and register Australian Privacy Principles (APP) codes and conduct public inquiries; and

a new civil penalty for acts and practices which interfere with the privacy of individuals (but which fall below the threshold of 'serious') and a civil penalty infringement notice scheme for specific APP and other obligations.[85]

5.93The establishment of a code to protect children's data addresses a key concern of many contributors to the committee's inquiry. As noted above, the bill includes provisions to establish the COPC within 24 months, and the government has indicated it will provide $3 million in funding to the Office of the Australian Information Commissioner (OAIC) to develop the COPC over three years.[86]

Protection of children's data

5.94Google stated in its opening address to the committee that keeping users, particularly those under 18, safe in the online environment, and protecting their privacy was a complex task that required input from various stakeholders:

We take these issues very seriously because better understanding the ages of our users across multiple products and services, while respecting their privacy, is a complex challenge. An effective solution requires multi-stakeholder input from government, regulators, industry bodies, technology providers, child safety experts and others to develop workable solutions, and we're committed to doing our part.[87]

5.95Protecting children in the online environment was of paramount importance to many submitters. The danger for children in particular was highlighted by the b kinder Foundation, which cited Professor Philip Morris AM from the National Association of Practising Psychiatrists:

Many children don't have impulse control, they often don't understand privacy and risk, they may not be competent to make relevant decisions, and they can be exposed to completely inappropriate and damaging material on social media.[88]

5.96DIGI's submission also included a recognition that children's privacy and data should be treated differently from those of other social media users, and that protections should apply across the entire online environment:

DIGI agrees there needs to be widespread protections for children’s privacy and their data. DIGI sees a key opportunity to standardise these protections in the current Privacy Act Review and has long supported specific privacy protections for minors be expanded upon within the Privacy Act, in order to provide minors, or their guardians, confidence that a baseline standard of privacy exists no matter which online service they are using.[89]

5.97Concerns around how children's data is used ranged from the type of data harvested to the implications of data being collected and stored for someone's entire life. The Alannah and Madeline Foundation advocated for a ban on children's data being collected for commercial use:

… we should regulate to ban the harvesting, sale and exchange of all children's data for commercial use, including their biodata and their neural data. That includes their image, their voice and their brain response to stimuli as well as demographic data about who they are, where they live, who their parents are, how old they are and what gender they are.[90]

5.98Reset.Tech Australia questioned who is collecting the data, and how that data could conceivably be used well into the future:

Who is purchasing that data? Are insurance companies purchasing that data when you're 30, to think about what your fees are going to be? The cumulative impact of having a lifetime of data following you around is shocking and a little bit scary, and it's something that's unique to this generation coming up. Thankfully, I don't have data about my health problems, my digital use when I was a child, to follow me into my 50s, but certainly today's generation will.[91]

5.99This view was echoed by several of the young people who appeared before the committee. Mr Sina Aghamofid, a ReachOut youth advocate, highlighted the anxiety young people feel about handing over forms of identification to various parties if, for example, they are required to verify their age:

I personally think that, with the technology and the way it's been presented to us, age verification at the moment isn't where it needs to be. It depends on what form of age verification we're looking at. Whether it's each individual young person giving their identification, which is usually a birth certificate at that stage, and whom they give that to, whether it's just to one provider, the app store or each individual social media company—there's a lot of anxiety about privacy concerns that comes with that.[92]

5.100Mr Nhat Minh Hoang, a member of the eSafety Youth Council, had similar concerns:

… it's really necessary to repeat this concern with privacy. If we go through this process of verifying people's identity and there's all of the different sensitive information, especially for young children and young people, how are we putting in place the infrastructure or different policy to protect this kind of identity?[93]

5.101DIGI supported the introduction of the COPC as proposed by the government in its response to the Privacy Act review.[94] To this end, DIGI cited the UK's Age Appropriate Design Code as a model to ensure that services 'have regard to the best interests of the child' in their design.[95]

5.102UNICEF Australia also voiced its support for the COPC, which it suggested should be 'backed by appropriate resourcing for our independent regulators to fulfil their role in Australia's strengthened digital framework.'[96] The Foundation for Alcohol Research and Education (FARE) also viewed the COPC as essential to protecting children.[97]

5.103The Alannah and Madeline Foundation cited how privacy protections apply to children in some overseas jurisdictions:

The things that we would really like to see happen are already in existence in other parts of the world. We know that tech broadly play market by market, and they look at what's happening in that market and go to where the lowest common denominator of operation is, so none of these things actually would be new. The first one is that all apps, all websites, all digital products and all services should automatically provide the highest level of privacy setting by default for all users under the age of 18. It's really not hard. It happens in Europe. It happens in the UK. It happens in Canada. It's pretty straightforward. Just follow it.[98]

5.104This view was shared by the CPRC, which compared the time a user would need to spend managing their privacy settings on each app or website. In Australia it is two minutes; in the EU it is 3.1 seconds:

… that really came as quite a shock. We knew that would be the case, but we didn't know that the difference would be so large. One thing is that the EU has far stronger privacy protections. They also now have a duty of care built in under their Digital Services Act for very large online platforms, so there are positive obligations on data collection and sharing. More broadly than that, when they ask whether you want your data to be shared or for cookies to collect information about you, it's very simple to accept all or reject all.[99]

5.105The use of children's data without consent, and the lack of protection for that data, was uncovered in the UK following the introduction of its Children's Privacy Code. EdTech, or educational technology used in schools, was found to be collecting children's location data, as well as data about their activities, for advertising purposes.

5.106Dr Jessie Mitchell from the Alannah and Madeline Foundation discussed the need to actively monitor and regulate EdTech products and platforms, a lesson drawn from the UK's experience:

One particular experience that was noted in the UK that was of interest to us was the place of EdTech. I think that was a form of digital technology that perhaps not everyone thought about when the Children's Privacy Code was first introduced…The Information Commissioner has since clarified the meaningful responsibilities of EdTech platforms under the children's code in the UK.[100]

5.107The research on EdTech also highlighted deliberate attempts by industry to shift the regulatory burden onto schools rather than take responsibility for it themselves:

What was observed there by groups like the Digital Futures Commission was that you had EdTech platforms attempting to shift quite a lot of regulatory burden onto schools, positioning schools, through contracts, as being the primary providers of their products and therefore minimising the expectation on industry to lift privacy protections for children. The Information Commissioner has since clarified the meaningful responsibilities of EdTech platforms under the children's code in the UK, but that was one area where there was a real sticking point observed, where industry appeared to be very reluctant to make significant changes at their own end.[101]

Algorithms and recommender systems

5.108The committee also received substantial information on how algorithms and recommender systems operate and are used to dictate the online experience of all social media users. This included how they can promote harmful behaviours and inappropriate content, which can have a detrimental impact on the mental health of users.

5.109The committee is aware of the work being done in this space, by the Australian Government and in overseas jurisdictions, to address what many see as one of, if not the, fundamental design features of social media platforms.

5.110How social media platforms operate has changed markedly in recent years, partly as a result of the sheer volume of users and content that now sits on the platforms. While platforms originally displayed content in generally chronological order, that approach has been replaced by algorithms that distribute content based on personal usage and preferences.

5.111In its submission to the inquiry, the Office of the eSafety Commissioner (eSafety Commissioner) explains that the algorithms utilised by social media platforms are predominantly recommender algorithms that make suggestions based on the behaviour of the user:

An algorithm can be described as 'any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output'. Algorithms are fundamental computing instructions, are often considered inherently neutral, and have long been used to aid decision-making online and offline. However, recent developments such as widespread adoption of machine learning and an exponential increase in available data have created ever-more sophisticated and complex algorithmic decisions.

A recommender system is a system that prioritises content or makes personalised content suggestions to users of online services. A key component of a system is its recommender algorithm, the set of computing instructions that determines what a user will be served based on many factors. This is done by applying machine learning techniques to the data held by online services, to identify user attributes and patterns and make recommendations to achieve particular goals.[102]
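To make the distinction above concrete, the short sketch below illustrates, in deliberately simplified form, what a recommender algorithm is: a well-defined procedure that takes a set of values as input (a user's recent interactions) and produces a set of values as output (a ranked list of content). The catalogue, topic tags and scoring rule are illustrative assumptions only and do not represent any platform's actual system.

from collections import Counter

# Illustrative catalogue mapping items to topic tags (invented for this sketch).
CATALOGUE = {
    "clip_01": {"cooking"},
    "clip_02": {"cooking", "travel"},
    "clip_03": {"fitness"},
    "clip_04": {"travel"},
}

def recommend(interaction_history: list[str], top_n: int = 2) -> list[str]:
    """Rank unseen items by how often their topics appear in the user's history."""
    topic_affinity = Counter(
        topic for item in interaction_history for topic in CATALOGUE[item]
    )
    unseen = [item for item in CATALOGUE if item not in interaction_history]
    # Score each unseen item by summing the user's affinity for its topics.
    scores = {item: sum(topic_affinity[t] for t in CATALOGUE[item]) for item in unseen}
    return sorted(unseen, key=lambda item: scores[item], reverse=True)[:top_n]

print(recommend(["clip_01", "clip_02"]))  # ['clip_04', 'clip_03']

Production recommender systems apply machine learning to vastly larger datasets and many more signals, but the basic shape of input, scoring and ranked output is the same.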

5.112Recommender systems can be further broken down into three main types: content-based recommender systems; collaborative-filtering recommender systems; and knowledge-based recommender systems. A description of all three is set out below, with a further illustrative sketch of collaborative filtering following the extract:

Content-based Recommender Systems: Content-based recommender systems operate by suggesting items to a user that are similar in attributes to items that a user has previously demonstrated interest in. Content-based recommender systems evaluate the attributes of an item, such as its metadata (e.g. tags or text). Although different recommendation systems can vary in their exact composition, most content-based recommender systems operate by creating a profile of a user that outlines their interests and preferences, such as alternative rock music or drug store makeup. The system will then compare this user profile to existing items in its database in order to identify a match. A user's profile is based on explicit and implicit feedback that a user provides on recommendations as a way to indicate their preferences and interests further. For example, a user may explicitly indicate preferences by rating a particular item, or implicitly by clicking on certain suggested items and not others. The algorithm uses this information to categorize these preferences and refine recommendations going forward. Unlike other systems discussed below, content-based recommender systems do not typically account for the preferences or actions of other users when making recommendations. Rather, their recommendations are primarily based on a given user's past interactions.

Collaborative-filtering Recommender Systems: Collaborative-filtering recommender systems operate by suggesting items to a user based on the interests and behaviours of other users who are identified as having similar preferences or tastes. The process is considered collaborative because these recommender systems make automated predictions (known as filtering) about a user's interests and preferences based on data about other users who have similar preferences and interests. Collaborative-filtering recommender systems make these determinations by mining user behaviour, such as purchase records and user ratings, and through techniques like pattern recognition.

Collaborative-filtering recommender systems are considered particularly useful because they allow comparison and ranking of completely different types of items. This is because these algorithms do not need to know about the attributes of an item. Rather, they only need to know which items are bought together. In addition, because collaborative-filtering recommender systems make recommendations based on a vast dataset of other user behaviours and preferences, researchers have found that this approach yields more accurate results compared to other techniques. For example, keyword searching offers a more narrow assessment of datasets. Keyword searching is typically deployed during behavioural targeting, which is when advertisers use information on a user's browsing history and behaviour to customize the ad targeting and delivery process.

In addition, some researchers consider collaborative-filtering recommender systems to be able to deliver more accurate results for users with both mainstream and niche interests when compared to the other types of recommender systems outlined in this report. This is because a programmer can control how many users in the database should be considered as part of the data set for calculating recommendations; as a result, the programmer can optimize the algorithm to balance recommending popular and niche results. According to researchers, collaborative-filtering recommender systems are modelled after the ways individuals solicit feedback and recommendations from their social circle offline. Such recommender systems seek to automate this process based on the notion that if two similar individuals like an item or piece of content, there are also likely several other items that they would both find interesting.

Collaborative-filtering recommender systems rely on two primary types of algorithms: user-based and item-based collaborative-filtering algorithms. Both types rely on users' ratings on items. In the user-based category of algorithms, the algorithm scores an item a user has not rated by combining the ratings of similar users. In the item-based approach, the algorithm matches a user's purchased and rated items with similar items, and then combines these similar items into a list of recommended products for the initial user.

Knowledge-based Recommender Systems: Knowledge-based recommender systems make suggestions based on the attributes of a user and an item. These systems typically rely on data-mining methods and advanced natural language processing (NLP) to identify and evaluate an item’s attributes (e.g. price or technical specifications such as HD or BluRay functionality). The system then identifies similarities between Item A's attributes and User A's preferences (e.g. preference for high-end equipment) and makes recommendations based on its findings. For example, such a system could identify attribute similarities between a job seeker's resume and a job description. This recommender system does not typically consider a user's past behaviours, and it is therefore considered most effective when engaging with a new user or a new item.

Most recommendation systems deploy a hybrid of these three filter categories. Traditionally, both content-based and collaborative-filtering recommender systems rely on explicit input from a user. For example, algorithms can collect user ratings, likes, or reactions to an item or piece of content. These systems can also use implicit user input, which is drawn from data on user activities and behaviours. These can include a user's clicks, search queries, and purchase history, as well as a user adding an item to their cart, completing a purchase, and reading an entire article. Using both of these data types, these systems can be refined so that they deliver more personalized recommendations.[103]
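As a companion to the descriptions above, the following sketch illustrates item-based collaborative filtering on a toy set of invented ratings: items are compared by how similarly different users have rated them, and an unrated item is scored for a user by weighting that user's own ratings of other items by those similarities. The data, names and scoring rule are illustrative assumptions, not any platform's actual implementation.

import math

# ratings[user][item] = rating on a 1-5 scale (invented for this sketch)
ratings = {
    "ana":   {"film_a": 5, "film_b": 4, "film_c": 1},
    "ben":   {"film_a": 4, "film_b": 5, "film_c": 2},
    "carol": {"film_a": 1, "film_b": 2, "film_c": 5},
    "dana":  {"film_a": 5, "film_c": 1},  # dana has not rated film_b yet
}

def item_vector(item: str) -> list[float]:
    """The item's ratings across all users (0 where a user has not rated it)."""
    return [ratings[user].get(item, 0.0) for user in ratings]

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def predict(user: str, target: str) -> float:
    """Similarity-weighted average of the user's own ratings of other items."""
    rated = ratings[user]
    sims = {item: cosine(item_vector(target), item_vector(item)) for item in rated}
    total = sum(sims.values())
    return sum(sims[i] * r for i, r in rated.items()) / total if total else 0.0

print(round(predict("dana", "film_b"), 2))  # 3.17: film_b resembles film_a, which dana rated highly

A content-based system would instead score film_b against dana's profile using attributes of the films themselves (such as genre tags) rather than other users' ratings; as the extract above notes, most production systems combine these approaches in a hybrid.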

5.113According to the eSafety Commissioner's submission, algorithms as a technical process are neither inherently harmful nor harmless, but 'they directly affect the content users are shown […] and can reduce or exacerbate online harms. This includes the potential to amplify harmful and extreme content [...] especially where they are optimised for user engagement'.[104]

5.114The central concerns for submitters about how algorithms are utilised by social media platforms are the lack of transparency in how they control the content a user sees, and the inability of users to control their own online experience.

5.115With regard to how young people experience social media, ReachOut proposed that the platforms be designed for safety rather than engagement, which would give back control to the young person:

… we seek policies that compel social media platforms to step up and design their products for safety, not engagement, with greater transparency and user control. This could look like putting an end to sticky features like infinite scroll, mandating safety features and increased social media literacy programs for young people, increasing algorithm transparency and giving users more control over their algorithms so that they are in charge of the content they see.[105]

5.116One of the issues with algorithms that are designed for engagement rather than safety is that research has shown that negative content gets more engagement than positive content. Dr Timothy Graham, an Associate Professor in Digital Media at Queensland University of Technology, discussed findings from research he conducted in 2019, which were supported by further studies undertaken during the Voice referendum campaign in 2023:

Baked into the architectural and algorithmic design of these platforms is the mandate to increase user engagement and attention. The dilemma facing us today is that platforms are designed for quantity over quality. They prioritise and recommend content that elicits strong reactions and gets user attention rather than content that is high quality or factually sound. Research shows that content expressing fear, outrage and division just gets more clicks, shares and comments.

Going back a few years, a 2019 study I conducted showed that emotionally negative content on the platform Reddit is rewarded with comparatively more attention by users. Recently, research I have published on the voice to parliament referendum discussions online also supports this conclusion.[106]

5.117The transparency of how algorithms work, or are directed to work, is another central concern around the operation of social media. As discussed in the committee's Second interim report: Digital platforms and the traditional news media, in relation to the propagation of mis- and disinformation, the key to understanding how the platforms use algorithms, recommender systems, and business tactics is to allow access to the technical operational data. Similar to Recommendation 9 of the committee's previous report,[107] this can only be achieved by providing legislative authority to regulators to mandate regular transparency reports, in a similar fashion to the EU's Digital Services Act.

5.118As has been pointed out by contributors across the various terms of reference for the committee's inquiry, understanding the way digital platforms operate through algorithms and recommender systems is only one part of the complex strategy required to regulate the online environment and protect users from its worst elements. As the committee has heard, the only way to mitigate the risks involved in social media is to force platforms to put safety at the centre of their operating systems, which will involve making them fully accountable for the content served to users:

I think that comes back to these requirements around duty of care, risk assessments and risk mitigations. Algorithms are just a system that platforms deploy. You could make sure that platforms have to adequately risk assess and risk mitigate that system. I think a lot is made about the invisibility of algorithms—not quite knowing what they are and not being able to see the code—and I think that's almost misleading in a lot of ways. The recommender system, or the algorithm that drives Twitter recommender systems, for example, is available on GitHub. If you want to go and see TikTok's algorithm, they've got a transparency centre. You can get on a plane and actually go and read the code—see it. That's not the issue.[108]

Committee view

5.119As was evident throughout the inquiry, and discussed earlier in this report, social media, and the broader online environment, is a highly complex space that requires a complex regulatory response. The core of this complexity is not simply technological; it also lies in the fact that while social media can cause many harms, it also brings significant benefits to users and is integral to the ways people interact in the modern world, particularly young people. This makes the task of regulating social media to promote and protect those benefits, while minimising harm, a difficult policy problem to solve.

5.120The vast majority of Australians use social media platforms in some form. As noted in the committee's second interim report, digital platforms have become a part of daily life for many. The Australian Competition and Consumer Commission (ACCC) has noted that as of January 2023, there were 21.3 million active users of social media platforms, representing 81 per cent of the total Australian population.

5.121The extent to which social media has grown across society and in our personal lives, as well as how it is used, is illustrated in the ACCC's latest Digital Platform Services Report. This report provides detailed figures on the growth of social media, the size and scale of each platform, and how each platform monetises their operations.

5.122Meta is not only the largest social media platform in terms of users, encompassing Facebook, Instagram and Threads, but also the biggest distribution platform for businesses and advertisers. The scale of the company provides an extraordinary income from advertising revenue that dwarfs all of the other platforms combined. In 2022 Meta earned around $5 billion in Australia from advertising, with its nearest rival, YouTube, earning around $400 million.

5.123According to the ACCC, social media platforms are also diversifying their income by selling, or receiving commissions on, virtual goods, and by offering premium services through subscriptions. The rise of influencers has also begun to generate a significant increase in users and revenue for both the platforms and the influencers.

5.124How platforms exercise their power and deploy the vast revenue they generate is at the heart of the committee's deliberations in this inquiry. With so many users comes the ability to exert huge influence over how those users consume content in whatever form they choose. To date there have been very few guardrails around the integrity and veracity of that content.

Transparency and accountability

5.125Social media companies influence what Australians see and hear online and do so in a range of ways. This can be through deliberate decision-making taken by the management of the platform, or it can be through technological processes such as algorithms or recommender systems programmed by those in control of the platform.

5.126There is a lack of transparency about these systems and the decisions of social media platforms. There is also a lack of control provided to the user. The platforms have increasingly limited the ability of users to tailor their own experience of social media, in favour of a push for continuous engagement that provides user data and generates advertising revenue.

5.127As well as this lack of transparency, platforms make consistent efforts to avoid accountability, as the committee heard through evidence given by Meta about its own company. While the eSafety Commissioner has some powers under the Online Safety Act 2021 to regulate digital platforms' broader systems and processes, Australians are subject to substantially fewer protections and options for remedy or redress than users in many other jurisdictions, including the United States (US) and Europe. To compound this, most platforms, including Meta, have a limited, or no, local legal presence in Australia, which makes serving legal documents on platforms, or enforcing legal remedies against them, very difficult for Australian users. Tactics used in Australia to avoid legal redress include claiming the protection offered under section 230 of the US Communications Decency Act of 1996, which prevents most civil suits against platforms, even in jurisdictions outside the US.

5.128The committee is strongly of the view that platforms, particularly Meta, are actively choosing not to provide in Australia the same levels of user protection, transparency and accountability that they provide in other jurisdictions. In doing so, they are rapidly diluting their social licence to operate in Australia.

5.129In response to the platforms' inaction, the committee recommends that the government introduce a legislative duty of care to ensure that platforms are accountable for the content they serve and for how their platforms operate, and to hasten the transformation of their business models to put users' safety and positive experience at their heart.

Online safety

5.130The committee heard throughout the inquiry of how large platforms such as Meta, by design, abuse the market power they have amassed and do not prioritise the safety of their users above profit.

5.131The committee heard of the numerous harms that can be found online, and heard harrowing stories from users and their families who had experienced the worst aspects of social media. These experiences put into sharp focus the need for urgent action to make social media as safe a place as possible, and for platforms to fundamentally change how they operate to put safety at the heart of their business models.

5.132The committee did hear valuable evidence from many youth mental health organisations, as well as young people themselves, who together made clear that there are numerous measures and features that platforms could incorporate immediately to enhance the safety of users. The committee strongly urges the platforms to incorporate safety-by-design features as a priority.

5.133The committee also supports calls from young people for online safety features, and regulation to address online harms, to be co-designed with users to ensure they are effective and targeted.

5.134Many witnesses also identified the need for further research into the nature and extent of the impact of social media use to allow targeted solutions to be developed.

Age verification

5.135The committee is highly supportive of the age assurance trial currently being undertaken by the Australian Government, and of the age verification roadmap process that is guiding many of the Australian Government's efforts to make the online environment a safe place for young people.

5.136A significant part of the inquiry focused on the need for age verification online, and as a corollary, whether there should be age restrictions for access to social media.

5.137The committee heard from various experts on whether the technology currently offers robust solutions to provide effective age assurance that would also protect the privacy of users and their data. There were mixed views on whether that is currently possible, or even possible in the short term.

5.138The committee recognises the complexity in that process and notes the Australian Government's intention to introduce legislation into the Parliament in November to set a minimum age for social media use.

5.139The committee is also cognisant of the voices of many in the inquiry who advocate for a digital environment where everyone feels safe and empowered. This demands a strong focus on educating and equipping both users, and parents and carers on how to safely navigate the online space.

Recommendation 1

5.140The committee recommends that the Australian Government consider options for greater enforceability of Australian laws for social media platforms, including amending regulation and legislation, to effectively bring digital platforms under Australian jurisdiction.

Recommendation 2

5.141The committee recommends that the Australian Government introduce a single and overarching statutory duty of care on digital platforms for the wellbeing of their Australian users, and require digital platforms to implement diligent risk assessments and risk mitigation plans to make their systems and processes safe for all Australians.

Recommendation 3

5.142The committee recommends that the Australian Government introduce legislative provisions to enable effective, mandatory data access for independent researchers and public interest organisations, and an auditing process by appropriate regulators.

Recommendation 4

5.143The committee recommends that the Australian Government, as part of its regulatory framework, ensure that social media platforms introduce measures that allow users greater control over what user-generated content and paid content they see by giving them the ability to alter, reset, or turn off their personal algorithms and recommender systems.

Recommendation 5

5.144The committee recommends that the Australian Government prioritise proposals from the Privacy Act review relating to greater protections for the personal information of Australians and children, including as part of its suite of ongoing privacy reforms, such as the Children's Online Privacy Code.

Recommendation 6

5.145The committee recommends that any features of the Australian Government's regulatory framework that will affect young people be co-designed with young people.

Recommendation 7

5.146The committee recommends that the Australian Government support research and data gathering regarding the impact of social media on health and wellbeing to build upon the evidence base for policy development.

Recommendation 8

5.147The committee recommends that one of the roles of the previously recommended[109] Digital Affairs Ministry should be to develop, coordinate and manage funding allocated for education to enhance digital competency and online safety skills.

Recommendation 9

5.148The committee recommends that the Australian Government report to both Houses of Parliament on the results of its age assurance trial.

Recommendation 10

5.149The committee recommends that industry be required to incorporate safety-by-design principles in all current and future platform technology.

Recommendation 11

5.150The committee recommends that the Australian Government introduce legislative provisions requiring social media platforms to have a transparent complaints mechanism that incorporates a right of appeal process for complainants that is robust and fair.

Recommendation 12

5.151The committee recommends that the Australian Government ensure adequate resourcing for the Office of the eSafety Commissioner to discharge its evolving functions.

Ms Sharon Claydon MP

Chair

Member for Newcastle

Footnotes

[1]Ms Bridget Gannon, Acting First Assistant Secretary, Online Safety Branch, Department of Infrastructure, Transport, Regional Development, Communications and the Arts (DITRDCA), Proof Committee Hansard, 2 July 2024, pp. 2–3.

[2]Ms Bridget Gannon, Acting First Assistant Secretary, Online Safety Branch, DITRDCA, Proof Committee Hansard, 2 July 2024, p. 3.

[3]Ms Bridget Gannon, Acting First Assistant Secretary, Online Safety Branch, DITRDCA, Proof Committee Hansard, 2 July 2024, p. 3.

[4]Mr Andrew Irwin, Assistant Secretary, Online Safety Branch, DITRDCA, Proof Committee Hansard, 2 July 2024, p. 3.

[5]Ms Bridget Gannon, Acting First Assistant Secretary, Online Safety Branch, DITRDCA, Proof Committee Hansard, 2 July 2024, p. 7.

[6]Ms Bridget Gannon, Acting First Assistant Secretary, Online Safety Branch, DITRDCA, Proof Committee Hansard, 2 July 2024, p. 7.

[7]Mr Andrew Irwin, Assistant Secretary, Online Safety Branch, DITRDCA, Proof Committee Hansard, 2 July 2024, p. 7.

[8]Mrs Julie Inman Grant, Commissioner, Office of the eSafety Commissioner, Proof Committee Hansard, 21 June 2024, p. 61.

[9]Mrs Julie Inman Grant, Commissioner, Office of the eSafety Commissioner, Proof Committee Hansard, 21 June 2024, p. 58.

[10]Ms Antigone Davis, Vice President and Global Head, Safety, Meta, Proof Committee Hansard, 28 June 2024, p. 9.

[11]TikTok, Submission 203, pp. 2–3.

[12]Ms Ella Woods-Joyce, Director, Public Policy, TikTok Australia, Proof Committee Hansard, 28 June 2024, p. 28.

[13]Snap Inc, Submission 40, p. 7.

[14]Snap Inc, Submission 40, p. 7.

[15]Ms Bridget Gannon, Acting First Assistant Secretary, Online Safety Branch, DITRDCA, Proof Committee Hansard, 2 July 2024, p. 7.

[16]Mrs Julie Inman Grant, Commissioner, Office of the eSafety Commissioner, Proof Committee Hansard, 21 June 2024, pp. 58–59.

[17]Ms Antigone Davis, Vice President and Global Head, Safety, Meta, Proof Committee Hansard, 28 June 2024, p. 9.

[18]Ms Antigone Davis, Vice President and Global Head, Safety, Meta, Proof Committee Hansard, 28 June 2024, p. 9.

[19]Ms Antigone Davis, Vice President and Global Head, Safety, Meta, Proof Committee Hansard, 28 June 2024, p. 18.

[20]Snap Inc, Submission 40, p. 7.

[21]Snap Inc, Submission 40, p. 8.

[22]Snap Inc, Submission 40, p. 8.

[23]Arjun, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, p. 1.

[24]Mr Nhat Minh Hoang, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, p. 4.

[25]Ms Melinda Tankard Reist, Movement Director, Collective Shout, Proof Committee Hansard, 10 July 2024, p. 28.

[26]Ms Jane Rowan, Executive Director, Eating Disorders Families Australia, Proof Committee Hansard, 10 July 2024, p. 32.

[27]For example, Ms Tania Ferraretto, Submission 26; Mr David Jackson, Submission 27; Dr Barry Oliver, Submission 30; Collective Shout, Submission 163; Australian Primary School Teachers for Media Literacy, Submission 74.

[28]Ms Tracey McAsey, General Manager, Daniel Morcombe Foundation, Proof Committee Hansard, 10 July 2024, p. 40.

[29]Professor Selena Bartlett, private capacity, Proof Committee Hansard, 1 October 2024, p. 19.

[30]Professor Selena Bartlett, Submission 29, p. 2.

[31]Kokoda Youth Foundation, Submission 32, p. 2.

[32]Australia and New Zealand Academy for Eating Disorders, Submission 3, p. 1.

[33]Ms Karen Robertson, Executive Board Member, Australian Parents Council, Proof Committee Hansard, 30 September 2024, p. 31.

[34]Catholic School Parents WA, Submission 121, p. [3].

[35]Dr Jessie Mitchell, Advocacy Manager, Alannah and Madeline Foundation, Proof Committee Hansard, 10 July 2024, p. 39.

[36]Dr Jessie Mitchell, Advocacy Manager, Alannah and Madeline Foundation, Proof Committee Hansard, 10 July 2024, p. 39.

[37]Ms Genevieve Fraser, Advisory Board Member, Dolly's Dream, Proof Committee Hansard, 10 July 2024, p. 44.

[38]Ms Sarah Davies, Chief Executive Officer, Alannah and Madeline Foundation, Proof Committee Hansard, 10 July 2024, p. 45.

[39]Dr Rys Farthing, Director of Research, Reset.Tech Australia, Proof Committee Hansard, 10 July 2024, p. 5.

[40]Arjun, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, pp. 1–2.

[41]Mr Nhat Minh Hoang, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, p. 15.

[42]Arjun, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, pp. 1–2.

[43]Mr Nhat Minh Hoang, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, p. 15.

[44]Mr William Cook, ReachOut Youth Advocates, Proof Committee Hansard, 30 October 2024, p. 9.

[45]Ms Elizabeth O'Shea, Chair, Digital Rights Watch, Proof Committee Hansard, 10 July 2024, p. 13.

[46]See, for example, Australian Gaming and Screens Alliance, Submission 59; Qoria, Submission 81; Atvion, Submission 82; Stymie, Submission 138; Project Paradigm, Submission 155; Australian Medical Association, Submission 160.

[47]Ms Antigone Davis, Vice President and Global Head, Safety, Meta, Proof Committee Hansard, 28 June 2024, p. 11.

[48]eSafety Youth Council, Submission 60, p. 3.

[49]The Butterfly Foundation, Submission 49, pp. 13–14.

[50]See, for example, The Centre for Research on Education in a Digital Society, UTS, Submission 83; Stymie, Submission 138; The Department of Media and Communication, University of Sydney, Submission 154; Project Paradigm, Submission 155.

[51]The Butterfly Foundation, Submission 49, pp. 13–14.

[52]Raghunaath, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, pp. 4–5.

[53]Mr Sina Aghamofid, ReachOut Youth Advocates, Proof Committee Hansard, 30 October 2024, p. 8.

[54]Mr Sina Aghamofid, ReachOut Youth Advocates, Proof Committee Hansard, 30 October 2024, p. 6.

[55]Eating Disorders Queensland, Submission 80, pp. 3–4.

[56]Eating Disorders Queensland, Submission 80, pp. 3–4.

[57]Childlight UNSW, Submission 111, [p. 2].

[58]Australian Medical Association, Submission 160, p. 3.

[59]Australian Christian Lobby, Submission 175, p. 4.

[60]ReachOut appeared and co-submitted with batyr; Beyond Blue, Orygen, Black Dog Institute, and headspace National Youth Mental Health Foundation.

[61]Mr Ben Bartlett, Director, Government Relations and Communications, ReachOut, Proof Committee Hansard, 1 October 2024, p. 2.

[62]Prevention United, Submission 11, p. 26.

[63]The Y Australia, Submission 108, p. 5.

[64]National Mental Health Commission, Submission 128, p. 6.

[65]eSafety Youth Council, Submission 60, p. 3.

[66]Australian Human Rights Commission, Submission 79, p. 4.

[67]Australian Human Rights Commission, Submission 79, p. 5.

[68]Human Technology Institute, UTS, Submission 146, pp. 3–4.

[69]NSW Council for Civil Liberties, Submission 147, p. 4.

[70]UNICEF Australia, Submission 84, p. 4.

[71]Ms Elizabeth O'Shea, Chair, Digital Rights Watch, Proof Committee Hansard, 10 July 2024, p. 2.

[72]Ms Elizabeth O'Shea, Chair, Digital Rights Watch, Proof Committee Hansard, 10 July 2024, p. 3.

[73]Ms Elizabeth O'Shea, Chair, Digital Rights Watch, Proof Committee Hansard, 10 July 2024, p. 3.

[74]Ms Chandni Gupta, Deputy Chief Executive Officer and Digital Policy Director, Consumer Policy Research Centre, Proof Committee Hansard, 10 July 2024, p. 50.

[75]Dr Rys Farthing, Director of Research, Reset.Tech Australia, Proof Committee Hansard, 10 July 2024, p. 5.

[76]Attorney-General's Department (AGD), Privacy Act Review Report – Government Response, p. 1, (accessed 27 July 2024).

[77]Mr Peter Lewis, Founder, Centre of the Public Square, Per Capita, Proof Committee Hansard, 10 July 2024, p. 6.

[78]Ms Elizabeth O'Shea, Chair, Digital Rights Watch, Proof Committee Hansard, 10 July 2024, p. 7.

[79]Australian Competition and Consumer Commission, Digital platforms inquiry – final report (accessed 27 October 2024).

[80]AGD, Privacy Act Review Report – Government Response, p. 3 (accessed 27 October 2024).

[81]AGD, Privacy Act Review Report – Government Response, p. 1 (accessed 27 July 2024).

[82]AGD, Privacy Act Review Report – Government Response, p. 3 (accessed 27 October 2024).

[83]Privacy and Other Legislation Amendment Bill 2024, Explanatory Memorandum, p. 3.

[84]The Hon Mark Dreyfus KC, MP, Attorney-General, Second Reading Speech, Proof House of Representatives Hansard, 12 September 2024, p. 21.

[85]Preliminary Digest - Privacy and Other Legislation Amendment Bill 2024, Parliamentary Library, Bills Digest No. 16, 2024-25, p. 1.

[86]Preliminary Digest - Privacy and Other Legislation Amendment Bill 2024, Parliamentary Library, Bills Digest No. 16, 2024-25, p. 8.

[87]Ms Lucinda Longcroft, Director Government Affairs and Public Policy, Google Australia and New Zealand, Proof Committee Hansard, 28 June 2024, p. 33.

[88]b kinder Foundation, Submission 21, p. 1.

[89]DIGI, Submission 57, p. 12.

[90]Ms Sarah Davies, Chief Executive Officer, Alannah and Madeline Foundation, Proof Committee Hansard, 10 July 2024, p. 46.

[91]Dr Rys Farthing, Director of Research, Reset.Tech Australia, Proof Committee Hansard, 10 July 2024, p. 8.

[92]Mr Sina Aghamofid, ReachOut Youth Advocates, Proof Committee Hansard, 30 October 2024, p. 3.

[93]Mr Nhat Minh Hoang, eSafety Youth Council, Proof Committee Hansard, 30 October 2024, p. 3.

[94]AGD, Submission 13, p. 2.

[95]DIGI, Submission 57, p. 12.

[96]UNICEF Australia, Submission 84, p. 3.

[97]FARE, Submission 170, p. 17.

[98]Ms Sarah Davies, Chief Executive Officer, Alannah and Madeline Foundation, Proof Committee Hansard, 10 July 2024, p. 46.

[99]Ms Chandni Gupta, Deputy Chief Executive Officer and Digital Policy Director, Consumer Policy Research Centre, Proof Committee Hansard, 10 July 2024, p. 51.

[100]Dr Jessie Mitchell, Alannah and Madeline Foundation, Proof Committee Hansard, 10 July 2024, p. 39.

[101]Dr Jessie Mitchell, Alannah and Madeline Foundation, Proof Committee Hansard, 10 July 2024, p. 39.

[102]Office of the eSafety Commissioner, Submission 1, p. 15.

[103]New America, An Overview of Algorithmic Recommendation Systems, March 2020 (accessed 28 October 2024).

[104]Office of the eSafety Commissioner, Submission 1, p. 15.

[105]Mr Ben Bartlett, Proof Committee Hansard, 1 October 2024, p. 2.

[106]Dr Timothy Graham, Proof Committee Hansard, 1 October 2024, p. 35.

[107]Joint Select Committee on Social Media, Second interim report: Digital platforms and the traditional news media, Recommendation 9, pp. 99–100.

[108]Dr Rys Farthing, Director of Research, Reset.Tech Australia, Proof Committee Hansard, 10 July 2024, pp. 13–14.

[109]Recommendation 1 of the committee's second interim report, Digital platforms and the traditional news media.