Chapter 3
Social media and online harms
Overview
3.1 The committee heard evidence on social media and online harms from a range of experts, including researchers, academics, mental health professionals, online safety experts, service providers, community organisations, advocacy groups, and young people.
3.2 While social media use is not confined to younger age groups, much of the evidence received focused on the impact of social media on young people. This is reflected in the content that follows.
3.3 It is clear from the evidence presented to the committee that there are significant risks associated with social media use. However, many of these risks appear to be driven by the business models of advertising-based social media platforms, which optimise recommender systems to promote content based on engagement. Concerningly, this often leads to the boosting of harmful material.
3.4 In addition, the committee was told about the lack of protections offered by social media platforms, as well as the absence of adequate support and redress mechanisms. To this end, participants overwhelmingly supported action to address the risk of social media harms and improve online safety.
3.5 While there were significant concerns raised about the recent decline in the mental health of young Australians, the causal link with social media appears unclear. Instead, evidence presented to the committee suggested a complex and interwoven set of pressures underlying the decline. In this light, social media was seen as an amplifier of existing pressures and risks, rather than a cause of harm in its own right.
3.6 Numerous participants also spoke of the benefits of social media, particularly for young people and those from diverse or marginalised communities. Many of these participants urged the committee to take a nuanced, evidence-based view of social media use and to involve young people in the development of solutions to address online harm.
3.7 That said, the committee is aware that the impact of social media depends very much on individual vulnerabilities, as well as factors like the amount of time spent online and the type of content consumed.
3.8 The committee also heard about the need for deeper consideration of the impacts of social media—particularly those related to users' mental health—which are beyond the ability of this inquiry to adequately address. As noted by several contributors, further research will be required to build the evidence base to ensure that the risks of social media use are addressed effectively and appropriately without losing the benefits that social media offers.
3.9 While this chapter focuses on evidence provided to the inquiry by experts and organisations, the committee also heard evidence from a number of parents and other individuals with lived experience of the harms arising from social media. These experiences are the focus of Chapter 4.
3.10 In addition to the discussion of possible enforcement options in Chapter 2, some potential approaches to the regulation of social media are outlined in Chapter 5.
3.11 Please note that this chapter contains content that some people may find distressing. Reader discretion is advised.
Social media use in Australia
3.12 Social media use in Australia sits within a broader landscape of online engagement. As stated by Project Paradigm, online engagement is now 'an integral aspect of Australian culture', with 25.21 million Australians having internet access and children as young as seven and a half owning an internet-enabled smartphone.
3.13 Indeed, Prevention United cited data from the Australian Communications and Media Authority (ACMA), which showed that 'just under half of young people aged 6–13 years own or use a smartphone'. Further, almost 100 per cent of adult Australians own a smartphone, while by the time children reach 16–17 years, 100 per cent of them have a device that is not shared with others.
3.14 ReachOut, Beyond Blue and Black Dog Institute (BDI) noted that for young people, digital technology has become 'integral to their daily lives' and encompasses 'education, social interaction, entertainment and more':
Adolescents inhabit a 'hybrid reality', seamlessly blending online and offline experiences, which significantly influences their thinking and information processing.
3.15 As noted in the committee's Second interim report: digital platforms and the traditional news media, Australians' online engagement includes large amounts of time spent on social media platforms. Overall, it has been estimated that 78.3 per cent of Australians use social media, with Facebook and YouTube having the highest number of average monthly active users, followed by Instagram, TikTok and Snapchat.
3.16 According to Prevention United, this includes 'a significant proportion' of adults. An ACMA study found that around 85 per cent of adults aged 18–34 years and 67 per cent of those over 35 years had used at least one social networking platform in the previous six months.
3.17 In relation to young people's social media use, ReachOut, Beyond Blue and BDI reported that 93 per cent of Australian adolescents use it on a daily basis and spend an average of two to three hours on platforms such as TikTok, Snapchat and Instagram.
3.18 Similarly, Project Paradigm cited research by the University of Sydney involving 1200 young people that found over 75 per cent had used YouTube or Instagram and nearly 70 per cent had used TikTok or Snapchat. Prevention United reported that other platforms used by young people include Reddit, Discord, Pinterest and Twitch. The typical age at which young people began using social media (Instagram and Snapchat) was late primary school—either with or without their parents' permission.
3.19 This aligned with evidence from Orygen and headspace National Youth Mental Health Foundation (headspace), which stated that young people aged 12–13 years use an average of three social media platforms, while those aged 14–17 years use four or five platforms. Platform use increases with age, with young people aged 18–24 years using twice as many platforms on average as those aged 12–17 years.
3.20 Evidence provided to the committee suggested that social media use also varies by gender, with young women aged 14–24 years spending more time on social media than young men of the same age.
Risks associated with social media use
3.21 While multiple participants acknowledged the benefits of online engagement (discussed later in this chapter), they also highlighted the risks associated with exposure to harmful content and the potential for that exposure to result in harm.
3.22 According to submitters such as eSafeKids, Children and Media Australia, the University of Queensland's National Centre for Youth Substance Use Research (UQ NCYSUR), the Public Health Association of Australia, and Independent Schools Australia (ISA), harmful content includes:
self-harm or suicide;
promotion of unhealthy food, alcohol, tobacco and gambling;
discrimination and hate speech;
gory or violent material;
exploitative sexual content; and
drug use.
3.23 Social media-based scam advertisements were also raised as a risk by some participants.
3.24 The Office of the eSafety Commissioner (eSafety Commissioner) noted that some harmful content is 'illegal and restricted content', which sits at the 'more serious and extreme end' and is divided into two categories of material:
Class 1 – likely to be refused classification under the National Classification Scheme (including content that depicts child sexual abuse and exploitation material, advocates terrorist acts, or promotes, incites, or instructs in matters of crime or violence); and
Class 2 – likely to be classified R 18+ or X 18+ under the National Classification Scheme (including adult pornography and high-impact material that may be inappropriate for children under 18 years old, such as material featuring violence, crime, suicide, death, and racist themes).
3.25 eSafeKids argued that young people's exposure to harmful content was not marginal but 'mainstream', with 62 per cent of those aged 14–17 in Australia exposed to harmful content online.
3.26 According to research by the eSafety Commissioner:
45 per cent of young Australians reported being treated in a nasty or hurtful way online;
almost two thirds of young Australians aged 14–17 years were exposed in the last year to negative content (e.g. drug taking, suicide or self-harm, or gory or violent material); and
28 per cent of young Australians aged 14–17 years were exposed to content that promoted unhealthy eating weekly or more often.
3.27 Participants such as the University of Sydney's Department of Media and Communications (USYD MECO), Orygen and headspace, and Prevention United were careful to draw a distinction between online risks and online harms, with risk not necessarily equating to harm:
Online risks are problems that can be potentially harmful to children. However, a child will not necessarily experience harmful consequences when exposed to online risks. By contrast, harm is defined as 'a negative consequence for a child's emotional, [physical], or mental wellbeing'.
3.28 According to Prevention United, the CO:RE classification of online risk describes five major online risks. These are:
content – exposure to, or engagement with, potentially harmful content (e.g. violent or sexual content, hate speech, discrimination, false information);
contact – a child is targeted by, or experiences, harmful contact from an adult (e.g. online stalking, sexual harassment);
conduct – a child is a victim of, or participates in, harmful peer conduct such as cyberbullying;
contract – exploitation by, or exposure to, potential harms such as fraud, account hacking or financial scams; and
cross-cutting – exposure to privacy violations (e.g. leaked images) or mental health risks that do not fit into a single category.
Drivers of harm on social media platforms
3.29 Various participants acknowledged that many of the risks present on social media platforms pre-date the internet and continue to exist in the offline world. In addition, the eSafety Commissioner noted a 'fluid interplay between social media, websites, messaging apps, gaming platforms, dating apps, as well as ephemeral media', with social media influencing—and being influenced by—other media sources, including news, popular culture, marketing and advertising.
3.30 For many participants, it is the business models of social media platforms that serve to amplify these risks and increase the likelihood of harm. For example, Collective Shout described social media business models as 'inherently unsafe', with platforms designed to be highly addictive and to maximise the time spent engaging with them.
3.31 The eSafety Commissioner explained how advertising-based revenue models incentivise online services to increase engagement, 'as the more time users spend online, the more advertising revenue is generated'.
3.32 Revenue considerations were also identified by the Australian Human Rights Commission (AHRC) as an incentive for advertising-based platforms to optimise their recommender systems to promote content based on engagement—which it described as a 'key driver for risk'.
3.33 Likewise, Digital Rights Watch (DRW) viewed online harms as 'a symptom of data-extractive business models of digital platforms and advertisers that dominate our digital ecosystem':
… social media platforms reward content that keeps people on their platforms for as long as possible. This is because the more time that a person spends on a platform, the more data can be collected about them, and in-turn increases the opportunities platforms can offer to advertisers to sell targeted ads.
3.34 A similar view was expressed by eSafeKids, which described the impact of revenue models that employ features such as push notifications, variable reward, infinite scroll and autoplay to maximise user engagement:
Technology company revenue is generated from data collection and user engagement. This commercial priority can profoundly shape the design of online products and services resulting in sensationalised content, the spread of fake news, misinformation, disinformation, malinformation and harmful and illegal content.
3.35 The persuasive design features of social media platforms were also discussed by the Australian Gaming and Screens Alliance (AGASA), which highlighted the role of persuasive design in promoting problematic, addictive and/or disordered use:
'Persuasive design' is an important component of the development of social media and other screen products, and is consistently involved in the risk of disordered use. It uses psychology, neuroscience and AI user tracking to increase the degree of user engagement with the product and to customise the user experience in ways that increase the length of time spent consuming it. All of these elements seek to keep people's attention on screens at levels that can be problematic, addictive and/or disordered.
3.36 The eSafety Commissioner noted concerns about the potential of recommender systems to 'exploit and exacerbate excessive use of online services', with certain cohorts, including children, seen as particularly vulnerable.
3.37 According to Ms Elizabeth O'Shea of DRW, the ability of platforms to 'micro target' content presents dangers for many Australians, with 'excessive data collection' enabling targeting by predatory industries. These include 'gambling, alcohol, diet and junk food products, essentially allowing for the exploitation of vulnerability for profit'.
3.38 In addition, the eSafety Commissioner cautioned that recommender systems that 'optimise for user engagement' may result in exposure to 'increasingly extreme content'.
3.39 The way in which algorithms and recommender systems work to promote and incentivise the creation of extreme content was explained by DRW:
Polarising, extreme or controversial material performs well on these metrics, meaning that such content gets boosted by engagement and amplification algorithms. This is made worse by content recommendation systems that take people down algorithmic 'rabbitholes' and revenue sharing schemes that create direct financial incentives for content creators to create and share content that is highly engaging and tailored for virality.
3.40 In addition to amplifying misinformation and extreme views, the AHRC noted that recommender systems can also hide ideas or viewpoints that do not align with a user's existing worldview. This can lead to 'echo chambers' or 'filter bubbles', 'where people are only served content that reinforces the content previously shown to them'.
3.41 The AHRC went on to explain that echo chambers, along with a lack of content moderation, play a role in spreading 'misinformation and disinformation, reinforcing hate speech and prejudicial content online and allowing for amplification of extremist views and conspiracy theories'.
3.42 This view was reinforced by the Department of Home Affairs, which also highlighted the potential for algorithms to be exploited by malicious actors to amplify content—often disinformation—in service of a particular goal:
One such tactic is to use a network of fictitious personas to share and comment on posts to create the appearance of a mass movement and manipulate recommender algorithms into treating such content as genuinely popular in an attempt to influence a target audience. ... Research indicates foreign actors are increasingly using disinformation campaigns in a bid to increase polarisation, reduce trust in government, and foment extremism in democratic societies.
3.43 The 'manipulative power' of algorithmically curated feeds was underscored by Dr Susie Alegre, who noted their potential impact on the cognitive autonomy of social media users. Dr Alegre continued:
While arguments about content focus on the right to freedom of expression, the systemic risks of social media come from the ways content is curated through recommender algorithms which affects individual's freedom of thought and opinion with consequent threats to society.
3.44 Various participants provided examples of the impact of recommender systems on users of social media. For example, Dr Zac Seidler of Movember described how social media 'co-opts' and 'mutates' the needs of young men (who may be feeling marginalised and ostracised) for entertainment, motivation and guidance:
Vulnerability and hurt is routinely replaced with anger via increasingly radicalising content that many young men tell us they are acting on. Algorithms feed them this content because it is engaging, keeping young men on their platform for ever-increasing amounts of ad revenue regardless of the harm to their mental health or broader social connections.
3.45 The Butterfly Foundation drew the committee's attention to data showing that—compared to a user without an eating disorder—TikTok's algorithm targeted people who had a diagnosed eating disorder with:
2.3 times more appearance-related videos;
4.2 times more diet-related videos; and
more than 40 times more eating disorder-related videos.
3.46 In addition, UQ NCYSUR explained that searching for substance-related content during its research led to its account receiving more advertisements and content promoting substance use. Indeed, by the end of its research, over 90 per cent of the content recommended to UQ NCYSUR's TikTok account glamorised substance use.
3.47 However, some participants, including eSafeKids, also highlighted the complexity around the ethics of algorithms and recommender systems, which can 'be used in ways that are problematic and harmful' but can also be helpful in moderating content 'at speed and scale'.
3.48 Similarly, the eSafety Commissioner outlined the potential for recommender systems to both 'amplify harmful and extreme content' and have beneficial effects for users.
The need for a nuanced view of social media use
3.49 While advocating strongly for action to address the risks discussed above, a number of submitters urged the committee to take a nuanced, evidence-based view of social media use. For example, the eSafety Commissioner stressed the importance of evidence-based policy responses, as well as a balanced approach to social media's benefits and risks.
3.50 Orygen and headspace asked the committee to consider how regulation and policy levers could be used to help young people benefit from social media, while also protecting them from its risks. They argued that any future policy approaches 'must be responsive to the shifting technological landscape and be grounded in evidence of young people's use'.
3.51 Likewise, while supporting 'anything that will reduce harm and improve the health, wellbeing and safety of children', eSafeKids also cautioned against 'fear mongering, scare tactics, knee jerk reactions and quick fixes'.
3.52 Similarly, Suicide Prevention Australia (SPA) argued that social media can both facilitate protective factors against suicide—such as access to mental health resources and social connections and support—and increase risk factors such as bullying and exposure to self-harm content. SPA observed:
In a world of immediacy and instant access, this inquiry provides an opportunity to adapt social media to our needs, rather than restrict its use to the point that the positive aspects are diminished.
3.53 Mr Ben Bartlett of ReachOut recognised the well-known 'issues and problematic elements of social media' and called for 'nuanced, evidence based and co-designed policy responses' that minimise the risks and maximise the benefits of social media use.
3.54 This sentiment was also reflected in evidence from participants such as Associate Professor Nasim Salehi, Professor Leila Green, Dr Diarmuid Hurley, Dr Cindy Branch-Smith, Dr Mansoureh Nickbakht, and Associate Professor Francis Mitrou, as well as UNICEF Australia, which described the need to balance social media risks and benefits:
We know that children face risks online, be it from bullying or exposure to harmful content, but we need to protect children within the digital world, not necessarily prohibit them from using it. We can be alert to dangers without being disproportionately alarmed, and we can reduce risk while ensuring that the benefits of digital participation are maintained. Solutions like age-gating … have a role to play as one of several measures which can be used to better protect children, but this needs to be proportionate to risk and balanced with the other rights that children have, like being able to access important information and express themselves.
3.55 Further, while the Association of Heads of Independent Schools of Australia (AHISA) advocated for strong legislative protections against access to inappropriate and harmful content, it also noted that 'the weight of evidence' has established that 'use of digital technologies is inherently neither harmful nor beneficial to young people'. Instead, this depends on the 'specific features of the social media environment and particular social media platforms, as well as individual circumstances'. AHISA explained:
It is not social media itself but aspects of it which pose risks for young people. Negative behaviours offline seem to carry over into digital environments. …The American Psychological Association sums up this position in its Health Advisory bulletin of May 2023: the effects of social media on young people 'are dependent on adolescents' own personal psychological characteristics and social circumstances - intersecting with specific content, features, or functions that are afforded within many social media platforms.'
3.56 This complexity was also reflected in evidence from the eSafety Commissioner, which argued that the impact of online experiences was 'highly individualised'. According to the eSafety Commissioner, this makes decisions about limiting online access and participation 'incredibly complex', as 'restrictive measures that may benefit one child may be ineffective, or even harmful, for another'.
3.57 A similar view was expressed by PROJECT ROCKIT, which contended that a growing body of research was pointing to a nuanced and complex relationship between digital technologies, youth mental health and participation, 'defying simplistic interpretations found in public discourse'.
3.58 For this reason, a number of participants advocated for further research to understand the nuances of social media use and its impact on users.
Diverging views of young people and their parents/carers
3.59 In addition to the complexities discussed above, evidence from ReachOut, Beyond Blue and BDI also highlighted some divergence in views between young people and their parents/carers in relation to the impact of social media:
Recent research with Australian parents and caregivers found that nearly 60 per cent of respondents were concerned about their teenagers' use of social media. However, young people themselves ranked social media outside their top ten issues of concern, behind cost-of-living pressures, climate change and other issues.
3.60 Prevention United referred to a survey by Orygen and the Policy Institute (King's College London), which found 'marked differences in the way that younger and older generations think about social media'. This study found that:
76 per cent of Baby Boomers (born 1946–1964) said that social media has had a negative impact on the mental health of young people;
67 per cent of Gen Xs (born 1965–1980) said that social media has had a negative impact on the mental health of young people; and
54 per cent of Gen Zs (born 1997–2009) said that social media use was a driver of poor mental health.
3.61 In contrast, younger generations—Gen Z and Millennials (born 1981–1996)—were 'more likely to report benefits associated with social media and the online world, for example, self-expression, community building, emotional support and managing loneliness'.
3.62 The divergence in views between adults and young people was also discussed by the eSafety Youth Council, which described the 'disconnect' between the perceptions of adults and young people:
Young people feel a disconnect between adults' perceptions of our online worlds and our everyday experiences. We move seamlessly between online and offline experiences, as we don't perceive a divide between online and 'real' worlds.
3.63 A similar sentiment was expressed by Ms Abi Cooper, the Youth Advocacy Lead for batyr:
Social media is not a part of our world; it is our world. It's woven into who we are, how we learn and play and grow. It's a part of our culture, our language and our community in a way that's impossible to separate and often really hard for others, especially adults, to fathom.
3.64 Orygen and headspace also noted research showing that young people 'generally hold more nuanced and positive perspectives' than older generations regarding 'the relationship between social media and mental health'.
3.65 According to Prevention United, the generational divide also impacts the effectiveness of existing interventions. For example, it noted that conventional online education focuses primarily on 'extreme risks or harms', while young people's experiences of social media primarily involve 'lower level and everyday issues'. For young people, this mismatch 'tended to highlight an intergenerational divide and reinforced the idea that "adults don't get it"'.
The need to involve young people in the discussion and design of solutions
3.66 Various participants called for young people to be included in discussions about social media, as well as the co-design of potential solutions. For example, Dr Sarah Squire of the Butterfly Foundation noted how integral social media was to young people's lives, including as a way to connect with peers and access information and support:
So we ask the committee to listen to and reflect on the views of young people, who have grown up as digital natives and for whom the real and online worlds are intertwined in multiple ways.
3.67 In addition, the eSafety Commissioner noted that 'developing robust solutions requires listening to children and young people and understanding their life experiences'.
3.68 In line with this, multiple participants highlighted the importance of involving young people in the development of policy and regulations that impact them. This included Mr Leo Puglisi, Founder and Chief Anchor of 6 News Australia, who told the committee:
It's really important, and it's critical that in all of the discussion that we have about it, young people are spoken to, are consulted and are made part of the discussion. If we are talking about legislation affecting young people, we want to make sure that young people are heard as part of this. We want to be able to understand our perspectives so that we can work together to make sure there are better outcomes, better safety and better use of social media for whatever purpose the young people might have on it ...
3.69 To this end, Prevention United offered the views of its Youth Advisory Group about what young people want in order to have safe online experiences (see Figure 3.1).
Figure 3.1 What young people want for safe online experiences
Source: Prevention United, Submission 11, Appendix 1 (Prevention United, Policy brief: The impact of screen time and social media on the mental health of young Australians), p. 21.
3.70 ReachOut, Beyond Blue and BDI also pointed out that young people had already identified 'practical, implementable and sensible reforms' to improve their online safety:
… such as giving them more control of algorithms and the content they see, easier recognition, verification and surfacing of credible mental health information, improved online safety education and limits on 'sticky' platform features like endless scrolling which experts also argue must be modified based on a user's social and cognitive abilities.
3.71 A number of contributors also highlighted Article 12 of the United Nations Convention on the Rights of the Child (UNCRC), which states:
1. States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.
2. For this purpose, the child shall in particular be provided the opportunity to be heard in any judicial and administrative proceedings affecting the child, either directly, or through a representative or an appropriate body, in a manner consistent with the procedural rules of national law.
3.72 Some participants, such as the New South Wales Council for Civil Liberties, also pointed to Article 13, which recognises children's right to freedom of expression and their right to access information and ideas through any media of their choice.
3.73 As noted by submitters like the eSafety Youth Council, Article 12 means that 'children have the right to say what they think should happen when adults are making decisions that affect them and to have their opinions taken into account'.
3.74 This view was also reflected in evidence from the AHRC, which noted that while the UNCRC 'requires that the best interests of the child be a primary consideration in all aspects of the digital environment', this needs to have regard to children's rights. This includes their rights to 'seek, receive and impart information, to be protected from harm and to have their views given due weight', as well as 'ensuring transparency over the criteria applied to determine best interests'.
3.75 To this end, the Y Australia raised concerns that the current narrative around social media age restrictions was being shaped by adults' views, without the perspectives of the young people who would be most affected by the change:
The Y is concerned that the current debate around age restrictions on social media playing out in both media and politics is missing the voices and perspectives of young people, the group most impacted by the proposed reforms. While the narrative has been shaped by adults' perceptions of what's in children's best interests, at the time of writing the government and media are insufficiently upholding children's rights to give their opinions freely on issues that affect them, as per the Convention on the Rights of the Child.
Positive aspects of social media use
3.76 While inquiry participants recognised the risks associated with social media use, many also underscored its positive aspects. For example, eSafeKids stated that online services and social media provide platforms for 'learning, self-expression, creativity, socialising, autonomy, agency and connection with peers'. In addition, social media platforms can 'foster a sense of community and belonging, supporting positive interactions'.
3.77 Likewise, Prevention United submitted that social media provides a place where young people can:
… connect, interact, share experiences, offer and seek support, form communities, and establish a collective voice for advocacy. They are also platforms where young people can access news, information, entertainment and much more.
3.78 Ms Sarah Davies of the Alannah and Madeline Foundation noted that social media and technology are 'hugely positive if they are safe to use and age appropriate to use, because they give children and young people access to people like them, tribes and community support'.
3.79 These views were supported by Prevention United's Youth Advisory Group, which identified a range of positive impacts of social media (see Figure 3.2).
Figure 3.2 Positive impacts of social media
Source: Prevention United, Submission 11, Appendix 1 (Prevention United, Policy brief: The impact of screen time and social media on the mental health of young Australians), p. 9.
3.80 Likewise, in a survey of over 10 000 students, PROJECT ROCKIT found that 71 per cent described being online as a mostly positive part of their lives.
3.81 These findings were also reflected in recent research by UNICEF Australia, which found that 81 per cent of Australian teenagers who use social media say that 'it has a positive influence on their lives'. According to UNICEF Australia, young people have told them that 'being online has become critical to their healthy development and wellbeing, and that being online is fundamental to their lives'.
3.82 This also appears to be reflected in international research. For example, SPA observed that research undertaken by the United States Office of the Surgeon General found that '58 per cent of adolescents reported that social media helps them feel more accepted, 67 per cent that they have people who can support them through tough times, 71 per cent who use it as a place to show their creative side, and 80 per cent who feel more connected to what's going on in their friends' lives'.
3.83 Further, Orygen and headspace pointed to data from the eSafety Commissioner, which showed that 90 per cent of people aged 12–17 years 'engaged in at least one type of positive online behaviour, such as sharing uplifting content and supporting friends'.
3.84 The positive aspects of social media were also highlighted by young people who appeared before the committee. Raghunaath, a member of the eSafety Youth Council, described how social media kept him connected with family and friends overseas:
Six years ago, I lived about 6,000 kilometres away from Adelaide. Hence, about 90 per cent of my family and 50 per cent of my friends still live overseas. The way that I stay involved in their lives, the way that I keep up to date with what's going on with birthdays, celebrations, is through social media—Instagram, specifically. If it was to be restricted, that would mean that I would not be able to communicate with my friends as much, and with my family, and I would miss out on important events that happen in life.
3.85 Ms Layla Wang, a ReachOut Youth Advocate, emphasised the importance of social media in maintaining connections while growing up in rural Australia:
I come from rural Australia. There's no cell phone service for me, so during my time in high school I used social media platforms to interact with my friendship groups and to stay in contact to organise friendship events and catch-ups and also to get through my extracurriculars and whatnot. I think social media is really important for that, so I can still have access to my friends. Because I don't have cell phone service, I would otherwise not be able to contact them. I'd really like that to go on the record.
3.86 The benefits of social media have also been established in academic research. For example, the USYD MECO argued that 'social media provides a wide range of benefits to children and young people' with research consistently demonstrating that:
… access to all kinds of media, including social media, is integral to support children's learning, development, social skills, digital, verbal, written and visual literacies as well as critical media literacies. Young people use social media in a range of beneficial ways including for information, education, homework, accessing jobs, civic engagement, housing and for engaging in self-help and peer support in many different kinds of circumstances.
3.87 Similarly, ReachOut, Beyond Blue and BDI pointed to studies that showed the role of social media in helping young people to manage anxiety and depression during the COVID-19 pandemic. In addition, social media was found to provide connection for people who struggle with social relationships in the offline world. Further, online peer support was also found to act as a 'buffer against stress', particularly for young people from marginalised groups.
3.88 The importance of social media to people from diverse or marginalised groups was highlighted by multiple participants. For example, the NSW Service for the Treatment and Rehabilitation of Torture and Trauma Survivors (STARTTS) noted that social media can sometimes be:
… the only place they feel is safe to access for support, community building, and self-expression (as can be the case for youth from refugee backgrounds, neurodivergent youth, and those who identify as LGBTQIA+).
3.89 Similarly, research by USYD MECO found that 'some groups of young people, such as culturally and linguistically diverse Australians and Indigenous young people, are more likely to use social media to socialise, maintain familial and cultural ties and learn about the world'.
3.90 According to Orygen and headspace, the eSafety Commissioner has reported that 51 per cent of LGBTQIA+ young people 'feel more comfortable to be themselves online than in person'.
3.91 The benefits of social media—particularly for members of the LGBTQIA+ community, Aboriginal and Torres Strait Islander children, those from culturally and linguistically diverse backgrounds, and children with disability—were also highlighted by the Australian Medical Association, which:
… implores the committee to acknowledge the importance of social media as an outlet and information source for a range of groups of children in the 13–16 age group looking for support through life-transitioning stages. There are also certain priority populations of children, who have diverse and unique needs for online access during their younger years, for a range of unique and valid reasons.
3.92 A summary of research on the benefits of social media for young people in diverse or marginalised communities was provided by Prevention United (see Table 3.1).
Table 3.1 Benefits associated with social media use for diverse or marginalised communities
Group | Benefits |
First Nations Youth | Self-determination through content creation and education, re-establishing kinship ties/social connections, connecting with elders on collaborative projects, healing and uniting dislocated communities |
Refugee and asylum seeker youth | Addressing complex mental health challenges, maintaining a sense of social connectedness, storytelling, sharing lived experience, engagement and sense of belonging |
LGBTQIA+ youth | Access to safe spaces to experiment with identity and orientation, connection to community, peer socialisation and support to circumvent and mitigate physical and emotional safety issues, decreased feelings of social isolation |
Rural youth | Bridging information gaps between rural and urban students, decreased feelings of social isolation |
Neurodivergent young people | Connection, understanding and a means of reframing societal perceptions of neurologically based conditions such as autism in a positive way |
Disabled young people | Enhanced social participation, better sense of independence and autonomy, improved literacy skills |
Young people with experience of mental ill‑health | Informational and emotional support, sharing and learning from experiences, anonymity, ease of access, a sense of connectedness and reduced feelings of isolation |
Young people at risk of disordered eating | The body positivity movement, when led by people in larger bodies, has been used to improve awareness of bodily diversity, reduce stigma and create a community of support. The #bodypositive hashtag has over 19.3 million posts on Instagram alone. |
Source: Prevention United, Submission 11, Appendix 1 (Prevention United, Policy brief: The impact of screen time and social media on the mental health of young Australians), p. 10.
Online harm and the negative impacts of social media
3.93 The following section discusses some of the specific negative impacts arising from social media use, namely:
harms to society, including threats to social cohesion and democratic processes and institutions;
health impacts, including mental health impacts; and
scams and child sexual exploitation.
Harms to society
3.94 While the potential for social media to benefit Australian society was recognised by multiple participants, many observed that it also has the ability to cause significant damage, particularly through the spread of mis- and disinformation. For example, the AHRC emphasised that mis- and disinformation can have devastating effects on human rights, social cohesion and democratic processes.
3.95 Ms Jeni Whalan of the Department of Home Affairs explained further:
Social media can provide benefits to our democracy. It can increase political participation. It can expand the opportunities of Australians to gain political knowledge. It can expose people to different viewpoints and provide new connections between people and communities. Social media democratises the information environment, in some respects, allowing anyone with a smartphone and an internet connection to reach a global audience, but social media can also be used to spread harmful disinformation, which can erode trust in democratic institutions and deepen societal polarisation and division.
3.96 Similar views were expressed by Dr Keith Heggart, Associate Professor Simon Knight, Dr Damian Maher, and Associate Professor Bhuva Narayan:
Social media has proven to be fertile ground for the spread of hateful misinformation. This has a deleterious effect upon civil society and social cohesion. It is also a direct challenge to the validity of democratic institutions. Furthermore, the ubiquity of social media has meant that it has become easier to share and then to organise around these cases of hateful misinformation. It is not an exaggeration to say that the spread of hateful misinformation, including violent extremist material, poses a significant threat to the health of Australian democracy.
3.97 In addition, the Department of Home Affairs highlighted research showing that 'foreign actors are increasingly using disinformation campaigns in a bid to increase polarisation, reduce trust in government, and foment extremism in democratic societies'.
3.98 Social media's role in 'amplifying narratives and conversations as part of the "attention economy"' was also noted by the New South Wales Government, which suggested this was 'critical to a deficit in trust of government institutions, social cohesion, and optimism that are so important for healthy democracies'.
3.99 The New South Wales Government raised particular concerns about the 'proliferation of online hate that seeks to incite fear and division among communities, often along racial or religious lines'. As an example, it stated that social media platforms have become unsafe places for Aboriginal people 'because of the extent and lack of consequence for racially motivated hate speech'.
3.100 Similarly, STARTTS stated that social media had amplified harmful and divisive rhetoric around refugees and people seeking asylum.
3.101 Concerns about the proliferation of hate speech were also underscored by the Online Hate Prevention Institute, which noted its role in removing multiple original copies of terrorist manifestos and videos of attacks.
3.102 Further, the Australian Federal Police (AFP) observed that there are 'a concerning number of young people being radicalised online … and accessing and sharing propaganda and violent extremist material'. The AFP contended that young people are being deliberately targeted for radicalisation as they can be more susceptible due to a range of factors, including 'social dislocation, peer influence, mental health challenges, neurodiversity factors, active online engagement with extremists, and triggering or traumatic events'.
3.103 According to the AFP, social media, along with online gaming and gaming platforms, provides a way for violent and extremist concepts to be shared in 'more relatable ways'—with algorithm-based preferencing working to increase engagement by offering related content.
3.104 Similar views were expressed by Ms Whalan, who stated that young people were susceptible to radicalisation in the online environment used by extremists to network, promote ideologies, and recruit members. According to Ms Whalan, social media platforms can provide a gateway to more extreme content:
Social media platforms are a gateway from which at-risk users can be led to fringe or alternative platforms that are unwilling or unable to effectively moderate content, where they can be further exposed to harmful content and ideologies. Children and young people are particularly vulnerable to radicalisation from this type of content.
3.105 In addition, Dr Keith Heggart, Associate Professor Simon Knight, Dr Damian Maher, and Associate Professor Bhuva Narayan highlighted the ways in which social media can 'enable and amplify online gender-based violence'. These include:
… mechanisms such as anonymity, rapid dissemination of gendered hate speech, flaming, outing, doxing, partner surveillance, revenge porn, non‑consensual image sharing, grooming, exploitation, and targeted harassment.
3.106 Similar views were shared by Ms Allegra Spender MP, who argued that exposure to harmful content online can 'amplify offline harms, including domestic abuse, sexual harassment, and extremism'.
3.107 To this end, Dr Timothy Graham urged greater recognition that 'pro-social' and 'pro-democratic' outcomes are simply not the driving force for large online platforms. Instead, their commercial imperatives appear to generate the opposite:
We need greater recognition that very large online platforms like X and Facebook are not, and never were, designed to produce pro-democratic or pro-social outcomes. Baked into the architectural and algorithmic design of these platforms is the mandate to increase user engagement and attention. The dilemma facing us today is that platforms are designed for quantity over quality. They prioritise and recommend content that elicits strong reactions and gets user attention rather than content that is high quality or factually sound. Research shows that content expressing fear, outrage and division just gets more clicks, shares and comments.
3.108 Further discussion of the rise of mis- and disinformation on digital platforms, including its impacts and the role played by algorithm-based recommender systems, can be found in the committee's Second interim report: digital platforms and the traditional news media.
Health impacts (including impacts on mental health)
3.109 Most of the evidence around the health effects of social media centred on problematic screen time, as well as its potential to negatively impact the mental health of users.
Problematic screen time
3.110 The committee heard evidence from various participants that problematic screen time has negative impacts on health and wellbeing, including disrupted sleep patterns, less exercise, and fewer face-to-face interactions. Problematic screen use generally involves levels of use that have a 'substantial negative impact on one or more important areas of life such as schooling, relationships or mental or physical health'.
3.111 eSafeKids referred to a report by the Australian Institute of Family Studies, which found that 'screen time may have a negative effect on weight, motor and cognitive development, social and psychological wellbeing, anxiety, hyperactivity and attention'. In addition, it cited concerns about myopia and musculoskeletal disorders linked to screen time.
3.112 Professor Wayne Warburton of AGASA also spoke about the impact of disordered screen use, which has been linked to 'poor sleep; behavioural and emotional regulation problems; aggressive and violent behaviour; school refusal; mental health issues such as depression, anxiety and suicidality; negative impacts on social and emotional development; and a range of physical health problems'.
3.113 In relation to social media more specifically, Prevention United noted that young people are susceptible to the design features of social media platforms that aim to maintain users' attention and engagement. According to Prevention United, 65 per cent of young people report 'they are likely to spend more time than intended on social media'.
3.114 The persuasive design elements of social media were also noted by eSafeKids and Professor Warburton, who observed that these features enhance the addictive qualities of social media and increase the likelihood of problematic use.
3.115 Ms Layla Wang, a ReachOut Youth Advocate, told the committee that research by ReachOut had found that the addictive nature of social media was a key concern for young people, ranking ahead of privacy, harmful content and bullying. In particular, she noted that survey participants were 'often alert to algorithmically driven viewing, resulting in skewed content that leads to echo chambers where misinformation proliferates'.
3.116 In terms of problematic social media use, Orygen and headspace reported that 33 per cent of young people who responded to their 2022 survey were considered to have 'problematic social media use'. According to Orygen and headspace, the risks of problematic social media use include 'procrastination, social comparison, impacted sleep, exposure to disinformation, and decreased physical activity'.
3.117 This was also reflected in evidence from SPA, which noted the results of research by the US Office of the Surgeon General that found harmful impacts 'especially on adolescent girls for poor mental health, cyberbullying-related depression, body image and disordered eating behaviours, and poor sleep quality linked to social media use'.
3.118 In addition, AGASA pointed to studies showing reductions in cognitive function among populations with problematic social media use, as well as changes in the brain's sensitivity to social rewards and punishments.
3.119 However, the committee also received conflicting evidence about the impact of screen time, with the BDI's Future Proofing Study finding that 'higher daily screen time is unlikely to be the cause of depression and anxiety in adolescents'. Instead, the association is likely to be bidirectional, 'meaning that mental health symptoms and screen use influence each other':
For example, if a young person is feeling down or stressed, they might spend more time on screens to distract themselves. But the more time they spend on screens, the more likely they are to encounter things that could make them feel even worse. And so, it could become a cycle where their mood affects their screen time, and their screen time affects their mood.
Mental health impacts
3.120 A range of participants observed that there has been an increase in rates of mental ill-health in Australia in recent years. For example, Orygen and headspace stated that young people are 'reporting symptoms of depression and anxiety that exceed any time in the past'.
3.121 Orygen and headspace also drew the committee's attention to the results of the 2022 headspace National Mental Health survey, which found that 47 per cent of young people are experiencing 'high or very high psychological distress'. In addition, they cited results from the 2020–22 National Survey of Mental Health and Wellbeing, which 'showed that 39 per cent of young people aged 16–24 years had experienced depression, an anxiety disorder and/or a substance use condition in the 12 months prior to being surveyed'.
3.122 According to Prevention United, 'this equates to 1.2 million young Australians' and represents an increase of 50 per cent in the prevalence of these conditions since 2007. Prevention United also highlighted data from the Household, Income and Labour Dynamics in Australia survey, which showed that 42.3 per cent of young people aged 15–24 years were psychologically distressed in 2021, up from 18.4 per cent in 2011. In addition, those born in the 1990s appear to have experienced a greater decline than other age cohorts.
3.123 Other participants, such as the National Mental Health Commission (NMHC), observed that, over the last 17 years, indicators of psychological distress for those aged 13–25 years have been increasing at a higher rate than for the rest of the Australian population.
3.124 The former Queensland Premier, the Hon Steven Miles MP, pointed to a 2023 report by the Chief Health Officer Queensland, which showed a significant decline in the mental health and wellbeing of young Queenslanders, especially young girls, with a three-fold increase in self-harm injury hospitalisations for girls up to 14 years of age between 2008–09 and 2020–21.
3.125 According to AHISA, there is a 'popular view', informed by analyses such as Jonathan Haidt's The Anxious Generation, which 'links the upturn in major depressive episodes and greater anxiety in US teens with social media use'.
3.126 However, AHISA noted that Haidt's detailed analysis also identifies other factors affecting young people's mental health, including 'numerous changes in approaches to parenting and children's lives over recent decades'.
3.127 While it recognised that the decline in youth mental health occurred over the same period in which the use of smartphones and social media became ubiquitous, Prevention United also warned that 'correlation does not prove causation'.
3.128 A similar view was expressed by UNICEF Australia, which noted the complexity of factors influencing young people's mental health, as well as the lack of clear evidence supporting a causal relationship between social media use and declining mental health:
There undoubtedly needs to be more focus on the mental health of young people in Australia which research shows is declining. But while social media is often attributed as the cause of this youth mental health crisis, evidence says that the reality is much more complex. Some research suggests a correlation between social media use and mental ill-health, but causation is unclear given young people with mental ill-health may use social media more often or in less healthy ways than their peers, and that is before we even consider the protective factors that social media can provide for mental health. Mental health is complex and influenced by a multitude of factors in a young person's life, and studies across the globe have yielded conflicting results as to the associations between youth mental health and social media use.
3.129 Indeed, the range of factors impacting young people's mental health was highlighted by a number of participants. For example, Ms Nicola Palfrey of headspace described the challenges facing young people and argued that attributing the rise in distress solely to social media was simplistic and dismissive of young people's views:
Over the past five years, one-third of their life, a 15-year-old will have lived through a global pandemic; most likely one or two natural disasters; the cost of living going through the roof; increasing family violence; notions of never being able to own their own home; and job insecurity. All of these determinants of health are what these young people are living with and have been living with. To think that it's just social media in and of itself causing a rise in distress is simplistic and not helpful. It is dismissive of the real challenges that young people tell us they face and their distress about their future.
3.130 To this end, the NMHC contended that rather than being a 'key driver of distress' itself, digital technology instead amplifies the effects of other drivers. The NMHC identified 'six highly complex and interrelated drivers of increased distress, each reflective of significant cultural changes to the world in which young people live'. These are family and parenting factors, education pressures, employment pressures, physical health, sociocultural shifts, and uncertainty about the future.
3.131 In addition, submitters such as AHISA, Prevention United, Orygen and headspace, and the NMHC noted that young people's experiences of social media depend on multiple factors, such as the amount of time spent online, the content consumed, and the degree of disruption to sleep and exercise. Further, they contended that the risk of harm may be increased by 'individual vulnerabilities, cultural context, socio-economic status, age and gender'.
3.132 The variation in young people's experiences of social media was conveyed in the results of PROJECT ROCKIT's survey of over 10 000 students, which found both positive and negative impacts on mental health (see Figure 3.3).
3.133 Further, PROJECT ROCKIT considered it 'evident that the relationship between digital technologies, youth mental health and participation is nuanced and complex, defying simplistic interpretations found in public discourse'.
Figure 3.3 Positive and negative influences of social media on young people's mental health
Source: PROJECT ROCKIT, Submission 149, p. 3.
The need for further research
3.134 The mixed evidence in relation to social media impacts was noted by a range of participants, who suggested more research was needed to better understand the relationship between social media use and mental ill-health.
3.135 For example, the Australian Research Alliance for Children and Youth described a deficiency of Australian data on the impact of social media on child and adolescent wellbeing. It also noted that the international evidence reveals mixed impacts, linking social media use to both benefits and harms.
3.136 The mixed evidence around the impacts of social media on mental health was also described by ReachOut, Beyond Blue and BDI:
The existing evidence on the impact of social media on mental health is largely focused on young users and is mixed. There is considerable anecdotal evidence of instances of profound harm, mixed academic evidence of impacts and benefits, and strong evidence from users about their experience of both positive and negative impacts of social media use.
3.137In a similar vein, Orygen and headspace stressed that Australia still has some way to go in understanding the impact of social media use on mental health:
While the pervasive influence of social media on daily life is well-documented, its effects on the mental health and well-being of young people is still being understood. In recent years, numerous research studies have sought to elucidate the relationship between social media use and mental ill-health in young people, however, the findings remain mixed.
3.138To this end, the NMHC highlighted the need for further research into 'all areas of digital technologies':
While the evidence base is rapidly emerging, large scale, longitudinal studies are limited on this topic, making it difficult to draw conclusions around causation. Further research into all areas of digital technologies will be critical in continuing to strengthen our understanding of these emerging issues.
3.139The following sections summarise the evidence provided to the committee in relation to specific harms—namely eating disorders, cyberbullying, self-harm and suicide.
Eating disorders
3.140The Butterfly Foundation submitted that eating disorders are 'prevalent and serious mental health conditions' which affect over 1.1 million Australians each year (4.5 per cent of the population), 'with a lifetime prevalence of 10.5 per cent'. While eating disorders affect all genders and demographics, the rates are highest among women aged 15–29 years. There is also a high mortality rate for eating disorders, with 1273 deaths in 2023. According to the Butterfly Foundation, the total cost of eating disorders in Australia in 2023 was $67 billion (including health system, productivity and individual costs).
3.141In addition, Eating Disorders Families Australia (EDFA) observed that the last decade has seen a 'rapid rise in the incidence of eating disorders' among young Australians, with prevalence among those aged 10–19 years rising by 86 per cent since 2012.
3.142A range of submitters argued that social media has played a part in this increase. For example, Dr Hannah Jarman pointed to research showing that 'exposure to appearance ideals on social media can increase body dissatisfaction'.
3.143Indeed, the Butterfly Foundation stated that 61.7 per cent of respondents to its 2023 BodyKind Youth Survey said that 'social media made them feel dissatisfied with their bodies'. These results were also highlighted by Dr Jarman, who noted this was an increase of 12 per cent since 2022.
3.144Further, Ms Jane Rowan of EDFA told the committee that its October 2023 survey of parents and carers had found that '81 per cent of parents and carers believe that social media had either contributed to the eating disorder or impeded recovery'. Ms Rowan also conveyed parents' observations that social media 'exacerbates concerns about appearance, food and exercise, creating an environment that normalises and promotes unhealthy and dangerous behaviours'.
3.145In addition, Dr Sian McLean stated that 'people with lived experience of an eating disorder frequently attribute social media as interfering with recovery'.
3.146The Butterfly Foundation also reflected on the 'significant body of evidence' linking social media use with negative body image and eating disorder risk. It pointed to a recent review of studies examining social media use among people aged 18–30 years, which found that exposure to image-related content about body image and food choice was associated with higher body dissatisfaction, food restriction and overeating.
3.147Similarly, Professor Selena Bartlett highlighted a review which linked social media use and 'body image concerns, eating disorders, disordered eating, and poor mental health'. Professor Bartlett stated that this link is driven by:
… social comparison, the internalisation of thin/fit ideals, and self-objectification. The negative impacts are further exacerbated by harmful social media trends, pro-eating disorder content, appearance-focused platforms, and excessive photo investment.
3.148According to the Butterfly Foundation, there is also evidence that eating disorder risk is more strongly associated with social media use than with traditional media exposure.
3.149However, as with social media's impact on mental health more broadly, participants such as Dr Squire of the Butterfly Foundation urged the committee to consider the benefits of social media alongside its harms. This included its role as a source of connection to peers, as well as a preferred source of health information for a large proportion of young people:
Social media is also a place where young people turn to access reputable health information and find support services. For example, anyone in Australia who searches for content on eating disorders on TikTok, Instagram or YouTube is automatically directed to the Butterfly website and our national helpline. When we surveyed almost 3,000 young people aged 12 to 18 last year, 46 per cent said that they preferred to receive information about body image via social media. This was the most preferred option, followed by other young people or programs and talks in schools.
3.150In addition, while the NMHC observed the negative impacts of 'pro-ED' or 'pro-ana' communities—such as low self-esteem and sustaining anorexic behaviours and beliefs—it also highlighted the potential of social media to support earlier intervention:
Research focused on eating disorders has also identified that it is possible to predict severity of mental ill-health up to eight months in the future based on what is being shared on Instagram. This presents the possibility to predict a person's need for support via an algorithm. It is important to consider how online tools can be used to support earlier intervention, beyond redirecting to resources when someone is already in crisis.
3.151Finally, Dr Squire noted that as social media is only one factor in the development of body dissatisfaction, there is a need for a 'whole-system approach' to addressing 'body dissatisfaction and the impact of appearance ideals'.
Cyberbullying, self-harm and suicide
3.152According to Prevention United, 'cyberbullying is often an extension of face-to-face bullying', with social media 'considered a "new tool to harm victims already bullied by traditional means"'.
3.153The concept of cyberbullying as an extension of offline bullying was also reflected in commentary by Mr Sina Aghamofid, a ReachOut Youth Advocate, who noted the human behavioural aspect of bullying, and by Arjun, Co-Chair of the eSafety Youth Council, who described cyberbullying and offensive behaviour as the hardest harms to prevent because 'it's not a platform thing; it's more of a user thing'.
3.154Further, STARTTS explained that social media platforms 'can enable relentless bullying that infiltrates personal spaces, which has made victims feel unsafe, even in their homes'.
3.155STARTTS stated that cyberbullying can have adverse impacts on the mental health of both those who bully and those they target, 'leading to an increased risk of depression, anxiety, somatic complaints and suicide'.
3.156Similarly, Safe on Social noted that cyberbullying can lead to severe psychological distress for school aged children, which impacts on mental health and academic achievement, as well as contributing to school refusal.
3.157According to the Synod of Victoria and Tasmania, Uniting Church in Australia (Synod of Victoria and Tasmania, Uniting Church), a 2018 survey by the Australia Institute found that 44 per cent of women and 39 per cent of men in Australia had experienced one or more forms of online harassment, including abusive language, being sent unwanted sexual messages, and cyber hate.
3.158In addition, schools are now finding that 'most incidents of bullying and harassment in school have a social media component, and that "social media heightens bullying and social exclusion"'.
3.159While cyberbullying can affect all demographics, STARTTS noted that it is 'correlated with age and gender, with older teen girls more likely to be affected'.
3.160Further, research by the eSafety Commissioner has found that young people in marginalised cohorts face higher rates of online bullying and discrimination:
This includes higher rates of hate speech received by LGBTQIA+ young people (31% compared to 15%), young people with a disability (23% compared to 14%), and Aboriginal and Torres Strait Islander young people (29% compared to 11%).
3.161The disproportionate impact on particular cohorts appeared to be borne out in evidence provided by Short Statured People of Australia Inc. (SSPA), which described 'continual harassment, violence and abuse' experienced by short statured people in public, community and online spaces. According to SSPA, the lack of inhibition arising from the anonymity of social media has helped to exacerbate and enable 'a virtual crowd of offensive and demeaning behaviour'.
3.162Some participants, such as Ms Katherine Berney of the National Women's Safety Alliance, noted that women are also disproportionately targeted by cyberbullying, which can also extend offline:
Online bullying can take many forms, including cyberstalking, hate speech, doxxing and coordinated harassment campaigns. Women, particularly those who are in public-facing roles or advocacy positions, are disproportionately targeted by these forms of abuse. The anonymity and reach provided by social media allows perpetrators to evade accountability while amplifying their impact. For many women the trauma of online abuse extends offline, leading to real-world consequences, such as stalking and harm.
3.163According to SPA, cyberbullying is 'linked to greater rates of self-inflicted damage and suicide ideation'. In addition, it described how the promotion of trends online, including via social media apps, can increase the risk of self-harm for at-risk adolescents:
One international study identified that 14.8% of young people who were admitted to mental hospitals because they posed a risk to others or themselves had viewed internet sites that encouraged suicide in the two weeks leading up to their admission.
3.164This was also reflected in evidence from Professor Bartlett, who referred to a systematic review that 'provides compelling evidence that viewing self-harm and suicide-related images on the internet can act as a precursor to these behaviours'. Professor Bartlett described this as 'particularly concerning' given that algorithms used by social media platforms 'often amplify harmful content to susceptible individuals'.
3.165Professor Bartlett also told the committee about a study in the United Kingdom which found that 75 per cent of young people encounter self-harm content online before the age of 14 years.
3.166However, Orygen and headspace pointed out that social media can also 'play a significant role in supporting the mental health of young people', including via access to health information and the early identification of at-risk young people. In addition, Orygen and headspace highlighted a recent study which found that 'young people who are regularly online are 41 per cent less likely to die from suicide and 61 per cent less likely to be hospitalised for self-harm':
These findings underscore the importance of social media in facilitating connection and social support for young people, both of which are important protective factors during the developmental stage of adolescence. When young people feel socially supported and connected, they are less likely to feel isolated. Social media can also provide young people with opportunities to safely explore and express different aspects of their identity. Authentic self-presentation via social media is positively correlated to improved wellbeing. This is of particular importance for LGBTQIA+ young people, with 51 per cent stating that they feel more comfortable to be themselves online than in-person.
3.167While it argued that more needs to be done 'to reduce exposure to harmful suicide-related content', the NMHC also highlighted research showing that social media platforms can be important sources of support:
Research exploring self-harm has highlighted that young people accessing online content about self-harm were likely to already be engaging in self-harm behaviours. Young people were found to be using the internet to find support from other people who had lived experience of self-harm. They found this helpful due to the immediate nature of support available online, the stigma encountered when seeking support face-to-face, and the long wait times associated with accessing professional support. Similar findings have been highlighted for social media use amongst people living with a mental health condition.
3.168Further, the NMHC noted that 'while online posts may express distress and potentially intensify this distress, there is currently little evidence to suggest social media causes this distress'.
3.169This aligned with the NMHC's earlier evidence about the drivers of young people's distress. This view was also echoed by SPA, which noted that these drivers are 'not exclusive to social media but can be exacerbated by its use':
For example, news outlets and news reported across social media require short, attention-catching material, which can increase anxiety across issues such as climate change, cost-of-living, housing affordability, employment prospects, and other current events.
3.170In addition, SPA also noted that 'ethical social media can positively influence opportunities for connection, conversation, self-esteem, health promotion, and access to critical medical information'.
3.171Further, SPA highlighted the role social media plays as a source of information and support, and described it as 'a vital help-seeking pathway for young people, especially those who cannot rely on traditional methods for help-seeking'. To this end, it noted that '73 per cent of young people regularly [use social media] to search for mental health information or [have] done so in the past'.
3.172Similarly, PROJECT ROCKIT pointed out that although young people from marginalised cohorts are more likely to experience online bullying and discrimination, they are also 'significantly more likely' to use online platforms to seek emotional support, make friends, access physical, sexual and mental health information, discuss social and political issues, and connect with people from different backgrounds.
3.173As with evidence in the previous section about early intervention for eating disorders, SPA pointed to the potential for AI to be used to 'redirect a person in suicidal distress to a suicide prevention chat resource to ensure access to helpful resources and timely support'. However, it also stressed that without proper investment in the underlying online evidence base, AI-generated data could instead result in the 'dissemination of inaccurate and harmful content'.
3.174A similar point was raised by Dr Louise La Sala of Orygen, who said platforms' algorithms could be harnessed to try to deliver support to young people at the time it is needed. According to Dr La Sala, this was also raised by young people themselves:
In the work we've done with them, speaking to the algorithms, young people have said things to us like, 'This platform knows when I'm looking for information about self-harm' or, 'This platform knows when I'm looking at content about mental health. Why can't it give me help in that moment? Why can't it identify that I might need some assistance?'
Scams and child sexual exploitation
3.175The following section discusses evidence from a wide range of experts and organisations on the role that social media plays in facilitating scams and child sexual exploitation, including sextortion.
Scams
3.176Online scamming, committed by highly organised criminal syndicates and fuelled and enabled by social media platforms, is a globally recognised problem. According to the International Justice Mission, the amount lost to scamming operations worldwide as of the end of 2023 was AUD$96 billion.
3.177In Australia, scams cost Australians $2.74 billion in 2023 and affected more than half a million people in 2022–23. The AFP noted that 'investment scams have the greatest financial impact, with Australians losing hundreds of millions annually'.
3.178Many of these scams were facilitated via social media platforms. In 2023, the Australian Competition and Consumer Commission (ACCC) found that over 16 000 Australians reported losing money to a scam that began online or on a social media platform.
3.179However, Ms Catriona Lowe of the ACCC suggested that its figures may underrepresent the scale of the problem given that the stigma and shame associated with being scammed may make people reluctant to report their experience. Ms Lowe pointed to research showing that between 30 and 40 per cent of people do not report being caught by scams.
3.180Ms Rosie Thomas, Director of Campaigns at CHOICE, explained how social media platforms enable scammers by allowing them 'to contact consumers via direct messages and fraudulent advertisements'. According to Ms Thomas, 'the losses that flow from these contacts are big and they're growing', with social media scams stealing $95 million from consumers in 2023—an increase of 250 per cent since 2020.
3.181The platforms associated with most losses in 2023 were WhatsApp (47 per cent), Facebook (20 per cent), online dating sites (9 per cent), and Instagram (9 per cent). As noted by CHOICE, this means that 76 per cent of all losses in 2023 arose from contact with a Meta-owned platform.
3.182The problem is only likely to get worse given the 'barrage of fraudulent content across digital platforms … while scams only become more complex and increasingly difficult to identify'. Indeed, according to CHOICE, 88 per cent of people believe scams 'have become more sophisticated or harder to spot recently'. In addition, 79 per cent of people are afraid 'that other people in their life might not spot a scam'.
3.183While evidence about most other harms focused predominantly on the risks to young people, older Australians were generally viewed as more likely to be targeted by financial scams on social media. For example, Tattarang suggested that people over the age of 50 are 'not as accustomed to recognising fraud and deception on the content fed to them by the platforms' advertising algorithms and who remain trusting of statements made online by prominent people'.
3.184This was also reflected in evidence from the ACCC, which stated that 'Australians aged 65 and over are disproportionately targeted by scammers'.
3.185STARTTS also noted data showing that 'people from culturally and linguistically diverse backgrounds report higher losses from scams than other groups, despite likely underreporting'. In addition, people from refugee backgrounds and those seeking asylum are regularly targeted by threat-related immigration scams.
3.186Speaking from a different perspective, Mr David Braga of the International Justice Mission told the committee how the lack of action by social media companies also supports the trafficking and mistreatment of people who have been recruited—often fraudulently—to work in the 'scamming industry':
Social media also plays a role in facilitating the scamming industry run by organised crime and fuelled by a workforce who is often deceptively recruited by ads on social media. Individuals are trafficked across country borders and confined inside gated scam compounds. They are then forced and coerced with threats and actual violence to scam Australians and others, often, again, using social media.
3.187According to Mr Braga, there are currently over 300 000 people in scamming compounds in the Asia-Pacific region, with the vast majority assumed to be held there against their will.
3.188There are currently no obligations on digital platforms to reduce scam content, and a range of participants highlighted the lack of support and redress mechanisms for people who fall victim to scams on social media platforms. For example, CHOICE stated that there are 'no minimum standards of protection or support afforded to users of digital platforms, including social media sites'.
3.189The ACCC stated that since the establishment of its National Anti-Scam Centre (NASC) it had seen increased engagement with digital platforms and an improvement in response times when raising issues with platforms. The initial success of the NASC, along with other initiatives to combat scams, was praised by the Australian Banking Association as 'a strong platform for further action'.
3.190However, the ACCC also revealed concerns about the 'differential experience of members of the public' who report scams to the platforms and emphasised that there was still more to be done:
… we are seeing increased engagement, but there is certainly still a deal more that platforms could do both in their direct engagement with us in providing more detail on the responses that they're taking when we refer matters to them and, in particular, with regard to there being a general lack of transparency in the way they respond to consumers when they make reports, which results in consumers reporting more to us for us to take action.
3.191In addition, CHOICE noted that the 'faceless and borderless nature of many digital platforms and social media sites also makes it difficult for users to seek and obtain support and redress'.
3.192This was underscored for the committee by Mr Bruce Meagher of Tattarang, who highlighted the 'jurisdictional arbitrage' engaged in by social media platforms in order to 'avoid effective accountability to Australian regulators and courts and deny redress to the victims of these scams and other harms':
By structuring their businesses so that all relevant operations are managed and controlled by US based companies with no relevant entities based in Australia they can frustrate attempts at service, refuse to comply with codes of practice, refuse to comply with legislation, render voluntary their compliance with injunctions and other court orders and force litigants to go through a convoluted process to sue or get a court order enforced in the US. Furthermore, they claim absolute immunity for virtually all their activities thanks to section 230 of the Communications Decency Act 1996.
3.193To illustrate this point, Mr Meagher told the committee about the experience of Dr Andrew Forrest AO, whose image has been used in scam advertisements on social media platforms since March 2019. According to Mr Meagher, Dr Forrest had to commence proceedings in the United States because Meta advised that 'neither it nor its Australian counsel would accept service and nor did it submit to the jurisdiction of the Australian courts'.
3.194Tattarang asserted that this is a situation not replicated in any other sector of the Australian economy:
In other sectors of the economy which have the potential to pose serious systemic or other risks we require foreign corporations to submit to the Australian jurisdiction through licensing regimes and other mechanisms.
For example, foreign banks must have local subsidiaries that hold Australian banking licences because the banking system is so critical to the economy and society.
3.195Unsurprisingly, participants were highly critical of the failure of social media platforms to protect Australian users, particularly as their actions in other countries had shown it was possible to do more:
We know social media platforms can do more to protect people from scammers, because internationally they are doing more. It is unacceptable that these multinational companies, with significant resources and some of the best technology in the world, are choosing to switch on scam protections in some jurisdictions but not here in Australia.
3.196As an example, CHOICE pointed to judgements in international cases, such as that of Dutch billionaire John de Mol, who won a summary judgement in a Dutch court, which ordered Facebook to remove scam Bitcoin advertisements featuring his likeness.
3.197This appeared to stand in stark contrast with the experience of Dr Forrest in Australia. According to CHOICE—and supported by the evidence provided by Tattarang—Meta's failure to respond to Dr Forrest's legal proceedings seemed due in part 'to complex legal structures that mean Meta is not concerned about the enforceability of any potential liability in Australia'.
3.198Indeed, the committee heard that rather than remove the scam advertisements featuring Dr Forrest, Meta has allowed them to proliferate over the last five years. Mr Simon Clarke of Tattarang contrasted this lack of action—and the financial losses of scam victims—with Meta's growing profits:
In five years the scam ads in connection with Dr Forrest and others have done nothing but proliferate further—five years of targeting vulnerable people, five years of countless people losing their life savings.
In those five years the Facebook platform has never issued a warning to users that they've been displayed a fraudulent ad or taken any steps at board level to do anything about the fraud online. In those five years Facebook's profit has moved from A$94.1 billion to A$202 billion.
3.199Indeed, along with the current lack of regulation, the revenue earned from scam advertisements was seen as a key reason for the platforms' lack of action. For example, Tattarang argued that:
The business model of the advertising platform seeks to maximise profits by permitting content that, among other things, seeks to use images of prominent people like Dr Forrest to drive engagement with those advertisements.
3.200A similar sentiment was also expressed by Ms Thomas:
Is it any wonder that they aren't doing enough to protect us? The advertising revenue they generate from scam ads is likely to be significant, though we can't really say, because there is no transparency about this, and it is unclear what they do with the money they receive from what is sophisticated international crime.
3.201Further, Mr Meagher told the committee that Meta's failure to act on scam activity in smaller markets, coupled with its 'proper policing' of larger markets such as the US and Europe, was driving higher levels of fraud in places like Australia, Japan, New Zealand and Canada:
I don't think there are very good statistics in Australia, but I understand that the Japanese … do keep quite good statistics, and they say that about 30 per cent of the ads on the Japanese Facebook platform are fraudulent and scams. So there's a disproportionate impact. The effectiveness of the regulatory regimes in those two major markets has a disproportionate impact—a negative impact—on Australia and other similar jurisdictions.
3.202However, others also noted the role played by other parts of the scamming 'ecosystem', including data firms, banks and telecommunications companies. For example, Reset.Tech Australia suggested that one reason scams have become increasingly difficult to detect is the amount of information scammers hold about their targets, which creates 'an illusion of legitimacy'. While some of this information may have been stolen by scammers or obtained through data breaches, Reset.Tech Australia pointed out that this is unlikely, given how much of our personal data is legally available for purchase:
Data about people's banking providers and habits are widely and easily for sale without strong vetting procedures. This data can be purchased alongside other information, such as recent online purchases, the type of car they own or the ages of their children. For example, the Xandr file documents [show] how data about customers of NAB, Suncorp, Westpac, AMP Bank, BankWest, CommBank, Macquarie Bank, Adelaide Bank and Allianz could all be legally purchased from Oracle (BlueKai, Datalogix, AddThis).
Online child sexual abuse and exploitation
3.203According to the AFP, the growth of the internet and social media platforms has facilitated the proliferation of child sexual abuse material (CSAM). It contended that:
… self-generated [CSAM], inappropriate contact, online grooming, coercion, and sexual extortion can take place on any interactive app or site. Offenders seek to engage with children on popular platforms including Facebook, WhatsApp, and Skype, and further obfuscate their offending through virtual private networks and encrypted technology.
3.204Similarly, UNICEF has stated that 'the online sexual abuse and exploitation of children is one of the fastest growing and increasingly complex threats to children's safety in the digital world'.
3.205Further, the AFP argued that online child sexual abuse was 'becoming more prevalent, commodified, organised and extreme'. It stated that, since the inception of the AFP-led Australian Centre to Counter Child Exploitation (ACCCE), reports of online child exploitation have nearly tripled—from 14 285 reports in 2018–19 to 40 232 reports in 2022–23.
3.206Growth in online child sexual abuse was also revealed in data from the National Centre for Missing and Exploited Children (NCMEC). In 2023, NCMEC received over 36 million reports related to the circulation of CSAM—an increase of 87 per cent from 2019. The reports to NCMEC comprised over 105 million files from public and electronic service providers. While these reports related mainly to CSAM, there were also increased reports of sextortion and the use of generative AI. Of these notifications, the vast majority—over 30 million—related to Meta-owned platforms, namely Facebook, Instagram and WhatsApp.
3.207Various participants also noted the increase in 'self-generated' CSAM, which the eSafety Commissioner indicated 'can result from both consensual and coercive or manipulative experiences, such as sextortion, grooming and screen capturing'.
3.208While it noted the increased production of CSAM, the Synod of Victoria and Tasmania, Uniting Church urged caution in using the NCMEC figures as an indication of the scale of online abuse. For example, it noted that 90 per cent of the CSAM reported by Meta platforms was 'the same or visually similar to previously reported content'. In addition, 'just six videos were responsible for more than half of the child sexual abuse material that was reported in that period'.
3.209However, the Synod of Victoria and Tasmania, Uniting Church did not shy away from the role social media companies play in facilitating child sexual abuse. For example, it referred to reports of CSAM being shared 'on a vast scale' via WhatsApp, as well as the growing use of social gaming environments to groom children.
3.210It also cited a 2021 Australian Institute of Criminology report on the way child sexual abuse live-streaming offenders access victims. The report found that social media is often used by offenders to contact both potential victims and facilitators of child sexual abuse. In addition, the 'find a friend' function on Facebook can allow facilitators to find offenders.
3.211The role of social media companies was also highlighted by other participants. For example, a Childlight UNSW survey found that men who had offended against children were 'avid users of youth-focused social media platforms'. Compared to other men in Australia, they were:
4 times more likely to use YouTube;
4.31 times more likely to use Instagram;
2.52 times more likely to use Snapchat; and
1.6 times more likely to use Facebook Messenger.
3.212Destiny Rescue also underscored the 'significant role' social media platforms play in 'facilitating child sexual abuse and exploitation, providing anonymity to predators, easy access to children as well as being a platform for dissemination of CSAM'.
3.213While it noted the focus on 'online non-contact offending', Project Paradigm underlined the 'intermingled' and 'multidirectional' nature of online and offline offending.
Sexual extortion ('sextortion')
3.214Sextortion is a form of online blackmail involving the capture and use of sexual images to extort 'further sexual images, sexual favours, or even money' by threatening to expose the images to family, friends and acquaintances. However, when the victim of extortion is a child, the term sexual extortion of children is used.
3.215Sextortion, or the sexual extortion of children, can occur on any interactive service, including social media platforms. According to eSafeKids, there has been a significant rise in recent years, with NCMEC receiving 26 781 reports of sextortion in 2023, versus 139 reports in 2021.
3.216Similarly, Childlight UNSW pointed to 'an exponential increase in the online sexual extortion of children, particularly boys, on social media':
In the last three years, organised crime groups overseas have set up deceptive accounts on youth-focused platforms such as Instagram, Facebook or Snapchat. While pretending to be teenaged girls, members of the organised crime group contact teenaged boys, luring them to encrypted apps such as WhatsApp, where they solicit a naked image, and then blackmail the boy for money.
3.217This appeared to reflect evidence from the eSafety Commissioner which reported that between July 2023 and March 2024, sextortion was the most reported form of image-based abuse (59 per cent of total reports), with boys and young men accounting for most of the reports.
3.218Similarly, in 2023, the AFP-led ACCCE received over 300 reports of sextortion per month. However, it is thought that this accounts for only one in every ten cases, with many incidents likely remaining unreported 'due to the fear, intimidation and isolation tactics applied by the offenders'.
3.219The AFP noted that 'the majority of victims are boys, extorted via social media through a mix of self-generated and digitally altered images'.
3.220This was reinforced by Ms Melinda Tankard Reist of Collective Shout, who spoke of the growing number of boys at risk of sextortion. Devastatingly, Ms Reist told the committee that 'to date, five Australian boys have ended their lives due to being tricked by sextortion scammers … blackmailing them after an exchange of nudes'. In addition, Ms Reist noted that 'almost all' of the sextortion scams targeting minors were enabled by Instagram.
The failure of social media platforms to protect children
3.221In light of the evidence presented above, numerous participants pointed to the failure of social media companies to protect children from online child sexual abuse and exploitation. For example, Childlight UNSW argued that 'the disproportionate use of child-focused social media sites by child sex offenders, clearly demonstrates that social media companies have failed to execute their child protection responsibilities'.
3.222Childlight UNSW also contended that the 'relative absence of content moderation obligations' under the 'Web 2.0 business model' has encouraged and empowered users to share their preferred content—including CSAM:
The relative lack of regulation of social media and other electronic service providers has resulted in the delivery of abuse-facilitating services and opportunities to child sex offenders whose preferences cannot be catered for in other, better regulated sectors.
3.223In addition, eSafeKids highlighted a number of gaps and shortcomings in the current reporting of CSAM by digital platforms:
… whilst YouTube takes steps to detect child sexual abuse, it does not use these tools on Chat, Gmail, Meet and Messages. Additionally, Google is not blocking links to known child sexual abuse and exploitation material.
… As you can see from the above data sharing child sexual abuse material is prolific on [Meta] platforms. However, when content is detected on a Facebook user's account, it does not mean Meta owned services Instagram and/or WhatsApp will be notified. Therefore, an account may be removed from one but can continue operating on the other. This points to gaps in safety measures.
3.224To this end, Mr Braga noted there are already technological solutions that digital platforms could put in place to prevent CSAM being shown:
There are examples like SafeToNet, who have created a technology they call HarmBlock that does exactly that. That stops a device from being able to show child sexual abuse material. There's another social media platform called Yubo which has also put in place protections against child sexual abuse material being shown through their platform. I raise those as examples of what is technically possible.
We're talking about some of the most sophisticated technology companies on the planet. It shouldn't be beyond the realms of possibility for them to also put in place these types of controls and increasingly improve it, using the AI capability to increase the level of precision around detection and blocking it. In our view, this could operate like an X-ray machine at an airport, so it happens in real time on the device, detects the material and blocks it, stopping it from being shown.
3.225In addition, at least one participant noted that platforms' arguments against content regulation tend to fall away once their revenue is threatened. As Dr Zac Seidler of Movember told the committee:
There's a really interesting example that people from Childlight told me about. … Pornhub has billions and billions of videos of child sexual abuse on their platform. They continue to say, 'We don't know how much there is. We're not really sure. We don't actually understand, but we promise we don't have any.' … There was a researcher who gained access and spent hours and hours finding all of the CSAM content. They presented it to a friend, who happened to be a friend of the CEO of Visa. Visa is the way in which you make payments on Pornhub. He called the CEO of Pornhub and said, 'We're out unless you get rid of this content.' Suddenly over a billion videos were gone within 24 hours.
3.226The regulation of social media companies in relation to online harms is discussed in Chapter 5.