Chapter 5 - Extremism and the online environment

5.1 Throughout the inquiry some witnesses drew out the difference between online and 'in real life' extremism. The Centre for Culture and Technology (CCAT), Gender Research Network (GRN) and Perth Extremism Research Network (PERN) defined online extremism:

…as internet activism that advocates, supports, or recruits for ideologically extreme political positions in online spaces. Online extremism often manifests as violent rhetoric levelled against politically opposed groups, online harassment of individuals, or recruitment campaigns for extremist groups.[1]

5.2 It is difficult to know who engages in online extremism, as that engagement usually occurs in 'largely anonymous forums…[that] frequently move as they are shut down for breaking Terms of Service'. The CCAT, GRN and PERN observed that generally 'on these websites it is evident that posters tend to be young men, often but not exclusively college or university aged'.[2]

5.3 The Addressing Violent Extremism and Radicalisation to Terrorism Network (AVERT Research Network) submitted that right-wing extremists (RWEs) were some of the first 'adopters of internet technology, recognising its huge potential as a communications and mobilisation tool'.[3]

5.4 The AVERT Research Network opined that people are searching online spaces for community:

Especially since many traditional community forums and spaces are decreasing IRL or 'in real life'. As more time is spent online, more communities are found online. One doesn't need to travel, formally sign up to an organisation or even attend a meeting to feel and be part of a network or community. Rather, individuals and groups can engage in an extremist movement through 'the connective tissue' of the internet.[4]

5.5 The Australia/Israel & Jewish Affairs Council (AIJAC) suggested that the threats made by nationalist and racist violent extremist groups online have not resulted in a serious terrorist attack in Australia:

This is perhaps partly because security agencies are extremely effective in monitoring and infiltrating such groups, but to a much greater degree it reflects the lack of capability and possibly even genuine aspiration to conduct mass-casualty attacks.[5]

5.6 The AIJAC saw nationalist and racist violent extremist (NRVE) groups that operate online:

…as little more than extremist fraternities and, despite their hateful rhetoric, [they] have never translated their millenarian beliefs about a "race war" into real-world violent action or even inspired the "leaderless resistance" they claim to champion.[6]

5.7 When NRVE groups have conducted 'real life' activities, those activities 'are overwhelmingly characterised by graffiti and vandalism as well as hateful, intimidating posters, flyers and messages'.[7]

5.8 Extremists continue to find innovative ways to use new technologies in their planning. For example, Mr Mike Burgess, Director-General of Security, Australian Security Intelligence Organisation (ASIO), revealed that extremists have attempted to use artificial intelligence (AI) for that purpose. He stated that ASIO is:

…aware of offshore extremists already asking a commercially available AI program for advice on building weapons and attack planning. If the programs refuse to provide the requested information, the extremists try to bypass the ethical handbrakes.[8]

5.9 Tech Against Terrorism identified 'more than 5,000 pieces of AI-generated content produced by terrorist and violent extremist actors (TVEs)'. That content is used 'to augment current practices of creating and disseminating TVE propaganda across both Islamist and far-right ideologies'. Its:

…investigations indicate[d] that there is a low risk of widespread adoption at the moment. However these experiments do indicate an emerging threat of TVE exploitation of generative AI, in the medium to long term, for the purposes of producing, adapting, and disseminating propaganda.[9]

5.10 Examples of the ways in which TVEs could use generative AI are outlined in Figure 5.1.

Figure 5.1 Terrorist and violent extremist applications of generative AI

Source: Tech Against Terrorism, Early terrorist experimentation with generative artificial intelligence services, November 2023, p. 3.

5.11 Mr Burgess warned that 'the internet is the most potent incubator of extremism. AI is likely to make radicalisation easier and faster'.[10]

5.12 Submitters recognised that RWEs use different online platforms to disseminate their ideas, connect with like-minded people, recruit new members and intimidate others.[11]

5.13 Those online platforms have been used to promote 'fringe' ideas, particularly among young people, who are likely to come into contact with these ideas online even if they are not actively seeking them out:

Not only are contemporary young people avid users of the Internet/social media, but the sheer intensity of their networked engagement means coming into contact with 'fringe' ideas. Accordingly, online hate groups target their sites, chat room discussions, iconography, and posts to a youth constituency. This has produced a small but significant 'dystopian' cluster of extreme right youth on the Internet in Australia.[12]

5.14 The AVERT Research Network stated that the internet is where many budding RWEs 'first encounter extreme ideologies and communities'. The internet exposes new recruits to extreme ideas and helps RWEs spread those ideas across national borders and between different communities and subcultures.[13]

5.15 The AVERT Research Network referred to an academic review which stated:

Today, the internet is no longer just one part of the spectrum of extremist activism – it has become a primary operational environment, in which political ideologies are realized, attacks planned, and social movements made.[14]

5.16 The Australian Muslim Women's Centre for Human Rights (AMWCHR) explained that the anonymity afforded to individuals on the internet can embolden them 'to adopt extreme beliefs, and provide an environment to express those beliefs'.[15]

5.17 Scholars with expertise in Literary Studies, Cultural Studies, and Medievalism Studies suggested that a variety of online platforms are used by RWEs to form 'a vast, interconnected online ecosystem that includes not only social media platforms, but also exploitation of social aspects of mainstream entertainment platforms such as YouTube, video-games and books'.[16]

5.18 The AVERT Research Network explained that the loosely regulated and globally connected online environment:

…has enabled Australian RWEs to build a de-territorialised movement and connect with fellow RWEs globally. RWEs hold online forums and conversations, like, share and comment on each other's posts, and attack the various 'out-groups' they stand against.[17]

5.19 Dissemination of NRVE material online 'is the most common and widespread activity undertaken by extremists'. Individuals who share extremist material online often do so 'to provoke a negative reaction from the public and to intensify ideological fervour within an ideological community'. There are other motivations behind sharing this kind of material, including:

establishing new social networks and collective identity;

engaging in rebellious and 'edgy' behaviour;

seeking attention and validation from likeminded users; and

expressing a legitimate interest in history, as well as local and global politics.[18]

5.20 The online environment provides extremists with a powerful avenue to spread their views. As the Australian Federal Police (AFP) reported, young people appear to be particularly exposed to the propaganda shared by extremists:

The online environment has allowed violent extremists to magnify their impact, particularly during COVID-19, by sharing extremist propaganda and material from anywhere in the world. Australia is experiencing an increase in young people being investigated by the AFP, with the youngest being 13 years of age. Young people are more vulnerable to being targeted by extremists and influenced to adopt extremist ideology.[19]

Promotion of extremism online

5.21 The Christchurch terrorist used the internet to attract attention to his attack. In the minutes leading up to his attack on two mosques 'he posted an anonymous message to an online discussion board called 8chan, which was known to attract people with white supremacist and anti-immigrant views'. The post 'directed readers to his Facebook page' where they could read his manifesto and view a livestream of the attack that he filmed with a GoPro camera.[20]

5.22 Despite efforts to remove them, footage of the attack and the manifesto continue to circulate on the internet. The New Zealand Chief Censor classified the footage and the manifesto 'as objectionable…making it illegal to possess or distribute them. Both are, however, still available on websites based outside New Zealand'.[21]

5.23 Using online misogyny as an example, the CCAT, GRN and PERN reflected on the role that online communities can play in intensifying the views of extremists:

Online communities enable people who might identify as incels to connect with others who also feel sexually unsuccessful…However in doing so, these communities also act as an echo chamber, reinforcing feelings of disenfranchisement, anger, and disgust, potentially also exposing them to more extreme content. Incels have reported that they found themselves making highly misogynistic comments that they feel they would not have made in 'real life', as those present on the board reward such content and there is no one outside of it to call them out.[22]

5.24 ASIO stated that '[o]nline platforms remain significant enablers of radicalisation, the spread of propaganda, and other violent extremist activity'. RWEs use those platforms to create 'rapid and easy connections between individuals globally'. The propaganda shared by RWEs online can 'aid recruitment, radicalise and inspire new adherents or reinforce existing beliefs'. ASIO highlighted three particular challenges posed by the online environment:

Some propaganda, such as detailed instructions for committing violence, can build violent extremists' capability.

The reach of extremist content online means individuals are radicalising very quickly—in days and weeks.

It remains concerning that a small number of minors continue to be attracted to violent extremist propaganda and ideologies, with most of the radicalisation of minors occurring online.[23]

5.25 Mr Burgess summarised his concerns with the online environment:

Based on what I see, the internet is the world’s most potent incubator of extremism.

And social media is the world’s most potent accelerator of that extremism.

The digital platforms are not generating new security threats, but almost certainly they are amplifying, accelerating and aggravating existing ones.

I am particularly concerned about the implications for young people, the most enthusiastic participants in the digital ecosystem.[24]

5.26 The AFP echoed ASIO's concerns:

The prevalence of the internet has enabled a borderless environment for extremists. Technological advances are contributing to an online environment whereby extremists can connect via new and innovative platforms on a global scale that feed digital algorithms.[25]

5.27 The AFP submitted that the online environment has enabled NRVE individuals and groups to more readily connect with each other. That has contributed to '[t]he rapid growth and increasing geographical diversity of NRVE groups in recent years'.[26]

5.28 The AFP submitted that online environments pose unique challenges, particularly those that 'are becoming increasingly saturated with violent extremist content'. There are online environments where the volume of violent extremist content risks 'creating echo chambers lacking alternative content and reinforcing extremist narratives'.[27]

5.29 The Office of the eSafety Commissioner (eSafety) reflected on the outcome of the spread of extremist propaganda on online platforms:

Counter-terror experts and former extremists have described hateful and extremist online content and activity as having a propagandising effect that desensitises individuals to hateful attitudes towards perceived outgroups, normalises and glorifies acts of violent extremism, and exposes susceptible individuals to potential recruitment and further radicalisation by established extremist individuals and groups.[28]

5.30 Research conducted by the Australian Institute of Criminology (AIC) found 'evidence [that] shows significant (if varying) proportions of violent extremist offenders—as many as 60 percent—now either radicalise primarily online or have significant online influences in their radicalisation'.[29]

Use of online platforms to intimidate

5.31 The AMWCHR shared data from the latest Islamophobia in Australia Report, which indicated that 44 per cent of reported Islamophobic incidents took place online. A significant number of those incidents were connected to groups promoting RWE ideology:

Of the online perpetrators, one-third (36%) were associated with far-right groups and/or ideology. Social media provided a fertile ground for hate groups through the free exchange of divisive and hateful viewpoints, which are largely unregulated and unmonitored. Far-right alternative media and social media outlets reframe, recontextualise and reproduce news stories by carefully selecting information from non-mainstream sources to justify their ideological agenda. Accordingly, news items appearing to be neutral are recrafted as partisan and combined with disinformation and propaganda for the public to consume.[30]

5.32 The AMWCHR observed that online platforms, including those that are not explicitly extremist, shared footage of the Christchurch attack. Exposure to that 'extremely distressing footage has had severe physical and mental impacts on the victims, their families, and surrounding communities'. The availability of that footage online 'provides content for other extremists to aspire to, and several 'copycat' attacks that cite the Australian terrorist as inspiration have already occurred'.[31]

Extremist activity on online platforms

5.33 The Organisation for Economic Cooperation and Development (OECD) compiles an annual list of the 50 most popular online services for the dissemination of terrorist and violent extremist content (TVEC).[32] Online services include:

…social media platforms, online communications services, file sharing platforms, and other online services that enable the uploading, posting, sharing and/or transfer of digital content and/or facilitate voice, video, messaging or other types of online communications.[33]

5.34 Dr McSwiney, Dr Richards, Dr Sengul, Callum Jones, and Cam Smith made a distinction between 'mainstream social networking sites (such as Facebook, Instagram and Twitter) and alternative social networking sites (such as Gab, Truth Social and Telegram)'. In their view:

Mainstream social media platforms and social networking sites remain an important space for right-wing extremists to congregate and organise, and represent opportunities for right-wing extremists within the Australian context to gain exposure to mainstream online discourse, which can afford opportunities to bring about the mainstreaming of extremist discourses. Right-wing extremists are active on a range of mainstream social media platforms, such as YouTube and TikTok, as well as mainstream social networking sites such as Facebook, Instagram and X (previously Twitter).[34]

5.35 Mr Joshua Fisher-Birch, Content Review Specialist and Researcher, Counter Extremism Project (CEP), made a similar distinction:

Online right-wing extremism spreads because of determined propagandists, a lack of content moderation and inaction on social media and communications platforms. Telegram and X, formerly known as Twitter, are two crucial platforms for the growth of right-wing extremist groups. Telegram is an echo chamber and site for radicalised individuals, and X is an online location for appealing to individuals not necessarily part of extreme-right movements. Telegram remains the most critical communications app for sharing propaganda texts, photos and videos. It rarely removes channels or accounts, making it a favourite of the extreme right.[35]

5.36 The Multicultural Youth Advocacy Network agreed that RWEs use mainstream social media platforms to spread and promote their ideology:

Studies indicate that platforms such as Twitter, YouTube, and community forums like 4chan and Reddit play a significant role in garnering public backing for far-right initiatives and coordinated acts of violence. These platforms, alongside other social media networks, discussion sites, specific online gaming environments, chat services, talk radio, and traditional print media are facilitating the spread of far-right ideologies in various ways.[36]

5.37 Some inquiry participants drew attention to the business model of social media platforms. Dr Imogen Richards, for example, highlighted that those platforms 'are not designed to encourage nuanced debate but are architecturally designed to promote addictive material in pursuit of virality'. The algorithms that those platforms use to prioritise and promote content focus on driving user engagement with the platform. To increase user engagement, the algorithms will often promote extreme content which 'presents a potentially quite significant challenge, not only in addressing the spread of right-wing extremist materials but also in promoting and advancing a healthy democratic public sphere'.[37]
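
The dynamic described above can be illustrated with a deliberately simplified sketch. The weights, field names and posts below are hypothetical and do not reflect any platform's actual ranking system; the sketch only shows how optimising purely for predicted engagement can surface the most provocative material:

```python
# Illustrative toy ranker: scores posts purely by engagement signals.
# All names, weights and posts are hypothetical, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> int:
    # Shares and comments are weighted more heavily than likes because they
    # propagate content to new audiences and keep users on the platform.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sorting solely on engagement means provocative, outrage-driving
    # material rises regardless of its accuracy or social harm.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=40, shares=2, comments=5),
    Post("Inflammatory conspiracy claim", likes=25, shares=30, comments=60),
])
print([p.text for p in feed])  # the inflammatory post ranks first
```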

5.38 Dr Richards opined that social media companies also use their platforms to manipulate their users:

…these platforms are actually used to convince people to behave and vote in a certain way, largely through disinformation, fomenting misinformation and so on…[T]here should be more clarity about what is happening. There is that, as well as the by-default algorithmic promotion of virality through psychologically addictive materials.[38]

5.39 The Victorian Government recognised that a range of social media platforms 'are popular among young people and are more commonly used for benign purposes including general online communication, gaming, and content sharing'. For that reason, it explained that using these platforms 'is not necessarily indicative of an escalation in an individual's radicalisation trajectory, but nonetheless highlights an increased possibility of exposure to NRVE content and discourse'.[39]

5.40 eSafety is concerned about '[t]he spread of terrorist and extremist material on social media and its role in online radicalisation…both here in Australia and overseas'. It is particularly:

…concern[ed] about how violent extremists weaponise technology like live-streaming, algorithms and recommender systems and other features to promote or share this hugely harmful material.

The tech companies that provide these services have a responsibility to ensure that these features and their services cannot be exploited to perpetuate such harm, which goes to the heart of eSafety's safety by design principles and mandatory industry codes and standards.[40]

5.41 Under the Online Safety Act 2021 (OSA), the eSafety Commissioner has the authority to declare an Online Crisis Event (OCE):

…when material that depicts abhorrent violent conduct is shared or spread online in a manner likely to cause significant harm to the Australian community, in circumstances warranting a rapid, coordinated, and decisive response by industry and government, even if the material is on a platform or hosting service outside Australia.[41]

5.42 During an OCE, the eSafety Commissioner may 'request or require [internet service providers] to prevent access to the relevant material for a limited time by taking steps like blocking domain names, URLs, and IP addresses'. These steps are intended to restrict the spread of material related to the OCE.[42]
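
By way of illustration only, the following sketch shows the kind of indicator matching such blocking implies. The domains, URL and IP address are placeholders (documentation-reserved values), and real blocking is implemented in ISPs' DNS resolvers and network equipment rather than application code:

```python
# Conceptual sketch of ISP-side blocking during an Online Crisis Event.
# Blocklist entries are hypothetical placeholders for the kinds of
# indicators (domains, URLs, IP addresses) providers may be required to act on.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-extremist-forum.invalid"}
BLOCKED_URLS = {"https://example-host.invalid/attack-footage"}
BLOCKED_IPS = {"203.0.113.7"}  # documentation-range address

def should_block(url: str, resolved_ip: str) -> bool:
    # A request is refused if any indicator matches.
    parsed = urlparse(url)
    return (
        parsed.hostname in BLOCKED_DOMAINS
        or url in BLOCKED_URLS
        or resolved_ip in BLOCKED_IPS
    )

print(should_block("https://example-host.invalid/attack-footage", "198.51.100.1"))  # True
print(should_block("https://news.example/", "198.51.100.1"))  # False
```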

5.43 The National Counter-Terrorism Plan stated:

The Online Content Incident Arrangement (OCIA) is Australia’s domestic crisis response protocol aimed at stopping the viral spread of online terrorist and extreme violent content related to an Online Crisis Event, such as a terrorist incident involving an online element (e.g. a livestream).[43]

5.44 The OCIA ‘outlines the steps to be taken once an online incident is identified that involves terrorist or extreme violent material that is widely accessible to Australians online’. The arrangement is voluntary and ‘designed to enhance communication flows and information sharing between government and industry stakeholders during a potential Online Crisis Event’.[44]

5.45 eSafety used its powers under the OSA to send transparency notices to a range of large online platforms 'to find out what they are doing to protect Australians from terrorist and violent extremist material and activity'. It 'will publish appropriate findings in due course'.[45]

5.46 Those transparency notices require the online platforms 'to answer a series of detailed questions about how they are tackling the issue'. The companies that operate the online platforms had 49 days to provide responses to the questions.[46]

5.47 Mr Nathan Smyth, Deputy Secretary, National Security and Resilience, Department of Home Affairs (Home Affairs), informed the committee that his department is:

…deeply concerned about violent extremist content online. IMVE movements exploit the online environment, including social media platforms, and encrypted applications for propaganda, radicalisation and recruitment. They exploit illegitimate, popular grievances and change their online rhetoric with major world events and trends. They push certain crisis narratives to undermine confidence in public institutions and radicalise individuals through creating a sense of urgency. The live streaming of incidents continues to be a trend with those attacks. Extremist groups deliberately target vulnerable and young individuals via social media for radicalisation as they're more susceptible to influence. IMVE actors rely on platforms subject to less internal regulation to spread their propaganda.[47]

5.48 The AIC argued that content available in the online environment can be more persuasive than offline material:

The quality and variety of online violent extremist content can make it far more persuasive than content available offline. Professionally produced and edited videos, live streamed violent extremist attacks and demonstrations, and even video games have become immersive and emotionally compelling online vehicles for violent extremists to disseminate their ideas, 'perform' violence, and radicalise larger numbers of people. Content is also made more persuasive by the adaptability and volume of content, and the currency and regularity of updates, which promote the perception that violent extremists are 'on top of' current events and capable and active in fighting for the cause.[48]

Deplatforming of extremist actors from social media

5.49 Many mainstream social media and social networking platforms have policies that 'preclude certain types of speech'. Those policies present significant challenges to RWEs, including:

…the risk of being removed from these spaces (known as "deplatforming"). This ability to deplatform extremist content illustrates the role that mainstream social media and social networking platforms play as gatekeepers of online discourse.[49]

5.50 Academics from Victoria University contended that an increase in content moderation on social media platforms including Facebook, YouTube and Twitter resulted in those sites 'becoming gradually less hospitable to RWEs. This resulted in a shift to much smaller platforms that specifically catered to RWEs, such as 8chan and Gab'.[50]

5.51 These alternative platforms have 'more relaxed approaches to content moderation'. However, they also tend to have smaller numbers of users, some of whom are likely to 'already adhere to right-wing extremist ideology, hampering the opportunities for right-wing extremists to effectively recruit and radicalise non-extremist users'.[51]

5.52 There is also evidence that extremists 'continually adapt their methods to technological developments'. They adjust their practices 'to avoid content moderation. On mainstream online platforms, for example, they have been developing tactics to evade automated detection tools'.[52]
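
A stylised sketch of this cat-and-mouse dynamic follows. The banned term, substitution table and example post are invented for illustration, and production moderation systems rely on far more sophisticated classifiers than keyword matching:

```python
# Illustrative only: a naive keyword filter, the character-substitution
# tricks (leetspeak, inserted punctuation) that defeat it, and a
# normalisation step that partially counters the evasion.
import re

BANNED_TERMS = {"racewar"}  # placeholder, not a real moderation list

# Common substitutions seen in coded extremist posting.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"})

def naive_match(text: str) -> bool:
    return any(term in text.lower() for term in BANNED_TERMS)

def normalised_match(text: str) -> bool:
    # Undo substitutions and strip separators before matching.
    cleaned = text.lower().translate(SUBSTITUTIONS)
    cleaned = re.sub(r"[^a-z]", "", cleaned)
    return any(term in cleaned for term in BANNED_TERMS)

post = "r4ce.w@r"
print(naive_match(post))       # False - the evasion succeeds
print(normalised_match(post))  # True  - normalisation recovers the term
```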

5.53 The OECD reported that some online platforms are 'rely[ing] more heavily on automated tools to detect and remove TVEC, which has generally increased the removal of lawful content and unjustified censorship'. Many of those platforms have not provided sufficient details on how their content moderation systems operate 'and most of them either have no notifications and appeal mechanisms in place, or do not provide any information in this regard'.[53]

5.54 eSafety administers a complaints-based regulatory regime that addresses:

cyber abuse material targeted at Australian adults (adult cyber abuse);

cyberbullying material targeted at Australian children (cyberbullying);

non-consensual sharing, or threatened sharing, of intimate images (image-based abuse); and

Class 1 and Class 2 material (illegal and restricted content).[54]

5.55 eSafety has the power to 'issue removal notices to the online service on which the content is available and the hosting service provider that hosts the content'.[55]

5.56 eSafety reported that most of the complaints it receives 'relate to child sexual exploitation and abuse material'. It indicated that no more than one per cent of the complaints it receives 'relate to content that advocates terrorist acts or material that incites or instructs in matters of crime or violence' (see Table 5.1).[56]

Table 5.1 Number and percentage of complaints made to the eSafety Commissioner under the online content scheme, by ground or category of harm

Reason | Number | Percentage of total
Child sexual abuse/child abuse/paedophile activity | 28 971 | 87%
Extreme, offensive or adult content | 1744 | 5%
Sexually explicit | 1501 | 5%
Promotion of incitement or instruction in crime | 367 | 1%
Violence | 304 | 1%
Violent extremist material | 190 | 1%
Advocates terrorism | 48 | <1%

Source: eSafety, Annual Report 2022–23, p. 218.

Extremist activity on gaming platforms

5.57 It is estimated that 3.2 billion people play video games worldwide. Since only a very small number of those people commit violent acts, exposure to video games on its own does not lead to radicalisation.[57]

5.58 The AVERT Research Network stated that video games and the culture around gaming have become arenas for radicalisation, particularly for young people. It cited research that indicated that video games, in conjunction with other factors, influence perceptions of violence. Video games can also 'fulfil psychological needs of competency and social connection', which can be leveraged by extremists during the radicalisation process.[58]

5.59 The OECD indicated that 'gaming services and associated platforms are increasingly used by terrorists and extremists to disseminate digital propaganda and for purposes of radicalisation and recruitment'.[59]

5.60 There are also communities on gaming platforms that promote prosocial views:

There are lots of other gaming communities as well – some are actively anti-racist, some are actively pro-queer, some are actively feminist, some are all of these and more – but there's a subculture within gaming that is racist, homophobic, misogynist, and people who are part of that often associate their identity, their sense of self with that culture.[60]

5.61 Young people are generally unreceptive to radical perspectives in online environments. However, there is a small minority who are susceptible to those narratives. Policy responses should focus on how to identify that vulnerable minority and ensure that their radicalisation journey is cut short.[61]

5.62 Extremist recruitment processes on online gaming platforms may be less obvious than other recruitment techniques. Extremists may use gaming platforms to insidiously spread propaganda and potentially radicalise others. Ms Atitaya (Angela) Suriyasenee, Research Intern, Cyber, Technology and Security, Australian Strategic Policy Institute (ASPI), discussed how extremists modify popular online games to align with their views:

…they use these online spaces to create or alter online environments to mask or to realise their violent intentions…and use sandbox games, where they can enact violent fantasies and construct alternative realities that align with their extremist views.[62]

5.63 Her colleague Mr Henry Campbell, Strategic Engagement and Program Manager, Northern Australia Strategic Policy Centre, and Program Manager, Counterterrorism, ASPI, described how people can become exposed to radical views and ideas on Discord:

It is a social media-esque platform that's designed for in-game communication over microphone or to just share content—memes, messages and that sort of thing—but it has an inherent bent towards privacy, so you can also see users going slowly down chains of private servers until they may end up in a far more extreme environment than perhaps they intended to. This could happen in one session, or this could happen over a course of months as they socialise on these platforms.[63]

5.64 Video games can be used by extremists 'with the specific aim of targeting young people'. For example, a former American white supremacist explained that extremists 'target "marginalised youth" using popular games such as Fortnite, Minecraft and Call of Duty'.[64] He explained that in recruiting him:

They appealed to my desperate need for identity, community and purpose. I was bullied and they provided safety. I was lonely and they provided family. That's how they draw people in, with a sense of belonging and 'humanitarianism'.[65]

5.65 Extremists are also known to:

…have created their own games, by modifying popular existing games, which have been posted freely on mainstream gaming sites, and used in-game chat functions and game adjacent platforms (such as Twitch) in recruitment. Such games allow RWEs to act out violent fantasies, such as murdering Jewish people, people of colour and queer people. Playing games, including mainstream popular titles, together also allows RWEs to reinforce each other's beliefs, and bond over so-called "dark humour".[66]

5.66 In addition to creating and distributing their own video games, extremists have reinterpreted:

…the storylines of popular games to suit their worldviews in gaming forums. This might include pointing to a game with a conspiracy theory at the heart of its narrative as reflecting realities of corrupt government, or to the inherent superiority of a fantasy game species, such as elves, to draw false and racist parallels with reality.[67]

5.67 Ms Suriyasenee mentioned two initiatives adopted by other countries that aim to provide young people with healthy online experiences. She referred to the Center for Digital Youth Care in Denmark and Next Gen Men, a Discord forum operated by a Canada-based organisation.[68]

5.68 The Center for Digital Youth Care provides professional counselling to vulnerable young people through digital media. Each year it provides approximately 2000 online counselling sessions to young people.[69]

5.69 It also develops digital educational tools for Danish municipalities to implement. Interested counsellors are provided with training to support young people in their local area.[70]

5.70 Home Affairs submitted that online gaming platforms are being used to 'radicalise and recruit an increasingly younger cohort of internet users'.[71]

5.71 eSafety has not conducted research into ideologically motivated extremism. However, it has conducted research into online hate, including in relation to children and young people's experiences of hate on online gaming platforms.[72] The main findings of that research include:

20% of teen gamers had seen or heard other players share or use hate speech.

11% had seen or heard other players expressing or sharing misogynistic ideas relating to the belief that men are superior to women.

8% had seen or heard other players expressing or sharing ideas that people from one race, culture, religion, or nationality are better than other people.[73]

5.72 In October 2022, the AFP reported that it had 'evidence of extremist groups accessing popular online games in a bid to recruit young Australians'. It was aware of 'a concerning trend of members and associates of extremist groups targeting young people to expose them to dangerous content – including violent recreations of actual terrorist events – across online gaming platforms'.[74]

5.73 The AFP advised 'that nationalist, racist and violent extremist content in online games is almost certainly part of a radicalisation process for some young people'.[75]

5.74 In December 2023, the AFP indicated that extremists were increasingly using gaming platforms to recruit youth. There were examples of:

…IMVE and [religiously motivated violent extremist] individuals and groups creating their own bespoke digital games and platforms to access potential recruit supporters.

The AFP is aware that some of these extremists are building and releasing games that really are just a trojan horse to promote their worldview, blurring the reality of young users with the aim to radicalise them.[76]

Extremist activity on encrypted communication applications

5.75 Online anonymity and the use of anonymising technologies were raised as major concerns during the inquiry, particularly by law enforcement and intelligence agencies.

5.76 Encryption technologies allow individuals to conceal their identity online or erase 'any established online footprint which may exist on other applications'.[77]

5.77 Communication applications including Telegram, WhatsApp, and Messenger are also used by RWEs. Dr McSwiney et al explained that these apps are:

…not public facing, making it difficult to establish how, and to what degree, right-wing extremists are using messaging apps in Australia. However, these types of apps have been found to be an important part of extremist online environments, and it has been established that right-wing extremists in Australia used Telegram broadly in their dissemination of online propaganda.[78]

5.78 The CEP submitted that Telegram is 'one of the important online organizing spaces for the international violent extreme right'. The messaging app:

…allows for the anonymity of users and includes chats enabled with end-to-end encryption. Channels allow administrators to share content such as text, images, videos, and links with subscribers. Administrators can also enable comments on posts. Telegram chats allow users to interact with one another directly, which users may have to apply to join.[79]

5.79 According to an analysis by the Centre for Research and Evidence on Security Threats, some extremist groups have increased their operational security online. In the period from 2016 to 2018, 'locating online extremists and their content was relatively straightforward'. Since that time, some extremists have subjected applicants to vetting procedures before allowing them to join online chat groups. Some of those procedures include having applicants 'provide videos of self-harm (knowing that this would preclude law enforcement)'.[80]

5.80 Mr Stephen Blanks, Executive Committee Member, New South Wales Council for Civil Liberties, hoped that ASIO would not be permitted 'to self-authorise requirements for platforms to provide decryption'. He stated that any request for decrypted communication by ASIO 'should be subject to a warrant process like any other. A search warrant is an absolute minimum'. There may, however, be technological limitations: some platforms cannot comply with a request for decrypted communications because they do not hold the decryption key.[81]
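
The key-possession point can be made concrete with a minimal sketch, assuming the Python cryptography package. It shows a generic Diffie-Hellman pattern rather than the actual protocol of any named messaging service: session keys are derived on the users' devices, so a relaying platform (or an interceptor) holding only public keys and ciphertext has nothing it can decrypt:

```python
# Minimal end-to-end encryption sketch (generic pattern, not any app's protocol).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(own_private: X25519PrivateKey, peer_public) -> bytes:
    # Each device combines its own private key with the peer's public key.
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"chat").derive(shared)

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
# Only the public halves cross the network; the platform never sees a secret.
key_alice = derive_key(alice, bob.public_key())
key_bob = derive_key(bob, alice.public_key())
assert key_alice == key_bob  # both devices independently derive the same key

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key_alice).encrypt(nonce, b"meet at noon", None)
# The relaying server (or an interceptor) holds only this:
print(ciphertext.hex())  # unintelligible without a key the server never possessed
print(ChaCha20Poly1305(key_bob).decrypt(nonce, ciphertext, None))  # b'meet at noon'
```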

5.81 The AIC stated that reducing the anonymity of users of online platforms would decrease the risk of radicalisation. The anonymous nature of the online environment can contribute to increased antisocial behaviour on many online platforms as that anonymity:

…bolsters a sense of security on online platforms where individuals feel the risk of identification, detection and apprehension is low. It is also associated with deindividuation, in which people lose their sense of individuality and instead align their identity with a group, limiting the sense of responsibility for antisocial behaviours. Anonymity online can weaken inhibitions and encourage people to behave and engage with content in ways they would not in the real world.[82]

5.82 It has been argued that online anonymity can also have socially beneficial outcomes as:

Critically, the internet is a key outlet for disseminating information that has been suppressed by authoritarian regimes, and empowering resistance against corrupt or criminal practices by individuals, companies (eg the Panama Papers) or governments (eg Arab Spring). As such, it is important to be aware of how measures to reduce online anonymity and track individual identities might be misused to suppress the free expression of ideas and information.[83]

5.83 Mr Burgess described the dual nature of encryption technology: on the one hand, it 'protects our privacy and enables our economy…and [on the other it] creates safe spaces for violent extremists to operate, network and recruit'.[84] ASIO reported that violent extremists are increasingly and 'routinely using secure messaging apps, virtual private networks (VPNs) and fake emails to avoid detection'. The use of these encrypted and secure communication platforms 'damages intelligence coverage in many priority counter-terrorism cases'.[85]

5.84 The Director-General reiterated that '[e]ncryption is clearly a good thing, a positive for our democracy and our economy. It protects privacy, it enables communications and transactions'. However, it can also be used by individuals monitored by ASIO to 'hide their activities'. That nefarious use of encryption technology has become more prevalent, with Mr Burgess stating:

Even ASIO's most unsophisticated targets routinely use secure messaging apps and virtual private networks to avoid detection and hide their activities.

In 2021, I revealed that encryption damages intelligence coverage in 97 per cent of our priority counter-terrorism cases.

The impact is now worse.

It is virtually 100% in our priority counter-terrorism and counter-espionage cases.[86]

5.85 Mr Burgess called for technology companies to do more to allow ASIO to legally intercept encrypted messages in very specific cases involving suspected threats to national security. He illustrated his argument with the following example:

ASIO is investigating a number of Australians who belong to a nationalist and racist extremist network. They use an encrypted chat platform to communicate with offshore extremists, sharing vile propaganda, posting tips about homemade weapons and discussing how to provoke a race war.

The chatroom is encrypted, so ASIO's ability to investigate is seriously compromised. Obviously, we and our partners will do everything we can to prevent terrorism or sabotage, so we are expending significant resources to monitor the Australians involved. Having lawfully targeted access to extremist communications would be much more effective and efficient. It would give us real time visibility of their activities.

I believe technology should not be above the rule of law. Encryption, the companies implementing it and the individuals using it should be responsible and accountable. Under law, all of us have a right to do many things, up to a point. Privacy is important but not absolute. You lose that privilege if you engage in criminal conduct or threats to our security. In the case of the neo-Nazi chat room—should a nationalist and racist violent extremist's right to privacy outweigh the community's right to safety?[87]

5.86 The AFP raised similar concerns about the increased use of anonymising technology. It considered that 'the possession, viewing, sharing and distribution of material relating to extremist ideology…can be a precursor to serious terrorism offending'. Domestic and international law enforcement agencies collaborate to identify and pursue 'those responsible for the dissemination of such content if the conduct constitutes a criminal offence'. However, the growing use of anonymising technologies makes this work increasingly difficult.[88]

5.87 Mr Burgess explained that the powers his agency is seeking would only be used 'to help us lawfully intercept encrypted terrorist communications, in very tightly controlled and targeted situations. This includes the communications of nationalist and racist violent extremists'.[89]

5.88 He gave the example of a current ASIO investigation into:

…one network that uses an encrypted chat platform to communicate with offshore extremists, sharing vile propaganda, posting weapons tips and discussing how to provoke a race war. Our investigation is hindered because the chat room is encrypted, and so we're expending significant resources to find and then monitor the Australians involved. Having lawful and targeted access to their communications would be much more effective and efficient.[90]

5.89 ASIO would examine encrypted messages only in limited and tightly regulated circumstances. Mr Burgess reiterated:

We're not asking to peer inside the internet and look at everyone's conversations. It's a limited set of circumstances, and I have to justify why I need to do that. In the case of people we have identified through other means, who are part of that group I talked about, we do have legal grounds to justify getting access because we know from what some of them are sharing that these are violent extremists. One day, one of them will likely kill someone. Come the coronial inquiry or the police investigation, we will discover we were unable to get in there and get that intelligence to allow the police to save lives. That is going to be a hard day. I don't think that day's too far away, so we're asking for help.[91]

5.90 Mr Burgess explained that co-operation with industry is required to gain access to encrypted communication applications. Under existing legislation:

…in Australia today, to provide a carriage service under licence, you have to provide interception capability, and the carriers do that, but if you're a messaging app provider that sits on top of those carriage service providers and you're building some new app that provides privacy, and I acknowledge privacy is a good thing for people who are not breaking the law, you are not subject to the same requirement to provide an interception service directly. They're choosing to implement encryption in a way that means they can't help us. Effectively, they’re putting those communications beyond the rule of law.[92]

5.91 The Director-General argued that the rationale for additional powers granted to ASIO 'is simple. If I have a warrant, where I've demonstrated to the Attorney-General that I have grounds to seek that level of breaking their privacy, I would like them to help us do that'.[93]

5.92 If companies do not co-operate with ASIO’s request, Mr Burgess stated, ‘then of course that’s something I will take back to the government of the day, presenting my case for further action to be considered’.[94]

Detecting and addressing the online promotion of extremism

5.93 Some inquiry participants discussed the challenges associated with detecting the promotion of extremism in online environments. The promotion of extremist ideas online cannot be addressed appropriately unless that promotion is first detected.

5.94 Universities Australia (UA) reported that 'student clubs and societies often conduct club activities online and use social media platforms such as Discord and Facebook to form group chats and forums'. From its perspective:

The situation can present a challenge for the sector as these platforms are not university-authorised technology, thus there can be a grey zone when it comes to initiating misconduct proceedings where there is an allegation of concerning behaviour online. Regardless of whether misconduct proceedings can be initiated, universities provide impacted students with support.[95]

5.95 UA advised the committee that students could use university-authorised technologies inappropriately, including by sharing extremist content or promoting extremist ideologies. To address that, 'universities have established policies outlining acceptable IT use, social media guidelines and what potential consequences apply for breaching these policies'.[96]

5.96 Associate Professor Josh Roose, Centre for Resilient and Inclusive Societies (CRIS), suggested that the nature of the online environment provided challenges for law enforcement:

…because we have people sitting on Australian territory making not only extremist statements but spreading hate and threatening violence who aren't being held to account. They're able to hide behind anonymity online, but they're able to use VPNs and other ways of obscuring their identity, to dox, to attack, to threaten, and also, on apps like Telegram, to spread extreme messaging, including the use, for example, of Nazi symbology, which has now been banned but using it almost without any form of scrutiny or any form of accountability. It's the online environment as much as the offline where we've really got to focus enforcement efforts.[97]

5.97 It was submitted to the committee that the moderation of online platforms is an important, but imperfect, tool:

It requires active collaboration from companies that provide gaming platforms; most of which are based outside Australia. Moderation also have[sic] technical, ethical and legal complexities, particularly in the context of liberal democratic values. RWEs are adept at finding ways to avoid automated moderation tools, such as using coded language and deliberate spelling 'mistakes'. Moderation should, therefore, not be relied on as the only method for addressing RWE activity in video-gaming.[98]

5.98 It was further submitted that moderating or regulating online platforms is challenging, as the users of those spaces often construct cultures that can be difficult for outsiders to understand:

RWE cultures, and uses of popular culture, often use material and spaces that are unfamiliar to policy-makers and practitioners working on preventing and countering violent extremism. This poses unique challenges in understanding and countering violent extremism in Australia. Partnerships that bring together individuals and organisations with different kinds of expertise are essential to addressing these challenges.[99]

5.99 It was also submitted that those responsible for moderating and regulating online platforms should be encouraged to develop and maintain a deeper understanding of RWE culture. Developing that knowledge:

…will increase understandings amongst analysts and front-line workers as to the role such narratives play in processes of radicalisation and recruitment into violent extremism. Front-line workers' lack of literacy in the sub-cultures in which RWE often operate, such as trolling and video-gaming, has been identified as a significant danger. Online communications that may seem innocuous or technical, such as a discussion of the medieval Battle of Tours, a fantasy or historical video-game, or casting in Amazon's The Rings of Power, may in fact be red flags that may be used to track growing far-right consciousness and agitation.[100]

5.100 It has been argued that as the internet is a key arena for the radicalisation of individuals and the promotion of violent extremism, measures that target the online environment are becoming increasingly important. Those measures:

…include the detection and removal of online violent extremist materials, counter-messaging and strategic communications initiatives, media and digital literacy measures to help young people more effectively critique violent extremist messaging, and the fostering of partnerships between governments, media and technology companies.[101]

5.101 In his address to the Social Media Summit jointly hosted by the New South Wales Government and the Government of South Australia, Mr Burgess argued 'no form of technology, no corner of the internet, should be above the rule of law. Social media cannot be without a social licence'.[102]

5.102 The National Counter-Terrorism Plan stated that the Commonwealth government engages with the digital industry to reduce the risk of Australians being exposed to terrorist and extremist content online. That engagement occurs through initiatives including the Christchurch Call and the Global Internet Forum to Counter-Terrorism.[103]

5.103 The National Counter-Terrorism Plan indicated that eSafety has 'independent statutory powers to remove or restrict certain terrorist and violent extremist content'.[104]

5.104 Extremist content can also be reported online through the National Security Hotline or directly 'to eSafety for removal under Australia’s Online Content Scheme'.[105]

Transparency of social media companies and online platforms

5.105 Some inquiry participants highlighted the difficulties they experienced in reporting offensive material to social media companies and in conducting research into those companies' activities. For example, Mr Peter Sutton explained that it is difficult to contact representatives of these companies, making it:

…virtually impossible to report anything. There are no contact numbers, the reporting is automated. You cannot explain why a post is offensive and should be removed. As a result dangerous messaging is allowed to spread with misinformation and disinformation.[106]

5.106 He suggested that the contact information of social media companies should be available to the public. That information should include 'telephone numbers, email addresses, mail and correspondence addresses as well as the opportunity to interact with team members'.[107]

5.107 Dr Rachel Sharples, Lecturer, Social Sciences, and Member, Challenging Racism Project, Western Sydney University, told the committee that more research needs to be conducted into understanding the online environment and how it is being used by extremists. Multiple sectors of society, including government, academia, and non-government organisations, need to be involved in conducting that research.[108]

5.108 Researchers highlighted the difficulties they face in gaining access to data from social media platforms that would help them better understand how those platforms operate and how extremists use them.

5.109 For example, Dr McSwiney informed the committee that it has become increasingly difficult for independent researchers to get access to data from social media companies:

…platforms like Facebook, Instagram and X/Twitter have made it increasingly difficult, if not nearly impossible now, for academics to collect data on a large scale on these platforms to understand what's happening and for them to be held to account.[109]

5.110 Dr McSwiney referred to a 2019 article by Professor Axel Bruns from the Digital Media Research Centre at the Queensland University of Technology.[110] That article highlighted the effect that tighter control over data was having on researchers' ability:

…to investigate phenomena such as abuse, hate speech, trolling, and disinformation campaigns, and to hold the platforms to account for the role that their affordances and policies might play in facilitating such dysfunction.[111]

5.111 The eSafety Commissioner, Ms Julie Inman Grant, reflected on the implications that the discontinuation of social media monitoring tools could have for Australian democracy:

If we're killing social media monitoring tools that are giving us just a degree of transparency, then it's going to limit our ability to be able to intercede and stop those harmful activities from undermining democratic processes.[112]

5.112 The OECD reported that most of the online platforms that it monitors:

…prohibit in their Terms of Service or Community Guidelines the use of their technologies to foster terrorist and/or violent extremist activities. They deploy a range of proactive and reactive measures to prevent, detect, and remove TVEC, including through automated means. In case of violation of their policies, they may take enforcement actions such as sending warnings, blocking or removing content, suspending and permanently banning accounts, or referring users to legal authorities.[113]

5.113 The OECD indicated that there is limited transparency from online platforms about how they address the dissemination of TVEC. It argued that there is a:

…need for more precision in the Services' governing documents, when they have any; more consistency in the metrics and methodologies used to prepare transparency reports; more transparency in their approaches to content moderation; and more efforts to ensure due process and to safeguard fundamental rights and freedoms, such as the freedom of expression and the right to privacy.[114]

5.114 In the OECD's view:

Transparency on TVEC is paramount to better inform policymaking, assess the effectiveness of counter-measures and their potential impact on human rights. Addressing the TVEC issue in a holistic manner, supported by reliable evidence, can help reduce the volume and reach of TVEC online, prevent new terrorist attacks from being committed, and eventually save lives.[115]

5.115 eSafety also raised concern about the lack of transparency from social media platforms. It stated:

Transparency is a key pillar of the Global Internet Forum to Counter Terrorism and the Christchurch Call, global initiatives that many of these companies are signed up to – but it still is not clear what commitments they are living up to. It is of great concern that we do not know the answer to a number of fundamental questions about the systems, processes and resources that these tech behemoths have in place to keep Australians safe.[116]

5.116 eSafety expressed disappointment about the level of engagement that technology companies have had with the voluntary framework:

…disappointingly, none of these companies have chosen to provide this information through the existing voluntary framework – developed in conjunction with industry – provided by the OECD. This shows why regulation, and mandatory notices, are needed to truly understand the true scope of challenges, and opportunities to make these companies more accountable for the content and conduct they are amplifying on their platforms.[117]

5.117 eSafety collaborates across agencies, sectors, and jurisdictions to:

prevent online harms;

protect; and

foster proactive and systemic change.[118]

5.118 eSafety also engages with 'the National Intelligence Community on matters of national security interest'. Those matters 'relate to serious material such as child sexual abuse material, terrorist material, and crime and violence material'.[119]

5.119 In addition to the complaints schemes, eSafety may 'register industry codes and develop standards for eight sections of the online industry'. Those codes and standards set obligations for industry to address illegal and restricted online content. eSafety has registered six industry-developed codes that focus on Class 1 material, including:

…pro-terror material, which advocates terrorist acts as defined in the Criminal Code, outside of public debate, entertainment, or satire. The codes apply to social media services, app distribution services, hosting services, internet carriage services, equipment providers, and search engine services.[120]

5.120 The industry codes and standards require social media services to take the following measures to prevent and address pro-terror material (a simplified sketch of the first measure follows the list):

Using systems, processes and/or technologies to detect and remove certain types of pro-terror material.

Implementing systems, processes, and technologies that enable the provider to take appropriate enforcement action against end-users who breach policies prohibiting pro-terror material.

Providing tools which enable Australian end-users to report, flag, and/or make a complaint about pro-terror material accessible on the service.[121]
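
The first measure is often implemented by matching uploads against shared databases of previously identified material. The sketch below is hypothetical and uses exact SHA-256 matching for simplicity; real systems, such as those built around industry hash-sharing databases like GIFCT's, typically use perceptual hashes that tolerate re-encoding and cropping:

```python
# Hypothetical sketch of detection via a shared known-material hash list.
# Exact SHA-256 matching is shown only for simplicity.
import hashlib

KNOWN_BAD_HASHES = {
    # Hex digests contributed by trusted flaggers (placeholder value:
    # this is simply the SHA-256 digest of b"test").
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def review_upload(data: bytes) -> str:
    # Matching uploads are removed and the account actioned; everything
    # else is published (or, in a real system, queued for further review).
    if sha256_hex(data) in KNOWN_BAD_HASHES:
        return "block-and-enforce"
    return "publish"

print(review_upload(b"test"))           # matches the placeholder hash above
print(review_upload(b"holiday photo"))  # no match
```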

5.121 eSafety published the Basic Online Safety Expectations, which 'are designed to improve online service providers' safety standards, transparency, and accountability'. Online service providers must 'have terms of use, policies, and procedures to ensure the safety of users'. They must also 'take steps to ensure that penalties for breaches of their terms of use are enforced against all accounts created by the end-user'.[122]

5.122The Online Safety (Basic Online Safety Expectations) Determination 2022 sets out the expectations that online service providers will:

take reasonable steps to ensure that end-users are able to use the service in a safe manner and to proactively minimise the extent to which material or activity on the service is unlawful or harmful;

take reasonable steps to minimise certain material, including Class 1 material and material that depicts abhorrent violent conduct;

ensure they have terms of use, policies, and procedures in relation to the safety of end-users, as well as policies and procedures for dealing with reports and complaints; and

take reasonable steps to ensure that penalties for breaches of their terms of service are enforced against all accounts held by those end-users.[123]

5.123Online service providers can be required 'to report on the steps they are taking to comply with any or all of the Basic Online Safety Expectations'. A failure to comply with that 'requirement is enforceable and backed by civil penalties and other enforcement mechanisms'. Information from those reports is 'published in transparency summaries where appropriate'.[124]

5.124At the time of writing this report, the Department of Infrastructure, Transport, Regional Development, Communications and the Arts was consulting on an amended determination. The draft determination 'proposes that detecting and addressing hate speech, which breaches a service's terms of use, is a reasonable step to ensure safe use of a service'.[125]

Footnotes

[1]Centre for Culture and Technology (CCAT), Gender Research Network (GRN) and Perth Extremism Research Network (PERN), Submission 29, p. 2.

[2]CCAT, GRN and PERN, Submission 29, p. 2.

[3]Addressing Violent Extremism and Radicalisation to Terrorism Network (AVERT Research Network), Submission 23, p. 39. Note: the AVERT Research Network referred to a 1985 report from the United States-based Anti-Defamation League (ADL) that discussed online bulletin boards operated by American extremist groups, see: ADL, Computerized Networks of Hate: An ADL Fact Finding Report, 1985.

[4]AVERT Research Network, Submission 23, p. 41. Also see: Ms Lydia Khalil, Submission 33, p. [29].

[5]Australia/Israel & Jewish Affairs Council (AIJAC), Submission 42, p. 4.

[6]AIJAC, Submission 42, p. 4.

[7]AIJAC, Submission 42, p. 4.

[8]Australian Security Intelligence Organisation (ASIO), 'Director-General's National Press Club Address', 24 April 2024, www.asio.gov.au/director-generals-national-press-club-address (accessed 11 June 2024).

[10]ASIO, 'Director-General's National Press Club Address', 24 April 2024, www.asio.gov.au/director-generals-national-press-club-address (accessed 11 June 2024).

[11]See, for example: Name Withheld, Submission 34, p. [4]. Also see: Dr Maura Conway, Dr Ryan Scrivens and Logan Macnair, 'Right-Wing Extremists' Persistent Online Presence: History and Contemporary Trends', International Centre for Counter-Terrorism, October 2019, p. 2.

[12]Pam Nilan, Josh Roose, Mario Peucker and Bryan S Turner, 'Young Masculinities and Right-Wing Populism in Australia', Youth, 3, 2023, pp. 285–299, p. 287, https://doi.org/10.3390/youth3010019.

[13]AVERT Research Network, Submission 23, p. 38.

[14]AVERT Research Network, Submission 23, p. 38. Also see: Charlie Winter, Peter Neumann, Alexander Meleagrou-Hitchens, Magnus Ranstorp, Lorenzo Vidino, Johanna Fürst, 'Online Extremism: Research Trends in Internet Activism, Radicalization, and Counter-Strategies', International Journal of Conflict and Violence, 14, 2020, pp. 1–20, p. 1, https://doi.org/10.4119/ijcv-3809.

[15]Australian Muslim Women's Centre for Human Rights (AMWCHR), Submission 9, p. 8.

[16]Name Withheld, Submission 34, p. [4].

[17]AVERT Research Network, Submission 23, pp. 12–13.

[18]Victorian Government, Submission 40, p. 11.

[19]Australian Federal Police (AFP), AFP Federal Crime Threat Picture, 2023, p. 4.

[20]Royal Commission of Inquiry into the Terrorist Attack on Christchurch Mosques on 15 March 2019 (Royal Commission), Report of the Royal Commission of Inquiry into the terrorist attack on Christchurch masjidain on 15 March 2019: Volume 1: Parts 1–3, pp. 41–42.

[22]CCAT, GRN and PERN, Submission 29, p. 5.

[23]ASIO, Submission 7, p. 3.

[24]ASIO, 'Director-General's Social Media Summit Address', 11 October 2024, www.asio.gov.au/director-generals-social-media-summit-address (accessed 12 November 2024).

[25]AFP, Submission 27, p. 4.

[26]AFP, Submission 27, p. 6.

[27]AFP, Submission 27, p. 8.

[28]Office of the eSafety Commissioner (eSafety), Submission 28, p. 1.

[29]Heather Wolbers, Christopher Dowling, Timothy Cubitt and Chante Kuhn, 'Understanding and preventing internet-facilitated radicalisation', Trends & issues in crime and criminal justice, no. 673, 15 June 2023, p. 3, https://doi.org/10.52922/ti77024.

[30]AMWCHR, Submission 9, pp. 8–9. Also see: Associate Professor Derya Iner, Dr Ron Mason and Chloe Smith, Islamophobia in Australia Report – IV (2014-2021), 21 March 2023, p. 7.

[31]AMWCHR, Submission 9, p. 9.

[32]Organisation for Economic Cooperation and Development (OECD), Transparency Reporting on Terrorist and Violent Extremist Content Online: Fourth Edition, OECD Digital Economy Papers No. 367, June 2024, pp. 297–298. Note: the OECD indicated that terrorist and violent extremist content (TVEC) is broadly similar to Class 1 material as defined in the Online Safety Act 2021, see: p. 40.

[34]Dr McSwiney, Dr Richards, Dr Sengul, Callum Jones, and Cam Smith, Submission 18, p. 8.

[35]Mr Joshua Fisher-Birch, Content Review Specialist and Researcher, Counter Extremism Project (CEP), Committee Hansard, 24 July 2024, p. 54.

[36]Multicultural Youth Advocacy Network, Submission 13, p. 7.

[37]Dr Imogen Richards, Private capacity, Committee Hansard, 24 July 2024, p. 10.

[38]Dr Richards, Private capacity, Committee Hansard, 24 July 2024, p. 13.

[39]Victorian Government, Submission 40, pp. 11–12.

[40]eSafety, 'eSafety Statement – Terrorist and extremist material on social media', media release, 6 August 2024, www.esafety.gov.au/newsroom/media-releases/esafety-statement-terrorist-and-extremist-material-on-social-media (accessed 14 November 2024). Note: the industry codes and standards are available online, see: eSafety, Industry codes and standards, 22 October 2024, www.esafety.gov.au/industry/codes (accessed 14 November 2024).

[41]eSafety, Submission 28, p. 4.

[42]eSafety, Submission 28, pp. 4–5. Note: following an alleged terrorist act at a church in Wakeley, Sydney on 15 April 2024, eSafety called upon social media companies to remove extreme violent video content of that act from their platforms. On 16 April 2024, eSafety issued Class 1 removal notices to Meta and X Corp compelling them to take all reasonable steps to take down extreme violent video content related to the act. eSafety was satisfied with Meta's response to that notice. However, it was not satisfied that X Corp complied with the notice and sought an interim injunction from the Federal Court of Australia, see: eSafety, 'Statement on removal of extreme violent content', media release, 23 April 2024.

[43]Department of Home Affairs (Home Affairs), National Counter-Terrorism Plan, 5th edition, 2024, p. 41.

[44]Home Affairs, National Counter-Terrorism Plan, 5th edition, 2024, p. 41.

[47]Mr Nathan Smyth, Deputy Secretary, National Security and Resilience, Home Affairs, Committee Hansard, 24 July 2024, p. 62.

[48]Wolbers et al, 'Understanding and preventing internet-facilitated radicalisation', Trends & issues in crime and criminal justice, no. 673, 15 June 2023, p. 4, https://doi.org/10.52922/ti77024.

[49]Dr McSwiney et al, Submission 18, p. 8.

[50]Professor Ramón Spaaij, Associate Professor Mario Peucker, Professor Debra Smith, Professor Natalie Pyszora and Professor Zainab Al-Attar, Victoria University, Submission 5, p. 12.

[51]Dr McSwiney et al, Submission 18, pp. 8–9. Note: right wing extremist online spaces have become more popular over time. For example, the Australia subgroup on the Gab social media platform saw its membership increase significantly within the three months following the March 2019 Christchurch terrorist attack. Membership of that subgroup later increased from 45 000 members in March 2021 to almost 74 000 in May 2022, see: Centre for Resilient and Inclusive Societies (CRIS), Submission 19, p. 5.

[52]Organisation for Economic Cooperation and Development (OECD), Transparency Reporting on Terrorist and Violent Extremist Content Online: Fourth Edition, OECD Digital Economy Papers No.367, June 2024, p. 3.

[54]eSafety, Submission 28, pp. 2–3. Note: Class 1 material includes 'content that advocates terrorist acts, or that promotes, incites, or instructs in matters of crime or violence'. It also 'includes material that depicts, promotes, incites, or instructs in 'abhorrent violent conduct' such as terrorist acts, murder, torture, rape, or violent kidnapping'. Class 2 material 'includes high-impact material that may be inappropriate for general public access and/or for children and young people under 18 years old, such as material featuring high-impact violence, crime and suicide, death, and racism', see: p. 3.

[55]eSafety, Submission 28, p. 3. Note: Parts 5–7 and Part 9 of the Online Safety Act 2021 enable the eSafety Commissioner to issue removal notices to providers of social media services, electronic services or internet services.

[56]eSafety, Submission 28, p. 3.

[57]AVERT Research Network, Submission 23, p. 42.

[58]AVERT Research Network, Submission 23, p. 42. Also see: Linda Schlegel, 'Jumanji Extremism? How games and gamification could facilitate radicalization processes', Journal for Deradicalization, 23, 2020, pp. 1–44, https://journals.sfu.ca/jd/index.php/jd/article/view/359 (accessed 8 August 2024).

[60]Dr Helen Young, quoted in Nicola Heath, 'Alt-right groups are targeting young video gamers—and finding a culture where extremist views can flourish', ABC News, 15 July 2022, www.abc.net.au/news/2022-07-15/alt-right-groups-video-games-radicalising-young-men-extremism/101212494 (accessed 15 November 2024).

[61]Mr Henry Campbell, Strategic Engagement and Program Manager, Northern Australia Strategic Policy Centre, and Program Manager, Counterterrorism, Australian Strategic Policy Institute (ASPI), Committee Hansard, 24 July 2024, p. 36.

[62]Ms Atitaya (Angela) Suriyasenee, Research Intern, Cyber, Technology and Security, ASPI, Committee Hansard, 24 July 2024, p. 35.

[63]Mr Campbell, ASPI, Committee Hansard, 24 July 2024, p. 36.

[64]Name Withheld, Submission 34, p. [5].

[65]Dr Young, 'Extremists use video games to recruit vulnerable youth. Here's what parents and gamers need to know', The Conversation, 10 November 2022, www.theconversation.com/extremists-use-video-games-to-recruit-vulnerable-youth-heres-what-parents-and-gamers-need-to-know-193110 (accessed 19 June 2024).

[66]Name Withheld, Submission 34, p. [5].

[67]Name Withheld, Submission 34, p. [5].

[68]Ms Suriyasenee, ASPI, Committee Hansard, 24 July 2024, p. 35.

[69]Center for Digital Youth Care, 'Center for Digital Youth Care', no date, www.cfdp.dk/english/ (accessed 26 September 2024).

[70]Center for Digital Youth Care, 'Our digital practice', no date, www.cfdp.dk/digital-praksis/ (accessed 26 September 2024).

[71]Home Affairs, Submission 8, p. 4.

[73]eSafety, Submission 28, p. 2.

[74]AFP, 'Extremist recruitment reaching young Australian gamers', media release, 16 October 2022.

[75]AFP, 'Extremist recruitment reaching young Australian gamers', media release, 16 October 2022.

[77]Victorian Government, Submission 40, p. 12.

[78]Dr McSwiney et al, Submission 18, p. 9.

[79]CEP, Submission 3, p. 8.

[80]Garth Davies and Mackenzie Hart, 'Online Extremist Threats: A View From The Trenches', Centre for Research and Evidence on Security Threats, 12 September 2024, www.crestresearch.ac.uk/comment/online-extremist-threats-a-view-from-the-trenches/ (accessed 26 September 2024).

[81]Mr Stephen Blanks, Executive Committee Member, New South Wales Council for Civil Liberties, Committee Hansard, 24 July 2024, p. 52.

[82]Wolbers et al, 'Understanding and preventing internet-facilitated radicalisation', Trends & issues in crime and criminal justice, no. 673, 15 June 2023, p. 8, https://doi.org/10.52922/ti77024.

[83]Wolbers et al, 'Understanding and preventing internet-facilitated radicalisation', Trends & issues in crime and criminal justice, no. 673, 15 June 2023, p. 8, https://doi.org/10.52922/ti77024.

[84]ASIO, 'Director-General's National Press Club Address', 24 April 2024, www.asio.gov.au/director-generals-national-press-club-address (accessed 11 June 2024).

[85]ASIO, Submission 7, p. 3.

[86]ASIO, 'Director-General's National Press Club Address', 24 April 2024, www.asio.gov.au/director-generals-national-press-club-address (accessed 11 June 2024).

[87]ASIO, 'Director-General's National Press Club Address', 24 April 2024, www.asio.gov.au/director-generals-national-press-club-address (accessed 11 June 2024).

[88]AFP, Submission 27, p. 7.

[89]Mr Mike Burgess, Director-General of Security, ASIO, Committee Hansard, 24 July 2024, p. 62.

[90]Mr Burgess, ASIO, Committee Hansard, 24 July 2024, p. 62.

[91]Mr Burgess, ASIO, Committee Hansard, 24 July 2024, p. 64.

[92]Mr Burgess, ASIO, Committee Hansard, 24 July 2024, p. 64.

[93]Mr Burgess, ASIO, Committee Hansard, 24 July 2024, p. 64.

[94]Mr Burgess, ASIO, Committee Hansard, 24 July 2024, p. 64.

[95]Universities Australia (UA), Submission 12, p. 3.

[96]UA, Submission 12, p. 3.

[97]Associate Professor Josh Roose, CRIS, Committee Hansard, 17 June 2024, p. 4.

[98]Name Withheld, Submission 34, p. [5].

[99]Name Withheld, Submission 34, p. [7].

[100]Name Withheld, Submission 34, p. [8].

[101]Wolbers et al, 'Understanding and preventing internet-facilitated radicalisation', Trends & issues in crime and criminal justice, no. 673, 15 June 2023, pp. 1–2, https://doi.org/10.52922/ti77024.

[102]ASIO, 'Director-General's Social Media Summit Address', 11 October 2024, www.asio.gov.au/director-generals-social-media-summit-address (accessed 12 November 2024).

[103]Home Affairs, National Counter-Terrorism Plan, 5th edition, 2024, p. 21.

[104]Home Affairs, National Counter-Terrorism Plan, 5th edition, 2024, p. 22.

[105]Home Affairs, National Counter-Terrorism Plan, 5th edition, 2024, p. 22. Also see: Home Affairs, 'Report Online Extremist Material', 22 September 2024, www.livingsafetogether.gov.au/report-online-extremism (accessed 14 November 2024); eSafety, Online Content Scheme: Regulatory Guidance, December 2021, p. 6.

[106]Mr Peter Sutton, Submission 30, p. [1].

[107]Mr Sutton, Submission 30, p. [1].

[108]Dr Rachel Sharples, Lecturer, Social Sciences, and Member, Challenging Racism Project, Western Sydney University, Committee Hansard, 24 July 2024, p. 19.

[109]Dr McSwiney, Private capacity, Committee Hansard, 24 July 2024, p. 14.

[110]Dr McSwiney, Private capacity, Committee Hansard, 24 July 2024, p. 14.

[112]Michael Workman and Kevin Nguyen, 'As social media inflames the UK race riots, Facebook makes it harder to police its platform', ABC News, 14 August 2024, www.abc.net.au/news/2024-08-14/meta-shuts-crowdtangle-as-disinformation-ai-runs-rampant/104214782 (accessed 21 August 2024).

[118]eSafety, Submission 28, pp. 2–10.

[119]eSafety, Submission 28, p. 5.

[120]eSafety, Submission 28, pp. 5–6. Note: information about the industry codes and standards, including the six industry codes that are currently operational, is available on the eSafety Commissioner website, see: eSafety, Industry codes and standards, 5 August 2024, www.esafety.gov.au/industry/codes (accessed 13 August 2024).

[121]eSafety, Submission 28, p. 6.

[122]eSafety, Submission 28, p. 6. Also see: eSafety, 'Basic Online Safety Expectations', 13 August 2024, www.esafety.gov.au/industry/basic-online-safety-expectations (accessed 13 August 2024).

[123]eSafety, Submission 28, p. 7. Note: 'the Explanatory Statement to the Determination states that services should use their terms, policies, and procedures to address harmful material that is not necessarily unlawful or explicitly referenced in the OSA'. That material includes online hate.

[124]eSafety, Submission 28, p. 7.

[125]eSafety, Submission 28, p. 8.