Chapter 2

Key issues

Introduction

2.1
This chapter explores key issues raised in evidence to the inquiry. It is divided into four sections: compliance and classification; investigations, notices, and enforcement; penalties and prosecutions; and the scope and value of the abhorrent violent material (AVM) legislation. These sections cut across the various parts of the inquiry's terms of reference.

Compliance and classification

How is ‘abhorrent violent material’ defined in the AVM Act?

2.2
Part (e) of the inquiry’s terms of reference asked the committee to consider the definition of AVM under the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (AVM Act). The definition is limited to very specific categories of the most egregious violent audio, visual or audio-visual material produced by a perpetrator or their accomplice. It includes video, still images, and audio recordings.1 As was made clear during the inquiry, and as discussed further below, it does not include, among other things, symbols, insignia, text, paraphernalia, animation and the like, or commentary.
2.3
To meet the definition of AVM, the material must stream or record conduct where a person engages in a terrorist act (involving serious physical harm or death of another person), murders or attempts to murder another person, tortures another person, rapes another person, or kidnaps another person (where the kidnapping involves violence or the threat of violence). This conduct is referred to as ‘abhorrent violent conduct’ (AVC). The definition does not include material recording animated, re-enacted, or fictionalised conduct.2
2.4
The definition of AVM covers a recording that has been altered but still contains its original content. The definition does not, and was not intended to, include material produced by bystanders or individuals not engaged in the AVC.3
2.5
The committee explored whether the definition excludes material that might otherwise be considered abhorrent or extremist material—for example, insignia, paraphernalia, written material, and animation. Some providers explained that such material would be prohibited under their own community guidelines, irrespective of whether the material was captured by the AVM Act.4
2.6
The committee received evidence that major industry platforms have dedicated teams, tools and processes in place to identify and respond to material prohibited under their community standards,5 suggesting that a broader definition of AVM capturing other forms of violent or egregious content is not required in the legislation.
2.7
However, when asked at the public hearing about the exclusion of material such as insignia connected to extremism, the AFP advised that, in its view, there is a definitional gap within the Criminal Code that can at times limit its investigations in relation to extremism and terrorism. The AFP explained that terrorism offences currently do not capture instances where individuals have access to extremist material but that material is not explicitly connected to a terrorist act. The AFP advised that this broader category of material is captured under South Australian law, meaning that the AFP had been able to charge some individuals, but only through joint counterterrorism investigations.6
2.8
The AFP also argued for a broader possession offence to be introduced into the Criminal Code which would capture the individual possession (rather than the sharing) of AVM. This, the AFP argued, would allow the AFP to disrupt individuals at an earlier stage and prevent investigators from having to view harmful material with no pathway to charging.7
2.9
It is worth noting that the committee is prevented from monitoring, reviewing, or reporting on the performance by the AFP of its functions under Part 5.3 of the Criminal Code (that is, the AFP's counterterrorism function). While the committee did not consider the AFP’s exercise of its counterterrorism functions in this inquiry, it nonetheless notes the concerns raised by the AFP during this inquiry as they pertain to the definition of AVM contained in the Act, which includes conduct where a person engages in a terrorist act.

Offences and powers introduced by the AVM Act

2.10
The purpose of the Act was to reduce the incidence of online platforms being misused by perpetrators of violence. To this end, it introduced new criminal offences targeted at service providers:
The ‘failure to notify’ offence applies to internet service providers, hosting providers and content providers who fail to notify the Australian Federal Police (AFP) within a ‘reasonable time’8 about material depicting AVC that occurred in Australia. A defence is available if the provider can demonstrate that it had reasonable grounds to believe the AFP was already aware of the material.
The ‘failure to remove’ offence applies to hosting and content providers who fail to remove access to AVM ‘expeditiously’ where the material is reasonably capable of being accessed within Australia. Defences to this offence include, for example, where the accessibility of the material is necessary for law enforcement purposes, for court or tribunal proceedings, or for news reports made in the public interest by a professional journalist; where the material is accessible for the purposes of lawful advocacy; and where the accessibility of the material relates to research or to artistic works created in good faith.9
2.11
A maximum penalty of 800 penalty units is attached to the failure to notify offence. For the failure to remove offence, a maximum penalty of three years imprisonment or 10,000 penalty units, or both, applies for an individual. For a body corporate, the maximum penalty is the greater of 50,000 penalty units or 10 per cent of annual turnover. The question of penalties is explored later in this report.
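For illustration only (this example does not appear in the evidence), the body corporate maximum can be expressed as the greater of the two amounts. The sketch below assumes a Commonwealth penalty unit value of $222; the actual value is set by legislation and changes over time.

```python
# Illustrative sketch only: the maximum penalty for a body corporate under the
# 'failure to remove' offence is the greater of 50,000 penalty units or
# 10 per cent of annual turnover.
PENALTY_UNIT_AUD = 222  # assumed dollar value of a Commonwealth penalty unit, for illustration

def max_body_corporate_penalty(annual_turnover_aud: float) -> float:
    """Return the greater of 50,000 penalty units or 10% of annual turnover (in AUD)."""
    return max(50_000 * PENALTY_UNIT_AUD, 0.10 * annual_turnover_aud)

# For a body corporate with $500 million annual turnover:
# 50,000 units = $11.1 million, 10% of turnover = $50 million, so $50 million applies.
print(max_body_corporate_penalty(500_000_000))
```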
2.12
The AVM Act also granted the eSafety Commissioner10 a new power to issue notices which advise internet, hosting, and content providers that, at the time the notice was issued, their service was providing access to AVM and they should act to expeditiously remove the content.11
2.13
The notice also serves an evidentiary purpose: in a criminal prosecution against a provider, it establishes a presumption that the provider was ‘reckless’ as to the presence of AVM on its service. This presumption can be rebutted if the provider is able to produce evidence to the contrary.12 Concerns regarding the threshold for ‘recklessness’, and the application of the defences available to providers in this regard, are discussed in detail below.

Continuing value of the legislation

2.14
Evidence received by the committee suggested that the AVM Act remains an important mechanism for combatting the ongoing threat of extremist and terrorist violence in Australia. The AFP advised the committee that, within Australia, the terrorism threat level remains probable, as credible intelligence indicates that individuals or groups have the intent and capability to conduct a terrorist attack in Australia. The AFP identified that, during the COVID-19 pandemic, extremists have taken advantage of isolation, loneliness and financial stress to boost their numbers.13 The AFP also told the committee that the borderless nature of offending, along with advances in technology and evolving methodologies, means that, despite decades of effort, the exploitation of children has expanded across the globe and ‘outpaced every attempt to respond’.14
2.15
While the AVM Act has not yet led to any criminal arrests or penalties, evidence received in this inquiry strongly suggests the legislation is serving as an effective deterrent to misconduct or wilful blindness to the sharing of AVM. Indeed, as discussed further below, one of the primary values of the legislation at this point appears to be its deterrence effect.
2.16
The Office of the eSafety Commissioner (eSafety) told the committee that feedback from industry indicated that the criminal penalties that might apply have been front of mind for providers when taking expeditious action to remove material brought to their attention.15
2.17
The Department of Home Affairs pointed out that the legislation provides an important deterrence element and encourages platforms to take material down where they are aware of it. According to the department, the legislation remains a key element of Australia's strong online regulatory environment.16
2.18
Several industry submissions to the inquiry agreed with the objectives behind the legislation, and representatives of Meta noted that the legislation ensures that industry is being held to account and that platforms are not 'dithering'.17

Challenge of determining whether content is AVM

2.19
Various submitters highlighted practical difficulties in identifying whether an item of content meets the definition of AVM and in determining whether any of the defences listed in the legislation apply.18 Given this apparent complexity, some industry participants suggested that the effectiveness of the Act would be improved if providers were allowed more time to determine whether material should be removed.19
2.20
Meta (formerly Facebook) argued that determining whether an item of content constitutes an offence under the Criminal Code is not always straightforward. In many cases, it explained, contextual information will not yet be available, as law enforcement or other stakeholders will still be responding to the situation.20
2.21
Similarly, the joint submission from industry and civil society organisations (comprising the Communications Alliance, Digital Industry Group Inc, Digital Rights Watch, TikTok, Google, IBM, and Twitter) argued that it is not always clear to service providers whether there is a lawful reason for the AVM to be available, such as news reporting, law enforcement, court proceedings, or research purposes.21
2.22
Microsoft explained that it can be challenging to determine the identity of the content producer and the correct geographic location of the video, and to distinguish between real and fictional material.22 Microsoft argued that the difficulties in determining the context of AVM can also unnecessarily impact both policing and industry resources by resulting in incorrect reporting to the AFP.23
2.23
Meta provided a case example to illustrate some of the complexities involved in determining whether an item of AVM should be taken down or left online:
There was one example, during the last couple of years, when it was flagged to us that there was a possibility that footage of the Christchurch attack was circulating on Facebook… Once we looked at it, we discovered that it was not of the Christchurch attack; it was actually a violent attack in a mosque in South Asia...One of the challenges was that it was hard to determine whether or not the video was coming from a bystander or from the attacker or an accomplice. But what was clear is that it appeared to be being shared in order to alert the world to the risks that Muslims were facing in that particular community in being able to worship. So that's probably one of the best examples we've seen in the last couple of years of a really difficult case. In that case we erred on the side of caution in removing it, and there is an obvious question about whether we were right to do that. Should we have left it up because it was bringing the world's attention to the safety of Muslims in that South Asian country and in that community?24
2.24
At the public hearing, a representative of Meta told the committee:
Because of the way the law works, does it actually allow us to have time to think about it? That's one issue here. Once we were aware of it [the potential AVM], we got it removed immediately. That's a challenge for us when we want to be able to deliberate and understand: where is this video; where is this event happening; and is it happening right now?25
2.25
In its written submission to the inquiry, Meta suggested narrowing the definition of AVM in the legislation so that it only captures items of content that are clearly or reasonably identifiable as AVM.26

Industry concerns regarding definition of ‘kidnapping’

2.26
Industry raised significant concerns regarding operationalising the kidnapping component of the AVM legislation. To meet the definition of AVM, kidnapping must involve taking or detaining a person without their consent; the kidnapper must have taken that person with intent to ransom or cause harm; and the kidnapping must have involved either violence or the ‘threat of violence’.
2.27
Industry and civil society organisations argued that the detection and removal of AVM is being made unnecessarily complex for providers by the inclusion of the words ‘threat of violence’ in the legislation. According to these organisations, providers are essentially being asked to determine whether there has been a ‘threat of violence’ when, in some cases, the threat of violence is not readily apparent.27
2.28
Meta provided an anonymised real-life case example to demonstrate some of the challenges involved:
We received an informal report that claimed a parent had taken their own children, for whom they may not have had custody rights, and was livestreaming on Facebook Live. We located the livestream and, on review, it appeared the parent was raising awareness of perceived injustices relating to their children and there was no evidence of threats or harm to the children. Law enforcement could not confirm for us whether it was a kidnapping (ie. whether the parent had custodial rights or not). This case demonstrates that it can be challenging to determine whether a livestream constitutes a kidnapping, as it requires the digital platform to have sufficient knowledge of the surrounding circumstances. However, the AVM law could have resulted in Facebook being deemed ‘reckless’, even though we quickly responded to an informal report and would have had no way of knowing whether or not this instance constituted kidnapping.28
2.29
At the public hearing, the Attorney-General’s Department (AGD) acknowledged the definitional and operational difficulties associated with the kidnapping portion of the legislation. According to the AGD, since the legislation came into force there had been multiple consultations with industry regarding issues in operationalising the legislation.29 The results of these consultations had informed subsequent guidance on the AVM Act released by the department to assist industry.30

Potential for overremoval of content

2.30
The AVM Act requires providers to ‘expeditiously’ remove AVM that is reasonably capable of being accessed within Australia, and to notify the AFP in a ‘reasonable time’ where the AVM depicts conduct in Australia.
2.31
As mentioned, one of the key concerns raised by industry was that the AVM Act does not give providers additional time to make complex assessments. Submitters argued that a combination of resource constraints, time pressure, ambiguous visual media, and harsh penalties is incentivising industry participants to err on the side of overremoval of content to mitigate the risk of criminal penalties and financial loss.31
2.32
Microsoft argued that, in practice, when considering potential items of AVM, providers are required to apply a broader interpretation of AVM than the specific definition in the legislation.32
2.33
Google told the committee that, while it had not received any AVM notices, it regularly receives broad content removal requests. An analysis of its cease-and-desist and takedown letters identified that many of these requests had sought the removal of potentially legitimate or protected speech.33
2.34
At the hearing, Meta pointed out that the industry has long grappled with balancing the need to allow people to highlight oppression, human rights violations or other wrongs happening in their communities with the need to protect other people from viewing harmful content when they are online.34
2.35
Likewise, Twitter argued that allowing providers adequate space to review content would ensure that lawful or counter speech by activists, politicians, journalists, and whistleblowers on important issues is not being taken down.35
2.36
Some submitters were concerned that the legislation could also inadvertently capture important historical and non-harmful material, such as Holocaust-related material provided by museum websites. These submitters called for refinement of the legislation to mitigate the risks of inappropriate overreach.36
2.37
The Department of Home Affairs acknowledged industry’s concerns about the operation of the Act and how the requirements being placed on industry may play out in practice. The department told the committee that it considers that the legislation maintains an appropriate balance, while noting that the legislation is under continual review:
We are aware of concerns. We consider that the act maintains the appropriate balance and meets the government's objectives. Having said that, the act has been in operation for only two years, and you do need more time to be able to assess it. As with any legislation or policy, we keep it under continual review, and we will look for opportunities, as they come up, for where improvements or changes are needed.37
2.38
Separately, as set out in the AVM Act Fact Sheet produced by the Attorney-General’s Department, the legislation includes a safeguard against inappropriate prosecutions, designed to prevent theoretical examples such as the Holocaust Museum from being captured by the offences. Specifically, the Attorney-General’s consent is required before any proceedings relating to the removal of content can commence.38
2.39
The AGD also advised the committee that the AVM Act strikes a balance between the need to support platforms in making informed judgements about action in relation to potential AVM, and the need and expectation that in the more ‘egregious cases’ of AVM platforms will ‘act fast’39 (the issue of response times is discussed further in the next section). The importance of quick responses to AVM was made clear during the inquiry, with a number of inquiry participants pointing out that AVM is often shared quickly and widely, with enduring effects. For example, in its written submission, the AFP explained that the ability for individuals to rapidly disseminate content and store it across disparate platforms and jurisdictions allows extremist messaging and abhorrent material to duplicate in such a way that subsequently removing the source content can have limited impact.40
2.40
During the hearing, witnesses acknowledged that a provider’s decision to remove AVM is ultimately and properly an exercise of human judgement. Witnesses were asked about the difficulties, and the potential counterproductivity, of attempting to legislate for each instance where a judgement call needs to be made about what constitutes AVM and whether to remove it, rather than providing those making such judgements with appropriate levels of flexibility. Speaking to this point, a representative of the AGD emphasised that the AVM Act included numerous safeguards to ensure that providers applying their resources and making those judgements in good faith were protected from penalty or prosecution, while at the same time targeting those platforms ‘taking unnecessary risks about the hosting of this kind of material, [and] not paying due regard to the hosting of that material’. Moreover, the legislation, rather than being prescriptive about timeframes for action to report or remove AVM, allowed for the fact that such judgement calls would be ‘circumstantially dependent’.41

Defining response times

2.41
Various inquiry participants called for clarity around the terms ‘expeditious’ and ‘reasonable time’.42
2.42
According to Microsoft, the time-based language in the AVM Act ‘is ambiguous and has been the cause of significant industry confusion…these [terms are] unclear and somewhat subjective’.43
2.43
Microsoft argued that takedown times should be made clearer and more proportionate, that the practical limitations being faced by industry in real-world settings should be reflected, and that the government should provide additional guidance for industry on how the timeframes would work in practice.44
2.44
Similarly, Snapchat argued that introducing clearly defined response times to remove AVM would prevent misinterpretation and be useful for both providers and the eSafety Commissioner. Snapchat argued that times should be aligned to existing agreed international response times. That is:
a one-hour response time for material related to terrorism and child sexual abuse, in keeping with the voluntary guidelines that a range of companies signed up to in collaboration with the EU Internet Forum; and
a 24-hour response time for the other categories of AVM listed in the Act.45
2.45
Conversely, Meta provided evidence that varying circumstances and contexts make it difficult to set a ‘reasonable’ timeframe for action in relation to an item of AVM. For example, Meta’s artificial intelligence automatically removes much harmful content that was pre-recorded and is being redistributed, but in live instances where a child may be at risk, Meta would alert the authorities as quickly as possible to safeguard the child.46
2.46
Officials from the Attorney-General’s Department made a similar point, explaining that in developing the legislation, the government had intentionally sought to maintain flexibility regarding timeframes in relation to notifications of and removal of potential AVM:
The core concern we had was that how you define what is going to be the right time line will depend a lot on the factual circumstances...If we have—as we had with what occurred in Christchurch—a live-streamed event of a terrible magnitude, we will want those videos down as soon as possible…In other situations it's not going to be as clear-cut…They'll want to think about it and make sure they're not overly regulating what they're doing. In that instance there's probably a little more room for consideration and for dealing with it…47

Calls to streamline wording of legislation to assist providers

2.47
Industry and civil society organisations suggested that one way to reduce some of the complexity involved in determining whether an item of content meets the definition of AVM would be to remove unnecessary wording from some of the defences.
2.48
For example, with regard to the research defence, industry and civil society organisations argued for the removal of the words ‘reasonable in the circumstances’. For context, a defence to the offences is available where the accessibility of the material is necessary for scientific, medical, academic or historical research, but only where it is ‘reasonable in the circumstances’. Some inquiry participants argued that, in practice, this wording requires digital platforms to be the arbiters of the limits on research. Given the uncertainty around what material might qualify, it was argued, providers may be inclined to overremove material to avoid the risk of committing an offence. The wording therefore raises the risk of private companies unnecessarily curbing scientific, medical, academic or historical research. Industry and civil society organisations argued that deleting the reference to ‘reasonable in the circumstances’ would provide clarity while still maintaining the intent of the provision.48
2.49
The art defence allows AVM to be left online for artistic endeavours developed in ‘good faith’. Industry and civil society organisations argued that it can be difficult for providers to determine the motivations of artists. Given this uncertainty, it was argued that providers may be inclined to remove the material to avoid the risk of committing an offence, and in doing so stymie artistic endeavours. Industry and civil society organisations argued that deleting the words ‘in good faith’ from the legislation would remove uncertainty as to whether the defence applies.49
2.50
A number of industry and civil society organisations called for amendments to the political expression defence which allows AVM for the purpose of lawful advocacy where 'reasonable in the circumstances'. Industry and civil society organisations argued that inclusion of the qualification ‘reasonable in the circumstances’ introduces uncertainty as to whether the defence applies. In practice, it was argued, this requires providers to set the limits of acceptable political activism. Given the uncertainty, providers may be inclined to overremove material to avoid the risk of committing an offence.50
2.51
The Attorney-General’s Department submitted that some of the defences contained in the Act are qualified by particular requirements to ensure the defences only apply to genuine claims of an exempted purpose, and to ensure unsubstantiated claims remain within the scope of the offences.51

Should classification of AVM be undertaken by government?

2.52
A number of inquiry participants made the case that the responsibility for making decisions about whether an item of content satisfies the definition of AVM, and thus its illegality, more appropriately rests with governments and courts rather than with private industry.52
2.53
Tech Against Terrorism was concerned that there had been a trend, predominantly in Europe, where the responsibility for determining whether content is illegal had been outsourced to tech platforms following notification by authorities or users.53
2.54
Microsoft suggested that the identification and classification of AVC and AVM are currently highly discretionary and decentralised. Microsoft argued that introducing a regulatory body to make complex assessments and decisions on the legality of specific content would remove the need for private industry to balance individuals' rights with the application of local laws.54
2.55
Google argued that as content is not always clearly illegal, courts have an essential role to play in reaching conclusions on which private platforms can rely.55
2.56
Due to the complexity involved in determining the nature of an item of AVM content, various submitters argued for the introduction of a formal pathway allowing industry to seek determinations from the eSafety Commissioner.56 Industry and civil society organisations argued that this pathway would help limit unnecessary content removal.57
2.57
Industry and civil society organisations proposed making these decisions of the eSafety Commissioner reviewable, similar to the treatment of potentially prohibited material under the Broadcasting Services Act 1992 (BSA).58
2.58
Meta argued that where a digital platform has referred items to the eSafety Commissioner for a decision, the provider should not be held liable for failing to expeditiously remove those items of content until eSafety has made a determination.59
2.59
In response to suggestions that there should be a mechanism for providers to seek a decision from eSafety on ambiguous material, eSafety raised concerns about its capacity to do so. eSafety further suggested that while it can issue notices to bring AVM to the attention of providers, it nonetheless falls to the provider to determine whether that content does, in fact, constitute AVM:
… the notice we provide to a hosting service or a content service is only to bring the material to their attention. Whether it is AVM becomes a matter for them. If it is the case that a service is prosecuted through the relevant offence provision for failing to take expeditious action against content, it's open to the court to consider whether or not there were, in fact, grounds to consider the material AVM, and the service may very well have identified defences for reasons for the material not to be regarded as AVM, justifying their decision not to take expeditious action to remove the material.60
2.60
In response to a written question on notice from the committee, eSafety clarified that notices are not takedown notices; they are intended to make the service aware of AVM. Before issuing a notice, the eSafety Commissioner must be satisfied on reasonable grounds that specified material was AVM. However, the service may take a different view as to the material’s character, concluding that the material is not AVM. The service may also conclude that one or more defences apply under the Act.61
2.61
eSafety’s written submission also pointed out that there is a difference between the standard of proof required to issue an AVM notice to a provider, and the higher standard of proof required in any subsequent prosecution.62

Investigations, notices, and enforcement

eSafety investigations and notices

2.62
eSafety currently exercises powers to investigate complaints from Australians about material that is prohibited or potentially prohibited under the Online Content Scheme. The Online Content Scheme, currently contained within the Broadcasting Services Act 1992 and due to be modernised through the Online Safety Act 2021 (OSA), enables the eSafety Commissioner to investigate and take action on complaints about prohibited online content.
2.63
Since the introduction of the AVM Act, the eSafety Commissioner has undertaken 2,120 regulatory investigations under the Online Content Scheme into material satisfying the definition of AVM.63 All of the material was hosted overseas, and more than 98 per cent depicted penetrative child sexual abuse or torture.64
2.64
In all but a few matters, eSafety notified members of the International Association of Internet Hotlines (INHOPE) of the child sexual abuse material (CSAM) for takedown in the host jurisdiction. INHOPE members removed almost 75 per cent of CSAM URLs within three days of notification.65 According to eSafety, the efficiency of CSAM removal within the INHOPE network represents a useful alternative to AVM notices.66
2.65
It should be emphasised here that only certain types of CSAM would appear to be captured by the AVM Act—specifically, material depicting rape and torture, as defined in section 474.32 of the Criminal Code, as amended by the AVM Act. The committee notes that as part of its ongoing inquiry into law enforcement capabilities in relation to child exploitation, the committee will consider the adequacy of Australia’s legislative and online regulatory framework in combatting CSAM more broadly, including material that does not meet the strict definition of AVM. In that inquiry, the committee will have an opportunity to inquire into whether the exclusion of certain types of CSAM from the definition of AVM is impacting the efforts of law enforcement to combat the growing scourge of online child exploitation.
2.66
eSafety advised the committee that platforms are responsive to informal approaches when AVM has been identified on their services, and that not all AVM will necessarily be acted on via a formal AVM notice.67 eSafety exercises discretion based on criteria such as how widely the material can be shared, whether it can incite violence or cause harm, its recency, whether it serves an extremist agenda, and whether an AVM notice can be served.68
2.67
eSafety reassured the committee that it had operationalised its response to the legislation by having regard to the Explanatory Memorandum (EM). The EM articulates that the Act is one part of Australia's overall response to terrorism online and the weaponisation of online platforms for the purpose of propaganda.69 The committee heard that the focus of eSafety was not on material presented in an objective context for the purposes of news, discussion, or debate, but rather on limiting the reach of terrorist propaganda.70
2.68
As explained further below, to date the eSafety Commissioner has not issued a notice to any major platform. This may reflect, at least in part, the abovementioned level of informal engagement between eSafety and those platforms. It is also likely relevant, as noted below, that much of the AVM available online circulates on smaller platforms targeting or utilised by specific audiences.

Lack of compulsive powers

2.69
The submission from the Office of the eSafety Commissioner raised concerns about its lack of compulsive investigation powers for addressing AVM. It explained that eSafety has no compulsive investigation powers that relate directly to the investigation of an AVM notice unless the material also meets the definition of class 1 material under the OSA.71 As a result, eSafety must rely on assistance from the AFP to investigate.72
2.70
eSafety submitted that this lack of powers can complicate its work. For example, eSafety cannot compel documents or other information from an upstream provider to identify their downstream client or subsidiary. This can undermine the accurate drafting of notices, which must correctly specify the corporate entity.73
2.71
In its submission to the inquiry, the Department of Home Affairs noted that encrypted networking and new hosting arrangements may also add to the complexity.74
2.72
eSafety explained that under the BSA, an investigation into online content can be undertaken to determine whether Australian end-users can access prohibited material. However, once it becomes apparent that there is no Australian connection to the content (as required under the BSA for a takedown notice), and therefore that no takedown notice is possible, the exercise of compulsive powers merely to facilitate service of an AVM notice raises potential legal questions.75

Should there be a formal review process for AVM notices?

2.73
The AVM Act intentionally excludes procedural fairness requirements to ensure that the eSafety Commissioner can act quickly.76
2.74
Some submitters argued that a decision of the eSafety Commissioner to issue an AVM notice should be subject to review by the Administrative Appeals Tribunal (AAT), because the eSafety Commissioner’s assessment of content will be subjective and may not consider whether any defences apply. This mechanism, it was argued, would give providers certainty that they are only removing genuine items of AVM.77
2.75
Additionally, industry and civil society organisations argued that providers should be allowed to publish a notice at the site of the content that has been removed, explaining that the removal is due to a decision by the eSafety Commissioner. This notice would invite parties who believe an item of content may have been removed incorrectly to apply to the AAT for a review.78

Transparency in the AVM scheme

2.76
Since April 2019, 24 notices have been issued by the eSafety Commissioner to a variety of content and hosting services in relation to 15 items of content depicting fatal terrorist violence, murder, and torture.79 All except one of the AVM notices were issued to content and hosting providers based offshore.80 None of the notices issued concerned a major digital platform.81
2.77
Although an AVM notice is not enforceable by the eSafety Commissioner in the same way as a regulatory takedown notice, content was removed in 13 out of 15 cases following receipt of the notice.82
2.78
At the hearing, officials from eSafety provided an update on the status of the two outstanding items of content. One matter was ongoing and related to incorrect public information about the recipient of the notice, as the hosting company was no longer the owner of the IP range.83
2.79
The second matter related to material hosted overseas, where eSafety’s AVM notices are not compulsory and cannot be enforced. An official from eSafety explained that:
…under the current Broadcasting Services Act we have no power to exercise our take-down powers that are conferred under that act...there's nothing directly compelling a service to take material down once it's been brought to their attention.84
2.80
In its written submission to the inquiry, Google argued there should be more transparency about the recipients of the 24 notices issued by the eSafety Commissioner under the AVM Act, in particular, the organisations who did not remove content after receiving a notice.85
2.81
A representative of Digital Rights Watch warned the committee that content ‘removal’ often means that the items have simply moved to a different platform. As such, it is essential that transparency about removal requests and responses is available for academics, civil society, and governments to inform policies moving forward.86
2.82
A joint submission from the Director of the Centre for Law and Justice at Charles Sturt University, Professor Mark Nolan, and a senior lecturer at the ANU College of Law, Dr Dominique Dalla-Pozza, identified that it can be difficult to obtain up-to-date information about the extent to which the AVM provisions have been used. The annual reports of the AFP do not provide information on the number of persons who have been charged under the offences inserted by the AVM Act. Professor Nolan and Dr Dalla-Pozza acknowledged that while this could indicate that no charges have been laid, it is important, given the 'controversial' nature of the provisions, for there to be reporting on their use.87
2.83
Most industry participants were supportive of increasing transparency in principle. Some platforms already provide regular transparency reports, which include requests received from governments and law enforcement.88
2.84
The annual reports of the eSafety Commissioner provide the number of AVM notices which were issued in the period but do not provide further information.
2.85
In response to suggestions that eSafety should provide information about which organisations did not comply with formal notices, eSafety pointed out that it may not be prudent to advertise those sites which traffic in egregious material; this principle, eSafety advised, is fundamental to the AVM Act.89

Geographical limitations

2.86
Under the AVM Act, providers are required to remove AVM regardless of where they are based and regardless of where the depicted conduct occurred—not doing so gives rise to a ‘failure to remove’ offence.
2.87
For material that relates to conduct which occurred in Australia, providers are required to notify the AFP, regardless of where the provider is based—not doing so gives rise to a ‘failure to notify’ offence.
2.88
Despite these requirements, there are no formal mechanisms to enforce AVM notices extraterritorially, that is, against providers based outside Australia.
2.89
The committee heard that in extraterritorial cases where a content service fails to respond to an AVM notice, eSafety may use an indirect method to achieve compliance. For example, eSafety can approach the hosting service to identify a breach of its terms of service. The hosting service will then ensure that its client is conforming with those terms, which prevent the service from being used for offensive material.90
2.90
The inquiry received evidence that the dissemination of AVM also mostly occurs on smaller platforms, as they may not have either the will or the capacity to combat the weaponisation of their services.91
2.91
In acknowledging the extraterritorial limitations of the framework, the Department of Home Affairs submission noted that the Global Internet Forum to Counter Terrorism (GIFCT),92 which aims to build the capacity of smaller platforms to respond to terrorist content on their services, could be leveraged to have AVM removed. However, in the same submission, the department noted that the GIFCT does not represent the platforms on which AVM is likely to be hosted.93

Penalties and prosecutions

Is the penalty regime appropriate?

2.92
While industry participants broadly agreed with the intentions behind the AVM Act, concerns were raised regarding the penalty regime.
2.93
Submitters variously argued that the penalties should be amended to better accommodate good-faith actors, account for platform size, and align with other penalties in the legal system.94
2.94
Microsoft argued that criminal sanctions, including financial penalties and imprisonment, are disproportionate in this space, especially where a provider may be in the process of doing all it can to comply.95
2.95
Industry and civil society organisations suggested amending the legislation to ensure the most severe penalties are reserved for repeat offenders and those acting in bad faith. One proposal was for an initial offence amount, which would then double for repeat instances of non-compliance.96
2.96
A lack of clarity over how duplicates and derivatives of AVM are treated under the AVM Act was also criticised by inquiry participants.97 Tech Against Terrorism pointed out that, in theory, a criminal offence and penalty could be imposed for every individual duplicate of an item of AVM.98
2.97
Microsoft argued that:
Once made available online, the unfortunate reality is that most AVM will stay in circulation or be able to be shared somewhere on the Internet on an ongoing and indefinite basis. Even small alterations to AVM can pose challenges to the detection tools used by responsible actors.99
2.98
According to evidence from industry and civil society organisations, technology is not yet advanced enough to automatically screen all content correctly. Although artificial intelligence is improving, there is still a margin for error, meaning most platforms rely on a mix of human moderators and technology. As such, it is difficult for providers to guarantee they have captured every duplicate across their platform in an expeditious time frame.100
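The following minimal sketch (illustrative only, and not a description of any platform’s actual tooling) shows why exact hash matching alone cannot guarantee that every duplicate is captured: a cryptographic hash changes entirely when even a single byte of a file is altered, so a lightly edited derivative will not match a hash list built from known material. Perceptual hashing tolerates some alteration, but it too has an error margin, which is consistent with the evidence above that platforms pair automated matching with human moderation.

```python
# Minimal illustration (not any platform's actual system): exact hash matching
# detects byte-identical duplicates but misses even trivially altered copies.
import hashlib

# Hypothetical hash list built from a known item of harmful content.
known_hashes = {hashlib.sha256(b"original harmful video bytes").hexdigest()}

def is_known_duplicate(content: bytes) -> bool:
    """Return True only if the content is byte-for-byte identical to a known item."""
    return hashlib.sha256(content).hexdigest() in known_hashes

print(is_known_duplicate(b"original harmful video bytes"))   # True: exact duplicate
print(is_known_duplicate(b"original harmful video bytes."))  # False: a one-byte change defeats the match
```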
2.99
Microsoft argued that while an initial notice from the eSafety Commissioner may be logical with respect to an initial piece of content, it would be an ‘adverse and unworkable outcome for industry’ if the presumption of recklessness continued to apply on an ongoing basis to all further copies and derivative versions. This was especially the case, Microsoft submitted, for content that escapes industry-leading technological detection.101
2.100
Microsoft drew comparisons to the content classification decisions made by the Classification Review Board and related takedown powers soon to commence under the OSA, which apply exclusively to specific copies of material.102
2.101
Microsoft was further concerned that the removal requirement appears to apply to AVM on an ongoing and indefinite basis regardless of how much time has elapsed since the event. Microsoft argued that since industry need not have actual knowledge to be in breach and there is no jurisdictional limit on the origin of the AVM, industry is indefinitely exposed to serious liability.103
2.102
Submitters variously called for clarity regarding penalties for duplicates and derivatives, and regarding the treatment of material as it ages over time.104
2.103
While it did not specifically address duplicate or derivative material, or material that has aged over time, the Attorney-General's Department explained to the committee that the AVM Act does not require providers to take steps to make themselves aware of AVM accessible on their platforms, and does not require providers to monitor all content on their platforms.105 Providers are only required to take action where there is a substantial risk and the risk is unjustifiable.106

Point at which a provider is deemed ‘reckless’

2.104
Several submitters raised concerns that the legislation presumes providers to be ‘reckless’ from the time that a notice is issued by the eSafety Commissioner, irrespective of whether they had begun to address the AVM.
2.105
As an organisation which focuses on supporting smaller platforms against exploitation, Tech Against Terrorism suggested that a notice could potentially be issued after a provider has started trying to remove a piece of content, in which case the provider will still be presumed to have been reckless.107
2.106
Industry and civil society organisations argued that the current approach does not give the recipient an opportunity to comply with the notice before exposure to penalty.108 These organisations argued that the legislation should be amended so that notices would trigger an obligation to remove the content expeditiously (as it creates awareness of AVM) but that the presumption of recklessness would only apply where the provider fails to remove the material expeditiously upon receipt of the notice.109
2.107
Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, argued that the legislation should place emphasis on providers having systems in place to enforce AVM policies, rather than assuming providers are reckless from the moment an item of AVM is identified.110
2.108
The Attorney-General’s Department’s submission explained that careful attention was given to the appropriate fault elements to include in the legislation. According to the department, the ‘recklessness’ threshold was deliberately chosen to ensure providers could be held to account in circumstances where they ought to have known that their services were being used to spread AVM. A requirement of actual knowledge, the department explained, was not considered appropriate as it could encourage wilful blindness.111
2.109
At the hearing, officials from the Attorney-General’s Department assured the committee that:
…we're not trying to capture everyone here. We want those platforms and services that know there is a risk and don't act on that risk. That was the intention behind the legislation.112
2.110
The department advised that the issuing of the notice would act as a warning to the provider. If, after receiving the notice, the provider was to ensure the expeditious removal of the material, a prosecution would be unlikely.113
2.111
Indeed, the Attorney-General’s Department Fact Sheet on the AVM Act states that ‘removing the specified material expeditiously after the service becomes aware of it will also ensure no offence is committed’.114

Scope and importance of the legislation

Does the legislation confer monitoring obligations on providers?

2.112
Various suggestions were put to the committee on ways in which the obligations placed on providers by the AVM Act might be clarified or otherwise improved.
2.113
Meta contended that an obligation to proactively detect all AVM without fail would be impossible to meet, given the vast amount of content on the internet and the imperfect nature of artificial intelligence.115
2.114
Submitters suggested that clarity should be provided on whether the AVM Act represents a proactive monitoring obligation for providers, and that the legislation should be aligned with the Attorney-General’s Department Fact Sheet.116
2.115
At the hearing, the committee explored the merits of amending the legislation so that providers would not be required to proactively remove every item of AVM, but would be required to proactively search for it and develop appropriate tracking tools. The Communications Alliance advised that the Basic Online Safety Expectations (BOSE), currently the subject of consultation by the department, will essentially provide this by establishing an expectation that providers proactively scan their services for such material.117

Calls to exclude closed private platforms from the Act

2.116
Some submitters argued that the scope of the AVM Act should be amended to exclude closed private platforms and hosting services.
2.117
For example, the committee heard that the legislation currently captures business-to-business (B2B) infrastructure and cloud providers, which provide large internal IT systems for government departments, airports and banks.
2.118
According to industry and civil society organisations, these closed platforms are highly unlikely to contain AVM and, as they are not public-facing platforms, there is a lower risk of widespread dissemination of AVM. Additionally, B2B infrastructure and cloud services cannot control uploaded content in the way most public-facing hosts can; instead, they are usually contractually precluded from reviewing or monitoring content.118
2.119
Microsoft argued that the AVM Act should also exclude hosting services, as it is the enterprise customer that is closest to the ultimate end users of a service. As such, that customer will be more likely to become aware of AVM and will be best positioned to remove or disable access to that content.119
2.120
Microsoft explained that, should AVM appear on a service, often the hosting provider’s only option would be to suspend service to that enterprise customer as a whole, which is ‘a blunt tool, and risks having a disproportionate response on the enterprise customer’. Microsoft highlighted that this had been recognised and accommodated in the OSA.120
2.121
The submission from the Attorney-General’s Department advised that the application of the offences to B2B infrastructure and cloud service providers was carefully considered during the bill’s development. The department explained that, while these services are unlikely to be directly captured by the offences unless they receive a notice, it is not appropriate to exempt them as it may be necessary to issue a notice where the contact point for the relevant provider cannot be identified or is unresponsive.121

Online Safety Act (OSA) blocking powers and removal notices

2.122
The OSA was passed by Parliament in July 2021. The legislation modernises the online content scheme; outlines basic online safety expectations for service providers; reduces the timeframe for service providers to respond to a removal notice from the eSafety Commissioner; and introduces new blocking powers to require internet service providers (ISPs) to block domains that contain terrorist and extreme violent material.122
2.123
Some submitters argued that the introduction of the OSA, which commences in January 2022, will largely address the issues being targeted by the AVM Act.
2.124
For example, Microsoft contended that the OSA goes further than the AVM Act as its new blocking powers capture a wider array of material.123
2.125
Additionally, the OSA provides the eSafety Commissioner with the power to issue timed removal notices for class 1 material (the definition of which would encompass AVM) regardless of whether that material is hosted in Australia or not.124
2.126
According to Microsoft, the OSA, in contrast to the AVM Act, also allows providers the necessary time to make complex assessments by granting the eSafety Commissioner power to order providers to remove content within 24 hours.125
2.127
However, there are important distinctions between the two pieces of legislation. Non-compliance with the OSA can attract a civil penalty of up to 500 penalty units, and the eSafety Commissioner also has access to enforceable undertakings and injunctions.126 In contrast to the AVM Act, the civil penalty provisions in the OSA do not impose criminal liability, do not carry the possibility of imprisonment, and do not lead to the creation of a criminal record.127 A decision to issue a blocking or removal notice under the OSA is also subject to review by the AAT.128
2.128
At the hearing, eSafety advised that material that falls outside the definition of AVM will be captured by the incoming class 1 powers under the OSA. These powers allow eSafety to issue a regulatory removal notice against content no matter where it is hosted in the world or whom it is hosted by.129

GIFCT and the Online Content Incident Arrangement (OCIA)

2.129
In addition to the introduction of the OSA, the BOSE, and industry codes, some submitters argued that the GIFCT’s Content Incident Protocol (CIP) and the OCIA are two additional measures that already largely address the weaponisation of digital platforms by extremists.130
2.130
The GIFCT developed the CIP to respond to emerging and active terrorist events, and assess any potential online content produced and disseminated by those involved in the planning or conducting of the attack. When the GIFCT declares the CIP is in force, all hashes of an attacker’s content are shared among its members, and a stream of communication is established between them. The first CIP was activated on 9 October 2019, following the shooting in Halle, Germany.131
2.131
However, as previously mentioned, the GIFCT does not capture the platforms on which AVM is more likely to be hosted.132
2.132
The OCIA provides a framework for cooperation between government and industry to remove AVM following a crisis event.133 The Department of Home Affairs established it following the Christchurch attack as a way to combat the dissemination of AVM.

Importance of retaining the AVM scheme

2.133
The inquiry highlighted several ways in which the AVM regime remains important, even as the legislative and regulatory landscape has evolved since its introduction in 2019.
2.134
In response to industry suggestions that there may be some duplication between the OSA and the AVM Act, eSafety advised:
We see the three components sitting together—the AVM notice scheme; the AVC blocking scheme, which will be enabled through the OSA; and the class 1 powers—as forming a fairly unified whole in terms of providing the eSafety Commissioner with a range of options to deal with this egregious material…the AVC blocking notice scheme is really intended to block domains at the root…It's intended to respond to the demands of an online crisis event where content is at risk of going viral right in the immediate aftermath of a terrible incident…But it won't always be the case that we will issue an AVC blocking notice. It might not be proportionate to the issue…In that case, we will look to issue class 1 removal notices and, depending on circumstances, back those up with an AVM notice as well.134
2.135
The Department of Home Affairs explained that where the OSA provides regulatory powers to the eSafety Commissioner, the AVM Act provides a complementary criminal penalty for instances where regulatory powers are insufficient.135
2.136
According to eSafety, the AVM notice power also filled a regulatory gap, as eSafety had not been empowered under the BSA to act against overseas-hosted prohibited extremist material. eSafety further advised that the AVM notice scheme is an effective deterrent.136
2.137
Discussing those service providers who elected to remove material after receiving a notice, eSafety advised the committee that ‘the link between the notice scheme and criminal penalties has clearly incentivised removal,’ and that ‘[i]n several matters…the risk of criminal prosecution was the only reason for their removal of material’.137
2.138
The Attorney-General’s Department argued that while there have been no prosecutions since the introduction of the AVM Act, the existence of these offences has encouraged industry to develop systems to address AVM.138
2.139
Meta told the committee that while it had not received any formal notices from eSafety, the existence of the AVM framework had helped to strengthen informal relationships between digital platforms, the regulator, and law enforcement:
…there have been a number of instances where eSafety have sent us over material for informal review, as per our community standards…That has happened in a number of instances where a piece of content might come close but might not hit the threshold for being AVM...Again, not in a formal way…is this ongoing dialogue that we have with law enforcement to give them a sense about a piece of content that may be borderline…we'll give relevant contacts in the right law enforcement agencies a call or establish contact in informal instances…139
2.140
At the hearing, representatives of Meta acknowledged that Australia was world-leading in respect of its suite of online protection work:
Certainly, Australia is leading the way in some ways in terms of the sheer range of issues on which the Australian parliament has or plans to regulate in this space…You are very much one of the leading jurisdictions on all aspects of tech regulation.140

Committee view

2.141
The Christchurch terrorist attack provided a horrific demonstration of the potential for the online environment to be misused by terrorists, extremists and others who seek to promote or engage in abhorrent violent conduct. It is equally clear from evidence before the committee that this risk remains prevalent and requires an ongoing legislative framework to address this misuse and protect the community from such harmful content.
2.142
As explained below, the committee considers that the AVM Act has addressed these needs in a balanced and appropriate way, and provides an effective framework for ensuring quick action is taken in relation to AVM. The committee further considers that the AVM Act continues to play an important deterrence role and provides a valuable tool for regulators and law enforcement in their efforts to reduce the incidence of misuse of online platforms.
2.143
The committee acknowledges concerns expressed by various industry participants and civil society organisations during the inquiry regarding the scope and details of the Act. While overwhelmingly supportive of the purpose of the AVM Act, a range of inquiry participants pointed to the challenges they might potentially face in determining what constitutes ‘AVM’ under the Act, understanding how duplicate or derivative AVM is treated, and identifying where there is a lawful reason for material that might otherwise be considered AVM to be shared online.
2.144
In considering these concerns, the committee was reassured by evidence received from the Office of the eSafety Commissioner that the operation of the AVM scheme is focused on preventing the most egregious material from being shared online. Additionally, there are protections already in place to safeguard against inappropriate prosecutions. Those providers making genuine efforts to comply and using their best endeavours to reduce AVM on their services are not the focus of the eSafety Commissioner, and the committee did not receive evidence suggesting that such providers would likely be subject to penalty or prosecution under the AVM Act if they had acted in good faith.
2.145
Based on the evidence received, the committee is similarly satisfied that the ‘recklessness’ threshold in the AVM Act is appropriate, and unlikely to capture providers acting in good faith. The committee agrees with the point made by the Attorney-General’s Department—that is, that the ‘recklessness’ threshold appropriately ensures providers can be held to account in circumstances where they ought to have known that their services were being used to spread AVM, and protects against the risk of wilful blindness to AVM.
2.146
The committee carefully considered the current practical operation of the AVM Act and is satisfied that, despite the issues raised by industry, there is no evidence of inappropriate prosecutions. Similarly, the evidence the committee did receive about the overremoval of content concerned specific individual case studies that required greater human intervention. For the most part, the committee considers that the concerns of industry can be ameliorated with greater human resourcing so that more complex cases can be given greater attention more quickly.
2.147
The committee also weighed concerns about possible overremoval against the fundamental principle of the Act, which is to have material of the most violent and extreme nature removed from circulation as soon as possible. It concluded that this fundamental requirement to quickly protect Australians from the harm of this material outweighed those concerns, and that appropriate protections were already in place given the AVM definition covers only the most heinous material.
2.148
The committee acknowledges the concerns raised by industry regarding the definition of ‘kidnapping’, as well as the difficulties faced by industry in determining whether some content meets this definition. The committee also acknowledges industry suggestions that the AVM legislation could be amended to assist providers in making judgment calls on particular items of potential AVM. The committee is not convinced that amending the legislation would address the issues raised; rather, an overly prescriptive approach to defining AVM in the legislation could reduce the flexibility and utility of the current framework, and could even introduce additional burden and complexity for industry. However, the committee is mindful that industry, and in particular smaller platforms, might benefit from increased consultation with and guidance from government on the operation of the AVM Act, including questions around how to identify content as AVM. The committee considers that establishing a forum for regular and ongoing consultation between the government and industry would be beneficial. An AVM stakeholders forum could focus on ways that industry can be more effective in enacting the AVM legislation, and ensure industry has the guidance and support it requires to do so.

Recommendation 1

2.149
The committee recommends that the Australian Government establish a stakeholder forum, consisting of relevant representatives, for regular engagement with industry to identify ways in which industry can be more effective in enacting the provisions of the AVM legislation.
2.150
The committee is satisfied that the penalties available under the AVM Act are appropriate and proportionate to the seriousness of the type of offences the AVM Act is concerned with. While no penalties have been imposed to date, the committee does not consider there is a meaningful risk that parties acting in good faith would be subject to prosecution and penalty. Equally, the availability of criminal penalties is a critical and valuable component of the AVM Act, and one not provided by other relevant parts of Australia’s online safety framework in relation to AVM. The committee considers the stakeholder forum it has recommended above could also provide an additional means for industry to seek ongoing guidance and clarification in relation to the potential use of penalties, and therefore operate with greater certainty and confidence in acting in response to content that may be AVM.
2.151
The committee is encouraged by evidence provided by a wide range of inquiry participants—including government agencies, law enforcement and industry—that, independent of the AVM Act, major platforms are enforcing their own community standards, which encompass, and in some instances go further than, the definition of AVM. Considerable initiative has also been shown by industry in developing automated detection technology and establishing information-sharing networks to enhance self-regulation efforts. It is also apparent that major platforms already utilise well-established law enforcement referral pathways to report and take action in relation to AVM. While this by no means negates the need for legislation and an appropriate penalty regime (including criminal penalties), the committee considers that the existing efforts of major platforms to address the sharing of AVM indicate the obligations created by the AVM Act on providers are neither overly burdensome nor unachievable, particularly in light of the resources available to those providers. Further, the committee did not receive any convincing evidence that the AVM Act places any unreasonable or inappropriate demands on smaller platforms, particularly given the risks inherent in the sharing of AVM.
2.152
A key benefit of the scheme is that it enables the eSafety Commissioner to notify providers where AVM is identified and have this material removed. This appears to be occurring through mostly collegial and informal but effective processes. A secondary benefit of the scheme is that it has forged relationships between the eSafety Commissioner, law enforcement, and industry. These relationships seem to be working well in addressing the removal of abhorrent content and the subsequent disruption of crime. The committee does not believe that the AVM scheme duplicates other legislation in this space; rather, AVM notices are an important complementary tool available to eSafety when dealing with recalcitrant providers.
2.153
A key question for this committee is whether the AVM penalty regime represents an appropriate motivation for industry to develop the capability required to combat the exploitation of technology by extremists and perpetrators.
2.154
The committee believes that the design of the legislation strikes the right balance in granting flexibility for providers who are making case-by-case judgement calls on whether to remove or refer material, while deterring misconduct or wilful blindness to the presence of AVM by content and hosting services. When balanced against the need to protect the Australian community from radicalisation and needless exposure to abhorrent material, the administrative burden being placed on providers to proactively limit the spread of this material is justifiable.
2.155
However, the committee remains concerned about the risk posed by smaller, recalcitrant platforms that are based overseas. The difficulty of enforcing the AVM scheme extraterritorially remains an impediment to its effectiveness. The inquiry identified at least one clear case of non-compliance with a formal AVM notice issued by the eSafety Commissioner; that notice appears to be unenforceable overseas. The committee notes that this problem is unlikely to be easily addressed. Nonetheless, it encourages government, law enforcement and industry to continue to work together, including in international fora where appropriate, to identify steps that might be taken to overcome the practical limitations that can arise when pursuing extraterritorial enforcement in this space.
2.156
The committee notes evidence from the AFP regarding potential legislative gaps in relation to AVM and similar material that can impact its counterterrorism investigations. Some of those gaps may not be directly related to the AVM Act’s purpose of preventing the online sharing of AVM. Nonetheless, the issues raised are potentially significant and may benefit from fuller consideration and inquiry. As previously noted, the PJCLE Act precludes this committee from monitoring, reviewing, or reporting on the performance of the AFP’s counterterrorism function. As such, the committee suggests that the Parliamentary Joint Committee on Intelligence and Security (PJCIS) or the government might consider inquiring further into this issue. In particular, it may be appropriate for the PJCIS or the government to consider the AFP’s points regarding whether the Criminal Code needs to be expanded to capture material that is broader in nature than the definition of AVM; whether existing legislation allows for appropriate action against people who are in possession of AVM but do not share it; and whether appropriate provision exists for AFP action in relation to material that is not explicitly connected to a terrorist act.
2.157
Unfortunately, but not surprisingly, the AVM Act has not stopped the misuse of online platforms by perpetrators of violence. Individuals who wish to either share or access online AVM still find opportunities to do so, and the broader community still risks exposure to such material in online environments. The global, instantaneous, anonymous and ever-evolving nature of online environments will continue to challenge law enforcement and the online safety measures introduced by government. Clearly, more work remains to be done in this space, and as the threat changes and evolves, so too will the responses by policymakers and law enforcement. The recommended stakeholder forum would help ensure that government, industry, and civil society organisations continue to work cooperatively to prevent the online sharing of AVM. At the same time, the committee considers the AVM Act an important, appropriate and effective response to such threats.

  • 1
    Attorney-General’s Department, Submission 3, p. 6.
  • 2
    Attorney-General’s Department, Sharing of Abhorrent Violent Material Act Fact Sheet, September 2021, p. 2.
  • 3
    Attorney-General’s Department, Submission 3, p. 6.
  • 4
    See for example: Mr Josh Machin, Head of Public Policy, Australia, Meta, Proof Committee Hansard, 17 November 2021, p. 8; Mr Henry Turnbull, Head of Public Policy, Asia-Pacific, Snapchat,
    Proof Committee Hansard, 17 October 2021, p. 17.
  • 5
    See for example: Snapchat, Submission 10, pp. 2-3; Digital Industry Group Inc, Submission 11, p. 2; Google, Submission 13, p. 1; Meta, Submission 14, p. 17; Mr Josh Machin, Head of Public Policy, Australia, Meta, Proof Committee Hansard, 17 November 2021, p. 8.
  • 6
    Mr Scott Lee, Assistant Commissioner, Counter Terrorism and Special Investigations,
    Australian Federal Police, Proof Committee Hansard, 17 November 2021, p. 27.
  • 7
    Mr Scott Lee, Assistant Commissioner, Counter Terrorism and Special Investigations,
    Australian Federal Police, Proof Committee Hansard, 17 November 2021, p. 27; AFP, Submission 8, p. 8.
  • 8
    ‘Reasonable time’ is not defined. The submission from the Attorney-General’s Department explained that a number of factors could indicate whether a person had referred within a reasonable time after becoming aware of the Abhorrent Violent Material (AVM). For example, the type and volume of the material, complaints received about the material, and the capabilities of and resourcing available to the provider.
  • 9
    The Hon Christian Porter MP, Attorney General, House of Representatives Hansard, 4 April 2019,
    p. 1849.
  • 10
    The eSafety Commissioner is Australia’s national independent regulator for online safety.
  • 11
    Attorney-General’s Department, Sharing of Abhorrent Violent Material Act Fact Sheet,
    September 2021, pp. 1-2.
  • 12
    The Hon Christian Porter MP, Attorney General, House of Representatives Hansard, 4 April 2019,
    p. 1850.
  • 13
    AFP, Submission 8, p. 3.
  • 14
    AFP, Submission 8, p. 3.
  • 15
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 30.
  • 16
    Ms Ciara Spencer, First Assistant Secretary, Aviation and Maritime Security, and Executive Director, Transport Security, Department of Home Affairs, Proof Committee Hansard,
    17 November 2021, pp. 22 and 28.
  • 17
    Mr Josh Machin, Head of Public Policy, Australia, Meta, Proof Committee Hansard, 17 November 2021, p. 5.
  • 18
    See for example: Microsoft, Submission 2, p. 5; Industry and civil society organisations,
    Submission 9, p. 4; Meta, Submission 14, p. 20.
  • 19
    See for example: Tech Against Terrorism, Submission 1, p. 1; Microsoft, Submission 2, p. 5; Meta, Submission 14, p. 22.
  • 20
    Meta, Submission 14, pp. 20-21.
  • 21
    Industry and civil society organisations, Submission 9, p. 4.
  • 22
    Microsoft, Submission 2, p. 5.
  • 23
    Microsoft, Submission 2, p. 5.
  • 24
    Mr Simon Milner, Vice-President of Public Policy, Asia-Pacific, Meta, Proof Committee Hansard,
    17 November 2021, p. 4.
  • 25
    Mr Simon Milner, Vice-President of Public Policy, Asia-Pacific, Meta, Proof Committee Hansard,
    17 November 2021, p. 4.
  • 26
    Meta, Submission 14, p. 22.
  • 27
    Industry and civil society organisations, Submission 9, pp. 5, 16 and 17.
  • 28
    Meta, Submission 14, p. 21.
  • 29
    Mr Andrew Walter, First Assistant Secretary, Integrity and Security Division, Attorney-General’s Department, Proof Committee Hansard, 17 November 2021, p. 24.
  • 30
    Mr Andrew Walter, First Assistant Secretary, Integrity and Security Division, Attorney-General’s Department, Proof Committee Hansard, 17 November 2021, p. 25.
  • 31
    See for example: Tech Against Terrorism, Submission 1, p. 1; Microsoft, Submission 2, p. 5; Meta, Submission 14, p. 22.
  • 32
    Microsoft, Submission 2, p. 4.
  • 33
    Google, Submission 13, p. 4.
  • 34
    Mr Simon Milner, Vice-President of Public Policy, Asia-Pacific, Meta, Proof Committee Hansard,
    17 November 2021, p. 7.
  • 35
    Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, Proof Committee Hansard, 17 November 2021, p. 15.
  • 36
    Industry and civil society organisations, Submission 9, pp. 4 and 20.
  • 37
    Ms Ciara Spencer, First Assistant Secretary, Aviation and Maritime Security, and Executive Director, Transport Security, Department of Home Affairs, Proof Committee Hansard,
    17 November 2021, p. 26.
  • 38
    Attorney-General’s Department, Sharing of Abhorrent Violent Material Act Fact Sheet,
    September 2021, p. 7.
  • 39
    Mr Andrew Walter, First Assistant Secretary, Integrity and Security Division, Attorney-General’s Department, Proof Committee Hansard, 17 November 2021, p. 24.
  • 40
    AFP, Submission 8, p. 2.
  • 41
    Mr Andrew Walter, First Assistant Secretary, Integrity and Security Division, Attorney-General’s Department, Proof Committee Hansard, 17 November 2021, p. 23.
  • 42
    See for example: Tech Against Terrorism, Submission 1, p. 2; Microsoft, Submission 2, pp. 6-7.
  • 43
    Microsoft, Submission 2, pp. 6-7.
  • 44
    Microsoft, Submission 2, pp. 6-7.
  • 45
    Snapchat, Submission 10, p. 2.
  • 46
    Mr Simon Milner, Vice-President of Public Policy, Asia-Pacific, Meta, Proof Committee Hansard,
    17 November 2021, p. 3.
  • 47
    Mr Andrew Walter, First Assistant Secretary, Integrity and Security Division, Attorney-General’s Department, Proof Committee Hansard, 17 November 2021, p. 24.
  • 48
    Industry and civil society organisations, Submission 9, p. 10.
  • 49
    Industry and civil society organisations, Submission 9, pp. 10-11.
  • 50
    Industry and civil society organisations, Submission 9, pp. 11-12.
  • 51
    Attorney-General’s Department, Submission 3, p. 7.
  • 52
    See for example: Tech Against Terrorism, Submission 1, p. 5; Microsoft, Submission 2, pp. 5-6; Industry and civil society organisations, Submission 9, p. 4; Google, Submission 13, p. 3.
  • 53
    Tech Against Terrorism, Submission 1, p. 5.
  • 54
    Microsoft, Submission 2, pp. 5-6.
  • 55
    Google, Submission 13, p. 3.
  • 56
    See for example: Tech Against Terrorism, Submission 1, p. 5; Microsoft, Submission 2, pp. 5-6; Industry and civil society organisations, Submission 9, p. 4; Google, Submission 13, p. 3.
  • 57
    Industry and civil society organisations, Submission 9, p. 4.
  • 58
    Industry and civil society organisations, Submission 9, p. 14.
  • 59
    Meta, Submission 14, p. 24.
  • 60
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 31.
  • 61
    eSafety Commissioner, answer to question on notice, 1 December 2021, (received 6 December 2021).
  • 62
    eSafety Commissioner, Submission 5, p. 10.
  • 63
    The Online Content Scheme (Schedules 5 and 7 of the Broadcasting Services Act 1992) enables the eSafety Commissioner to investigate and take action on complaints about prohibited online content such as child sexual abuse material (CSAM). The scheme is underpinned by the National Classification Scheme, which also applies to films, computer games and publications. Under the current scheme, the eSafety Commissioner has the power to issue a takedown notice in relation to prohibited material hosted in Australia. Approximately 65 per cent of all complaints in 2020-21 concerned CSAM.
  • 64
    eSafety Commissioner, Submission 5, p. 4.
  • 65
    eSafety Commissioner, Submission 5, p. 4.
  • 66
    eSafety Commissioner, Submission 5, p. 4.
  • 67
    eSafety Commissioner, Submission 5, p. 9.
  • 68
    eSafety Commissioner, Submission 5, p. 5.
  • 69
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 33.
  • 70
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 34.
  • 71
    Class 1 material is material that has been given, or would be given, ‘Refused Classification’ status by the Classification Review Board.
  • 72
    eSafety and the Australian Federal Police are engaging on a set of procedures for this.
  • 73
    eSafety Commissioner, Submission 5, p. 10.
  • 74
    Department of Home Affairs, Submission 4, p. 4.
  • 75
    eSafety Commissioner, Submission 5, p. 10.
  • 76
    Explanatory Memorandum, pp. 24 and 27.
  • 77
    Industry and civil society organisations, Submission 9, pp. 12-13.
  • 78
    Industry and civil society organisations, Submission 9, pp. 4, 14 and 15.
  • 79
    eSafety Commissioner, Submission 5, pp. 4-5.
  • 80
    Department of Home Affairs, Submission 4, p. 3.
  • 81
    eSafety Commissioner, Submission 5, p. 9.
  • 82
    eSafety Commissioner, Submission 5, pp. 4-5.
  • 83
    Mr Alex Ash, Manager, Online Content, Office of the eSafety Commissioner,
    Proof Committee Hansard, 17 November 2021, p. 29.
  • 84
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 30.
  • 85
    Google, Submission 13, p. 2.
  • 86
    Ms Lucie Krahulcova, Executive Director, Digital Rights Watch, Proof Committee Hansard,
    17 November 2021, p. 14.
  • 87
    Professor Mark Nolan and Dr Dominique Dalla-Pozza, Submission 15, pp. 5-6.
  • 88
    See for example: Mr Josh Machin, Head of Public Policy, Australia, Meta, Proof Committee Hansard, 17 November 2021, p. 2; Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter, Proof Committee Hansard, 17 November 2021, p. 13.
  • 89
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, pp. 30 and 35.
  • 90
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 35.
  • 91
    Mr Adam Hadley, Director, Tech Against Terrorism, Proof Committee Hansard, 17 November 2021, pp. 38-40.
  • 92
    The Global Internet Forum to Counter Terrorism was established in 2017 to prevent terrorists from exploiting digital platforms. It was founded, and is funded and operated, by Meta, Microsoft, Google, Twitter, and YouTube. It brings together industry, government, civil society, and academia, and works on building shared technology, information-sharing, and funding research.
  • 93
    Department of Home Affairs, Submission 4, p. 4.
  • 94
    See for example: Tech Against Terrorism, Submission 1, p. 1; Microsoft, Submission 2, p. 6; Industry and civil society organisations, Submission 9, pp. 4, 15, 16; Meta, Submission 14, p. 7.
  • 95
    Microsoft, Submission 2, p. 6.
  • 96
    See for example: Industry and civil society organisations, Submission 9, pp. 4, 15 and 16.
  • 97
    See for example: Tech Against Terrorism, Submission 1, pp. 3-4; Microsoft, Submission 2, pp. 7-8; Industry and civil society organisations, Submission 9, p. 18.
  • 98
    Tech Against Terrorism, Submission 1, pp. 3-4.
  • 99
    Microsoft, Submission 2, p. 6.
  • 100
    Industry and civil society organisations, Submission 9, p. 18.
  • 101
    Microsoft, Submission 2, p. 7.
  • 102
    Microsoft, Submission 2, pp. 7-8.
  • 103
    Microsoft, Submission 2, pp. 6-7.
  • 104
    See for example: Tech Against Terrorism, Submission 1, pp. 3-4; Microsoft, Submission 2, pp. 7-8; Industry and civil society organisations, Submission 9, p. 18.
  • 105
    Attorney-General's Department, Submission 3, p. 10.
  • 106
    Mr Andrew Walter, First Assistant Secretary, Integrity and Security Division, Attorney-General's Department, Proof Committee Hansard, 17 November 2021, p. 24.
  • 107
    Tech Against Terrorism, Submission 1, p. 3.
  • 108
    Industry and civil society organisations, Submission 9, pp. 3, 4 and 6.
  • 109
    Industry and civil society organisations, Submission 9, pp. 3, 4 and 6.
  • 110
    Ms Kara Hinesley, Director of Public Policy, Australia and New Zealand, Twitter,
    Proof Committee Hansard, 17 November 2021, p. 14.
  • 111
    Attorney-General’s Department, Submission 3, p. 10.
  • 112
    Mr Andrew Walter, First Assistant Secretary, Integrity and Security Division,
    Attorney-General’s Department, Proof Committee Hansard, 17 November 2021, p. 23.
  • 113
    Attorney-General’s Department, Submission 3, p. 14.
  • 114
    Attorney-General’s Department, Sharing of Abhorrent Violent Material Act Fact Sheet,
    September 2021, p. 6.
  • 115
    Meta, Submission 14, p. 23.
  • 116
    See for example: Industry and civil society organisations, Submission 9, pp. 3, 4, 6, and 7; Meta, Submission 14, p. 23.
  • 117
    Ms Christiane Gillespie-Jones, Director, Program Management, Communications Alliance,
    Proof Committee Hansard, 17 November 2021, p. 15.
  • 118
    Industry and civil society organisations, Submission 9, pp. 4 and 8.
  • 119
    Microsoft, Submission 2, p. 8.
  • 120
    Microsoft, Submission 2, p. 8.
  • 121
    Attorney-General’s Department, Submission 3, p. 12.
  • 122
    The Enhancing Online Safety Act 2015 established a civil penalties scheme that allows the eSafety Commissioner to help with the removal of intimate images or videos from online platforms by issuing enforceable removal notices to social media services, websites, hosting providers and perpetrators. It also granted the Commissioner powers to investigate and act on cyberbullying material targeted at an Australian child.
  • 123
    Microsoft, Submission 2, p. 2.
  • 124
    Microsoft, Submission 2, p. 3.
  • 125
    Microsoft, Submission 2, pp. 5-6.
  • 126
    Online Safety Act 2021, Explanatory Memorandum, pp. 4, 5 and 116-120.
  • 127
    Online Safety Act 2021, Explanatory Memorandum, p. 72.
  • 128
    Online Safety Act 2021, Explanatory Memorandum, pp. 119 and 126.
  • 129
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 32.
  • 130
    See for example: Microsoft, Submission 2, p. 4.
  • 131
    Digital Industry Group Inc, Submission 11, pp. 2-3.
  • 132
    Department of Home Affairs, Submission 4, p. 4.
  • 133
    Microsoft, Submission 2, p. 4.
  • 134
    Mr Toby Dagg, Executive Manager, Investigations Branch, Office of the eSafety Commissioner, Proof Committee Hansard, 17 November 2021, p. 32.
  • 135
    Ms Ciara Spencer, First Assistant Secretary, Aviation and Maritime Security, and Executive Director, Transport Security, Department of Home Affairs, Proof Committee Hansard,
    17 October 2021, p. 21.
  • 136
    eSafety Commissioner, Submission 5, p. 6.
  • 137
    eSafety Commissioner, Submission 5, p. 6.
  • 138
    Attorney-General’s Department, Submission 3, p. 5.
  • 139
    Mr Josh Machin, Head of Public Policy, Australia, Meta, Proof Committee Hansard,
    17 November 2021, p. 2.
  • 140
    Mr Simon Milner, Vice-President of Public Policy, Asia-Pacific, Meta, Proof Committee Hansard,
    17 November 2021, pp. 9 and 10.
