7. Democracy and digital technology

7.1
Participation and engagement are key measures of the health of our democracy. Through the growth of the digital sphere, citizens can better connect with their representatives, strengthening Australia’s democratic system. This Chapter presents the Committee’s initial consideration of this issue. As noted in Chapter 1, the Committee’s inquiry into this subject will continue under a new reference, and evidence presented at hearings will be reported at a later date.
7.2
For this reason, the growth of digital technologies and the integrity of the online environment need to be protected from manipulation and malicious intent.
7.3
Mr Chris Zappone highlighted the broader threat:
Social media manipulation is not simply the technological issue of bots and trolls, but the social dynamics and choice in ideas promoted online which can be optimised for maximum destruction. The medium affects the choice in messaging. Conspiracy theories and half true narratives as well as visceral, hot-button imagery are more likely to go viral than other material. This has implications for any liberal democracy which relies on a reason-driven discussion.1
7.4
Although flexibility and freedom online are a vital extension of political discussion, they bring new challenges. The Digital Industry Group Inc (DIGI) is a group of organisations that provide digital services to Australians, including Facebook, Google, Oath and Twitter. In its submission to this inquiry, DIGI noted that:
One of the challenges of the modern media landscape is that anyone can be a publisher. While on one hand, the rise of micro publishers on the web, where anyone with an internet connection can publish information on events, politics, and ideas, has empowered more voices and offered more views in turn making the media ecosystem more pluralistic and democratic; on the other, everyday people can have huge audiences, but lack any kind of professional training in media and media ethics, which is particularly important in the reporting of public interest news.2
7.5
The dangers of cyber manipulation and the spread of disinformation became particularly apparent in 2016, with alleged interference in both the UK Brexit campaign and US Presidential election.
7.6
Accordingly, as part of the inquiry into and report on all aspects of the conduct of the 2016 Federal Election, the Committee chose to conduct a review of cyber manipulation of elections, specifically considering:
the extent to which social media bots may have targeted Australian voters and political discourse in the past;
the likely sources of social media manipulation within Australia and internationally;
ways to address the spread of deliberately false news online during elections; and
measures to improve the media literacy of Australian voters.
7.7
From the outset, it should be noted that while ‘fake news’ became a highly contested term during 2016, there is no agreed definition. Many international inquiries, such as the European Union’s independent High Level Group on fake news and online disinformation, have chosen to avoid the term, particularly as it has come to be used by political opponents and those seeking to discredit uncomplimentary press.3
7.8
The EU’s High Level Group eschewed the term ‘fake news’ in favour of ‘disinformation’ because:
The term [fake news] is inadequate to capture the complex problem of disinformation, which involves content that is not actually or completely “fake” but fabricated information blended with facts, and practices that go well beyond anything resembling “news” to include some forms of automated accounts used for astroturfing,4 networks of fake followers, fabricated or manipulated videos, targeted advertising, organized trolling, visual memes, and much more.5
7.9
For similar reasons, the Committee uses the term ‘disinformation’ in this Chapter.
7.10
This Chapter discusses the growth of social media and ‘fake news’ and the resulting dangers, as well as the need for greater definitional clarity when attempting to regulate fake news and online platforms. The level of threat social media manipulation presents in Australia and abroad is also analysed. Finally, the Chapter considers the need for higher levels of media literacy for all Australians.
7.11
The Committee notes that there has been no evidence of any cyber manipulation in the 2016 Australian federal election. However, given international events, as discussed in this Chapter, it is essential that this issue be actively considered as a part of Australian elections. The Committee therefore makes a number of recommendations to prevent cyber manipulation occurring in the future, and to form a basis for future inquiries.

The growth of disinformation

7.12
Analysis by the Australian Strategic Policy Institute (ASPI) showed that the dissemination of disinformation is one of the most powerful forces undermining the integrity of the electoral process:
The power of cyberspace to influence the democratic process lies in much more than just the nuts and bolts of the election infrastructure. Every vote cast on election day is the product of the information ecosystem of the preceding months. Shaping the nature and volume of information available to the public in the lead-up to an election is a sophisticated way of influencing voter decision-making and election outcomes.
In this method of tampering with elections, a culprit’s digital fingerprints can never be directly linked to the election per se. Election decision-making can be influenced through the dissemination of ‘fake news’ or ‘strategic disclosures’, and the impact of this false or previously unavailable information can be increased through the creation of an ‘artificial consensus’ online.6
7.13
In addition to this concern, the Committee was cognisant of a further problem, highlighted by the independent democracy watchdog, Freedom House. Its analysis found that the number of countries in which physical violence occurred in response to online speech had risen by 50 per cent between 2016 and 2017.7
7.14
For example, social media debate and misinformation have resulted in violent attacks and deaths in India, Myanmar, and Sri Lanka.
7.15
In India, false allegations concerning child kidnappers went viral on WhatsApp. In response, angry mobs across the country attacked and killed people they accused of kidnapping. The Indian Government’s response has been to monitor and restrict social media access.8
7.16
According to Freedom House, Australia ranks first in the Asia-Pacific for internet freedom (see Figure 7.1).9 It is therefore unlikely that the Australian Government will seek to enforce similar internet restrictions.
7.17
Despite this, the Committee is aware of the potential misuse of messaging services within similarly free countries, with evidence from The Guardian’s investigation into the 2011 London riots showing that use of the BlackBerry Messenger service was highly influential in escalating the crisis.10
7.18
Additionally, given the evidence found by the News and Media Research Centre (NMRC) that the use of messaging apps for news content is increasing,11 the Committee is aware of the need to monitor the misuse of such technology.

Figure 7.1:  Internet freedom across the Asia-Pacific (2017)

Source: Freedom on the Net, November 2017, p. 30.

Understanding disinformation

7.19
Dr Claire Wardle, an expert in information disorders, identified 7 ‘distinct types of problematic content that sit within our information ecosystem’.12 See Figure 7.2.

Figure 7.2:  7 types of disinformation

Source: C. Wardle, ‘Fake news. It’s complicated’.
7.20
Dr Wardle’s analysis also presented evidence of the motivations behind the dissemination of disinformation. Satire, for example, while technically a form of disinformation, has a very different objective from propaganda or manipulated political or commercial content.13 See Figure 7.3.

Figure 7.3:  Motives behind disinformation

Source: C. Wardle, ‘Fake news. It’s complicated’.

Quantifying disinformation

7.21
The News and Media Research Centre (NMRC) provided extensive statistical analysis of Australia’s vulnerability to fake news.14
7.22
The submission considered 6 forms of misinformation:
i. poor journalism;
ii. stories where facts are spun to push an agenda;
iii. fictional stories created for political or commercial reasons;
iv. misleading headlines disguising advertisements;
v. satire; and
vi. the use of ‘fake news’ to discredit opposing media stories.15
7.23
NMRC found that the majority of its survey respondents had been exposed to one or more forms of disinformation. However, it is worth noting that only 25 per cent of those surveyed had experienced political manipulation. This is disproportionate to the level of concern about political manipulation, which stood at 67 per cent.16
7.24
NMRC recommended that:
… the government fund continued monitoring of Australians’ use of social media and messaging apps to inform strategies for combatting the spread of fake news. It is particularly important to understand how Australian news consumers come across information while engaging in online activities–incidental news exposure–and what impact this may have on news consumers’ engagement with politics and the society. Previously, much focus was on monitoring the content of news. However, in an age of information abundance it is critical to understand what, how and how much information consumers are accessing via various platforms.17

Who should combat disinformation?

7.25
In its submission to the ACCC Digital Platforms Inquiry, the Australian Press Council noted a dangerous double-standard:
If Press Council members are being held to high standards of practice as to the accuracy of materials they produce and distribute, it can be seen to be highly unfair that powerful players such as Facebook and Google are not also required to make some effort to reduce the adverse effects on public discourse and democracy of fake news being systematically and widely produced.18
7.26
The level of accountability expected of these companies, and the responsibilities that come with it, is a debated issue. With regard to accountability and the possible regulation of disinformation, one of the most vital questions appeared to be whether social media companies should be defined as platforms or publishers.
7.27
Facebook itself has an inconsistent approach to whether it is a platform or a publisher. While the majority of Facebook CEO Mark Zuckerberg’s statements label Facebook a platform and technology company, in a recent court case a lawyer for Facebook discussed the protections publishers have with regard to content decisions.19 This suggests that Facebook considers it has the protections of a publisher.
7.28
This is in contrast to evidence presented at a US Senate Committee on 10 April 2018, where Mr Zuckerberg was asked by Alaskan Senator Sullivan:
You know, you — you mention you're a tech company, a platform, but there's some who are saying that you're the world's biggest publisher. I think about 140 million Americans get their news from Facebook, and when you talk to — when you mentioned that Senator Cornyn — Cornyn, he — you said you are responsible for your content.
So which are you, are you a tech company or are you the world's largest publisher, because I think that goes to a really important question on what form of regulation or government action, if any, we would take.20
7.29
Mr Zuckerberg responded:
Well, I agree that we're responsible for the content, but we don't produce the content. I — I think that when people ask us if we're a media company or a publisher, my understanding of what — the heart of what they're really getting at, is do we feel responsibility for the content on our platform.
The answer to that, I think, is clearly “yes.” And — but I don't think that that's incompatible with fundamentally, at our core, being a technology company where the main thing that we do is have engineers and build products.21
7.30
Even without producing content, social media sites control access to information. Evidence from Microsoft researcher Mr Tarleton Gillespie suggested that social media sites frequently define and determine what we see and how we see it. This is done in two ways: by preventing or removing content that breaches their own terms and conditions, and by using algorithms to determine what an individual sees and how often they see it:
These sites also emphasize that they are merely hosting all this content, while playing down the ways in which they intervene – not only how they moderate, delete, and suspend, but how they sort content in particular ways, algorithmically highlight some posts over others, and grant their financial partners privileged real estate on the site.22
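To illustrate the second of these mechanisms, the sketch below shows, in simplified Python, how an engagement-driven ranking algorithm might decide which posts a user sees and in what order. The field names, weights and scoring rules are assumptions made for illustration only; they do not represent any platform’s actual method.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    author_followed: bool    # does the viewer follow the author?
    is_paid_placement: bool  # a financial partner's 'privileged real estate'
    breaches_terms: bool     # flagged under the platform's terms and conditions

def build_feed(posts: list[Post]) -> list[Post]:
    """Return the posts a user actually sees, most prominent first.

    A minimal sketch: moderation removes some content outright, and an
    engagement-weighted score determines the visibility of the rest.
    """
    visible = [p for p in posts if not p.breaches_terms]

    def score(p: Post) -> float:
        s = p.likes + 3 * p.shares      # shareable ('viral') content is boosted
        if p.author_followed:
            s += 10                     # content from followed accounts is favoured
        if p.is_paid_placement:
            s += 50                     # paid partners receive privileged placement
        return s

    return sorted(visible, key=score, reverse=True)
```

Even in this toy form, the two interventions Mr Gillespie describes are visible: content is silently removed, and the remainder is sorted by criteria the user never sees.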
7.31
One problem raised by Mr Gillespie is that, even when platforms appear to act ‘rationally’ and in accordance with their terms of service, the process of moderating or removing content is so opaque that the validity of any decision can easily be challenged.23
7.32
Mr Gillespie highlighted the need to bring greater transparency to the moderation process:
As more and more of our public discourse, cultural production, and social interactions move online, and this handful of massive, privately owned digital intermediaries continues to grow in economic and cultural power, it is crucial that we examine the choices moderators make.24

The implied freedom of political communication

7.33
In discussing possible restrictions, the Committee is cognisant of the need to protect the implied freedom of political communication, which has been upheld by the High Court since Australian Capital Television Pty Ltd v Commonwealth.25
7.34
With regard to the limitations of this freedom, Justice Brennan stated:
It is both simplistic and erroneous to regard any limitation on political advertising as offensive to the Constitution. If that were not so, there could be no blackout on advertising on polling day; indeed, even advertising in the polling booth would have to be allowed unless the demands of peace, order and decorum in the polling booth qualify the limitation. Though freedom of political communication is essential to the maintenance of a representative democracy, it is not so transcendent a value as to override all interests which the law would otherwise protect.26
7.35
The consequence of this clarification is that Parliament can, in some instances, impose restrictions, balancing the need for such measures against the importance of political discussion.
7.36
In Lange v Australian Broadcasting Corporation,27 a two-step test was introduced:
First, does the law effectively burden freedom of communication about government or political matters either in its terms, operation or effect? Second, if the law effectively burdens that freedom, is the law reasonably appropriate and adapted to serve a legitimate end the fulfilment of which is compatible with the maintenance of the constitutionally prescribed system of representative and responsible government and the procedure prescribed by s 128 for submitting a proposed amendment of the Constitution to the informed decision of the people ... If the first question is answered ‘yes’ and the second is answered ‘no’, the law is invalid.28
7.37
A further restriction, however, was stated by Justice McHugh in Australian Capital Television:
[H]aving regard to the conceptions of representative government, Parliament has no right to prefer one form of lawful electoral communication over another. It is for the electors and the candidates to choose which forms of otherwise lawful communication they prefer to use to disseminate political information, ideas and argument. Their choices are a matter of private, not public, interest. Their choices are outside the zone of governmental control.29
7.38
This must be remembered when attempting to address online political communication and contrasting it with more ‘traditional’ advertising. However, it is not an insurmountable obstacle to redressing dark advertising or disinformation. In concluding his paper, Professor George Williams AO noted that:
Legislation carefully and proportionately targeted to meet some other purpose, such as the purpose of ensuring free and fair elections, will survive the scrutiny of the constitutional freedom. Free speech and the regulation of electoral canvassing need not be in conflict.30

Social media manipulation

7.39
The Committee considered both the domestic and international threat of social media manipulation, including the use of automated bots.
7.40
Bots are programs that are capable of generating their own followers and can be either fully automated or human-driven (referred to as ‘cyborgs’). The UK House of Commons’ Digital, Culture, Media and Sport Committee, in its inquiry into ‘Disinformation and “fake news”’, defined bots as:
… algorithmically-driven computer programmes designed to carry out specific tasks online, such as analysing and scraping data. Some are created for political purposes, such as automatically posting content, increasing follower numbers, supporting political campaigns, or spreading misinformation and disinformation.31
7.41
Bots are one of the most effective methods of spreading disinformation and, while several inquiries are being conducted around the world, it appears that laws need to evolve to bring transparency and regulation to their use.
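As a purely illustrative aid, the following minimal Python sketch shows the kind of crude signals researchers use as a first pass when distinguishing automated accounts from human users. The thresholds are assumptions chosen for illustration and do not reflect any established detection standard.

```python
def looks_automated(posts_per_day: float,
                    duplicate_content_ratio: float,
                    followers: int,
                    following: int) -> bool:
    """First-pass heuristic flags for possible bot accounts.

    Real research tools combine many more signals, typically in a
    trained classifier; these thresholds are illustrative assumptions.
    """
    return (
        posts_per_day > 100                        # posting volume beyond human patterns
        or duplicate_content_ratio > 0.8           # mostly repeated, scripted content
        or (following > 5000 and followers < 100)  # aggressive follower-farming
    )

# Example: an account posting 400 near-identical tweets a day is flagged
print(looks_automated(posts_per_day=400, duplicate_content_ratio=0.95,
                      followers=12, following=6000))  # True
```

Heuristics of this kind are easy to evade, which is one reason ‘cyborg’ accounts that blend automation with human activity are harder to identify than fully automated ones.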

The international landscape

7.42
There have been significant security breaches and allegations of cyber interference in the electoral process in countries such as Britain, the US, France and the Philippines.
7.43
Many of the problems facing these countries are either already present in, or are likely to reach, Australia.
7.44
In the US, Facebook CEO Mark Zuckerberg estimated that the Russian disinformation agency–the Internet Research Agency (IRA)–was linked to 470 accounts, which generated around 80 000 posts over two years that were viewed by around 126 million people. The IRA also spent an estimated US$100 000 on more than 3 000 advertisements on both Facebook and Instagram. These were seen by approximately 11 million Americans.32
7.45
When questioned about motive, Google Senior Vice President and General Counsel Kent Walker testified that ‘the large majority of the material we saw was in the socially divisive side, rather than direct electoral advocacy’.33 Such material does not appear to be any less damaging.
7.46
Reforms have been put in place since the 2016 US election, with Twitter and Facebook taking steps to address disinformation through an increase in fact-checking and greater transparency with regards to advertising.34
7.47
These reforms come as Facebook is drawn into investigations over Russian interference. In a hearing before the United States Senate Select Committee on Intelligence on 5 September 2018, Facebook’s Chief Operating Officer, Sheryl Sandberg, apologised for being ‘too slow to spot this [Russian interference] and too slow to act’.35
7.48
Ms Sandberg continued to outline the improvements and measures made by Facebook:
We’re investing heavily in people and technology to keep our community safe and keep our service secure. This includes using artificial intelligence to help find bad content and locate bad actors. We’re shutting down fake accounts and reducing the spread of false news. We’ve put in place new ad transparency policies, ad content restrictions, and documentation requirements for political ad buyers. We’re getting better at anticipating risks and taking a broader view of our responsibilities. And we’re working closely with law enforcement and our industry peers to share information and make progress together.
This work is starting to pay off. We’re getting better at finding and combating our adversaries, from financially motivated troll farms to sophisticated military intelligence operations. We’ve removed hundreds of Pages and accounts involved in coordinated inauthentic behavior— meaning they misled others about who they were and what they were doing.36
7.49
In an earlier hearing, Virginian Senator Warner took aim at Facebook, Twitter, Google and YouTube to highlight the scale of the problem, calling for action not only from the platforms but also from the government, the President, and the public.37 There appears to be growing recognition that the problem of foreign interference and disinformation can only be tackled by a coordinated effort.
7.50
In Britain, there are ongoing investigations and inquiries into the roles that data breaches, fake news, twitterbots, inappropriate funding and foreign interference played in the Brexit result.
7.51
As with the US 2016 Presidential election, questions have been raised over alleged Russian interference. Mr Tom Brake MP (UK) said that for Russian President Vladimir Putin, ‘the use of social media to interfere in foreign states is a vital, weaponised tool’.38
7.52
Dr Dan Mercea and Dr Marco T Bastos found evidence that 13 494 twitterbots were active during the Brexit campaign, with rates of support higher for Vote Leave than the Remain campaign.39
7.53
The 2017 French election also witnessed attempts at cyber manipulation–in this case through a mix of disinformation and what ASPI’s Ms Hawkins terms ‘strategic disclosures’.40
7.54
As explained by Ms Hawkins, strategic disclosures are, as the leaking of Presidential Candidate Hillary Clinton’s emails showed, potentially very damaging to a campaign:
Information doesn’t have to be false to influence voters’ decision-making. Acquiring and distributing true but previously unavailable facts can change the way people make choices during an election.41
7.55
In the closing days of the 2017 French presidential election campaign, emails connected to then-candidate Macron’s campaign were leaked and spread quickly via the internet. Despite a 44-hour media blackout prior to polling, the viral nature of online, non-mainstream media meant the emails spread rapidly across popular channels such as 4chan.42
7.56
These international instances and the subsequent investigations validate the Committee’s concerns over the potential misuse of online platforms, disinformation and bots.

The threat to Australia

7.57
According to the News and Media Research Centre (NMRC), the threat to Australia from social media manipulation, the spread of fake news and the use of bots currently appears to be more domestic than foreign.43
7.58
While the security concerns related to the malicious use of bots by foreign actors cannot be ignored, new legislation such as the National Security Legislation Amendment (Espionage and Foreign Interference) Act 2018 and the Foreign Influence Transparency Scheme Act 2018 may cover foreign interference in Australia’s democratic and government processes.44
7.59
The legislative gap instead appears to be in domestic and commercial communications. Further consideration of spam laws, privacy laws, advertising laws, and regulatory guidelines is warranted.
7.60
Evidence to this inquiry demonstrated the ways in which companies were themselves addressing these gaps. For example, Twitter outlined its commitment to safeguarding the integrity of its platform:
It’s important to note our work to fight both maliciously automated accounts and disinformation goes beyond any one specific election, event, or time period. We’ve spent years working to identify and remove spammy or malicious accounts and applications on Twitter. We continue to improve our internal systems to detect and prevent new forms of spam and malicious automation in real time while also expanding our efforts to educate the public on how to identify and use quality content on Twitter.45
7.61
For Australia, however, evidence suggested only minimal disruption. NMRC presented an analysis of the #auspol hashtag to quantify the prevalence of Russian trolls and news sources within a prominent forum for online political debate. The focus of the investigation was the presence of the state-sponsored media outlets Russia Today and Sputnik in the #auspol debate.46
7.62
#auspol is over 8 years old and ‘the endurance of the hashtag from election to election and between elections, combined with the evidence that platforms like Twitter are an increasingly important source of news for Australians lends credence to concerns about security of Twitter as an information space’.47
7.63
The NMRC found that Russian accounts were more likely to engage in an international rather than national debate.48
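A much-simplified sketch of the kind of measurement underlying the NMRC analysis is set out below: sampling tweets from a hashtag and counting how many link to state-sponsored outlets. The function and variable names are hypothetical, and the NMRC’s actual methodology was considerably more sophisticated.

```python
from urllib.parse import urlparse

# Domains of the state-sponsored outlets examined in the NMRC analysis
STATE_MEDIA = {"rt.com", "sputniknews.com"}

def state_media_share(sampled_tweets: list[list[str]]) -> float:
    """Fraction of sampled hashtag tweets linking to state-sponsored media.

    `sampled_tweets` holds one list of linked URLs per tweet; a minimal
    sketch, not the NMRC's actual method.
    """
    if not sampled_tweets:
        return 0.0
    hits = 0
    for urls in sampled_tweets:
        domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
        if domains & STATE_MEDIA:
            hits += 1
    return hits / len(sampled_tweets)

# Example: three sampled #auspol tweets, one of which links to Russia Today
sample = [["https://www.rt.com/news/123/"], [], ["https://example.com/story"]]
print(f"{state_media_share(sample):.0%}")  # 33%
```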
7.64
The absence of a concerted bot effort in Australia was also found by an Oxford study of 28 countries, which found only minimal evidence of ‘cyber troop capacity’ in Australia.49 See Figure 7.4.

Figure 7.4:  Organisational density of cyber troops, 2017

Source: S Bradshaw and P N Howard, ‘Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation’.
7.65
Similarly, in their submissions, both Twitter and Facebook reported finding minimal evidence of interference in Australia’s electoral and voting processes.50 Facebook noted that, during the 2017 Postal Survey, it worked in conjunction with the Australian Electoral Commission (AEC) to remove content that violated its policies and Australian law, as well as running advertisements encouraging voters to enrol.51
7.66
While the success of this enrolment campaign is difficult to quantify, the Postal Survey did see an enormous increase in new electoral enrolments.52 Given the difficulty of enrolling voters in the 18-24 age group, social media campaigns present an avenue to foster engagement.
7.67
Some submitters expressed concern at the length of time Facebook took to respond to AEC inquiries, specifically in the context of the Marriage Law Survey (Additional Safeguards) Act 2017. Digital Rights Watch recommended that the Committee consider potential options to empower the AEC in the social media space, including requiring Facebook to respond to AEC inquiries within a set period of time and increasing resources for the AEC to handle digital and social media-related issues. It also noted:
Dark ads are also playing an increasingly prominent role in the Australian political context. In the lead up to the 2017 postal vote on same-sex marriage, the federal parliament passed a law intended to safeguard against vilification or intimidation, including requiring paid advertisements to be authorised. However during the months and weeks before the vote, sponsored Facebook posts (i.e. paid advertising) which were openly homophobic and were clearly targeting Australian Facebook users with the intention of influencing their vote continued to appear without authorisation. At least one such ad took more than a month to be blocked, despite the direct intervention of the Australian Election Commission and Special Minister of State with Facebook.
The fact that these unauthorised ads were able to target Australian voters with more or less complete impunity despite the safeguards law, and the difficulty in identifying who was behind the campaigns and who was exposed to them, should be a matter of serious concern for the parliament.53

Micro-targeting and ‘dark adverts’

7.68
Micro-targeting came to public attention when the political consulting firm Cambridge Analytica was linked to a major privacy breach involving the harvesting of an estimated 87 million Facebook users’ personal data, which could be used to better target political advertisements. Its potential misuse is the subject of inquiries in relation to both the 2016 US Presidential election and the 2016 Brexit campaign.
7.69
On 17 March 2018, The New York Times and The Observer published articles alleging that Cambridge Analytica had harvested millions of Facebook profiles, initially through thisisyourdigitallife, an app created by Aleksandr Kogan and his company Global Science Research. The app gave access not only to users’ profiles, but also to the profiles of their friends.
7.70
In the UK, Facebook has been fined £500 000 by the Information Commissioner’s Office for two breaches of the UK Data Protection Act 1998 related to the Cambridge Analytica scandal. Information Commissioner Elizabeth Denham highlighted the broader issue:
It’s part of our ongoing investigation into the use of data analytics for political purposes which was launched to consider how political parties and campaigns, data analytics companies and social media platforms in the UK are using and analysing people’s personal information to micro-target voters.54
7.71
Similar questions have been raised in Australia. The Office of the Australian Information Commissioner opened an inquiry on 5 April 2018 into whether Facebook had breached the Privacy Act 1988.55 There is currently no report date for this inquiry.
7.72
A further question regarding privacy was raised by Digital Rights Watch, relating to the exemption of political parties and campaigners from the Privacy Act and the potential misuse of data:
The exemption of political parties, and their contractors, sub-contractors and volunteers, from the Privacy Act … poses not only a data security risk, but also a major reputational risk for political parties themselves.56
7.73
Misuse of data appeared to be a serious threat not only in terms of privacy laws, but also through the potential dangers of ‘dark advertising’, as highlighted by groups such as Who Targets Me–a UK initiative aimed at bringing transparency to political advertising on social media (primarily Facebook).57
7.74
Dark advertising allows groups and companies to target specific individuals or groups (micro-targeting) with the goal of shifting their opinions. It differs from normal advertising in that each advertisement is seen only by its intended recipients.
7.75
Who Targets Me highlighted the problem:
Imagine during an election campaign, a particular party advertises the message “We will ban all migration” on a billboard in one part of the country, and that same party puts the message “We will allow unrestricted migration” on a billboard in another part of the country in an attempt to attract voters from both sides of a debate. Now imagine someone from one end of the country is visiting friends in the other, they notice the mismatch and naturally the party is ridiculed by the public.
Now imagine instead of billboards, those were online “dark” adverts. How would you go about determining if your neighbours or those of different demographics were seeing different adverts to you? How can you be sure you truly understand what a party stands for when the vast majority of campaign material is hidden from view?58
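Expressed in programmatic terms, the scenario described by Who Targets Me might look like the hypothetical Python sketch below, in which contradictory messages are each shown only to a narrowly matched audience. The advertisement text, user attributes and matching rules are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    postcode: str
    interests: set[str] = field(default_factory=set)

# Two contradictory 'dark' adverts, each with its own targeting rule
CAMPAIGN_ADS = [
    ("We will ban all migration",
     lambda u: "border security" in u.interests),
    ("We will allow unrestricted migration",
     lambda u: "refugee support" in u.interests),
]

def adverts_shown_to(user: User) -> list[str]:
    """Only messages matching the user's profile are ever displayed;
    no individual user, and no outside observer, sees the full set."""
    return [text for text, matches in CAMPAIGN_ADS if matches(user)]

voter_a = User("7000", {"border security"})
voter_b = User("3000", {"refugee support"})
print(adverts_shown_to(voter_a))  # ['We will ban all migration']
print(adverts_shown_to(voter_b))  # ['We will allow unrestricted migration']
```

Because no single observer is served both messages, the mismatch that would be obvious on public billboards never surfaces.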
7.76
In Australia, The Guardian set up a similar initiative to publicise dark adverts shared during the 2018 Tasmanian, South Australian and Victorian elections, using a browser plugin developed by ProPublica that is capable of automatically collecting ads from Facebook. Users also have the option of sending screenshots of ads directly to the initiative.59
7.77
Facebook and Twitter have launched initiatives aimed at bringing transparency to dark ads, promising to publish online databases of the political adverts appearing on their sites prior to the November 2018 US mid-term elections.
7.78
It is now possible to track dark ads on the Facebook platform and to access an Ad Library that collects political advertisements.
7.79
Twitter also outlined its own improvements in the area of political advertising and discussed its coordination with national laws:
Twitter’s current advertising policies permit political campaigning advertising, but we maintain additional country level restrictions. In addition to Twitter advertising policies, all political campaigning advertisers must comply with applicable laws regarding disclosure and content requirements, eligibility restrictions, and blackout dates for the countries where they advertise, including Australia.60
7.80
Similarly, when speaking at a US Senate hearing, Google’s Senior Vice President and General Counsel, Kent Walker, outlined the company’s response to evidence of political manipulation:
Going forward, we will continue to expand our use of cutting-edge technology to protect our users and will continue working with governments to ensure that our platforms aren't abused. We will also be making political advertising more transparent, easier for users to understand, and even more secure. In 2018, we'll release a transparency report showing data about who is buying election ads on our platform and how much money is being spent. We'll pair that transparency report with a database, available for public research, of election and ad content across our ads products.
We're also going to make it easier for users to understand who bought the election ads they see on our networks. Going forward, users will be able to easily find the name of any advertiser running an election ad on Search, YouTube, or the Google Display Network through an icon on the ad. We'll continue enhancing our existing safeguards to ensure that we permit only U.S. nationals to buy U.S. election ads.61

Media literacy

7.81
Evidence from the News and Media Research Centre (NMRC) suggested that media literacy should be the first line of defence in combatting disinformation.62
7.82
NMRC provided evidence of the relatively low level of media literacy in Australia.63 See Figures 7.5 and 7.6.
7.83
NMRC found that the type of news platform used influenced the level of news literacy. For instance, 76 per cent of social media news consumers recorded low or very low levels of literacy, compared with 58 per cent of consumers using online news sources.64
7.84
Importantly, the submission also showed that news literacy increases the ability to spot fake news.65

Figure 7.5:  Levels of news literacy in Australia (%)

Source: NMRC, Submission 222, p. 17.

Figure 7.6:  International comparison of media literacy

Source: NMRC, Submission 222, p. 17.
7.85
Based on these findings and the low level of news literacy in Australia, NMRC recommended that all school and university students be instructed in information literacy, including being given an understanding of algorithms and artificial intelligence (AI).66
7.86
Including algorithms and AI in such instruction should help users identify bot activity and fake accounts.
7.87
It was unclear from Facebook’s own submission whether the initiatives it supports provide information on algorithms or AI. However, Facebook has attempted to bring media literacy programs into schools through its Digital Literacy Library.67
7.88
Evidence suggested that, given that political participation increasingly occurs online, media literacy could be incorporated into a strengthened civics education curriculum. Twitter identified that its media literacy programs focus on, among other things, ‘active citizenship online’.68

Media literacy templates

7.89
Internationally, concerns have long existed over the level of critical analysis of media that should be taught in schools. However, the necessity for such education has only increased with the advent of social media and the rise of bloggers and self-publishers.
7.90
UNESCO has published Journalism, ‘Fake News’ & Disinformation: Handbook for Journalism Education and Training, which provides guidance on the development and aims of course modules, as well as possible activities and resources.69
7.91
Similarly, Stony Brook University runs an undergraduate news literacy course that has also been adapted for school settings, and provides links to resources and potential course modules.70
7.92
These international examples could form the basis for an Australian curriculum targeted at both school and university level.

Interim committee conclusions

7.93
In light of the terminological confusion surrounding ‘fake news’, the Committee recommends that future inquiries adopt the term ‘disinformation’. This recommendation accords with investigations in Europe and elsewhere, and should allow for more precise examination of the issue while preserving free speech and expression for legitimate forms of communication.

Recommendation 27

7.94
The Committee recommends to the Australian Government that all future inquiries into issues concerning ‘fake news’ instead use the term ‘disinformation’.
7.95
During its inquiry, the Committee found limited evidence of social media manipulation within Australia, including minimal use of bots; however, given recent international incidents, the Committee recommends this issue continue to be monitored, particularly during election periods.
7.96
The Committee supports Mr Chris Zappone’s suggestion that lines of communication be opened between the Department of Foreign Affairs and Trade and Silicon Valley companies. This would encourage companies to bring greater transparency to changes to their platforms, and to ensure that those changes are conducive to a liberal democratic environment.71

Recommendation 28

7.97
The Committee recommends that the Australian Government establish a permanent taskforce to prevent and combat cyber manipulation in Australia’s democratic process and to provide transparent, post-election findings regarding any pertinent incidents. The taskforce is to focus on systemic privacy breaches.
7.98
The evidence presented to the Committee, together with other international investigations, shows the power of social media companies to shape what we see and how we see it. Governments must address the degree of responsibility social media companies bear for their content, together with ways to bring greater transparency to their methods of regulation and moderation. The inconsistency and ambiguity of Facebook’s responses on its status as a platform or publisher suggest that self-regulation is problematic. Consequently, determining the nature of Facebook and similar social media ‘platforms’ is crucial.

Recommendation 29

7.99
The Committee recommends that the Australian Government bring greater clarity to the legal framework surrounding social media services and their designation as ‘platform’ or ‘publisher’.
7.100
Of particular interest in US hearings has been discussion of Federal Election Commission (FEC) law, which prohibits foreign nationals from spending funds in a US election. Given that this law applies to Facebook, it appears that more could have been done to authenticate the true source of political advertising funding and to ensure the money originated from a domestic source.
7.101
Also of concern to the Committee are the low levels of media literacy and the rapid spread of disinformation in Australia. The Committee recommends increased focus on improving media literacy levels among both students and the general public.

Recommendation 30

7.102
The Committee recommends that the Australian Government consider ways in which media literacy can be enhanced through education programs that teach students not only how to create media, but also how to critically analyse it.

Recommendation 31

7.103
The Committee recommends that the Australian Electoral Commission examine ways in which media literacy can be incorporated into a modern, relevant civics education program.
7.104
Some submitters recommended that the AEC should have a role in producing and circulating a comparative table of political positions as one strategy to tackle misinformation. According to Mr Chris Zappone, Foreign News Editor, The Age and Fellow, Futures Council, ANU National Security College:
With the structural demise of traditional media, there may be a role in the Australian Electoral Commission producing a simplified gazette of political parties’ views on issues, which can be heavily publicised through multiple channels in the days before the elections. Voters can be directed to these charts that contain verified information on the views of politicians, where they can compare and contrast the positions of parties. The government could limit the word count of the statements, which would increase the importance of the statements. In a time of unrestrained information flows, the brevity of the statement would increase its value to the public.72
7.105
The Committee will consider this, and other proposals, in its next report on this issue.
Senator the Hon James McGrath
Chair
29 November 2018

