Policymakers and Big Tech have made significant progress in reducing online hate speech on mainstream social media platforms, but they must now grapple with the challenge of mitigating harmful speech that circulates within alt-tech environments.
Dr Melissa-Ellen Dowling, Senior Research Fellow, Jeff Bleich Centre
Policy Perspectives #1, June 2023
In its formative years, the internet was heralded as a global space in which the exchange of information between users from diverse locales, demographics, and ideological persuasions could occur largely unfettered by geographic, political, or social constraints. This digital utopia offered the promise of a borderless, equal and inclusive public sphere, one which would act as a powerful promoter of freedom, democracy and liberalisation around the world. Yet those early ambitions have failed to materialise. Far from it: today, ‘hate speech’ – statements denigrating individuals or groups for perceived differences, typically on grounds of race, religion, ethnicity, gender, sexual orientation, or nationality – is rampant. Online spaces are routinely infiltrated, influenced or even dominated by the worst expressions of prejudice found in the world, with approximately 14% of Australia’s population having experienced online hate over the course of a year.
There is little mystery as to why this is so. The internet is, by design, a diffuse, global, distributed network that resists intervention and regulation. This resistance has proven a double-edged sword: it provides opportunities to enrich public debate on the one hand, while enabling harmful rhetoric to flourish on the other. As a consequence, two emergent challenges to state authority have become apparent.
First, online hate speech is difficult to counter because it traverses borders and jurisdictions with ease. In all its manifestations, hate speech is socially harmful and corrosive, whether it takes the form of discriminatory remarks within online communities or communications targeted at individuals. Hate speech has been found flourishing across Australia, New Zealand, and Europe, with those identifying as LGBTQI+ reporting frequent experiences of it online. In the US, race-based online hate speech surged following the murder of George Floyd in 2020. In 2014, gender-based hate speech flourished globally with the #Gamergate movement. Where such hate speech originates beyond the state’s jurisdiction, its enforcement agencies are almost powerless to respond.
Second, liberal democracies are increasingly concerned that online hate speech can subvert political norms too. Hate speech promotes discrimination and intolerance and, as a consequence, can incite offline exclusion, vilification, and even violence. Recent research demonstrates direct links between online hate speech and offline violence against minorities. Hate speech can also have significant psychological repercussions for targeted groups and individuals, can foment social unrest, and can degrade online communities that might otherwise produce benefits for democracy. Indeed, explicit denunciations of liberal ideals of pluralism, inclusion, and equality can foster a more polarised, unstable society.
What are the prospects for the modern liberal democratic state to respond? Policymakers across the liberal democratic world have recognised the need to mitigate online hate speech, yet, owing to the complexities of the challenge, they have struggled to develop effective regulation – particularly where non-mainstream, radical free-speech social platforms (‘alt-tech’) are concerned. Since many current approaches to mitigating online hate speech rely on platforms’ cooperation and support, existing initiatives will struggle to be effective for alt-tech platforms such as Gab, Truth, Parler, and ‘the Chans’, which pride themselves on being uncensored, pro-free-speech communities welcoming of all ideological dispositions, including those that are intensely illiberal and exclusionary.
Aside from contending with the quintessential ‘free speech’ dilemma that accompanies debates around online content moderation, there are other practical obstacles to regulating online hate speech that place policymakers in an unenviable situation. Uncertainties over who should regulate hate speech, what should be regulated, how it should be regulated, and where regulation should apply combine to produce an acute regulatory challenge. And these problems worsen in relation to hate speech that takes place outside the law’s reach. Jurisdictional challenges, user anonymity, and platform mobility (the ability of platforms to relocate online if denied server access) make alt-tech platforms resistant to punitive and censoring regulatory measures.
One intervention that has had some success is platform-initiated content moderation. This approach focuses on deplatforming – the removal of users and/or groups from a platform for contravening its terms of service. Deplatforming has proven partly effective in reducing hate speech on mainstream platforms such as Twitter and Reddit: it ‘cordons off’ the hate speech so that targeted groups or individuals may be less exposed to it. Yet these measures may inadvertently drive deplatformed users to non-mainstream parts of the internet, where hate speech against ethnic, gender, and religious minorities goes virtually unpoliced by platform moderators. Harmful speech is then able to proliferate in ‘echo-chambers’ that go unchecked by alternative viewpoints and can incubate more extreme ideas.
In an increasingly digital world, online hate speech is a challenge that is unlikely to abate anytime soon. Despite liberal cultural messaging promoting tolerance, inclusion, and diversity, ‘haters gonna hate’. This is by no means a reason for complacency; rather, it is a call for vigilance. While liberal democracy enshrines inclusion and pluralism, its pillar of free expression constrains the regulatory options available to governments in the fight against hate speech. Likewise, the freedoms inherent in internet design encumber government-imposed regulation of platforms and users, and alt-tech platforms lack either the resources or the inclination to censor hate speech beyond its most egregious manifestations. For now, policy options seem limited, yet there is hope that advances in technology might yet provide the tools needed to mitigate alt-tech hate speech in the future.
Dr Melissa-Ellen Dowling is a Senior Research Fellow with the Jeff Bleich Centre.
Her research focuses on the ways in which liberal democracy can be challenged, sustained, and enriched in the context of a digitising world. Within this remit, Dr Dowling’s research explores illiberal ideologies, foreign interference, elections, and disinformation.