Collaboration and Legislation: Confronting Online Violent Extremism from New Zealand
10th May 2021 Cameron Sumpter

It’s unclear exactly how many times the livestreamed video of the 2019 Christchurch attack was viewed or shared before the blocking began. In any case, senior teachers at four schools in the vicinity of the Al Noor mosque told me that many of their students had watched the footage, even as they hid in their classrooms during the protective lockdown.

The terrorist was Australian, and the federal government in Canberra responded with swift legislation that criminalised the sharing of ‘abhorrent violent material’ online. The law was passed with very little consultation with civil society associations or technology companies, which were reportedly ‘furious’ at the initiative.

Meanwhile, New Zealand’s government pursued a multi-stakeholder approach, attempting to collaborate with the major tech players. The subsequent Christchurch Call to Action highlighted respective commitments, while aiming to build consensus among divergent interests and address upstream drivers, both offline and online.

But now, two years after the attacks, Wellington is also seeking legislative tools to counter violent extremism online, possibly involving the future development of an Internet filter. Will the proposed new laws affect Christchurch Call partnerships or simply help to clarify responsibilities?

Introduced in May 2020, the Bill in question aims to amend the Films, Videos, and Publications Classification Act 1993 to incorporate online spaces more explicitly. Currently at the select committee stage, it would criminalise the livestreaming of ‘objectionable’ content; empower the Chief Censor to make quicker interim classifications; and authorise take-down notices, with pecuniary penalties for non-compliance.

For the first time in 25 years of proportional representation in New Zealand’s parliament, one party (left-of-centre Labour) holds a majority of seats, and so far it appears they will need every one of them for the Bill to pass. During the Bill’s first reading in February, the most clamorous criticism concerned an “electronic system to prevent access,” which was included as a future possibility with no details on the type of filter proposed.

Parties from across the spectrum have cited free speech concerns. The libertarian-leaning ACT Party spoke of people “cancelled out and beaten down for expressing their views.” The centre-right National Party said the Bill in its current form would have a “fundamentally chilling effect on free speech,” while the Māori Party expressed concern that it could jeopardise legitimate activism such as the Black Lives Matter movement.

While the Bill may fare better once the question of automated blocking is shelved for a later debate, public submissions to the select committee in April added nuance and expertise to the concerns laid out in parliament. Many noted that a filter in this context would not be technically feasible anyway.

Civil society submissions argued that any automatic or overzealous content moderation could erode the social trust needed to counter the growing problem of misinformation, particularly during the pandemic. Others pointed to the lack of global consensus on definitions of terrorism and violent extremism, adding that certain portrayals of violence are in the public interest, such as the George Floyd murder footage.

The major technology companies have generally welcomed the Bill’s intent but called for greater clarity. Microsoft’s submission recognised that “voluntary industry efforts, while necessary, are not always sufficient,” but argued that government regulation must be very clearly defined to avoid subjective decisions on what to block.

Facebook wrote that “gaps in New Zealand’s current online content regulation … were highlighted in the tragic events” in Christchurch, but that regulators should understand both the “capabilities and limitations of technology in content moderation.” The company had specific concerns over retention of unlawful content for “safety and security purposes,” and sought a right to appeal take-down notices.

YouTube/Google pointed out that its community guidelines are actually “tougher than NZ law” and argued that legally compelling companies to remove videos would probably not have made a practical difference in the case of the livestreamed Christchurch attack. The submission highlighted YouTube’s effective collaboration with the NZ government’s Digital Safety Team, and expressed hope that the legislation would not “undermine this practice.”

The Amendment Bill has some way to go before reaching sufficient consensus in parliament, regardless of the governing party’s current majority. It is clear the Classification Act needs updating for the 21st century, for example to enable the removal or blocking of content from online platforms that have no relationship with government and resent regulation.

While clarifying rules and responsibilities is important for efficiency in managing the problem of overtly harmful content, it is equally crucial to develop and maintain constructive relationships of trust between governments and the global technology companies.

State controls may or may not prevent determined people from accessing damaging or dangerous material online. But no regulation could have much effect on the upstream engagements which lead people down intolerant pathways that can and do result in real-world violence.

Much has been said about the attention economy’s affinity for contentious or even polarising content. It also seems difficult to ensure that a person concerned about their nation’s immigration policy is not three clicks away from joining some undesignated extremist group on social media.

Both are problems that neither technology companies nor governments appear willing or able to solve by themselves.

Thoughtful and honest consultation over proposed legislative change is constructive for all concerned. So too are the multi-stakeholder working groups established in the context of the Christchurch Call to Action, which discuss issues such as algorithmic processes and recommendations. Reaching consensus may be slow, but progress is surely more likely through ongoing collaborative dialogue than adversarial positioning or punitive reactions.