
Do Researchers Have an Obligation to Report Dangerous Actors?

23rd April 2021 Lydia Khalil

The Lowy Institute recently conducted a survey of the terrorism and extremism research community on the role of technology in violent extremism and the state of the research community’s engagement with the technology sector. The survey will form the basis of a more in-depth GNET report, to be published later in the year, discussing the survey’s findings. One question asked of researchers, but not profiled in the forthcoming report, was: “In the course of my research, I have reported dangerous actors or violations of the terms of service by a subject/s of my research to a particular platform.” Of the 112 respondents to this question, 42% indicated that they had done so, while 58% indicated that they had not.

Given that many in the terrorism and violent extremism research community have voiced concerns about how extremist and dangerous actors exploit social media and messaging platforms, it is notable that a clear majority of respondents have not reported dangerous actors or terms of service violations to social media companies.

It’s important to state at the outset that the results reported here are based on a non-random sample, represent only those who chose to respond to the questionnaire, and therefore can’t claim to be representative of the entire terrorism/violent extremism research community. And though the questionnaire gave respondents no option to explain why they did or did not report, some provided additional comments indicating that, for them, reporting ‘dangerous actors’ and reporting ‘violations of the terms of service’ were two different propositions. For example, more than one respondent noted that the spectrum of what violates a mainstream social media platform’s terms of service is quite broad: while they might report someone who is threatening or discussing violence to law enforcement, they would not necessarily report other ‘lesser’ violations of a platform’s terms of service. Aside from reporting potential violence and illegal behaviour to law enforcement, a researcher’s obligation to report violations of the terms of service of private, for-profit entities is quite complex and intersects with broader issues around research ethics. Beyond the degree and severity of a violation of a platform’s terms of service, there are many other reasons why researchers may not report dangerous actors.

For one, there is pushback in some quarters of the research community, particularly but not exclusively within critical terrorism studies, against the way much of the research in the field is driven by the agendas of counterterrorism agencies and powerful technology companies, which often fund research and otherwise set priorities. There are also differences in how individual researchers view their role. Is it a researcher’s role to put out the fire or, as scholars Alex Schmid and Albert Jongman wrote in Political Terrorism, to ‘be a student of combustion’?

However, to take the analogy a step further, researchers must also consider the harm that could come as they observe the flames consume their targets, especially when the targets of extremist actors are often historically marginalised, victimised and under-resourced.

Additionally, social media is a huge reservoir of user-generated content that is very valuable to researchers, particularly in the terrorism and violent extremism fields, where conducting interviews and otherwise engaging with subjects of study can be difficult and dangerous. The online content and communications of violent extremist individuals and groups are among the few relatively accessible sources of data for studying and understanding these individuals and movements. Reporting a subject and risking their de-platforming, even while believing that for the sake of the public good they should be de-platformed, also risks the loss of valuable data. As JM Berger has noted, “As social media companies crack down on extremist content, some valuable resources become more difficult to obtain… to state the obvious, if researchers can’t obtain data, they can’t analyze it… In some cases, material that is removed on the grounds of extremism is not retained by the platform in any capacity, rendering it truly ephemeral.”

And then there are the broader societal harms and impacts on social cohesion for researchers to consider. When research ethics frameworks consider issues around ‘risk of harm’, those considerations usually relate to the research subject, which in the case of terrorism and extremism research is usually a violent extremist group or individual. The consideration of risk of harm when conducting research is not usually broadened to the wider public, who can be impacted by a violent extremist group’s behaviour, or to the private company that hosts the group’s online content.

This is clearly a fraught area for researchers. Though there are some generalised Internet research ethics codes, there are no clear ethical frameworks that address researching extremist and dangerous actors online. The guidance that does discuss terms of service is geared towards researchers avoiding violations themselves; it says nothing about what to do if the research subject violates them.

John Morrison, Andrew Silke and Eke Bont, recognising the need for greater guidance for academic ethics committees (IRBs/HRECs) making decisions about terrorism research, have put forward a “Framework for Research Ethics in Terrorism Studies (FRETS)” in the latest edition of the journal Terrorism and Political Violence, a special issue dedicated to ethics and terrorism studies. But even with this framework, they acknowledge that “online terrorism studies research raises some unique challenges. In its current format, FRETS does not have a specific focus on this form of research. As a result a case can be made that such research would benefit from the availability of a specialised framework of its own, which is tailored to the needs of the ethical review of Internet and social media based research.”

In the same journal, Maura Conway put forward a contribution that dealt specifically with online extremism and terrorism research ethics. However, Dr. Conway’s article, while valuable, had a different focus, examining the ethics of researcher safety, and therefore also did not address whether researchers should report the harmful conduct of dangerous actors to the platforms hosting their content. She does, however, make the important point that the ‘do no harm’ principle does not apply when it comes to reporting illegal activities or planned crimes, which obviously includes attack plotting. But what about reporting less obvious illegal activities that nonetheless violate terms of service and can do harm?

Tech platforms themselves recognise that they are ultimately responsible for enforcing their terms of service and preventing the exploitation of their platforms by extremist and dangerous actors. They have their own mechanisms to deal with these violations and do not, and should not, depend on researchers’ reporting. However, developing more detailed guidance for researchers on reporting the dangerous subjects of their research, both to law enforcement and to the tech platforms those subjects use to distribute content, communicate, network, mobilise and fundraise, would be useful and welcomed by those doing the important work of studying and providing insights into difficult and dangerous movements.