
Reddit’s Hosting Service and the Dangers of Outlinking
17th September 2021 Bàrbara Molas

Online hate speech is a key element in facilitating the radicalisation processes that lead to violent extremism, and it is considered among the most difficult areas for social media platforms to regulate. Challenges include the absence of a clear legal standard on which companies can base their policies, as well as the importance of context in determining whether a post containing harmful words constitutes hate speech. The lack of clarity around what constitutes hate speech, together with inconsistent enforcement, may enable users to abuse a platform to advance hateful ideologies. Such loopholes allow extremist users, such as white supremacists, to use social media as a tool to distribute their message effectively, hoping that when their rhetoric reaches the right user, online messages will turn into real-life attacks against what they deem ‘the enemy’. They do so by normalising their narratives through the ‘echo chambers’ facilitated by these platforms’ algorithms: online spaces customised for the user via recommendations, shared content, and filtration, which remove alternative or challenging worldviews and thus facilitate radicalisation and engagement. Several incidents in recent years have shown that “when online hate goes offline, it can be deadly.” For violent white supremacists like Dylann Roof, Anders Breivik, or Robert Bowers, online hate certainly did not stay online. The question, then, is: how can social media platforms balance free expression and the protection of user and human rights while preventing online radicalisation leading to violent extremism?

In general, hate speech standards have focused on protecting specific groups or individual attributes (race, ethnicity, national origin, religious affiliation, sexual orientation, gender, age, ‘caste’, or disability). In their safety and transparency policies, the most popular social media platforms tend to insist that they stand against ‘hateful speech’ or ‘hateful conduct’ and against ‘harmful content’ or content that ‘promotes violence’. However, some of these platforms fail to define key terms like ‘hate speech’, ‘harmful’, or ‘violent’, and are unclear about how they detect and block hate speech. But even when definitions are clear and existing policies are enforced transparently, automatic detection is bound to miss hateful content when the phenomenon of ‘outlinking’ takes place.
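To illustrate the limitation, consider a minimal sketch (in Python, and emphatically not Reddit’s actual moderation pipeline) of a keyword-based filter: it catches a post whose body contains flagged terms, but a post whose body is nothing more than a link passes through, because the hateful material lives on the destination page rather than in the text the filter sees. The blocklist terms and URL below are placeholders.

```python
# Hypothetical keyword filter; the blocklist is a placeholder for a real lexicon.
BLOCKLIST = {"slur_a", "slur_b"}

def flags_post(body: str) -> bool:
    """Return True if the post body contains any blocklisted term."""
    words = {w.strip(".,!?:;").lower() for w in body.split()}
    return bool(words & BLOCKLIST)

direct_post = "slur_a are destroying this country"
outlinked_post = "You need to see this: https://weak-host.example/img/4821"

print(flags_post(direct_post))     # True  -- explicit text is caught
print(flags_post(outlinked_post))  # False -- the linked image is never inspected
```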

‘Outlinking’ is the use of hyperlinks on social media platforms to take users to external sites with fewer protective measures against extremist material. Instead of directly posting the words and images that communicate extremist content (which could be detected relatively easily with existing technology), users share links to ‘weak’ pages, that is, sites with fewer protective measures, in order to spread extremist propaganda. Common destinations include news sites, WordPress blogs, and image and text hosting services, but sometimes outlinking is facilitated by the social media platforms themselves. That seems to be the case with Reddit.
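As a rough sketch of how an analyst might operationalise this distinction, the snippet below (Python; the domain lists are illustrative assumptions, not an exhaustive taxonomy) classifies a post’s link as a direct Reddit post, an upload to Reddit’s own hosting service, or an outlink to a third-party site.

```python
from urllib.parse import urlparse

# Illustrative domain sets -- assumptions for the sketch, not a complete list.
REDDIT_HOSTING = {"i.redd.it", "v.redd.it"}     # Reddit's in-house image/video hosts
REDDIT_FEED = {"reddit.com", "www.reddit.com"}  # ordinary text/self posts

def classify_link(url: str) -> str:
    host = urlparse(url).netloc.lower()
    if host in REDDIT_HOSTING:
        return "reddit_hosted"  # uploaded through Reddit's own service
    if host in REDDIT_FEED or not host:
        return "direct_feed"    # stays on Reddit's main site
    return "outlink"            # leads to a third-party destination

print(classify_link("https://i.redd.it/abc123.jpg"))      # reddit_hosted
print(classify_link("https://someblog.example/post/99"))  # outlink
```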

In 2015, the Southern Poverty Law Center (SPLC) called out Reddit as home to some of “the most violently racist” content on the Internet. In 2018, when Reddit CEO Steve Huffman was asked whether ‘obvious open racism’ was against the company’s rules, he said: “It’s not” – although shortly thereafter he added that it was not welcome. Last year, Reddit stated that the company would “take a stronger stance on fighting racism.” In spite of this, as of August 2021 the effectiveness of Reddit’s terms on online hatred and content blocking remains questionable: while its policy on hate speech states that those who promote “hate based on identity or vulnerability…” will be banned, a shocking number of explicitly racist accounts remain active. And even if these terms were enforced effectively, they would still be limited, for as a website aggregator Reddit is a platform whose content thrives on outlinking. While it could be argued that Reddit cannot be held accountable for harmful content distributed through outlinks to third-party sites, it is possible to draw attention to the role that Reddit’s own image and video hosting services have played in helping the far right circumvent Reddit’s own hate speech policies.

After analysing the most impactful posts produced by four radical-right ‘chat rooms’ on Reddit between 2018 and 2021, it appears that Reddit itself is responsible for up to 56% of the outlinking activity that facilitates the spread of hate speech. Indeed, Reddit’s own image hosting service, i.redd.it, seems to be a main reason why Reddit is unable (or unable to apply its safety policies effectively enough) to ban extremist groups and accounts, and why these can share radical material without consequences. The groups studied are located in the United States, the United Kingdom, and Canada; they share content in English, and their followers amount to half a million members. All of these groups are openly racist. Their mottos endorse the idea that ‘true racism’ is only that directed against whites, or that whites are the ‘master race’, making it obvious that membership in these groups requires allegiance to white supremacist ideology. Among their messages, it is common to read (explicitly or implicitly) that non-white individuals are inferior and that the white ‘race’ is the victim of a (liberal/democratic) conspiracy promoting immigration and mixed marriages. Anti-Asian, anti-Black, and anti-Jewish speech abounds. In addition, all of these groups promote homophobic language, in particular attacking the LGBTQI+ community and its demands for equality. Across the four cases observed, up to 100% of the most popular posts – those that received the highest level of engagement – used outlinking to share extremist content, as opposed to uploading material directly to the site, where it could be flagged. In the cases in which outlinking represented at least 38% of the extremist content distributed among the top 50 posts per group, between 10% and 56% of it was facilitated by Reddit’s own image hosting service. Other prominent outlinking destinations include other image and text hosting services (16% to 58%), news media sites (2% to 46%), and right-wing and radical-right sites (2% to 26%).
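Percentages of this kind come from a simple tally: bucket each top post’s link by destination category and report each bucket’s share. A minimal sketch of that counting step follows (Python), assuming a hypothetical list of top-post URLs and an illustrative mapping from domains to the categories used above.

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative domain-to-category mapping; news media and radical-right domains
# would be added in the same way. An assumption for this sketch, not the coding
# scheme used in the study itself.
CATEGORY_BY_HOST = {
    "i.redd.it": "reddit_hosting",
    "v.redd.it": "reddit_hosting",
    "imgur.com": "other_image_text_hosting",
    "pastebin.com": "other_image_text_hosting",
}

def outlink_shares(urls):
    """Share of each destination category among a group's top-post links."""
    counts = Counter(
        CATEGORY_BY_HOST.get(urlparse(u).netloc.lower(), "other_outlink")
        for u in urls
    )
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

top_posts = [  # hypothetical top-post links for one group
    "https://i.redd.it/a1.jpg",
    "https://imgur.com/b2",
    "https://i.redd.it/c3.png",
    "https://news.example/story",
]
print(outlink_shares(top_posts))
# {'reddit_hosting': 0.5, 'other_image_text_hosting': 0.25, 'other_outlink': 0.25}
```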

While Reddit does not explain what type of protection its own image and video hosting sites have vis-à-vis its direct feed, conversations among Reddit users about these sites are illuminating. When Reddit launched its in-house image and video uploading service in 2017, discussions about using i.redd.it and v.redd.it (for videos) emerged on Reddit. In one of these conversations, a user insists that the platform hosts images and videos on its own site because doing so “means people don’t leave,” that is, people do not abandon Reddit to access other platforms while using it. Other users argue that having its own hosting services allows Reddit to store content. Reddit itself stated that relying on third-party uploading services was time-consuming for users, which is why it provides its own. Six months ago, a Reddit user asked why Reddit’s image hosting service does not block harmful content, specifically content labelled NSFW (‘Not Safe For Work’), a warning indicating that a website or attachment is not suitable for viewing at most places of employment. In the same thread, another user complained about submitting a report on a post outlinking to a Discord server that hosted underage pornography. Clearly, Reddit’s users are worried about outlinking, but the platform has dismissed such concerns by stressing that the content did not break site rules. Does this refer to Reddit’s direct feed alone, or does it include Reddit’s image and video hosting sites? If the former, then the rules need to be amended, and amended on the basis of the practice of outlinking, which remains heavily understudied despite constituting, as shown, one of the main pathways by which violent extremists succeed in spreading their messages.