
THE IMPACT OF HIGH-PROFILE MEDIA ON ONLINE CONTENT MODERATION


Christie Hui, Max M, Counter Threat Strategic Communications (CTSC) Team

Jennifer Loy, Chief of Staff

Week of Monday, April 11, 2022



News Reporters[1]



Video-sharing platforms like YouTube have implemented countermeasures to detect and remove terrorist and violent extremist content that violates their policies, especially in cases of high public interest.[2] The attention from high-profile media coverage in the event of a terrorist or violent extremist attack increases the likelihood of violent and harmful content being downloaded and re-uploaded online. As moderators remove content from online platforms, it is almost certain that some users archive and re-upload the content to the same or different platforms. Despite initial content moderation efforts, re-uploaders likely use digital manipulation techniques, such as mirroring, filters, retiming, and pitch-shifting, to avoid automatic content detection. Prioritizing removal based on the content itself rather than the users who post it is likely a better strategy to combat violent content across the board. The increasing media attention surrounding such content likely motivates users to continue re-uploading it to maximize impressions, promote principles of free speech, and even glorify the violent actor. The prominence of content from terrorist and violent extremist actors, both original and re-uploaded, very likely increases the probability of vulnerable individuals viewing such content, leading to increased risk of future violent attacks.


Following high-profile acts of terrorism, the release of a suspect’s name generates increased interest in social media accounts and channels associated with the suspect, as seen on April 12, 2022, when the New York Police Department named the suspect in the Brooklyn mass shooting.[3] This effect is almost certainly magnified in cases in which law enforcement authorities have not yet detained an identified suspect. The increase in traffic is likely initially driven by curiosity about current events rather than an attraction to the attacker’s ideology. Academics and law enforcement authorities will likely research and investigate the accounts and content, further increasing view counts. Regardless of the intentions of those seeking out this content, it is almost certain that the increase in search interest for terrorist content pushes it higher in trending topics and makes it more prominent in algorithmic recommendations for all users. While most users are unlikely to seek out these profiles for ideological reinforcement, the increased reach of the content likely introduces this violent content to users who may be susceptible to the violent ideology.


As seen with the Brooklyn shooter, the original poster (OP) is often the attacker, whose content remains online before the attack while they are relatively anonymous.[4] The removal of OP content by moderators drives a second group, re-uploaders, to race against social media sites to scrape profile content before it is removed from view.[5] It is almost certain that through the use of digital manipulation techniques, re-uploaders can avoid automatic detection technology and continue to spread the harmful content free from moderation. The motivation of the vast majority of re-uploaders is very likely to profit, monetarily or in exposure, from the high degree of interest. These re-uploaders are likely taking advantage of high-traffic keywords and hashtags to drive impressions to their profiles via re-uploaded terrorist content. A very small minority of re-uploaders likely stand in ideological support of the attack. It is unlikely that ideological re-uploaders will use the same techniques to maximize impressions, as they often prefer to keep a lower profile to avoid content removal.


The immediate removal of content likely targets the individual rather than the content itself. The Brooklyn shooter uploaded hundreds of videos over nearly four years, and the channel was removed only a matter of hours after the attack.[6] This very likely indicates that high-profile media attention is the main factor pressuring social media platforms to remove the channel. Social media platforms’ focus on specific individuals likely enables content that is re-uploaded by different users to remain on the platforms. The removal of OP content is unlikely to equate to the removal of re-uploaded or similar harmful content. Moderating by individual very likely limits the range of content under investigation. Even if moderators remove all content posted by an individual, other variations or duplicates of the OP content will still likely remain online.


Content found on terrorist accounts has included extremist ideologies, calls for violence, videos of actual attacks, and video manifestos.[7] The harmful and occasionally violently graphic content very likely causes emotional damage to those who view it. As engagement increases, platform algorithms very likely recommend the content to individuals who may be susceptible to extremist thinking. Graphic and vitriolic images, especially videos that document an attack, almost certainly inspire copycat attacks and encourage the continued spreading of hate. The spread of violent and harmful terrorist content across the internet likely normalizes and glorifies hate-driven violence. Gradual desensitization from frequent viewing of terrorist and violent extremist content almost certainly dulls responses and reactions to terrorist attacks.


Social media sites have used automated content recognition to remove and catalog harmful content,[8] especially following the Christchurch shootings[9] and the Christchurch Call to Action, a commitment to addressing the abuse of technology to spread such content.[10] The faster social media sites identify and remove accounts associated with terrorist and violent extremist actors, the less likely there will be a negative impact on other users who may be exposed to harmful content. Swift removal very likely reduces efforts to download and archive the content for future re-upload or private distribution. Quick action to remove such profiles from view would almost certainly decrease their impressions on other users, reducing the likelihood of the content inspiring a copycat attack and preventing viewers from experiencing emotional trauma.


Regardless of the re-uploader’s intentions, the combined viewership of re-uploaded clips surpasses the reach of the original content.[11] While the average low-profile re-uploader is unlikely to gain enough attention to be manually moderated, re-uploaded clips that are linked in articles, are widely shared on third-party sites, or amass enough views are likely to be reported and manually removed from the platform. Re-uploaders who employ techniques such as mirroring, filtering, retiming, and pitch-shifting OP content are unlikely to face moderation unless the content passes through manual review. Re-uploaders who use digital manipulation techniques are very likely to defeat hash detection and current automatic content recognition processes. A hash, or digital fingerprint, is a short sequence of data that can identify specific content.[12] While increased media attention very likely encourages social media platforms to quickly restrict or remove content or users’ accounts, there is still a short time frame during which users can capture OP content. Direct weblinks embedded in news articles allow audiences to immediately view the OP content as an event is still developing. Users are very likely to share the news articles and links to OP content across different platforms, significantly increasing the number of viewers.
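To illustrate why manipulation defeats exact-match hashing, the sketch below is a minimal illustration rather than any platform’s actual system: "mirroring" is modeled as reversing the bytes and a "filter" as a uniform shift of every byte value, yet either change produces a completely different digest, so a blocklist keyed on the original hash no longer matches the re-uploaded copy.

```python
import hashlib

# Hypothetical stand-in for a video file's raw bytes (illustration only).
original = b"frame-data-" * 1000

# Crude models of re-uploader manipulation: "mirroring" as byte reversal,
# a "filter" as a uniform shift of every byte value.
mirrored = original[::-1]
filtered = bytes((b + 1) % 256 for b in original)

def fingerprint(data: bytes) -> str:
    """Exact-match hash (SHA-256) of the content, as used in a simple blocklist."""
    return hashlib.sha256(data).hexdigest()

print(fingerprint(original))
print(fingerprint(mirrored))   # entirely different digest
print(fingerprint(filtered))   # entirely different digest again
```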


Alternative technology (alt-tech) platforms, such as Gab, are attracting users due to less strict content moderation.[13] For example, Gab does not verify content before it is uploaded.[14] Gab and other alt-tech platforms are unlikely to implement hash detection systems, likely enabling users to repeatedly spread violent content and escalate threats. The lax enforcement of policies and minimal oversight very likely provide users with greater freedom to post potentially harmful content without consequences. The migration to alt-tech platforms almost certainly increases the amount of content being shared across multiple platforms. Users who have been removed or had content removed from larger platforms are likely to try re-uploading harmful content on alt-tech platforms. The increased cross-posting very likely creates difficulties for content moderators, especially if users re-upload the content without using flagged keywords and phrases.
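As a sketch of that last point, the snippet below models keyword-based flagging against a hypothetical blocklist of phrases (the terms and function names are illustrative, not any platform’s real list); a re-upload whose title avoids the flagged wording passes this layer untouched, which is why keyword filters alone struggle with cross-posted variants.

```python
# Minimal sketch of keyword-based flagging; the blocklisted phrases are
# hypothetical examples, not drawn from any platform's actual list.
FLAGGED_PHRASES = {"attack livestream", "full shooting footage"}

def is_flagged(title: str) -> bool:
    """Flag an upload whose title contains any blocklisted phrase."""
    lowered = title.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

print(is_flagged("Full shooting footage - unedited"))      # True: caught
print(is_flagged("what happened yesterday, link below"))   # False: evades the filter
```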


Social media platforms have implemented various countermeasures to manage high-profile terrorist and violent extremist content. YouTube, Facebook, Microsoft, and Twitter coordinated efforts to create the Global Internet Forum to Counter Terrorism (GIFCT), which holds workshops with the Tech Against Terrorism initiative to establish industry-wide best practices and joint moderation efforts.[15] These workshops likely encourage smaller platforms to adopt similar countermeasures that hinder re-uploaders’ efforts to share content across different platforms. Having similar countermeasures in place across multiple platforms also likely hinders cross-platform sharing because each platform would enforce similar content moderation policies. Larger platforms, like YouTube and Facebook, use a hash-sharing database to prevent re-uploads of known violative content.[16] Within hours of the Christchurch shooting in 2019, users uploaded approximately 800 versions of the shooter’s livestream video of the attack; Facebook automatically blocked 80% of these versions within 24 hours.[17] Although the hash-sharing database prevented the majority of re-uploaded content, some videos still bypass hash detection systems, and users can likely download and re-upload them. Manipulating content very likely changes the underlying data sequence and corresponding hash, creating new hashes and bypassing the flagged hashes. Undetected content coupled with the creation of new hashes very likely creates a cycle in which platforms are constantly searching for new hashes of the same or similar content. This cycle likely exhausts platforms’ resources, especially when new content is introduced.
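The cycle described above can be sketched as follows. This is an illustrative model under assumed names (a shared_hash_db set and a screen_upload function), not GIFCT’s or any platform’s actual implementation: an exact re-upload is blocked, a manipulated copy produces an unflagged hash and passes, and the platform must add the new hash after the fact, restarting the search.

```python
import hashlib

# Hypothetical shared blocklist of fingerprints of known violative videos,
# loosely modeled on the cross-platform hash-sharing database described above.
shared_hash_db = {hashlib.sha256(b"known-attack-livestream").hexdigest()}

def screen_upload(content: bytes) -> str:
    """Return a moderation decision for an incoming upload."""
    digest = hashlib.sha256(content).hexdigest()
    return "block" if digest in shared_hash_db else "allow"

# An unmodified re-upload is caught...
print(screen_upload(b"known-attack-livestream"))         # block
# ...but a manipulated copy produces a new, unflagged hash and passes.
print(screen_upload(b"known-attack-livestream" + b"."))  # allow
# The platform must then flag the new variant after the fact, restarting the cycle.
shared_hash_db.add(hashlib.sha256(b"known-attack-livestream" + b".").hexdigest())
```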


It is almost certain that faster responses by social media platforms to the release of terrorist identities would have the greatest impact on the issue, as the best way to prevent re-uploads is likely to limit the time a channel remains accessible following public disclosure of the perpetrator’s identity. Strengthening hash databases by developing capabilities to counter the digital manipulation techniques used in re-uploads would very likely help moderate re-uploaded content before it goes live. However, The Counterterrorism Group (CTG) recommends incorporating the input of researchers and academics into automated moderation techniques such as hash-sharing to ensure human rights compliance. CTG recommends greater collaboration among social media platforms, law enforcement agencies, and news outlets to identify suspects and conduct social media background checks that reveal harmful online content. CTG also recommends establishing dedicated rapid response teams to manage breaking news events concerning OP content and re-uploaded content. Social media platforms should continue to develop content moderation systems that prioritize viral content and trending hashtags as identified by the rapid response teams. Social media sites should also seek input from AI researchers and academics to develop automatic content recognition capabilities that remain effective even when content is digitally manipulated.
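One direction such manipulation-resistant capabilities could take, shown here only as a minimal sketch rather than a recommendation of a specific product, is perceptual hashing: the fingerprint is computed from how the content looks rather than from its raw bytes, so a mild manipulation such as a brightness filter yields a nearly identical fingerprint that can still be matched against a shared database. The average_hash and hamming_distance helpers below are illustrative names, and the pixel data is a synthetic stand-in for a downscaled thumbnail.

```python
def average_hash(pixels: list[int]) -> int:
    """Hash an 8x8 grayscale thumbnail: one bit per pixel, set if the pixel exceeds the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests the same underlying content."""
    return bin(a ^ b).count("1")

original = [(i * 37) % 256 for i in range(64)]       # synthetic 8x8 thumbnail
brightened = [min(p + 20, 255) for p in original]    # mild brightness filter applied

distance = hamming_distance(average_hash(original), average_hash(brightened))
print(distance)  # small (0 here): fingerprints still match despite the filter
```

Unlike the exact SHA-256 comparison sketched earlier, a match here is a similarity judgment, so real deployments pair it with thresholds and human review to limit false positives.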


The CTG’s Counter Threat Strategic Communications (CTSC) Team will continue to monitor the spread of re-uploaded content and analyze content praising or sympathizing with the Brooklyn shooter. The CTSC Team will also monitor developments regarding the motive behind the Brooklyn shooting and the shooter’s future trial and verdict. The Worldwide Analysis of Terrorism, Crime, and Hazards (W.A.T.C.H.) Officers and Threat Hunters will remain vigilant to threats and events related to the Brooklyn shooting by monitoring global events.

________________________________________________________________________ The Counterterrorism Group (CTG)

[3] “A quiet morning commute on a Brooklyn subway quickly became a 'war zone' leaving more than 20 people injured, NYC mayor says,” CNN, April 2022, https://edition.cnn.com/2022/04/12/us/brooklyn-subway-shooting/index.html

[4] “Police search for motive in YouTube videos of man accused of Brooklyn subway shooting,” PBS, April 2022, https://www.pbs.org/newshour/nation/police-search-for-motive-in-youtube-videos-of-man-accused-of-brooklyn-subway-shooting

[5] Corbeil, A. & Rohozinski, R., “Managing Risk: Terrorism, Violent Extremism, and Anti-Democratic Tendencies in the Digital Space,” Oxford University Press, 2021

[6] “Suspect in Brooklyn subway shooting posted videos discussing violence and mass shootings,” CNN, April 2022, https://www.cnn.com/2022/04/13/us/frank-james-videos-brooklyn-subway-shooting/index.html

[7] Chew, M. & Tandoc, E., “Lives and Livestreaming: Negotiating social media boundaries in the Christchurch terror attack in New Zealand,” Critical Incidents in Journalism, 2020

[8] “Countering Extremists on Social Media: Challenges for Strategic Communication and Content Moderation,” Policy and Internet, 2020, https://onlinelibrary.wiley.com/doi/full/10.1002/poi3.236

[9] “Christchurch shooting: Gunman Tarrant wanted to kill ‘as many as possible’,” BBC News, August 2020, https://www.bbc.com/news/world-asia-53861456

[11] Ibid

[12] “Algorithmic content moderation: Technical and political challenges in the automation of platform governance,” Big Data & Society, 2020, https://journals.sagepub.com/doi/full/10.1177/2053951719897945

[13] “Parler, MeWe, Gab gain momentum as conservative social media alternatives in post-Trump age,” USA Today, November 2020, https://www.usatoday.com/story/tech/2020/11/11/parler-mewe-gab-social-media-trump-election-facebook-twitter/6232351002/

[14] “Terms of Service,” Gab, April 2020, https://gab.com/about/tos

[16] Ibid

[17] “Algorithmic content moderation: Technical and political challenges in the automation of platform governance,” Big Data & Society, 2020, https://journals.sagepub.com/doi/full/10.1177/2053951719897945
