

Cassidy Finnerty, Counter Threat Strategic Communications (CTSC) Team; Keanna Grelicha, Counterintelligence and Cyber (CICYBER) Team

Week of Monday, January 10, 2022

The Increased Use of Social Media Platforms[1]

Mainstream social media platforms, like Facebook, have developed new means of countering the spread of violent content, using Artificial Intelligence (AI) training to detect and block violent material from spreading through livestreams.[2] Alternative technology (alt-tech) platforms generally have smaller staffs with fewer capabilities, resulting in low moderation that allows users to post content more freely.[3] As violent content is removed from mainstream social media platforms, threat actors have migrated to smaller, more niche, and less moderated alt-tech platforms, such as 8chan and Discord.[4] Alt-tech platforms very likely provide a space for threat actors, like school shooters, to avoid violent content restrictions. Although alt-tech platforms have content policies that restrict violent content, privacy settings, limited staffing, and chat encryption very likely lead to lax enforcement of those policies. Lax enforcement very likely allows users to repeatedly create new accounts to continue spreading threats and violent content. The ability to post freely and the lack of effective policy enforcement are likely why perpetrators use alt-tech platforms to spread threats and incite violence.

Section 230 of Title 47 of the US Code shields technology companies from liability for the content disseminated on their platforms.[5] School shooters' use of social media to disseminate videos is not uncommon, as seen in the Oxford High School perpetrator's use of Instagram[6] and the Stoneman Douglas High School perpetrator's use of Instagram and YouTube.[7] Alt-tech platforms like 8chan, Discord, and Steam have encryption and privacy policies that allow users to engage freely, without moderation or regulation of the content posted in chats or on the platforms' primary feeds.[8] These privacy and encryption policies very likely make alt-tech platforms more desirable to threat actors, as they can post with minimal oversight from content moderators. The platforms very likely refrain from adding further restrictions, or from changing privacy policies to define violent or harmful content, to retain a large and consistent user base. The lack of restrictions very likely allows school shooters and other threat actors to continue disseminating their content online. This spread of content will very likely attract other threat actors to the platforms, and the resulting influx will almost certainly enlarge the user base contributing to such content.

School shooters, such as the Aztec High School perpetrator, have used alt-tech gaming platforms to spread threats about future shootings and incite fear about planned attacks.[9] After the 2018 Stoneman Douglas High School shooting in Parkland, Florida, US, controversy over shooting games increased as law enforcement and psychologists argued that games mimicking school shootings could incite future violence, leading some gaming platforms to cancel new releases, though others did not take similar action.[10] In the US, 97% of teenagers play video games, and 66% of them play violent and action games,[11] which make up 85% of the video game market.[12] Platforms are unlikely to remove older shooting games or cancel new ones because their revenue and business models depend on game streams, views, and downloads.[13] Streams and downloads would almost certainly decrease if military or combat games were removed, likely motivating platforms to keep controversial games available to retain customers. While these games are unlikely to solely drive an individual to commit a mass shooting, realistic games are likely to influence a potential school shooter's choice of targets, tactics, and weaponry. Realistic scenarios depicting schools, churches, and other public infrastructure likely shape a potential attacker's target preferences, and the in-game damage statistics of the weapons used in those settings could very likely shape an individual's choice of weaponry when planning a real attack.

Social media platforms that do not restrict hate speech or threats allow false information about school shootings, survivors, and victims' families to spread,[14] compounding the adverse mental health effects, like depression and anxiety, experienced by shooting victims.[15] Far-right groups fueled conspiracy theories about the 2012 Sandy Hook Elementary School shooting in Newtown, Connecticut, US, by spreading misinformation and false claims about the shooter, victims, and family members.[16] Alt-tech platforms' policies that limit the removal and restriction of misinformation almost certainly amplify the adverse psychological effects on victims and their families. Individuals who do not process their trauma will very likely develop post-traumatic stress disorder (PTSD) due to repeated exposure to harmful misinformation and conspiracy theories.[17] Without monitoring of far-right groups and others fueling conspiracy theories on alt-tech platforms, false claims and distorted perceptions will very likely continue circulating in the media after attacks. Alt-tech platforms' adoption of mainstream social media features, such as letting users block and report content, would very likely assist content moderators in removing harmful content. Decreased exposure to such content would likely reduce the negative mental health consequences, like depression or anxiety, of viewing violent material. However, alt-tech platforms will very likely not adopt these features because they would impose moderation and restrictions that do not coincide with current alt-tech platform policy.

The spread of misinformation and violent threats via alt-tech platforms does not always violate the platforms' terms of service or lead to criminal charges, because privacy and encryption policies protect users' ability to post content freely.[18] Without reports from the platforms on content violations, freedom of speech concerns almost certainly conflict with law enforcement's ability to act on potential threats and impose criminal charges. If content moderators do not treat misinformation and hate speech as actionable or imminent threats, law enforcement will likely be unable to address online violence or harmful content. Lack of formal action by authorities and platforms very likely encourages more users to post false information online. As the frequency and volume of this content increase, alt-tech platforms will likely be unable to moderate it, further contributing to the lack of communication about potential threats between technology companies and law enforcement. Threat actors could likely turn online threats into operational plans if the threats go unaddressed by law enforcement and social media companies. Users are very likely to interact with like-minded individuals online, which will likely create an echo chamber of violent rhetoric, reaffirming negative beliefs associated with a common target.

Private companies like the Global Internet Forum to Counter Terrorism (GIFCT)[19] and Tech Against Terrorism could very likely provide mentorship subscriptions for alt-tech platforms, allowing them to purchase assistance in managing content moderation.[20] The Counterterrorism Group (CTG) recommends that alt-tech platforms implement countermeasures by emulating Facebook's AI training to limit the spread of violence on their platforms. If alt-tech platforms lack the in-house, long-term resources or capabilities, like information technology (IT) experts, to implement countermeasures, they could likely hire short-term specialists to enable some form of algorithm modification or AI training.[21] Short-term establishment of countermeasures would very likely increase alt-tech platforms' long-term content moderation capabilities and assist in removing harmful content at scale. Improved AI monitoring would likely limit the spread of violent content more effectively than alt-tech platforms' current practices. AI improvement could likely yield more effective content moderation functions, with alerts and algorithm checks that flag posts and allow users to report and disengage with harmful content. These improved features are unlikely to violate privacy policies because users choose whether to report, remove, or disengage with content in their feeds, limiting its spread and likely narrowing the audience threat actors can effectively reach.
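As a purely illustrative sketch, the moderation flow described above reduces to two escalation paths into a human review queue: an automated score crossing a flag threshold, or enough user reports accumulating against a post. This is not a description of Facebook's, CTG's, or any platform's actual system; the term list, thresholds, and function names below are hypothetical, and real systems use trained classifiers rather than keyword matching.

```python
# Hypothetical sketch of a flagging pipeline: posts are scored against a
# watchlist of violent terms, and a post enters the moderator review queue
# if its score crosses a threshold OR enough users have reported it.
# Keyword matching stands in for a trained classifier purely for illustration.

WATCHLIST = {"attack", "shoot", "bomb"}   # hypothetical term list
FLAG_THRESHOLD = 2                        # hypothetical score cutoff
REPORT_THRESHOLD = 3                      # hypothetical user-report cutoff

def score_post(text: str) -> int:
    """Count watchlist terms appearing in the post."""
    words = text.lower().split()
    return sum(1 for w in words if w.strip(".,!?") in WATCHLIST)

def review_queue(posts: list[str], reports: dict[int, int]) -> list[int]:
    """Return indices of posts needing moderator review, flagged either by
    the automated score or by the number of user reports against them."""
    flagged = []
    for i, text in enumerate(posts):
        if score_post(text) >= FLAG_THRESHOLD or reports.get(i, 0) >= REPORT_THRESHOLD:
            flagged.append(i)
    return flagged
```

The two paths are independent by design: user reporting lets the community surface content the automated score misses, which is the complementary role the paragraph above assigns to block-and-report features.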

CTG recommends collaboration between private and public sector companies to identify and enable countermeasures that hold threat actors accountable in accordance with legislation and without violating rights or freedoms. Collaboration with private companies like GIFCT and Tech Against Terrorism, and adoption of membership features like GIFCT's hash-sharing database, will very likely build content moderation capabilities. The hash-sharing database allows all GIFCT members to share data, like images and videos, that originated from terrorist and violent entities.[22] Such features will very likely help alt-tech platforms' IT personnel determine which data matches known violent language or imagery and should be flagged for removal. Collaboration between mainstream social media and alt-tech platforms through shared algorithms will very likely help smaller platforms enhance their capacity to mitigate the spread of violence by threat actors. Collaboration will very likely deter future school shooters from posting content related to school shootings online if they see formal actions taken.
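The hash-sharing mechanism can be sketched in a few lines: one member platform contributes a digest of confirmed violent media to a shared set, and other members check uploads against that set before the content propagates. Note that GIFCT's production database uses perceptual hashes, which still match re-encoded or lightly edited copies; the cryptographic SHA-256 digest below only matches byte-identical files and is a deliberate simplification, and the function names are hypothetical.

```python
import hashlib

# Simplified sketch of a hash-sharing database: member platforms contribute
# digests of known violent media, and uploads matching any shared digest can
# be blocked before propagating. SHA-256 is used here solely to illustrate
# the flow; real systems use perceptual hashing (e.g., PDQ) so that edited
# copies of the same image or video still match.

shared_hashes: set[str] = set()

def contribute(media_bytes: bytes) -> str:
    """A member platform adds the digest of confirmed violent media."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    shared_hashes.add(digest)
    return digest

def is_known_violent(media_bytes: bytes) -> bool:
    """Another member checks an upload against the shared database."""
    return hashlib.sha256(media_bytes).hexdigest() in shared_hashes
```

The design point is that members share only digests, never the underlying media, which is what lets smaller platforms benefit from larger platforms' moderation decisions without exchanging the harmful content itself.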

The CTG’s Counter Threat Strategic Communications (CTSC) and Counterintelligence and Cyber (CICYBER) Teams will continue to collaborate to monitor the issue of alt-tech platforms used as an avenue for threat actors to disseminate violent content. The CICYBER Team will also continue to evaluate existing countermeasures against online threats and violence in private chats and social media forums. The CTG’s Worldwide Analysis of Threats, Crime, and Hazards (W.A.T.C.H.) Officers will remain vigilant on reported potential threats made by individuals to help monitor the situation for possible future attacks. CTG will continue to provide analysis and recommendations in the event that future school shooters use alt-tech platforms to disseminate hate and threats.

The Counterterrorism Group (CTG) is a subdivision of the global consulting firm Paladin 7. CTG has developed business acumen that proactively identifies and counteracts the threat of terrorism through intelligence and investigative products. Business development resources can now be accessed via the Counter Threat Center (CTC), emerging Fall 2021. The CTG produces W.A.T.C.H. resources using daily threat intelligence, also designed to complement CTG specialty reports, which utilize analytical and scenario-based planning. Innovation must accommodate political, financial, and cyber threats to maintain a level of business continuity, regardless of unplanned incidents that may take critical systems offline. To find out more about our products and services visit us at


[2] “Facebook trained its AI to block violent live streams after Christchurch attacks,” The Guardian, October 2021,

[3] “Addressing the decline of local news, rise of platforms, and spread of mis- and disinformation online,” Center for Information, Technology, and Public Life, 2020,

[4] “Parler, MeWe, Gab gain momentum as conservative social media alternatives in post-Trump age,” USA Today, November 2020,

[5] “47 U.S. Code § 230 - Protection for private blocking and screening of offensive material,” Legal Information Institute,

[6] “Oxford school shooting lasted 5 minutes. On social media, it never ended,” Detroit Free Press, December 2021,

[7] “Social media paints picture of racist ‘professional school shooter’,” CNN, February 2018,

[8] “Messaging Apps: Encrypted or not? WhatsApp, iMessage, Discord, Zoom, etc.,” TechSpot, October 2021,

[9] “Aztec high school gunman posed as student; thumb drive may reveal motive: officials,” CBS News, December 2017,

[10] “Company Pulls School Shooter Video Game After Outrage From Victims’ Families,” Education Week, December 2021,

[11] “Violent video games and young people,” Harvard Health Publishing, October 2010,

[12] “APA RESOLUTION on Violent Video Games,” American Psychological Association, 2020,

[13] “Gaming revenue worldwide 2021, by device,” Statista, June 2021,

[14] “Social Media Has Been a Tool for Good After Mass Shootings. We Can't Cede It to Hate,” Time, August 2019,

[15] “What happens to the survivors,” American Psychological Association, September 2018,

[16] “How Conspiracy Theories in the US Became More Personal, Cruel, and Mainstream After Sandy Hook,” UConn Today, December 2021,

[17] “What Is Posttraumatic Stress Disorder?” American Psychiatric Association, August 2020,

[18] “Spoof Instagram accounts multiply after Oxford High School shooting,” Detroit Free Press, December 2021,

[19] “Membership,” GIFCT, January 2022,

[20] “Tech Against Terrorism Mentorship,” Tech Against Terrorism, January 2022,

[21] “Addressing the decline of local news, rise of platforms, and spread of mis- and disinformation online,” Center for Information, Technology, and Public Life, 2020,


