
FAR-RIGHT EXTREMISTS’ USE OF SOCIAL MEDIA PLATFORMS TO COMMUNICATE AND SPREAD RADICALIZED BELIEFS

Keanna Grelicha, Counterintelligence and Cyber (CICYBER) Team; Indirah Canzater, Tiffany Dove, Dyuti Pandya, NORTHCOM Team

Clea Guastavino, Cassandra Townsend, Senior Editors

Week of Monday, December 6, 2021


Facebook Messenger Logo[1]


Far-right extremist groups have used social media platforms to spread radicalized beliefs since the early 2000s.[2] Social media platforms have contributed to an overall increase in the radicalization of far-right extremist movements in the US after 2010.[3] Far-right extremist groups very likely use encrypted forums more frequently than mainstream, more heavily regulated platforms to spread radicalizing content, organize events, and trade weapons. Encrypted forums like Telegram, WhatsApp, and Gab very likely allow far-right extremist groups to decentralize, making them difficult to track. These forums almost certainly allow these groups to post hate speech and initiate calls to violence without fear of reprisal or removal from the platform. Far-right extremists are almost certainly becoming more dangerous as these platforms allow them to quickly call individuals to arms.


Social media platforms like Facebook, YouTube, and Twitter are the platforms most used to distribute propaganda and to radicalize and mobilize individuals.[4] Far-right extremist organizations have almost certainly become increasingly reliant on social media platforms and communication channels as technology has improved, widening the spread of their propaganda. Communication platforms like Discord, Telegram, and WhatsApp are those most commonly used by far-right extremist groups, almost certainly because their privacy and encryption features allow for increased radicalization.[5][6] Misinformation and disinformation will likely spread rapidly through these communication channels and social media platforms. Social media platforms almost certainly allow a specific narrative to spread and persist, further entrenching those involved and radicalizing new users. The online spread of far-right content is very likely to lead to self-radicalization.


Compared to mainstream media, alternative platforms provide an “echo chamber” for like-minded groups to reinforce their beliefs and engage with others.[7] YouTube, Facebook, and Twitter became the new space for far-right extremism in the early 2000s as they grew in popularity.[8] As mainstream platforms introduced regulations to monitor content and ban individuals, alternative platforms with fewer restrictions drew millions of new users.[9] Platforms like Gab and Signal, which use encryption and data protection settings to keep user data private, will almost certainly leave those privacy policies intact to keep users engaged. The business model of social media platforms almost certainly influences the policies they enact to generate revenue and keep users on the application. With little monitoring of misinformation and threats from far-right extremist groups, extremists will likely continue to spread their messages to reach more users for recruitment purposes. Inefficient measures to limit the online spread of threats will almost certainly lead to real-life action by these groups.


Encrypted messaging platforms like Telegram, Parler, and Signal have become safe spaces for extremist groups to organize, recruit, and spread content.[10] Individuals’ ability to organize in private chats and share radical content almost certainly presents regulation concerns. Government data regulations will likely be difficult to enforce because platforms refuse to impose limits so that they can remain in control of their business. This difficulty is very likely due to the vague terminology of current legislation, which limits the US government’s ability to impose restrictions on social media companies. Courts could almost certainly order social media platforms to give US federal agencies access to online user information. US federal agencies’ inability to restrict the type of content circulated is very likely due to constitutional rights like the freedom of expression, which protects individuals’ ability to join organizations and to write or speak as they choose without the government forbidding those actions.[11] With this right preventing the US government from explicitly defining what is not allowed, platforms could almost certainly allow individuals to voice their opinions without the content being considered harmful. Even if law enforcement received user data from a court order and requested the removal of the online content, users could almost certainly create new accounts to continue spreading extremist content. The platforms’ lack of monitoring and regulations almost certainly allows this cycle of account creation to continue.


The lack of content monitoring due to private chats and encryption on alternative platforms like Parler and Telegram allows for attacks on individuals and minority groups like Jewish communities.[12] Hate speech and racism found within the message boards of far-right extremist groups very likely pose a threat to minority groups and authority figures if direct threats lead to action. Extremist groups on Telegram have encouraged users to spread COVID-19 to law enforcement and Jewish people by spreading saliva on the door handles of facilities like Federal Bureau of Investigation (FBI) offices and synagogues.[13] Violence against minority groups could very likely occur if far-right extremists act on their online violent rhetoric. Security measures such as posting guards at minorities’ places of worship could very likely help deter threats when law enforcement’s ability to act is limited because the threats are not considered crimes that justify a police response.


Social media platforms have exposed anti-government groups spreading radicalized beliefs on Telegram’s public channels, calling for protests and violent opposition.[14] On Gab, millions of new users signed up to support groups with anti-government views after the 2020 US presidential election.[15] These online actions very likely contributed to increases in anti-government views as conspiracy theories targeting leaders and democratic processes like voting spread. Misinformation rapidly spreading and reaching a wider audience very likely increases radicalized beliefs among citizens. If misinformation continues to surge through online far-right extremist groups, it could very likely lead to violent discourse in future elections. If violent actions are threatened on online forums, federal and state governments that could be targets will almost certainly set up deterrence measures like physical security around government infrastructure. Collecting data to check online content for legitimacy will almost certainly help combat misinformation.


Far-right extremism can almost certainly lead to real-world violence, as seen in the January 6, 2021 attack at the US Capitol.[16] Such groups will almost certainly be able to organize real-world violent acts if social media platforms’ content policies do not prevent them from sharing and spreading radicalized beliefs. With a lack of governmental response to written threats, these groups will likely find it acceptable to escalate posted threats to real action. Organizations like the boogaloo movement advocate violence, including civil war, in response to political polarization.[17] The move from online discussion to real-life action very likely poses a threat to US national security. These security concerns will almost certainly require governmental action to monitor online forums and prepare the measures necessary to secure any threatened infrastructure.


During the onset of the pandemic, Facebook, Instagram, and Twitter flagged COVID-19-related posts to curb disinformation and continued the same strategy during the 2020 US presidential election to fact-check posts.[18] Twitter’s “Safety Mode” feature allows users to report content if they have the mode enabled in their settings.[19] Other online platforms’ adoption of similar features would likely reduce the spread of extremist content. Although the use of this feature for content reporting is at the user’s discretion, combining it with fact-checking will very likely help reduce the online spread of far-right violent rhetoric. Without such measures, misinformation campaigns and the spread of radical beliefs will very likely continue. These mainstream platforms’ policies are options that alternative platforms could very likely implement while still regulating the spread of content as they see fit. However, establishing regulations for alternative platforms will likely be difficult if the limitations do not follow the data privacy policies established within their guidelines.


Amazon’s removal of Parler from its web hosting services is a significant step toward preventing the rise of extremist content.[20] Adopting such measures will likely help reduce the misuse of social media to spread hate speech and violence. Discord removed around 2,000 communities dedicated to extremist and violent content.[21] This likely shows that platforms are working proactively to advance online safety by investigating anonymous actors’ extremist activities and being transparent about how they handle them. YouTube’s homepage tab displaying breaking news from trusted, non-partisan news sources is likely a step other platforms should follow. This move allows people to access impartial content, likely decreasing their engagement with misinformation. The private sector’s involvement will likely help the US government formulate more precise legislation that addresses and further streamlines the monitoring of extremist content.


Preventing the spread of extremist content online involves leveraging technology and platforms through public-private partnerships.[22] Legislation for monitoring extremist content will likely help online platforms take down, through legal means, user-generated content that advocates extremism. Measures to blacklist keywords and filter out extremist content will likely help reduce the spread of such material, as illustrated in the sketch below. Companies can likely counter extremist narratives by strengthening digital literacy and media consumption programs that increase public engagement. Advancing these tools will likely allow law enforcement to work with other government agencies to counter and deter misinformation. Companies can likely use Artificial Intelligence (AI) systems to scrutinize content that would otherwise go unfiltered and incite hatred.
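As a rough illustration of the keyword-blacklisting measure mentioned above, the following minimal Python sketch flags posts containing blacklisted phrases for human review. The phrase list and function names are hypothetical placeholders, not any platform’s actual implementation; a production moderation pipeline would pair a far larger, regularly updated blacklist with AI classifiers and human moderators.

import re

# Hypothetical blacklist; a real pipeline would maintain a much larger,
# regularly updated list and combine it with AI classifiers and human review.
BLACKLISTED_PHRASES = {"call to arms", "storm the capitol"}

def flag_post(text):
    """Return any blacklisted phrases found in a post, matched case-insensitively."""
    normalized = re.sub(r"\s+", " ", text.lower())  # lowercase and collapse whitespace
    return [phrase for phrase in BLACKLISTED_PHRASES if phrase in normalized]

if __name__ == "__main__":
    post = "Brothers, this is a CALL  TO ARMS!"
    matches = flag_post(post)
    if matches:
        print("Post held for human review; matched phrases:", matches)
    else:
        print("Post passed the keyword filter.")

Simple keyword matching like this is easy to evade with misspellings and coded language, which is why the paragraph above pairs it with AI-based content scrutiny rather than relying on blacklists alone.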


The Counterterrorism Group (CTG) recommends that law enforcement and the intelligence community increase vigilance of platforms such as Parler, Gab, and Telegram due to their high use by far-right extremists. CTG encourages the creation of legislation and policies aimed at deterring far-right extremists’ use of online platforms while preserving open channels of communication on encrypted messaging platforms. Social media platforms should increase monitoring staff to filter and remove violent online content. Agencies, organizations, and other companies (AOCs) are encouraged to improve public- and private-sector security relations to increase cohesive collaboration and successfully shut down radicalized narratives online. The Counterintelligence and Cyber (CICYBER) and NORTHCOM Teams will continue to collaborate to monitor the issue. The CTG’s Worldwide Analysis of Threats, Crime, and Hazards (W.A.T.C.H.) Officers will remain vigilant on potential threats made by far-right extremist groups to help monitor and deter future violent acts.


The Counterterrorism Group (CTG) is a subdivision of the global consulting firm Paladin 7. CTG has a developed business acumen that proactively identifies and counteracts the threat of terrorism through intelligence and investigative products. Business development resources can now be accessed via the Counter Threat Center (CTC), emerging Fall 2021. The CTG produces W.A.T.C.H. resources using daily threat intelligence, also designed to complement CTG specialty reports, which utilize analytical and scenario-based planning. Innovation must accommodate political, financial, and cyber threats to maintain a level of business continuity, regardless of unplanned incidents that may take critical systems offline. To find out more about our products and services, visit us at counterterrorismgroup.com.

 

[2] Far-right groups move to messaging apps as tech companies crack down on extremist social media, The Conversation, January 2021, https://theconversation.com/far-right-groups-move-to-messaging-apps-as-tech-companies-crack-down-on-extremist-social-media-153181

[3] Ibid

[4] The Use of Social Media by United States Extremists, START, July 2018, https://www.start.umd.edu/pubs/START_PIRUS_UseOfSocialMediaByUSExtremists_ResearchBrief_July2018.pdf

[5] Ibid

[6] Ibid

[7] Post-Truth and Far-Right Politics on Social Media, E-International Relations, November 2020, https://www.e-ir.info/2020/11/17/post-truth-and-far-right-politics-on-social-media/

[8] Far-right groups move to messaging apps as tech companies crack down on extremist social media, The Conversation, January 2021, https://theconversation.com/far-right-groups-move-to-messaging-apps-as-tech-companies-crack-down-on-extremist-social-media-153181

[9] Ibid

[10] Ibid

[11] Your Right To Free Expression, ACLU, 2021, https://www.aclu.org/other/your-right-free-expression

[12] Parler is bringing together mainstream conservatives, anti-Semites and white supremacists as the social media platform attracts millions of Trump supporters, The Conversation, November 2020, https://theconversation.com/parler-is-bringing-together-mainstream-conservatives-anti-semites-and-white-supremacists-as-the-social-media-platform-attracts-millions-of-trump-supporters-150439

[13] White supremacists encouraging their members to spread coronavirus to cops, Jews, FBI says, ABC News, March 2020, https://abcnews.go.com/US/white-supremacists-encouraging-members-spread-coronavirus-cops-jews/story?id=69737522

[14] Far-right groups move to messaging apps as tech companies crack down on extremist social media, The Conversation, January 2021, https://theconversation.com/far-right-groups-move-to-messaging-apps-as-tech-companies-crack-down-on-extremist-social-media-153181

[15] Ibid

[16] Ibid

[17] On Telegram, The Paramilitary Far Right Looks To Radicalize New Recruits Ahead Of Inauguration Day, The Intercept, January 2021, https://theintercept.com/2021/01/12/boogaloo-telegram-violence-recruit/

[18] Big Tech’s rejection of Parler shuts down a site favored by Trump supporters – and used by participants in the US Capitol insurrection, The Conversation, January 2021, https://theconversation.com/big-techs-rejection-of-parler-shuts-down-a-site-favored-by-trump-supporters-and-used-by-participants-in-the-us-capitol-insurrection-153070

[19] Ibid

[20] Ibid

[21] Group-Chat App Discord Says It Banned More Than 2,000 Extremist Communities, NPR, April 2021, https://www.npr.org/2021/04/05/983855753/group-chat-app-discord-says-it-banned-more-than-2-000-extremist-communities

[22] Countering the Appeal of Extremism Online, Institute for Strategic Dialogue, 2014, https://www.dhs.gov/sites/default/files/publications/Countering%20the%20Appeal%20of%20Extremism%20Online-ISD%20Report.pdf
