
ARTIFICIAL INTELLIGENCE AND ECHO CHAMBERS

Allegra Berg, Sophie Provins, Gabby Silberman, Kesa White, Extremism Team

Week of Monday, May 31, 2021


Visual Representation of an Echo Chamber[1]


Social media platforms including Facebook and Twitter use Artificial Intelligence (AI) software to learn users’ preferences and suggest content similar to what already appears in their feeds, including users, posts, pages, and advertisements. This in turn creates echo chambers: environments in which users are surrounded by viewpoints that reflect their own, shaped both by the platforms’ algorithms and by the users’ own choices in the groups they join and the accounts they follow.[2] As users interact with posts, individuals, groups, or advertisements, the AI builds a detailed profile of the user. The system then exploits this profile, feeding similar information back to the user to maximize their time on the platform. Because of these AI-enhanced echo chambers, an individual’s feed can quickly become filled with extremist propaganda, misinformation, or disinformation based on their viewing, liking, and following history. If nothing is done to address the security issues posed by AI and echo chambers, acts of terror resulting from online radicalization, as well as manipulation and interference by nation-state actors, are highly likely.
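
To make the mechanism concrete, the following Python sketch shows, under heavily simplified assumptions, how an engagement-driven ranking loop narrows a feed. The interest profile, topic labels, and scoring function are hypothetical illustrations, not any platform’s actual recommendation system.

```python
from collections import defaultdict

# Hypothetical interest profile: topic -> engagement weight.
# Real platforms use far richer signals; this only illustrates the feedback loop.
profile = defaultdict(float)

def record_engagement(topic: str, weight: float = 1.0) -> None:
    """Strengthen the user's profile for a topic each time they view, like, or share it."""
    profile[topic] += weight

def rank_feed(candidate_posts: list[dict]) -> list[dict]:
    """Order candidate posts by how closely their topics match prior engagement."""
    return sorted(
        candidate_posts,
        key=lambda post: sum(profile[t] for t in post["topics"]),
        reverse=True,
    )

# Each cycle the user engages with what was ranked highest, which further
# narrows the next ranking -- the echo-chamber feedback loop in miniature.
posts = [
    {"id": 1, "topics": ["gardening"]},
    {"id": 2, "topics": ["election fraud claims"]},
    {"id": 3, "topics": ["election fraud claims", "militia gear"]},
]
record_engagement("election fraud claims")
print([p["id"] for p in rank_feed(posts)])  # posts 2 and 3 now outrank post 1
```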


Extremist groups have manipulated social media for decades. Most famously, at its peak around 2014, the Islamic State (IS) would contact potential recruits on Facebook. If an individual responded, they would receive friend requests from hundreds of IS-associated accounts. This, in turn, created a type of echo chamber in which only positive aspects of the group appeared in the potential recruit’s feed, which the group hoped would inspire them to travel to one of the regions it controlled or to conduct an attack in their home region in the group’s name.[3] Right-wing extremist groups have started using similar methods, such as interacting with users on posts related to the Second Amendment or articles about the outcome of the 2020 US presidential election.[4] This demonstrates that extremists of all ideologies can use echo chambers to attract new recruits. Thus, legislation must be introduced with urgency.


While a user may follow a wide array of groups, over time social media sites learn what the user spends more time viewing and ultimately promote that information. Sites like Facebook also try to promote similar advertisements based on the user’s viewing history to increase the chance that the user purchases the advertised item. This is because social media platforms earn revenue from items sold after a user clicks through an advertisement, as well as from the advertisers themselves. Following the US Capitol insurrection in January 2021, Facebook was found to be displaying military gear advertisements next to false posts about the election results and the insurrection.[5] Using AI to promote similar products to a user is nothing new and results in more sales, but when it comes to extremist views or misinformation, it can have negative and disastrous consequences. AI algorithms deliberately surface these posts to individuals deemed likely to believe the disinformation and to act on it. An attack influenced by algorithmically targeted advertisements is likely, particularly when those advertisements appear next to posts designed to provoke users the algorithm has identified as susceptible, creating resentment and a desire to act on it.


While social media platforms use algorithms to create a personalized experience for the user, this personalization can very likely contribute to radicalization and recruitment because individuals continuously engage with like-minded users and information. When a user interacts with an individual from a radical group on a social media platform, they are then likely to be shown other posts that the radicalized individual shares, which they may become interested in and interact with. As a result, they are exposed to increasingly radical posts, which can include introductions to extremist groups or rhetoric. This can lead to an increase in the size of extremist groups, as well as an increase in attacks conducted by individuals who support that ideology. A user can fall deeper into a path of radicalization as they form bonds with like-minded people and ultimately create their own echo chamber, sometimes consciously and sometimes not. The echo chamber consists only of individuals with similar views, which leaves no room for differing opinions and can act as an enabler of radicalization. It also allows individuals outside the echo chamber to radicalize, as their connections begin to like and share content that then appears in those individuals’ feeds.


As a result of the COVID-19 pandemic, users are spending more time on these platforms due to the lack of physical interaction and increased free time, and the impact of echo chambers and AI on social media platforms has increased accordingly. This has created a stronger dependence on the virtual realm and has enabled social media sites to gather exponentially more information on their users, allowing platforms to tailor echo chambers more personally than before. With the ongoing pandemic and global lockdowns, users are less likely to interact with others outside of social media platforms and have consequently become more reliant on the support found on online platforms. As users spend more time on social media, sites build virtual information profiles that are much more tailored and specific, which contributes to the creation of each user’s echo chamber. Additionally, while it takes only a short period for AI to create these echo chambers, freeing oneself from them can be difficult, if not impossible.


The evolution of technology and the creation of new social media platforms increase the chance of radicalization and the presence of dangerous echo chambers under certain circumstances. Search engines such as Google and social media platforms automatically provide all the information that may be necessary to self-radicalize. Even after a user Googles a term or phrase, AI on other sites will attempt to keep the user engaging with the content they previously viewed. While AI can create a personalized user experience, the content it recommends does not appear to be monitored; if it were, the AI would not continue to show violent content.[6] An individual can likely become radicalized online due to the failure of AI algorithms, and the sites that run them, to recognize these dangers and prevent them.


Dylann Roof - the American neo-Nazi and white supremacist who perpetrated the Charleston church shooting in 2015 - credited the internet in court documents as the primary tool that led to his self-radicalization.[7] Roof began with simple Google searches, which created more curiosity about the information he was engaging with. Once that curiosity developed, it was relatively easy to seek out and find communities interested in similar topics. An individual without an interest in the discussion is less likely to join or engage, which leaves the community composed entirely of like-minded users. Echo chambers can be beneficial depending on the topic; however, a chat consisting only of racist and violent users is not healthy, because it can contribute to radicalization. The self-learning process Roof engaged in allowed him to develop his own opinions and ideas about the world without anyone interjecting to voice alternative views.


The creation of echo chambers by AI is not only connected to mis/disinformation or extremist rhetoric; it can also be connected to child pornography and pedophilia. YouTube previously had a large-scale incident of this kind. Users on YouTube can view a video and receive multiple recommendations of “related” videos. Pedophiles were able to use the site to view videos of scantily clad children, such as children showing off bathing suits, and the recommended videos would include other videos of scantily clad children, further feeding into the creation of a type of echo chamber. Researchers found that the algorithm was feeding similar videos to users, sometimes videos that were not directly connected, which resulted in otherwise harmless videos taking on sexual undertones because of users’ viewing histories.[8] This demonstrates the many nefarious results of echo chambers across a variety of social media and video-sharing sites, and the lack of effective action makes it likely that this will continue.


Nation-state actors also play a key role in the impact that echo chambers can have. For example, a nation-state actor can sow division and radicalize people’s views by posting an article on social media that is highly likely to attract interaction from a large number of users. Some genuine users will either strongly agree or strongly disagree with the article, a dynamic that can be exacerbated by bot accounts run by the nation-state actor or other opposing forces seeking to increase interaction with the post. By deploying bot accounts that both agree and disagree, the nation-state can target a wide range of users’ echo chambers; for example, by publishing on a political issue in the United States (US), it can attract both Republican and Democratic supporters to its false news post and contribute to the extremely tense political atmosphere. China in particular is in a strong position to take advantage of these weaknesses, because the sites with the most prevalent AI algorithms, such as Facebook, Twitter, and YouTube, are banned in the country,[9] meaning its own population cannot easily be targeted in the same way. China can exploit this by targeting individuals with strong beliefs in competing countries, namely the US, and stoking those tensions with disinformation campaigns. Nation-state actors are very likely to continue taking advantage of echo chambers, as they can gain a competitive advantage over rivals by stoking hatred and distracting the target state’s government. Legislation therefore needs to be introduced as a matter of urgency to prevent external nation-states from contributing to the spread of fake news and deepening the political divide in the US and around the world, and to prevent democratic elections from being swayed by false articles from a nation that would prefer one political candidate over the other.


Conspiracy theorists and conspiracy groups such as QAnon can share their ideas at rapid rates because of echo chambers. Feeding users false narratives, as seen with the conspiracy theory that 5G technology caused COVID-19, can inspire them to join groups sharing extremist rhetoric, make further connections within the social media space, and ultimately become violent and attempt to conduct an attack. When a user interacts with one post containing a conspiracy theory, they are then likely to be exposed to similar articles to increase the chance that they will continue reading and remain on the site. This has negative consequences when many users read false conspiracy theories and genuinely believe them. When a post has many interactions, a user is more likely to believe that the post is true, even if those interactions come from bot accounts or believers in the ideology. QAnon is relatively successful at manipulating echo chambers to share conspiracy theories with users who are more likely to believe them. For example, users who shared posts related to Make America Great Again (MAGA) in support of former US President Trump were identified as a target group for conspiracies surrounding the election, while women who shared pictures of their children were identified as a target group for conspiracies about the mistreatment of children by celebrities and senior Democratic politicians.[10] Once QAnon successfully attracts one user to read and interact with its posts, echo chambers increase the chance that others in that user’s network who are likely to believe the content will also see it. This is almost certainly why conspiracy theories can quickly become dangerous, particularly when they have the potential to radicalize an individual.


Most social media companies perceive echo chambers as a non-issue and have little incentive to detect, deter, and defeat their impact; thus, echo chambers will likely persist on social media platforms. Some platforms, like YouTube, have restricted content connected to children following exposés and the increasing enforcement of regulations such as the Children's Online Privacy Protection Act (COPPA), but other content remains unrestricted.[11] The lack of regulation of content and policies on these platforms allows vulnerable individuals to fall prey to extremist ideologies and misinformation in an environment artificially created by their previous choices. In many cases, users are not even aware that information is being tailored toward them or of what they can do to avoid these situations. Officials and lawmakers should push for legislation in their respective countries to regulate these online platforms, which would likely enable accountability and incentivize platforms to treat echo chambers as a threat. As these social media platforms are global, increased international pressure and concern will likely be the only way to enforce change.


In the US, one change could be to repeal or amend Section 230 of the Communications Decency Act, which states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[12] Essentially, this means that the providers of a platform - such as Facebook or Twitter - cannot be held accountable for the statements, and ultimately the actions, of their users. This has contributed to situations like doxxing - the public release of identifying information about an individual - which can lead to stalking and ruined reputations. Because platforms currently cannot be held accountable for the actions of their users, such as the posting of content and the placement of advertisements, this information can be circulated through the AI system and pushed to other users. Until social media platforms are held accountable for the information their users post, banning or limiting such extremist material and rhetoric will not occur. When companies are held accountable for their users’ actions, through either legal or financial repercussions, they will likely increase their oversight of such posts.


The AI algorithms themselves also need to be amended to filter the content being promoted to the user. Even if a user is actively looking for extremist material, the AI should have a cut-off point beyond which it will not feed back content containing certain phrases, images, or information. As AI is not yet at a stage where it can operate without oversight, increased staff may be necessary to monitor for dog-whistle phrases - phrases that communicate certain things to specific audiences - and for images connected to these groups that may seem innocuous but carry deeper meanings, as seen with the Boogaloo Bois’ use of phrases like “alphabet gang” for federal agencies and of Hawaiian patterns to show affiliation. This will help prevent users who search for posts related to an ideology from identifying its most extreme elements and finding users who will assist in their radicalization. It will also stop extremists from sharing coded messages that current AI algorithms do not pick up, and therefore reduce the amount of information they can share publicly.
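
A minimal sketch of what such monitoring support could look like is shown below, assuming a simple analyst-maintained watchlist. The phrases and function names are illustrative only; a real moderation pipeline would rely on much richer detection and, as noted above, on human reviewers for context.

```python
import re

# Illustrative watchlist only -- a real deployment would maintain a much larger,
# regularly updated list curated by analysts.
DOG_WHISTLE_PATTERNS = [
    r"\balphabet gang\b",   # phrase used by some Boogaloo adherents for federal agencies
    r"\bboogaloo\b",
    r"\bday of the rope\b",
]

def flag_for_review(post_text: str) -> list[str]:
    """Return the watchlist patterns matched in a post so a human moderator can review it."""
    text = post_text.lower()
    return [p for p in DOG_WHISTLE_PATTERNS if re.search(p, text)]

post = "Meet the alphabet gang downtown, wear your Hawaiian shirt"
matches = flag_for_review(post)
if matches:
    # The post is queued for human review rather than removed automatically,
    # since coded language needs analyst judgment to interpret in context.
    print("Queue for moderator review:", matches)
```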


Universal criteria that can be used across social media platforms should be developed to specify what is categorized as problematic. These criteria must apply to all extremist ideologies and be constantly updated to contend successfully with the threats posed by new groups and growing ideologies. AI can then be used to identify extremist posts by detecting terminology associated with extremist groups and removing their content. Facebook already owns several popular social media platforms, including Messenger, Instagram, and WhatsApp, so universal criteria developed with Facebook’s support could be implemented quickly and successfully across a large share of the market. Although this may be deemed a violation of the First Amendment of the US Constitution because it restricts content posted online, the dangers posed by online radicalization far outweigh these concerns. The European Union approved a similar law in June 2021, whereby terrorist content flagged by a member state must be removed from the platform within one hour.[13] This will prevent users from sharing these ideologies online and hinder echo chambers from developing, which can very likely reduce the impact of online radicalization.
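
One way such universal criteria could be shared and kept up to date is as a versioned file that every participating platform consumes. The sketch below is a hypothetical illustration of that idea; the field names, categories, and actions are assumptions rather than an existing industry standard.

```python
import json

# Hypothetical shared criteria file that several platforms could consume.
# Categories, terms, and actions are illustrative placeholders.
UNIVERSAL_CRITERIA = json.loads("""
{
  "version": "2021-06-01",
  "categories": [
    {"name": "violent_extremism", "terms": ["day of the rope"], "action": "remove_and_review"},
    {"name": "coded_affiliation", "terms": ["alphabet gang"], "action": "human_review"}
  ]
}
""")

def evaluate(post_text: str, criteria: dict) -> list[tuple[str, str]]:
    """Return (category, action) pairs triggered by a post under the shared criteria."""
    text = post_text.lower()
    return [
        (cat["name"], cat["action"])
        for cat in criteria["categories"]
        if any(term in text for term in cat["terms"])
    ]

print(evaluate("they call us the alphabet gang", UNIVERSAL_CRITERIA))
# [('coded_affiliation', 'human_review')]
```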


A simpler process for users to report content online should be developed, as the current process is time-consuming and complicated.[14] This likely discourages users from reporting posts they deem radical and inappropriate for public forums. By making reporting easier, social media sites will probably see an increase in reported posts. As a result, platforms will need to invest in staff and training to respond promptly to all identified threatening posts before they gain widespread popularity. Response times should improve, and a follow-up message should be sent to the reporter when action is taken on the content they reported. The follow-up message is key, as it demonstrates that social media sites take these threats seriously and are addressing the issue, making it more likely that the user will report a similar post in the future.
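
The report-and-follow-up workflow described here can be sketched in a few lines of Python. The class and function names below are hypothetical, and the notification step is a stand-in for a platform’s real messaging system.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Report:
    post_id: int
    reporter_id: int
    reason: str
    created: datetime = field(default_factory=datetime.now)
    resolved: bool = False

review_queue: list[Report] = []

def submit_report(post_id: int, reporter_id: int, reason: str) -> Report:
    """One-step reporting: the fewer fields required, the more likely users are to report."""
    report = Report(post_id, reporter_id, reason)
    review_queue.append(report)
    return report

def send_notification(user_id: int, message: str) -> None:
    # Stand-in for the platform's messaging system.
    print(f"to user {user_id}: {message}")

def resolve(report: Report, action_taken: str) -> None:
    """After a moderator acts, notify the reporter so they know the report mattered."""
    report.resolved = True
    send_notification(
        report.reporter_id,
        f"Thanks for your report on post {report.post_id}: {action_taken}.",
    )

r = submit_report(post_id=42, reporter_id=7, reason="extremist recruitment")
resolve(r, "the post was removed and the account reviewed")
```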


There should also be an increase in the number of extremist accounts removed, as this will reduce the impact of echo chambers. When a user shares a post identified as radical, either through another user’s report or through improved AI algorithms, the platform needs to decide whether the content was a one-off, a mistake, or a misunderstanding, or whether the account has shared similar posts before and is therefore using the platform to spread extremist rhetoric. An increase in the removal of extremist accounts will likely prevent users from interacting with them, and their posts will be less likely to enter the mainstream. This may prevent extremist echo chambers from being created, providing users with a level of protection and making them less likely to interact with extremists.
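
A simple way to operationalize the one-off-versus-pattern decision is a strike-based escalation rule, sketched below with purely illustrative thresholds; actual platforms would weigh many more signals before removing an account.

```python
from collections import Counter

# Hypothetical escalation rule: a single flagged post triggers a warning, while
# repeated flagged posts lead to account removal. The threshold is illustrative.
STRIKE_LIMIT = 3
strikes: Counter[str] = Counter()

def handle_flagged_post(account_id: str) -> str:
    """Decide whether a flagged post is treated as a one-off or part of a pattern."""
    strikes[account_id] += 1
    if strikes[account_id] >= STRIKE_LIMIT:
        return "remove_account"    # repeated extremist content: remove the account
    elif strikes[account_id] == 1:
        return "warn_and_review"   # possible mistake or misunderstanding
    return "restrict_reach"        # intermediate step: limit amplification

for _ in range(3):
    print(handle_flagged_post("user_123"))
# warn_and_review, restrict_reach, remove_account
```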


The Counterterrorism Group (CTG) is continuing to monitor the impact of extremists on online social media platforms, and the risks of online radicalization, with our Threat Hunters. CTG’s Worldwide Analysis of Threats, Crime, and Hazards (WATCH) Officers are producing daily reports that examine upcoming and developing threats and extremist activity. CTG’s Extremism Team is continuously monitoring extremist rhetoric, both in the US through the Domestic Extremism Project and across the globe in collaboration with teams across CTG. The Extremism Team is continuing to identify the impact of echo chambers on online radicalization. We are also examining developments in the use of AI and how it can be utilized to deal more effectively with extremists online.

________________________________________________________________________ The Counterterrorism Group (CTG)

[2] Digital Media Literacy: What is an Echo Chamber?, GCFGlobal, n.d., https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/

[3] Americans Attracted to ISIS Find an ‘Echo Chamber’ on Social Media, The New York Times, December 2015, https://www.nytimes.com/2015/12/09/us/americans-attracted-to-isis-find-an-echo-chamber-on-social-media.html

[4] The echo chamber effect on social media, PNAS, January 2021, https://www.pnas.org/content/pnas/118/9/e2023301118.full.pdf

[5] Facebook Has Been Showing Military Gear Ads Next To Insurrection Posts, BuzzFeed News, January 2021, https://www.buzzfeednews.com/article/ryanmac/facebook-profits-military-gear-ads-capitol-riot

[6] Measuring magnetism: how social media creates echo chambers, Michele Travierso, Nature Italy, February 2021, https://www.nature.com/articles/d43978-021-00019-4

[7] Prosecutors say Dylann Roof ‘self-radicalized’ online, wrote Another Manifesto in Jail, Mark Berman, Washington Post, August 2016, https://www.washingtonpost.com/news/post-nation/wp/2016/08/22/prosecutors-say-accused-charleston-church-gunman-self-radicalized-online/

[8] On YouTube’s Digital Playground, an Open Gate for Pedophiles, The New York Times, June 2019, https://www.nytimes.com/2019/06/03/world/americas/youtube-pedophiles.html

[9] The Complete List of Blocked Websites in China & How to Access Them, VPN Mentor, n.d., https://www.vpnmentor.com/blog/the-complete-list-of-blocked-websites-in-china-how-to-access-them/

[10] How QAnon rode the pandemic to new heights — and fueled the viral anti-mask phenomenon, NBC, August 2020, https://www.nbcnews.com/tech/tech-news/how-qanon-rode-pandemic-new-heights-fueled-viral-anti-mask-n1236695

[11] YouTube officially rolls out changes to children’s content following FTC settlement, The Verge, January 2020, https://www.theverge.com/2020/1/6/21051465/youtube-coppa-children-content-gaming-toys-monetization-ads

[12] Section 230 of the Communications Decency Act, Electronic Frontier Foundation, n.d., https://www.eff.org/issues/cda230

[13] Daily News 07 / 06 / 2021, European Commission, June 2021, https://ec.europa.eu/commission/presscorner/detail/en/mex_21_2883

[14] How online platform transparency can improve content moderation and algorithmic performance, Brookings, February 2021, https://www.brookings.edu/blog/techtank/2021/02/17/how-online-platform-transparency-can-improve-content-moderation-and-algorithmic-performance/
