
THE CHALLENGES OF COMBATTING DISINFORMATION IN FOREIGN LANGUAGES

Chelsey Addy, Allegra Berg, Sophie Provins, Camille Rogers, Kesa White; Extremism

May 2, 2021


Disinformation, Fake News & Propaganda[1]


Disinformation is an increasingly prevalent issue on the internet and is becoming overwhelmingly difficult to counter. As some social media sites, such as Twitter, allow anything to “trend,” with posts garnering thousands or millions of views, these posts can reach a massive audience and ultimately be perceived as fact. Disinformation can pose a significant security issue: people’s views are influenced by the news, which can lead to democratic elections being affected or to individuals being radicalized on the basis of false information. Successfully countering it presents a multitude of challenges. While automated systems exist for this purpose - such as the warnings Facebook and Twitter attach to posts that use certain phrases - challenges arise when posts are in languages other than English. Because artificial intelligence (AI) moderation is predominantly targeted at the English language, disinformation disseminated to non-English-speaking audiences, such as in Spanish or Arabic, is neither addressed nor combatted successfully. If nothing is done to improve this, disinformation is highly likely to continue spreading on social media platforms, contributing to global tensions and potentially leading to riots, compromised elections, and ultimately violence.


Disinformation can take oral, written, or visual forms, but its purpose is the same across all mediums: to sway the viewer toward the creator’s narrative - one that is false and usually contains little to no truth. When disinformation spreads to a virtual international community across the internet, mistranslations can lead to an even darker echo chamber. Disinformation can sometimes be accidental, as when posts originate in one language and are translated into others via applications like Google Translate. These applications can cause meaning to be lost in translation, or substitute incorrect words that change the connotation altogether. In many cases, however, the disinformation is deliberate, with the creator tailoring it toward a certain audience to connect with their specific grievances. Often this disinformation is based on some kernel of fact, such as an election or a piece of genuine information. That fact is then transformed and skewed so that the information ultimately distributed has little to no connection to the truth, and can therefore transform the reader’s perception of the initial event.


Individuals who generate disinformation posts make it difficult to decipher factual information. Social media platforms are used by individuals across the world, which makes finding posts in various languages easier than ever. According to Statista, a website that collects market and consumer data, English is the most popular language online, with Chinese following close behind.[2] The disparity between disinformation policies designed for English and those for other languages, such as Chinese, has contributed to the increase in disinformation in those languages, where posts have a far higher chance of significant interaction and a far lower chance of being detected by AI algorithms and removed by the social media site.


As social media platforms continue to expand their user bases, the risk of encountering disinformation increases. The creators of disinformation will post from their personal accounts, or they will use false accounts and “bots” to spread their information. A “bot” is an automated communication mechanism programmed by a developer to perform a specific task. While not all bots are malicious, they have recently been exploited to spread disinformation.[3] Platforms such as Twitter and Facebook host “bot” or “sock puppet” accounts that generate information to reach the masses, whether fact or fiction.[4] Programmers can create these entities to push inaccurate translations of stories to users. A user aiming to spread disinformation may operate multiple bots to increase the chances that users with varying interests will see the disinformation. Disinformation can spread easily because a user may not fully read the story they are being exposed to; the average user is more likely to “like,” “comment,” or “retweet” before examining the profile of the user that posted the information.
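One simple behavioral signal behind the bot detection described above is posting cadence: automated accounts often post at an implausibly constant, high frequency around the clock. The sketch below is a toy illustration of that single heuristic - the function name, threshold, and timestamps are all invented for the example, and real detection systems combine many more signals.

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, max_avg_gap_seconds=60):
    """Flag an account whose average gap between posts is implausibly short.

    A crude single-signal heuristic: humans rarely sustain one post per
    minute indefinitely, while simple bots often do.
    """
    if len(timestamps) < 2:
        return False
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps) <= max_avg_gap_seconds

start = datetime(2021, 5, 1, 12, 0, 0)
bot_like = [start + timedelta(seconds=30 * i) for i in range(100)]  # a post every 30s
human_like = [start + timedelta(hours=3 * i) for i in range(10)]    # a post every 3h

print(looks_automated(bot_like))    # True
print(looks_automated(human_like))  # False
```

In practice a threshold this simple is easy to evade (bots can randomize their timing), which is one reason platforms struggle to remove such accounts at scale.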


AI algorithms are increasingly being tailored to detect posts containing disinformation before they have the opportunity to gain numerous views. Success in this field is currently limited, particularly on social media sites where a post can gain significant popularity in a short space of time, often before the algorithm has detected it.[5] By that point, many users are likely to have taken screenshots or otherwise preserved the post, making it near impossible to remove all traces of it. Such posts are particularly unlikely to be detected when they reference an event that many users are discussing online, and in cases such as the Capitol insurrection in January 2021, they can feed further planning of significant violence and harm across various communities. AI algorithms have been particularly unsuccessful at detecting disinformation in languages other than English. When a post goes viral, a quick method used by those seeking to spread it is to repost it on various accounts in different languages, so that by the time the algorithm flags it as disinformation, it has already reached numerous people across the globe who speak a variety of languages. This enables disinformation to spread globally at accelerated speeds, which is likely to contribute to a greater response from viewers. Additionally, social media companies are more likely to feel successful in combating disinformation because they detected some of the posts, particularly those in the most popular languages online such as English, and are therefore more likely to miss those in other languages. This false sense of achievement is one of many contributing factors in why disinformation spreads so successfully online, particularly on social media platforms. There is a strong sense amongst social media companies that they are the solution to the problem, while they fail to acknowledge that they themselves are a significant part of the issue.
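The English-language bias described above can be made concrete with a toy keyword filter - not any platform’s real system, and the flagged phrases are invented for illustration. An English-only phrase list catches a false claim in English but lets the identical claim in Spanish pass untouched:

```python
# Toy English-only phrase filter; the phrases are invented for this sketch.
FLAGGED_PHRASES_EN = {"vaccine is not necessary", "election was stolen"}

def is_flagged(post: str) -> bool:
    """Return True if the post contains any flagged English phrase."""
    text = post.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES_EN)

english_post = "Breaking: the vaccine is not necessary, experts say!"
spanish_post = "Última hora: la vacuna no es necesaria, dicen los expertos."

print(is_flagged(english_post))  # True  - caught by the English list
print(is_flagged(spanish_post))  # False - the same claim slips through
```

Real moderation systems are far more sophisticated than keyword matching, but the underlying asymmetry is the same: models and phrase lists built for English simply do not see equivalent content in other languages.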


An increasing occurrence on social media is state-led and/or subcontracted disinformation campaigns, distributed to a targeted region or globally. State-sponsored disinformation is when the state manipulates its citizens and outsiders into believing a narrative. While not all users may speak the language, they may still “retweet” (Twitter’s form of re-posting a message) the post as a way of showing support. The “retweet” does more harm than good, because others can now see the post, and it will continue circulating to the point that people may actually believe the content. States may have an underlying strategy for promoting disinformation in their native language, especially around a controversial issue when they are attempting to repair their image. Creators of disinformation can use native slang and rhetoric that is not easily translated because it may not exist in other languages, so applications such as Google Translate will choose wording that can change the meaning. For example, the Virginia Department of Health’s website told Spanish readers that the COVID-19 vaccine was “not necessary” to be protected from COVID-19, when it meant the vaccine was not required by state law.[6] The Spanish wording - “la vacuna no es necesaria” - could easily be misinterpreted this way, showing how much the potential for misinterpretation depends on the English words chosen. Such mistranslation can leave minority and non-English-speaking communities more vulnerable with respect to COVID-19 and general health and safety, as well as in other situations such as voting. As many websites rely on automated systems like Google Translate, certain phrases can be mistranslated and thereby convey a completely different meaning.
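One low-tech safeguard against mistranslations like the “not necessary”/“not required” example is to flag known-ambiguous source phrases for human review before a page is machine-translated. The sketch below is a hypothetical pre-publication check; the ambiguity list and function are invented for illustration, not part of any real workflow mentioned above.

```python
# Hypothetical list of English phrases known to translate ambiguously.
# Invented for this sketch; a real list would be built by translators.
AMBIGUOUS_TERMS = {
    "not necessary": "may read as 'not needed' rather than 'not mandated'",
    "required": "legal obligation vs. practical need",
}

def review_flags(source_text: str) -> dict:
    """Return the ambiguous phrases found in the text, with the reason each
    needs a human translator's review before machine translation."""
    text = source_text.lower()
    return {term: why for term, why in AMBIGUOUS_TERMS.items() if term in text}

notice = "The COVID-19 vaccine is not necessary to visit state offices."
print(review_flags(notice))
```

A check like this does not fix the translation; it only routes high-risk sentences to a bilingual reviewer, which is exactly the step the Virginia Department of Health example suggests was missing.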


A specific example of the spread of disinformation comes from Facebook, which in 2019 removed 783 pages, groups, and accounts for engaging in “inauthentic” behavior tied to Iran.[7] Iran was not the only country involved in spreading disinformation on Facebook: Afghanistan, Egypt, France, Germany, Iraq, Libya, and Pakistan are among several others. According to Facebook, the account owners who spread the untruthful information represented themselves as locals, using fake accounts and posting news on current events, including topics such as Israel-Palestine relations and the conflicts in Syria and Yemen.[8] The danger of posing as a local is that viewers are more likely to believe the account is truthful, as it claims firsthand knowledge and experience. The notion of ‘fake news’ has added to this atmosphere, as some viewers are more likely to believe a post written by someone claiming to be on the ground than a reputable news site. In examples such as the Israeli-Palestinian crisis, a user claiming to be Palestinian has the potential to create sympathy for their cause and contribute to a global rise in anti-Semitic or anti-Israel sentiment.


The potential for extremist group recruitment is also a growing threat in tandem with disinformation in different languages. Many extremist groups distribute newsletters both in their home countries and abroad, making translation an important aspect of the publishing process. One example of a newsletter using disinformation for recruitment purposes is “Al Naba,” written by the Islamic State of Iraq and the Levant (ISIL). One issue stated that “committing acts of terror makes jihadis immune to COVID-19 and that its supporters should take this opportunity to mount further attacks.”[9] Newsletters such as “Al Naba” contribute greatly to disinformation surrounding the pandemic and COVID-19 while also promoting recruitment tactics that could be picked up by other extremist groups. As many people are increasingly affected by issues resulting from COVID-19, such as mental health concerns, it is probable that more will be vulnerable to these recruitment methods, which could lead to future radicalization. As anger over international and domestic issues grows, so does the likelihood of people searching for and identifying with radical ideas posted online. Extremist groups are also able to continually and accurately translate information into other languages to reach a larger target audience, avoiding the conventional phrases that can result in accounts being banned or posts being removed.


Languages with characters not used in English, such as Arabic, Japanese, or the various Chinese languages, are a particular target for disinformation campaigns. Because individual characters can have multiple meanings and the grammatical structures differ from English, such posts are less likely to be picked up by AI algorithms. Should they choose to, nation-states are in a strong position to spread disinformation in character-based languages, as they have greater awareness of the colloquialisms and alternative meanings within those languages. Social media platforms are largely based in the United States (US), and their focus is therefore on English. As a result, they are often unable even to translate these posts successfully, let alone counter disinformation that exploits the alternative meanings of certain characters. To combat this, social media sites need a far more global workforce that can decipher and interpret the alternative meanings of various phrases, and will therefore be stronger at implementing AI to detect extremist viewpoints or disinformation.
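A first technical step toward the routing described above is simply recognizing which writing system a post uses, so that non-Latin-script content is not funneled through English-only filters by default. The sketch below is a minimal, assumed workflow (not any platform’s actual pipeline) that infers the dominant script of a post from Unicode character names, which begin with the script, e.g. “ARABIC LETTER HEH”:

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Return the most common Unicode script among the letters in `text`,
    e.g. 'LATIN' or 'ARABIC', so the post can be routed to reviewers or
    models that cover that language."""
    counts = {}
    for ch in text:
        if not ch.isalpha():
            continue  # skip digits, punctuation, and whitespace
        # Unicode names start with the script: "LATIN SMALL LETTER A", etc.
        name = unicodedata.name(ch, "UNKNOWN")
        script = name.split()[0]
        counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "NONE"

print(dominant_script("This is an English post"))   # LATIN
print(dominant_script("هذا منشور باللغة العربية"))  # ARABIC
```

Script detection only tells a moderation system which queue a post belongs in; understanding the colloquialisms and double meanings within that script still requires speakers of the language, which is the staffing gap the paragraph above identifies.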


Although Facebook pledged to help prevent the spread of disinformation, it is currently failing when it comes to falsehoods about COVID-19, vaccines, election fraud, and conspiracy theories in Spanish-language posts. Facebook has tightened its policies to address misleading content; however, this enforcement has left a communication gap for Latino communities that makes them even more vulnerable to disinformation.[10] To combat this, a media coalition called ‘Ya Basta Facebook’ is calling on the social media platform, as well as others, to create new executive roles that oversee Spanish-language content in the US.[11] The campaign is also encouraging companies to disclose more about the automated systems that handle posts in Spanish. To do this, Facebook must detail how it treats posts in Spanish (and in other languages), including whether content moderators are knowledgeable in the specific language and how the algorithms that review these posts - in their original form and in English translation - are updated and developed. If Facebook discloses these details, preventing posts and articles that confuse readers or carry disinformation becomes more achievable. Other languages could also benefit from this type of platform and technology, helping avoid confusion and giving Facebook the opportunity to stop spreading false information through poor translations.


During the COVID-19 pandemic, QAnon followers were notorious for spreading disinformation in various languages online, to the point that the movement took root in places such as Germany, with a spin-off group named QPatrionen24.[12] The movement gained momentum and followers across the globe, who spread its conspiracy theories on social media platforms in their respective languages. The insurrection at the US Capitol led to QAnon targeting Germany, as many followers there were proud of the steps the insurrectionists took to protect their homeland. Due to the high tensions caused by the situation, social media sites took certain precautions to curb the spread of hateful speech that relied on disinformation to anger others, such as Twitter permanently removing former President Trump’s account and beginning to flag inaccurate tweets. However, many users were able to continue exchanging these ideas freely, so long as they did so in a language other than English. Therefore, in times of high tension, it is essential that social media companies examine trends in regions that do not speak English, or their efforts are unlikely to be effective, no matter how drastic.


Combatting online disinformation in the US is also difficult because the First Amendment limits government regulation of speech, leaving enforcement largely to the platforms themselves. Social media companies are creating stricter regulations and policies around disinformation by screening content and labeling posts that may contain false information. This warning alerts users that the post they are viewing may contain false information and that they should be cautious about circulating the content on the platform. Accounts that continuously spread false information, and bots, should be removed from these sites, as they are almost certainly fueling the issue. Further analysis and programming should be developed to monitor posts in various languages and flag them for deletion. It is easier to deter the threat of disinformation than to disrupt it entirely. In July 2020, NATO released its response to the growing threat of disinformation during the pandemic, which includes “enhanced communications,” particularly in Russian, as well as increasing the number of videos on its YouTube channel in different languages covering news around the world.[13]


The Counterterrorism Group (CTG) assesses that the threat climate is high, especially in the near future as the coronavirus pandemic continues to hit different countries in waves. COVID-19-based disinformation has become a tool both for extremist recruitment and for national campaigns supporting regional interests. The rollout of various vaccinations in different countries has bred an environment likely to lead to an increase in disinformation. Additionally, CTG assesses that it is highly likely that mistranslated disinformation can lead to violence outside the country from which it was published. As disinformation can be created by anyone in the world, with no connection to the target of the campaign, its origins may have little or no connection to the topic being posted about. CTG will continue to monitor disinformation posts in different languages and their impact online: its Threat Hunters monitor the disinformation itself, while its Worldwide Analysis of Threats, Crime, and Hazards (WATCH) Officers evaluate the impact. CTG’s Extremism team is evaluating the impact of disinformation on radicalization and the extent to which it can cause violence across the globe, working with regional teams to ensure a successful global evaluation and with the CICYBER team to establish the dangers of online threats. Awareness of the spread of disinformation, especially across translations from one language to another, needs to continue to grow in order to help prevent the confusion and misunderstanding that arise when a message is misread due to automated translation.

________________________________________________________________________ The Counterterrorism Group (CTG)

[1] Propaganda by Allegra Berg via Canva

[2] Internet: most common languages online as of January 2020, by share of internet users, Statista, June 2020, https://www.statista.com/statistics/262946/share-of-the-most-common-languages-on-the-internet/

[3] Twitter Bots Poised to Spread Disinformation Before Election, New York Times, October 2020, https://www.nytimes.com/2020/10/29/technology/twitter-bots-poised-to-spread-disinformation-before-election.html

[4] Cyborgs, trolls, and bots: A guide to online misinformation, Associated Press, February 2020, https://apnews.com/article/us-news-ap-top-news-elections-social-media-technology-4086949d878336f8ea6daa4dee725d94

[5] How to deal with AI-enabled disinformation, Brookings, November 2020, https://www.brookings.edu/research/how-to-deal-with-ai-enabled-disinformation/

[6] Translation on Virginia Department of Health’s Website Told Spanish Readers They Didn’t Need COVID-19 Vaccine, American Translators Association, January 2021, https://www.atanet.org/industry-news/translation-on-virginia-department-of-healths-website-told-spanish-readers-they-didnt-need-covid-19-vaccine/

[7] Government Responses to Disinformation on Social Media Platforms: Comparative Summary, Library of Congress, December 2020, https://www.loc.gov/law/help/social-media-disinformation/compsum.php

[8] Ibid.

[9] NATO’s approach to countering disinformation: a focus on COVID-19, NATO, January 2020, https://www.nato.int/cps/en/natohq/177273.htm

[10] 'Ya Basta Facebook' Says Company Must Curb Misinformation In Spanish, NPR, March 2021, https://www.npr.org/2021/03/16/977613561/ya-basta-facebook-says-company-must-curb-misinformation-in-spanish

[11] Ibid.

[12] As Donald Trump exits, QAnon takes hold in Germany, Deutsche Welle, January 2021, https://www.dw.com/en/as-donald-trump-exits-qanon-takes-hold-in-germany/a-56277928

[13] NATO’s approach to countering disinformation: a focus on COVID-19, NATO, January 2020, https://www.nato.int/cps/en/natohq/177273.htm
