
ARTIFICIAL INTELLIGENCE BIAS EFFECTS ON EXTREMISM

Lydia Pardun, Katelyn Ramirez, Extremism Team

Week of Monday, November 22, 2021


Artificial Intelligence[1]


As security fields digitize, technology improves the speed and reach of predictions and analysis. Since the creation of artificial intelligence (AI), organizations and companies have explored its use to improve the user experience on media platforms and to aid criminal investigations.[2] Police and federal agencies use AI to help identify faces in photos or video, retrieve data on individuals, and filter outdated system information.[3] However, AI can become a liability given previous instances of racial and ethnic bias. The perpetuation of racial stereotypes through AI imposes additional risks on marginalized groups and bolsters far-right narratives about these demographics.[4] If police departments continue to implement biased AI technologies, the overemphasis of minorities as a threat is likely to continue. This will likely stimulate far-left narratives and encourage further government distrust and far-left extremism against law enforcement.


In July 2018, the AI tool COMPAS, used by US judicial systems, was placed under review after displaying racial bias when estimating the probability of repeat offenses.[5] The tool examined individuals’ past offenses, employment status, community ties, educational problems, and substance history to determine the risk of reoffending.[6] Given the bias in the selected data, models built on these characteristics will likely continue to focus on marginalized groups. Since individuals and companies regard AI technologies as moderately accurate, policing agencies will very likely interpret results as substantive and factual rather than biased and misrepresentative. These errors will very likely worsen ongoing racial tensions between police and marginalized groups and encourage far-left individuals to further criticize and violently engage with policing and regulatory bodies. The intensification of this dynamic will likely lead to retaliatory and violently defensive efforts by policing agencies, targeting marginalized groups further. The use of biased data will likely deepen existing inequities in opportunities for marginalized groups. Marginalized groups will likely feel helpless if technology reinforces existing inequities and stereotypes, and there is a roughly even chance that far-left extremists will conclude that the only way to change the status quo is through violence.
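
The mechanics of this kind of label bias can be illustrated with a short sketch. The code below is not the proprietary COMPAS model; it is a minimal, hypothetical example in which a logistic-regression risk score is trained on synthetic records whose feature names mirror the categories above, and in which heavier policing of one group inflates that group’s recorded reoffense labels even though the simulated underlying behavior is identical across groups.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical group flag: 0 = majority group, 1 = marginalized group.
group = rng.integers(0, 2, n)

# Synthetic features mirroring the categories cited above; heavier policing
# inflates recorded prior offenses for the marginalized group.
priors = rng.poisson(1.0 + 0.8 * group)
employed = rng.binomial(1, 0.7 - 0.2 * group)
community_ties = rng.binomial(1, 0.6, n)
edu_problems = rng.binomial(1, 0.3 + 0.1 * group)
substance = rng.binomial(1, 0.25, n)

# True reoffense propensity is identical for both groups in this sketch...
true_reoffend = rng.binomial(1, 0.3, n)
# ...but the *recorded* label depends on how heavily each group is policed.
detection = 0.5 + 0.3 * group
recorded_reoffense = true_reoffend * rng.binomial(1, detection)

X = np.column_stack([priors, employed, community_ties, edu_problems, substance])
model = LogisticRegression().fit(X, recorded_reoffense)
scores = model.predict_proba(X)[:, 1]

print("mean risk score, majority group:    ", round(float(scores[group == 0].mean()), 3))
print("mean risk score, marginalized group:", round(float(scores[group == 1].mean()), 3))

Even though the simulated true reoffense rate is the same for both groups, the model assigns the marginalized group higher average risk scores because the labels and features it learns from already carry the policing bias.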


The Massachusetts Institute of Technology (MIT) conducted a study measuring error rates in AI gender classification from photos: errors for lighter-skinned men were 0.8 percent, while errors for darker-skinned women reached 34.7 percent.[7] If such large margins of error against darker-skinned individuals exist at large companies with extensive datasets, it is very likely that products from smaller AI companies marketing to law enforcement agencies will contain more bias. These findings display the danger of unchecked AI products and how their bias will likely continue to target minority groups, who are often the subjects of narratives regarding crime and extremism. The reinforcement of these beliefs through technology will very likely intensify ongoing problems such as racial profiling, over-policing, and religious discrimination. Marginalized groups will likely be increasingly surveilled and targeted as perpetrators of crime and extremism. Since AI technology depends on human behavior to work, companies will likely have to recognize that AI cannot be used as an infallible alternative to manual investigation. Companies will likely have to utilize datasets that are equitable across identities and improve AI algorithms to be more inclusive of darker-skinned individuals.
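
The study’s core method, evaluating a classifier separately for each demographic subgroup rather than reporting a single overall accuracy figure, is straightforward to reproduce in outline. The sketch below uses synthetic predictions seeded with the error rates the study reported; it is illustrative only and does not use the study’s data or models.

import numpy as np

rng = np.random.default_rng(1)
samples_per_group = 1000

# Hypothetical audit: simulate predictions whose per-group error rates match
# the figures reported by the MIT study, then measure them group by group.
reported_error_rates = {
    "lighter-skinned men": 0.008,
    "darker-skinned women": 0.347,
}

for subgroup, error_rate in reported_error_rates.items():
    y_true = rng.integers(0, 2, samples_per_group)        # true labels (0/1)
    flip = rng.random(samples_per_group) < error_rate     # which predictions are wrong
    y_pred = np.where(flip, 1 - y_true, y_true)
    measured = np.mean(y_pred != y_true)
    print(f"{subgroup}: measured error rate = {measured:.1%}")

A single accuracy number averaged across both groups would hide the gap entirely, which is why disaggregated evaluation is the standard way to surface this kind of bias.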


AI products have been scrutinized for disproportionately targeting Muslims and other minorities, compounding the bias in the AI technology that policing agencies use.[8] Smaller companies, such as the social media monitoring firm Voyager, have advertised the efficiency of their AI technology in aiding law enforcement by identifying prospective extremists based on the type of content and engagement on social media. Such applications almost certainly create more bias and false positives than manual intelligence collection in crime prevention and counterterrorism. Biased AI applications will likely reinforce specific narratives about marginalized groups, and the distribution of these products to agencies with the power of enforcement will likely create more risks for these groups. Bias also almost certainly raises ethical concerns about federal use and will likely intensify distrust in government and encourage extremism from both the far right and the far left. Bias will very likely reinforce racial stereotypes and can be used to manipulate the societal image of certain groups, scapegoating them as extremists and criminals based on identity alone. This will likely impose major risks on these vulnerable groups and likely create an opportunity for demographically motivated extremism and targeted attacks against them.


Analysts studying bias in AI and machine learning (ML) conclude that bridging gaps and inadequacies in these technologies’ learned interpretations of race, gender, and ethnicity begins with enlarging the datasets used to train the AI, using more accessible information, and minimizing the use of limited datasets that reinforce criminal narratives about minority groups based on identity alone.[9] Subject matter experts are almost certainly beneficial in mitigating pre-existing biases, as they very likely provide insights into missing data or outliers that technology companies likely miss. The absence of neutrality in these early steps has very likely led to biased AI algorithms that almost certainly reinforce specific narratives. Involving subject matter experts will likely expose societal differences between identity groups, and data cleaning will likely force companies to identify historical discrepancies in data caused by unequal racial access to certain opportunities and advantages. How a company chooses to respond to these errors will very likely determine the feedback it receives from marginalized groups, and negative responses will likely lead to increased security risks to workers and to the company’s cybersecurity. If neutrality is not established, violent opposition to these companies’ findings and interpretations is likely to occur. Addressing biased data collection and cleaning would likely reduce the problem of biased AI.
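
One concrete version of the dataset audit described above is to measure how each identity group is represented before training and to reweight under-represented groups so they are not drowned out. The sketch below assumes a hypothetical table with a single group column; it is a simplified illustration of one mitigation step, not a complete debiasing procedure.

import pandas as pd

# Hypothetical, deliberately skewed training data: group A dominates collection.
df = pd.DataFrame({"group": ["A"] * 800 + ["B"] * 150 + ["C"] * 50})

counts = df["group"].value_counts()
print("share of records per group:")
print((counts / len(df)).round(3))

# Inverse-frequency weights give each group equal total influence when passed
# to a model's sample_weight argument during training.
weights = df["group"].map(len(df) / (len(counts) * counts))
print("total training weight per group:")
print(weights.groupby(df["group"]).sum().round(1))

Reweighting only compensates for how much of each group is present; it does not fix labels that already encode discriminatory decisions, which is why the expert review and data cleaning described above remain necessary.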


Tests of the AI company Voyager’s threat monitors show the AI labels fundamentalist and extremist Islamist affinity as a higher threat than association with far-right extremism.[10] Biases in AI monitors very likely lead to an overemphasis on minorities as a threat to be monitored by falsely labeling religious and racial minority beliefs as extreme at disproportionate rates. The imbalance of extremism identifiers very likely creates disproportionate false-positive threat scores for Islamist extremism and disproportionate false negatives for far-right extremism, likely creating the illusion that Islamist extremism is a larger threat than it is. Increased focus on Muslims and racial minorities is likely to lead police to overlook credible threats from demographics the algorithms’ biases favor. Additionally, AI biases likely reinforce the fear of racial minorities as security threats through confirmation bias, likely intensifying racism and motivating hate crimes. White supremacist extremists and other racially motivated extremist groups are very likely to exploit this confirmation bias as evidence for racist narratives, which they would then likely use as justification for extremist activity and recruitment.
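
The disparity described here is easiest to see when false positives and false negatives are measured separately for each ideological category. The figures in the sketch below are synthetic and chosen only to show how the two error types can diverge by category; they are not measurements of Voyager’s system.

import numpy as np

def error_rates(y_true, y_pred):
    # False-positive rate: benign posts flagged as threats.
    fp = np.sum((y_pred == 1) & (y_true == 0)) / max(np.sum(y_true == 0), 1)
    # False-negative rate: genuine threats the monitor misses.
    fn = np.sum((y_pred == 0) & (y_true == 1)) / max(np.sum(y_true == 1), 1)
    return fp, fn

rng = np.random.default_rng(2)
n = 1000
scenarios = [("Islamist-coded content", 0.30, 0.05),   # over-flagged
             ("far-right-coded content", 0.05, 0.40)]  # under-flagged

for category, fp_rate, fn_rate in scenarios:
    y_true = rng.binomial(1, 0.1, n)                   # 10% of posts are genuine threats
    y_pred = y_true.copy()
    y_pred[(y_true == 0) & (rng.random(n) < fp_rate)] = 1
    y_pred[(y_true == 1) & (rng.random(n) < fn_rate)] = 0
    fp, fn = error_rates(y_true, y_pred)
    print(f"{category}: false-positive rate {fp:.1%}, false-negative rate {fn:.1%}")

A monitor with this profile produces many more flags on the first category and many more missed threats in the second, which is the pattern of over-monitoring and overlooked credible threats described above.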


Grinnell, Iowa went on high alert in response to a warning from its AI monitor, Zencity, concerning false online rumors of a racial lynching in the community, despite limited available law enforcement resources, no evidence of race as a factor in the investigation in question, and the rumors originating outside the state.[11] AI monitor biases are likely to intensify the growth and impact of misinformation, likely creating new threats. False positives from a less accurate system will almost certainly lead to ineffective use of policing resources. Law enforcement is likely to assume the algorithm is more accurate at assessing risk than it actually is and to attribute disagreements with AI results to human error, limiting the capabilities of other police projects. If law enforcement uses resources on false-positive threats, it will almost certainly have fewer resources available for other threats. Believing in a threat that does not exist will likely also send communities into a panic, creating chaos and forming new threats.


The results of an AI’s threat scores are often difficult to explain because AI companies maintain secrecy about the content of their algorithms.[12] AI secrecy very likely protects company algorithms from hackers and economic competition. However, keeping algorithms secret will very likely lead many individuals to fear that an AI monitor’s designation of extremism lacks oversight. Increased distrust in the algorithm’s accuracy and in law enforcement’s intent is likely to stem from this fear of the unknown. Social media users are also very likely to feel their free speech is limited: they are almost certainly aware that law enforcement uses AI social media monitors to identify threats based on online speech, but they almost certainly do not know how the scores are determined or what impact the scores might have on individuals. Conspiracy theories that the US is turning into a surveillance state are likely to spread misinformation, which will likely increase extremism. Because online extremist language is often legal, extremists likely believe law enforcement is targeting them for a crime that has not been committed. Extremist groups are likely to exploit angry individuals who feel targeted to increase activity and to develop more complicated communication codes to avoid detection by the AI monitors, which would very likely make identifying and tracking extremism online more difficult.
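
The algorithms behind commercial threat scores are not public, so the sketch below does not describe how any vendor actually computes them. It only illustrates one partial transparency measure a company could offer without releasing its model: reporting which factors contributed most to an individual score, shown here for a simple, hypothetical linear scoring scheme.

import numpy as np

# Hypothetical feature names and model weights for illustration only.
features = ["violent keywords", "follows of flagged accounts",
            "posting frequency", "group membership flags"]
weights = np.array([1.2, 0.9, 0.1, 0.7])
account_values = np.array([2.0, 5.0, 40.0, 1.0])   # one hypothetical account

contributions = weights * account_values
score = contributions.sum()

print(f"threat score: {score:.1f}")
print("largest contributions:")
for name, value in sorted(zip(features, contributions), key=lambda item: -item[1]):
    print(f"  {name}: {value:+.1f}")

A per-score breakdown like this would let a reviewer or an affected individual see why a score is high without exposing the full model, partially addressing the oversight concern while preserving intellectual property.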


Law enforcement agencies have recently stopped using AI social media monitors that received backlash, but the effects of AI use are still present.[13] As biased AI monitors are identified, law enforcement is very likely to stop using those algorithms to improve the quality of threat identification and rebuild public trust. Because AI companies are privately owned and economically competitive, consumer demand from police departments is likely to incentivize AI monitor companies to improve accuracy. Law enforcement will almost certainly continue to use AI because of its potential to identify threats in law enforcement blind spots. Implementing more trusted AI monitors will likely lower the impact of algorithm biases. Continued efforts to identify and minimize biases in AI algorithms will likely reduce disproportionate threat associations and lower the impact of racist narratives from extremist groups.


Experts have recommended using larger datasets to lower the bias in AI algorithms.[14] Larger datasets would very likely lower the impact of skewed data used to train algorithms. However, increasing the monitoring and analysis capacity of an AI also increases the cost and labor required to maintain it. Companies are unlikely to enlarge their datasets without increasing the price of their software, limiting law enforcement’s access to AI monitors. Economic competition alone is unlikely to motivate AI companies to raise the quality of their monitors to the expectations of the public and law enforcement. The government would likely achieve more quality control among algorithm companies by increasing economic incentives or adding greater regulation.


The Counterterrorism Group (CTG) recommends that law enforcement carefully review the effects of the social media monitors they currently use or are considering, to identify and address any negative impacts the AI may have. If police do not monitor the quality of the AI programs they use, AI algorithms will likely continue to limit the accuracy and effectiveness of police intelligence. Implementing non-automated monitoring and verifying AI-identified threats would likely allow law enforcement to track the accuracy and effects of AI monitors over time. Additionally, steps to increase transparency from AI companies about the factors in their algorithms, while maintaining the companies’ rights to intellectual property, would generate insight into the quality of particular algorithms and their potential impacts.
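
A minimal version of the verification step recommended here treats each AI flag as a lead rather than a conclusion: high-scoring flags enter a review queue, an analyst records a verdict, and the share of confirmed flags is tracked over time as a running accuracy measure. The data structures and threshold below are hypothetical, intended only to sketch the workflow.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Flag:
    post_id: str
    ai_score: float
    reviewed: bool = False
    confirmed: bool = False

@dataclass
class ReviewQueue:
    flags: List[Flag] = field(default_factory=list)

    def add(self, post_id: str, ai_score: float, threshold: float = 0.8) -> None:
        # Only high-scoring AI flags are queued; nothing is acted on automatically.
        if ai_score >= threshold:
            self.flags.append(Flag(post_id, ai_score))

    def record_review(self, post_id: str, confirmed: bool) -> None:
        # An analyst's verdict on a flagged post is logged for later auditing.
        for flag in self.flags:
            if flag.post_id == post_id:
                flag.reviewed, flag.confirmed = True, confirmed

    def precision(self) -> float:
        # Share of reviewed AI flags an analyst actually confirmed: a running
        # measure of how accurate the monitor is in practice.
        reviewed = [f for f in self.flags if f.reviewed]
        return sum(f.confirmed for f in reviewed) / len(reviewed) if reviewed else float("nan")

queue = ReviewQueue()
queue.add("post-001", 0.92)
queue.add("post-002", 0.55)      # below threshold, not queued
queue.record_review("post-001", confirmed=False)
print(f"confirmed share of reviewed flags: {queue.precision():.0%}")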


CTG provides daily monitoring and analysis of threats, including extremism, through its Worldwide Analysis of Threats, Crime, and Hazards (W.A.T.C.H.) Officers. The CTG’s Extremism Team monitors the activities of extremist groups online and provides analysis of extremist ideologies and patterns. The Extremism Team can also give insight into the cultural contexts surrounding extremist narratives. Additionally, SOCMINT and OSINT gathered by CTG help monitor and analyze the social impacts of AI bias in law enforcement and other public services by observing social media reactions to monitoring and how extremist groups adapt to online monitoring.


The Counterterrorism Group (CTG) is a subdivision of the global consulting firm Paladin 7. CTG proactively identifies and counteracts the threat of terrorism through intelligence and investigative products. Business development resources can now be accessed via the Counter Threat Center (CTC), launching in Fall 2021. The CTG produces W.A.T.C.H. resources using daily threat intelligence, designed to complement CTG specialty reports that utilize analytical and scenario-based planning. Innovation must accommodate political, financial, and cyber threats to maintain business continuity, regardless of unplanned incidents that may take critical systems offline. To find out more about our products and services visit us at counterterrorismgroup.com.



[3] “Artificial Intelligence and Policing: First Questions,” Seattle University Law Review, 2018, https://digitalcommons.law.seattleu.edu/cgi/viewcontent.cgi?article=2550&context=sulr

[4] Ibid

[5] “Eliminating AI Bias,” Towards Data Science, October 2021, https://towardsdatascience.com/eliminating-ai-bias-5b8462a84779

[6] “Practitioner’s Guide to COMPAS Core,” Northpointe, 2015, http://www.northpointeinc.com/downloads/compas/Practitioners-Guide-COMPAS-Core-_031915.pdf

[7] “Study finds gender and skin-type bias in commercial artificial-intelligence systems,” Massachusetts Institute of Technology, February 2018, https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

[8] “The software that studies your Facebook friends to predict who may commit a crime,” The Guardian, November 2021, https://www.google.com/amp/s/amp.theguardian.com/us-news/2021/nov/17/police-surveillance-technology-voyager

[9] “Eliminating AI Bias,” Towards Data Science, October 2021, https://towardsdatascience.com/eliminating-ai-bias-5b8462a84779

[10] “LAPD Documents Show What One Social Media Surveillance Firm Promises Police,” Brennan Center for Justice, November 2021, https://www.brennancenter.org/our-work/analysis-opinion/lapd-documents-show-what-one-social-media-surveillance-firm-promises

[11] “This AI Helps Police Monitor Social Media. Does It Go Too Far?,” Wired, July 2021, https://www.wired.com/story/ai-helps-police-monitor-social-media-go-too-far/

[13] “LAPD ended predictive policing programs amid public outcry. A new effort shares many of their flaws,” The Guardian, November 2021, https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform

[14] “Dealing With Bias in Artificial Intelligence,” The New York Times, November 2019, https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html

