Artificial Intelligence: A Tool for Terrorism?
Artificial Intelligence (AI) is the ability of machines to learn and perform tasks usually completed by humans. As new AI systems improve the efficiency and effectiveness of routine tasks, society must also prepare for and examine the future risks the technology poses. Terrorists can use AI as a tool because of exposed vulnerabilities, accessibility through commercial markets, partnerships among terrorist networks, and overall improvements in AI capability. Although private companies and the federal government strive to refine AI security and standards, terrorists can still use AI technology to their advantage, threatening national security.
AI is becoming a tool for terrorists because a large number of vulnerabilities remain unresolved, making it easier for perpetrators to infiltrate systems. In both the public and private sectors, Information Technology (IT) systems are vulnerable to simple attacks that exploit unpatched and unprotected infrastructure. Employees who are uninformed about how penetrable their infrastructure is also open the door to cyberattacks. With exposed vulnerabilities, hackers can easily target everyday individuals as well as high-profile figures. Some organizations may become entirely reliant on AI technology in the coming years, which can create more room for vulnerabilities and exploitation while weakening cybersecurity protection.
AI developers are eager to bring their products to market, yet some products lack proper security controls, which contributes to vulnerabilities. While consumer interest is at the forefront, security takes second place. No rules currently require AI security testing in the public or private sector before an organization deploys the technology, creating thousands of unknown weaknesses. AI systems also suffer from false positives and flawed classification algorithms. Hypothetically, if the U.S. military used AI to deter threats, AI robots could misinterpret information or be reprogrammed by an adversary to target U.S. soldiers instead. If those robots shared a centralized network with hundreds of others, the aftermath would be devastating, with high civilian casualties. Vulnerabilities could also allow an entire AI defense network to be disabled at once.
Aside from vulnerabilities, AI is a tool for terrorism because it is easily accessible; individuals no longer have to be scientists or specialized researchers to use it. AI is tremendously powerful in the hands of those who know how to use it and harbor ill intentions. It gives hackers and terrorists the ability to carry out attacks they may not have previously considered, aided by increasing anonymity and physical distance. Hackers can influence a country's political atmosphere by creating highly detailed disinformation campaigns that are nearly indistinguishable from authentic documents. They can also use AI-generated synthetic images, text, and audio to impersonate people online. The enormous amounts of data AI processes enable automated mass spear-phishing attacks, which can result in more actors, more victims, and the loss of sensitive information or money.
AI’s constant improvement in performance and scalability, the low cost of hardware, and progressing achievements in robotics encourage terrorists to use AI as a tool and invest in the lasting destruction it can cause. The mass distribution and easy accessibility of commercial drones allow terrorists to afford more attacks and inflict damage at an elevated rate. In addition to purchasing drones, terrorists are constructing their own AI devices and modifying previously built military technology to enhance explosive devices. The Islamic State of Iraq and the Levant (ISIS) is the terrorist organization most advanced in its use of AI technology. ISIS has launched many aerial attacks and has released several videos over the years capturing its drone strikes. The group has displayed its consumer drone of choice, the DJI Phantom, which Chinese retailers sell for about $1,000 or less.
Several terrorist organizations are using drones and AI technology to augment their attacks. Hezbollah, Hamas, the Houthi rebels, and other groups have received reconnaissance and suicide drones from Iran, with that aid intensifying since 2015. Hezbollah is deploying military-grade drones that are becoming highly sophisticated with Iran’s assistance. Hamas owns and operates both military-grade and homegrown drones, and Iran’s guidance has made the group capable of building larger drones that carry missiles. The Houthi rebels use drones for surveillance and attacks and are likewise becoming more advanced while working with Iran. Explosive, remotely operated, unmanned maritime craft now exist that can cause extensive damage to any enemy of Iran. Terrorists will go to great lengths to construct AI-enabled unmanned vehicles (UVs) and unmanned aerial vehicles (UAVs), and their growing knowledge and willingness to learn can lead to further automated physical attacks with devastating impacts.
AI provides an advantage for terrorists, as enhanced perception and physical capacities allow attacks to be planned and executed without an operative on-site or lives put at risk. AI technology is producing smaller, faster, and more flexible weapons. The production of AI products, and a possible understanding of AI technology within terrorist organizations, is essential to note, as these advancing capabilities can threaten national security. Drone regulation within the U.S. is still in progress, balancing the freedom to purchase commercial drones against the security needed to protect citizens and prevent harmful attacks. Nonetheless, terrorists are acquiring AI products at a rapid pace, and the effectiveness and precision of AI at the core of new terrorist operations must not go unnoticed.
Terrorists are benefiting from AI technology because it thinks and learns on its own; manual human control does not compare to AI’s ability to carry out attacks with great accuracy. Just as malicious software disrupted Iran’s nuclear program in the infamous 2010 Stuxnet attack, AI technology could contribute to an attack far worse than Stuxnet, with many more infected systems left unable to receive commands. Additionally, hackers and terrorists alike can extract a substantial amount of valuable and sensitive information from AI, because AI requires detailed information to work successfully. Tools such as Deep Exploit were built to help organizations find vulnerabilities and implement countermeasures, yet open-source developers must consider how hackers and terrorists can obtain the same tools, which can penetrate a targeted organization and uncover weaknesses in its systems within minutes. Although attackers must first work through an organization’s defenses, once they do, they have access to everything within it. AI can inflict irreversible damage on society, especially when even the greatest scientists and developers misinterpret it.
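To illustrate how quickly automated tooling can probe a target, the sketch below is a minimal TCP connect scanner in Python. It is a hypothetical, simplified stand-in for the reconnaissance step that frameworks like Deep Exploit automate at far larger scale (and then chain with exploit selection); the function name and parameters are our own invention for illustration.

```python
import socket

def scan_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising on failure
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Scan a handful of common service ports on the local machine only
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Even this naive loop checks a port in well under a second; real frameworks parallelize thousands of such probes and feed the results directly into vulnerability matching, which is why unpatched, listening services are found within minutes.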
As AI advances in aviation, farming, the military, healthcare, the automotive industry, and beyond, security needs have grown alongside it. Foundational research on AI continues as the public and private sectors strive to understand AI, its capabilities, and how users can gain more control over the science. Because hackers and terrorists can gain access to critical software, engineers are working to create new AI algorithms more frequently. Researchers are also studying the patterns of probable brute-force attacks, a common security breach method. While organizations enhance AI technology, AI is also being used to strengthen physical security measures: it can monitor video feeds and office security without relying on employees and their errors, and AI-enabled security features offer real-time situational awareness for security personnel by guarding perimeters and detecting breaches and other potential risks.
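The arithmetic behind brute-force attacks shows why password policy matters: the worst-case search space is the alphabet size raised to the password length, so small increases in length or character variety change cracking time by orders of magnitude. A minimal sketch (the function name and guess rate are illustrative assumptions):

```python
def brute_force_seconds(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time to try every password of exactly `length` characters
    drawn from an alphabet of `charset_size` symbols."""
    return charset_size ** length / guesses_per_second

RATE = 1e9  # hypothetical offline rate of one billion guesses per second
# 8 lowercase letters fall in minutes...
print(brute_force_seconds(26, 8, RATE))             # ~209 seconds
# ...while 12 characters from the 94 printable ASCII symbols take ages
print(brute_force_seconds(94, 12, RATE) / 3.156e7)  # ~15 million years
```

The exponential gap is the entire argument for longer passwords drawn from larger character sets, and it is the pattern researchers model when estimating which accounts a brute-force campaign can plausibly reach.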
Security improvements are taking place as many developers believe AI is a foundation for future cybersecurity. Current cybersecurity methods use AI to detect spam and malware on company systems, and engineers are working on next-generation anti-virus software and modified algorithms to combat threats. Many organizations are expanding their use of AI through research on Active Endpoint Detection and Response (EDR). EDR consists of tools that detect suspicious activity on hosts and endpoints, protecting systems against advanced threats; these AI tools are trained to identify viruses and their traits.
While private companies work to fix weak systems, the U.S. military has been increasing its use of AI and incorporating the technology into its strategies, including the “Third Offset” created by the Department of Defense (DoD). DoD policies also monitor the development of autonomous weapons, and the department has launched programs to defend the U.S. against drone attacks by terrorist organizations. The Defense Advanced Research Projects Agency (DARPA) has launched a $2 billion campaign to develop AI technology and cybersecurity research, and the agency continuously markets vulnerability-discovery capabilities to the DoD and private-sector companies. The Intelligence Advanced Research Projects Activity (IARPA) has conducted several research projects over the years with a focus on cybersecurity. Its Cyber-attack Automated Unconventional Sensor Environment (CAUSE) program is still going strong, aiming to develop and test new automated methods that can detect cyberattacks earlier than existing approaches. Private-sector and academic partners focused on aerospace and science, such as BAE Systems, Leidos, and the University of Southern California, can make these research projects a reality.
Aside from cybersecurity, organizations are also focusing on the security of AI in robotics. The Institute of Electrical and Electronics Engineers (IEEE) has developed an AI standard, IEEE 1872-2015, covering ontologies for robotics and automation. The standard provides a way to represent knowledge along with a set of common terms and definitions. Adopting standards developed by organizations like the IEEE enables unambiguous sharing of knowledge among humans, robots, and AI systems, and standards provide a foundation for applying AI robotic technology. Adoption is essential given the growing AI marketplace, and standards should offer foundational support for software engineering, performance, metrics, safety, usability, traceability, interoperability, domains, security, and privacy.
Although countermeasures and additional AI research are in place, the work does not end there. The public and private sectors must work together to combat both threats from AI and threats against it. The federal government must work with cyber professionals and engineers to conduct risk and vulnerability assessments focusing on threat actors, their new capabilities and sophistication, their intent, the likelihood of attacks, and the potential impact. Additionally, the government should consider policy measures governing the manufacture of robots and AI technology, and organizations must consider how AI and autonomous-weapon production can affect their physical security and cybersecurity in the face of national threats.
Organizations must also examine their own vulnerabilities and how hackers and non-state actors may exploit them further. Engineers in the private sector need to research the creation of high-quality datasets and technological environments for AI applications, and should be responsible for building datasets suitable both for testing and for accurately training running systems. Developers must also monitor the growth of malware and how AI can detect its changing patterns.
Lastly, it is necessary to increase consumer and employee awareness of AI security. Many networks are ill-equipped to defend themselves against a cyberattack. Companies must encourage better security practices, such as stronger passwords and familiarity with two-factor authentication. The future of AI is hopeful, as both civilians and the military can benefit from its capabilities. Nevertheless, as AI expansion continues, national security leaders and developers cannot ignore the potential ill fates of their creations.
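Two-factor authentication is worth demystifying for employees: the rotating six-digit codes produced by authenticator apps are not magic, but the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238), which can be sketched in a few lines of standard-library Python. The shared secret below is the RFC 4226 test value, not one to use in practice.

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble picks a window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second window."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test secret; a real deployment shares a random per-user secret
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the code depends on a secret the attacker does not hold and expires every 30 seconds, a phished password alone is no longer enough, which is precisely why awareness training should pair stronger passwords with two-factor enrollment.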
It is vital to acknowledge AI's growing capacity, as AI technology and software can encompass understanding, reasoning, planning, and communication. Tasks once achievable only by humans can now serve humanity, bringing new opportunities for businesses and consumers and a better quality of life. While the benefits of AI are revolutionary, CTG remains concerned with AI's growth and its possible uses for terrorism. Terrorists can always turn to firearms and Improvised Explosive Devices (IEDs) because of their accessibility and the fear they cause. Although cyberattacks may not have the same explosive effect terrorists seek, AI-enabled cyberattacks can disrupt the welfare of society and cause significant harm. AI has already fallen into the hands of terrorists, and our weapons for offensive and defensive purposes are becoming their weapons for destruction.
The Counterterrorism Group (CTG) is on high alert for any AI cyberattacks and increasing AI-centered operations within terrorist organizations. CTG is currently monitoring any improvements in AI technology and cybersecurity announced by the U.S. Department of Defense (DoD), The Defense Advanced Research Projects Agency (DARPA), The Intelligence Advanced Research Projects Activity (IARPA), and The Institute of Electrical and Electronics Engineers (IEEE).
1. "What is Artificial Intelligence" by Ravirajbhat154 is licensed under CC BY 2.0.
2. Nicholas Grossman, "Are drones the new terrorist weapon? Someone tried to kill Venezuela's president with one.", The Washington Post, August 10, 2018, https://www.washingtonpost.com/news/monkey-cage/wp/2018/08/10/are-drones-the-new-terrorist-weapon-someone-just-tried-to-kill-venezuelas-president-with-a-drone/
3. Rowan Scarborough, "Iran creating 'suicide' drones that threaten Israel, U.S. Navy: Pentagon", The Washington Times, April 8, 2015, https://www.washingtontimes.com/news/2015/apr/8/iran-creating-suicide-drones-us-army-report-warns/
4. Jon Gambrell, "How Yemen's rebels increasingly deploy drones", Defense News, May 21, 2019, https://www.defensenews.com/unmanned/2019/05/21/how-yemens-rebels-increasingly-deploy-drones/
5. "Machine_learning_security/DeepExploit/", GitHub, February 10, 2018, https://github.com/13o-bbr-bbq/machine_learning_security/commits/master?after=4f516ee6e6be9bbf8bdbf855a03b60d3f712cf82+244&path%5B%5D=DeepExploit
6. Migo Kedem, "Active EDR (Endpoint Detection and Response) - Feature Spotlight", SentinelOne, February 28, 2019, https://www.sentinelone.com/blog/active-edr-feature-spotlight/
7. Sydney J. Freedberg Jr., "Artificial Intelligence: Will Special Operators Lead The Way?", Breaking Defense, February 13, 2019, https://breakingdefense.com/2019/02/artificial-intelligence-will-special-operators-lead-the-way/
8. "DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies", The Defense Advanced Research Projects Agency (DARPA), September 7, 2018, https://www.darpa.mil/news-events/2018-09-07
9. "Cyber-attack Automated Unconventional Sensor Environment (CAUSE)", The Intelligence Advanced Research Projects Activity (IARPA), July 17, 2015, https://www.iarpa.gov/index.php/research-programs/cause
10. "IEEE Standards Activities in the Robotics and Automation Space", The Institute of Electrical and Electronics Engineers (IEEE), November 9, 2018, https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/robotics.pdf
11. "Blackjack Perch" by Navy Petty Officer 2nd Class Brandon Parker is licensed under CC BY 2.0.