The Future of Artificial Intelligence in Physical Security, Counterintelligence and Warfare
- CTG Global Analyst
- Apr 23, 2021
Updated: Nov 22, 2024
CICYBER
March 29, 2021

“Earth”[1]
Advancements in the realm of Artificial Intelligence (AI) have been producing fundamental changes in offensive and defensive capabilities around the world, which will significantly affect the nature of national security threats in the coming decades. Defense strategies and security mechanisms must adapt to the changing security environment to deter adversaries’ threats. Without a lucid understanding of the role that AI plays in physical security, counterintelligence services will not be able to sufficiently mitigate the threat that emanates from foreign espionage. This report discusses how AI is expected to affect the security systems of high-value targets in three settings: globally, in peacetime, and in times of war or perpetual mass violence.
AI refers to machines that approximate human-level decision-making.[2] AI systems possess three common qualities: intentionality, intelligence, and adaptability.[3] Because of these attributes, AI may have the capacity to eliminate the need for direct human involvement in military and espionage activities. Machines that possess intentionality are capable of autonomous decision-making in a dynamic environment through live data analysis.[4] Intelligence is demonstrated by the advanced analytical capabilities of AI machines during the decision-making process. AI systems are currently being programmed to remain adaptable and usable even in unforeseen circumstances.[5]
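To make these three attributes concrete, the following minimal sketch shows an epsilon-greedy learning loop in Python: the machine chooses actions autonomously, ranks its options by learned value, and adapts that ranking from feedback. The action names and parameters are entirely illustrative; this is a toy model of the qualities described above, not any deployed system.

```python
import random

class AdaptiveAgent:
    """Toy sketch of the three qualities described above: intentionality
    (it selects actions autonomously), intelligence (it ranks actions by
    learned value), and adaptability (it updates those values from
    observed outcomes)."""

    def __init__(self, actions, explore=0.1, rate=0.2):
        self.values = {a: 0.0 for a in actions}  # learned value per action
        self.explore = explore                   # chance of trying a new option
        self.rate = rate                         # learning rate

    def decide(self):
        # Intentionality and intelligence: an autonomous, value-ranked
        # choice, with occasional exploration of unfamiliar options.
        if random.random() < self.explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, outcome):
        # Adaptability: nudge the chosen action's value toward the outcome.
        self.values[action] += self.rate * (outcome - self.values[action])

agent = AdaptiveAgent(["monitor", "alert", "intervene"])  # illustrative names
choice = agent.decide()
agent.learn(choice, outcome=1.0)  # feedback from the environment
```

Even at this toy scale, the loop shows why direct human involvement can shrink: decision, evaluation, and adjustment all happen inside the machine.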
The last ten years have seen unparalleled developments in AI technology. Human innovation has created a new age of technology in which machines can threaten even powerful national security systems, such as that of the US.[6] The automation and optimization of data analysis, scientific experimentation, and hypothesis generation all contribute to the development of AI. Russia, China, and the US understand the economic and strategic potential of AI and have been actively competing for resources, funding research projects, and engaging in espionage for competitive advantage.[7] Yet developing AI technologies is difficult: research demands world-class talent, massive investment, and considerable time and energy.[8] AI research remains centralized in Russia, China, and the US, but smaller states, such as South Korea, have achieved significant progress as well. This shows that AI research, and consequently future development, rests on funding and human intellectual capital.[9]
As technology develops, the price of researching, manufacturing, and distributing AI and related technologies is expected to decrease significantly, making them more accessible to smaller states and even non-state groups.[10] Consequently, AI will offset the asymmetries of traditional conceptions of power, such as military and economic capabilities,[11] and will enable smaller and weaker actors to cause considerable damage to larger states.[12] In sum, rapid AI development is likely to emerge from the research projects of great powers and quickly become available to smaller actors as technology costs fall and human intellectual capital grows. The changing information environment will likely equalize some states that possess capital for investment while widening the technological and economic gap between poorer economies and leading innovators.[13] Hence, AI is predicted to change the world’s security environment, with specific centers emerging around leading states where national security will be frequently and seriously threatened by the offensive use of AI technologies. Through AI, the current world order may be fundamentally altered.
The threat posed by cyberweapons that use AI can be assessed within a framework that distinguishes between different ‘theaters’: geographically delimited security environments where strategic and operational decision-making happens. To assess specific, possible threats emanating from advanced AI technology, this report discusses the potential vulnerability of high-value targets in three theaters: globally, in peacetime, and in times of war or perpetual mass violence. The first theater, hereafter referred to as T1, is characterized by global competition at multiple levels: great power competition, a conscious effort by small states and non-state actors to gain AI technology, and competition in the commercial sphere.[14] This theater will likely be at the center of worldwide economic and strategic espionage but will be less reactive to even major security breaches. The two other theaters, specific geographical areas in peacetime and wartime, will be targeted by sophisticated, high-precision, and intensive offensive espionage activities, as well as actual physical threats from militaries, violent non-state actors, perpetrators, and saboteurs. These theaters will be referred to as T2 and T3, respectively. Militaries and national security agencies must prepare for these coming threats and allocate considerable funds to AI research and development, prioritizing the maintenance and enhancement of strategic advantages and positions in this changing security environment.

“US Military Personnel Training With AI-Operated Virtual Enemy Combatants”[15]
On the global scale (T1), CTG estimates that great power competition between China, Russia, and the US will evolve exponentially. This assessment rests on the fact that these states have already expressed their intention and willingness to expand their cyber and AI capabilities[16] and are reportedly aware of the threat posed by the others’ offensive AI capabilities.[17] In this quest for technological and strategic advantage, cyber-espionage attempts and disruptive cyberattacks will likely become more common, either to gain unilateral advantages or to weaken adversaries. China, for example, already uses enhanced AI technology in its mass surveillance operations, which by now extend to a global scale.[18] Russia has demonstrated its AI and cyber capabilities through interference in multiple national elections, mainly by means of disinformation.[19] American defense initiatives in the realm of AI have increasingly turned their attention to Russian and Chinese technological advancements.[20] These are just some indicators of the intentions and growing capabilities of these three major states. However, they are not the only actors exploring the power of AI. Experts at the Belfer Center expect small states with a sufficient economic basis to start significant AI research and development projects.[21] They also predict that non-state actors, such as terrorist groups, will begin to use AI for sabotage and deception, although it is unlikely to emerge as a major priority for terrorist groups in the immediate future.[22] Overall, global AI research will likely expand rapidly, AI will become available to a broader and more diverse set of actors, and great power competition will continue with the added factor of emerging small-state and non-state adversaries in the global and national security landscapes.
The physical security threats posed by potential AI weapons in peacetime (T2) and in times of war or perpetual mass violence (T3) are also important to consider. In the present political environment, the US and the EU fall within T2 because there is no war or perpetual mass violence on their territories, while Syria, or US military bases in Syria, would be labeled T3, where the threat is imminent and constant due to the civil war and terrorist attacks. Accordingly, T2 comprises only counterintelligence and intelligence efforts in the majority of cases, while T3 must handle offensive, even physical, attacks in addition to fulfilling a more militarized counterintelligence function.
The theaters also differ significantly in their security environments. First, in T2, threats are rare and of lower intensity, meaning that security systems and protocols are likely to develop merely by trial and error, or by fail-to-learn methods after successful attacks or infiltrations by adversaries, while authorities are less prepared to cope with new technological developments. This forecasts insufficient reaction and counteraction by domestic national security agencies during attacks and leaves more to chance than to preparedness in defense. In T3, by contrast, threats are more likely constant and long-lasting. As a result, security forces are on high alert and are expected to have the capability and readiness for instant and massive counterattacks, with a greater deterrent effect and a more successful defense.
Second, T2 attacks are very likely to occur in certain well-known areas where primary target facilities are located: government buildings, the security apparatus, and critical infrastructure. These places are well known to the defense apparatus, making it reasonable to expect strong local awareness, which in turn would likely result in more effective surveillance before an attack and a higher chance of capture afterward. Due to efficient domestic surveillance, authorities are also expected to be more aware of adversarial activities in their local areas. T3, in contrast, is characterized by greater uncertainty and less reliable information. Although the probability of infiltration is lower at these sites due to higher-level security measures, adversaries are less known to security forces, and undeveloped local infrastructure makes it harder to track down successful infiltrators or saboteurs.
Both T2 and T3 have three security priorities: controlling access to buildings and information, defeating and capturing adversaries, and deterring spies and attacks. One common method of access denial in the US is the Common Access Card (CAC). A CAC is the standard identification card for US government employees and provides physical access to buildings and computer networks.[23] A CAC must be in an individual’s physical possession to gain access to a US government building or network. CACs are currently “designed to provide electronic means of rapid authentication” and are largely resistant to counterfeiting and fraud.[24]
Figure 1: Implications of the predicted development of AI technology
For an adversary to gain access, they must physically acquire a CAC and enter the building or network. Therefore, an intelligence officer from an adversary nation seeking access to a secure government facility must obtain a government employee’s CAC, most likely through theft or bribery. Considering the extensive security training that government employees undergo, obtaining a CAC under current protocols and procedures is exceedingly difficult for a foreign intelligence officer.
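The security value of physical possession can be illustrated with a simplified challenge-response sketch. Real CACs authenticate with PKI certificates and a PIN held on the card’s chip; the symmetric HMAC scheme and function names below are illustrative stand-ins, not the actual CAC protocol.

```python
import hashlib
import hmac
import os

def card_respond(card_secret: bytes, challenge: bytes) -> bytes:
    # Runs on the card's chip: proves possession of the secret
    # without ever transmitting the secret itself.
    return hmac.new(card_secret, challenge, hashlib.sha256).digest()

def controller_check(enrolled_secret: bytes) -> bool:
    # Runs on the door or network access controller.
    challenge = os.urandom(32)                           # fresh per attempt
    response = card_respond(enrolled_secret, challenge)  # via the card reader
    expected = hmac.new(enrolled_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

secret = os.urandom(32)          # provisioned onto the card at issuance
print(controller_check(secret))  # True only when the physical card is present
```

Because the secret never leaves the card, an adversary cannot replay an old response; under this model, stealing the physical card is the only practical way in.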
CACs may soon be replaced with AI in both the public and private sectors, which likely introduces new security vulnerabilities to government buildings. Advancements in AI have significant implications because completely digitized systems can create vulnerabilities that compromise physical security. The transition from physical security mechanisms to AI-based security will significantly impact federal buildings in the US and overseas. One example of an AI security system would be the replacement of CACs with facial recognition biometric identification software.[31] This could be a beneficial development because it would eliminate concerns about a CAC’s physical location and vulnerability. However, the serious downside of complete reliance on AI for physical security is that even seemingly secure technology can be susceptible to cyberattacks. If government buildings and networks are secured by AI, an adversary nation may turn to its cyber capabilities to gain access to a building.
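As a sketch of how such a biometric gate might work, and where its cyber attack surface lies, consider the following illustrative fragment. The `embed` function stands in for a trained face-embedding model, and the enrollment database and threshold are hypothetical; the point is that an adversary who can remotely alter them no longer needs to steal a physical card.

```python
import numpy as np

ENROLLED: dict[str, np.ndarray] = {}  # employee ID -> reference face embedding
THRESHOLD = 0.8                       # similarity required to open the door

def embed(face_image: np.ndarray) -> np.ndarray:
    # Placeholder for a trained face-embedding network; here we simply
    # normalize raw pixels so the demo is self-contained.
    v = face_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def grant_access(face_image: np.ndarray) -> str | None:
    probe = embed(face_image)
    for employee_id, reference in ENROLLED.items():
        if float(probe @ reference) >= THRESHOLD:  # cosine similarity
            return employee_id                     # door opens
    return None                                    # access denied

# If ENROLLED or THRESHOLD can be modified over the network, a cyberattack
# replaces physical theft as the path into the building.
ENROLLED["emp-001"] = embed(np.ones((64, 64)))
print(grant_access(np.ones((64, 64))))  # 'emp-001'
```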

“A Common Access Card (CAC)”[32]
Security protocols regarding authorized access are also susceptible to social engineering attempts, which are expected to become more powerful and effective as AI develops.[33] Automated intelligence collection and more precise online social engineering make it easier for enemy governments to target strategically important military, intelligence, and law enforcement personnel to gain access to online information systems or even physical establishments, for example by acquiring CACs or passwords.
Physical site protection may also benefit from AI-enhanced threat assessment. As the analytical and decision-making capabilities of AI machines develop, they will likely contribute increasingly to surveillance around critical sites. With quick and precise facial recognition, movement tracking, and access logging, these systems will be able to provide real-time information about every person within a certain perimeter of a facility, making it much easier for security personnel to track and capture intruders.[34] For further implications of the predicted development of AI technology, see Figure 1 above.
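A minimal sketch of such a monitoring loop, with hypothetical camera and employee identifiers, might look as follows: each detection is logged, and anyone the recognition model cannot match to an authorized identity is flagged to security personnel in real time.

```python
from dataclasses import dataclass
from datetime import datetime

AUTHORIZED = {"emp-001", "emp-002"}  # identities cleared for the perimeter

@dataclass
class Detection:
    camera: str
    person_id: str | None  # None when face recognition finds no match
    timestamp: datetime

access_log: list[Detection] = []

def process_detection(det: Detection) -> None:
    access_log.append(det)  # movement and access logging
    if det.person_id not in AUTHORIZED:
        # Real-time alert so personnel can track and intercept the intruder.
        print(f"ALERT {det.timestamp:%H:%M:%S} {det.camera}: "
              f"unauthorized or unrecognized person ({det.person_id})")

process_detection(Detection("gate-cam-1", None, datetime.now()))
```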
At the government level, CTG recommends adapting policy and nationwide security protocols to the changing nature of technological development. Governments are encouraged to invest in research and development, as well as in education programs on AI, cybertechnology, and the hard sciences. Governments and C2 posts should explicitly state, plan, and frequently revise their technology-based policies and research projects, better define their short-term and mid-term objectives, and allocate resources accordingly. Intelligence agencies should prepare and develop their economic and technological espionage toolkits to cope with global competition, and should emphasize AI in their counterintelligence training and technological development projects. Militaries should continue to develop cutting-edge AI-operated technologies in offensive and defensive weaponry, data analysis, and strategic coordination. Wargames should be developed to model future threat scenarios and strategy planning, and military personnel should be educated about AI. To achieve these objectives, governments must establish effective strategies for recruiting top engineering and computer science students at competitive universities. Finally, when conducting diplomacy, governments should not forget the inherent moral implications of AI-operated surveillance and weapon systems if they wish to avoid massive transnational advocacy against their AI development projects.
CTG’s CICYBER team will continue to investigate emerging AI technologies to better detect, defeat, and deter modern security threats. CTG will work with government agencies and private sector security firms to develop plans and procedures to prepare for a new era of security threats. CTG recommends all defense organizations maintain training in traditional military tactics to avoid relying exclusively on AI. W.A.T.C.H. officers at CTG will continue to monitor emerging AI threats globally.
________________________________________________________________________ The Counterterrorism Group (CTG)
[2] “Artificial Intelligence (AI)”, IBM, June 2020, https://www.ibm.com/cloud/learn/what-is-artificial-intelligence
[3] “What is artificial intelligence?”, Brookings Institution, October 2018, https://www.brookings.edu/research/what-is-artificial-intelligence/
[4] Ibid.
[5] Ibid.
[6] Khalilzad, Z., White, J. P. and Marshall, A. W. (eds.), “Strategic Appraisal: The Changing Role of Information in Warfare”, RAND Corporation, 1999. https://www.rand.org/pubs/monograph_reports/MR1016.html
[7] Sharikov, P., “Artificial intelligence, cyberattack, and nuclear weapons—A dangerous combination”, Bulletin of the Atomic Scientists, October 2018, https://www.tandfonline.com/doi/full/10.1080/00963402.2018.1533185
[8] Allen, G. and Chan, T., “Artificial Intelligence and National Security”, Belfer Center for Science and International Affairs, Harvard Kennedy School, Harvard University, July 2017.
[9] “AI in Korea”, EC/OECD Policy Advisory, March 2021, https://www.oecd.ai/dashboards/countries/SouthKorea/
[10] Rudischhauser, W., “Report. Autonomous or semi-autonomous weapons systems: A potential new threat of terrorism?” Federal Academy for Security Policy, January 2017, https://www.baks.bund.de/de/node/1527
[11] “Malevolent soft power, AI, and the threat to democracy”, Brookings, November 2018, https://www.brookings.edu/research/malevolent-soft-power-ai-and-the-threat-to-democracy/
[12] Egel, D., Robinson, E., Cleveland, C. T. and Oates, C., “AI and Irregular Warfare: An Evolution, Not A Revolution”, War On The Rocks, October 2019, https://warontherocks.com/2019/10/ai-and-irregular-warfare-an-evolution-not-a-revolution/
[13] Allen, G. and Chan, T., “Artificial Intelligence and National Security”, Belfer Center for Science and International Affairs, Harvard Kennedy School, Harvard University, July 2017.
[14] “Artificial Intelligence in Asia and the Pacific”, United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP), November 2017, https://www.unescap.org/sites/default/files/ESCAP_Artificial_Intelligence.pdf
[15] “US Military Personnel Training With AI-Operated Virtual Enemy Combatants”, U.S. Navy photo by John F. Williams, licensed under Public Domain (The appearance of U.S. Department of Defense (DoD) visual information does not imply or constitute DoD endorsement)
[16] “Weapons of the weak: Russia and AI-driven asymmetric warfare”, Brookings, November 2018, https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/
[17] “The Panopticon Is Already Here”, The Atlantic, September 2020, https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/
[18] “Combating disinformation and foreign interference in democracies: Lessons from Europe”, Brookings Institution, July 2019, https://www.brookings.edu/blog/techtank/2019/07/31/combating-disinformation-and-foreign-interference-in-democracies-lessons-from-europe/
[19] See, for example, “SOCOM leaders want to reduce the load on operators to create ‘hyper-enabled’ operators”, C4ISRNET, September 2019, https://www.c4isrnet.com/artificial-intelligence/2019/09/20/socom-leaders-want-to-reduce-the-load-on-operators-to-create-hyper-enabled-operators/
[20] Allen, G. and Chan, T., “Artificial Intelligence and National Security”, Belfer Center for Science and International Affairs, July 2017.
[21] Egel, D., Robinson, E., Cleveland, C. T. and Oates, C., “AI and Irregular Warfare: An Evolution, Not A Revolution”, War On The Rocks, October 2019, https://warontherocks.com/2019/10/ai-and-irregular-warfare-an-evolution-not-a-revolution/
[25] Jervis, R., “Cooperation Under the Security Dilemma”, World Politics, Vol. 30, No. 2, January 1978, https://www.jstor.org/stable/2009958?seq=1#metadata_info_tab_contents
[26] Wartime or perpetual mass violence.
[27] Sharikov, P., “Artificial intelligence, cyberattack, and nuclear weapons—A dangerous combination”, Bulletin of the Atomic Scientists, October 2018, https://www.tandfonline.com/doi/full/10.1080/00963402.2018.1533185
[28] Allen, G. and Chan, T., “Artificial Intelligence and National Security”, Belfer Center for Science and International Affairs, July 2017, pp. 58-59.
[29] For a more exhaustive list see: O’Hanlon, M. E., “The role of AI in future warfare”, Brookings, November 2018, https://www.brookings.edu/research/ai-and-future-warfare/
[30] “Pros and Cons of Autonomous Weapons Systems”, Army University Press, May-June 2017, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
[32] “A Common Access Card (CAC)” by Department of Defense licensed under Public Domain (The appearance of U.S. Department of Defense (DoD) visual information does not imply or constitute DoD endorsement)
[33] Allen, G. and Chan, T., “Artificial Intelligence and National Security”, Belfer Center for Science and International Affairs, Harvard Kennedy School, Harvard University, July 2017.
[34] Dibley, M. J. “An Intelligent System for Facility Management”, Cardiff School of Engineering, October 2011, https://core.ac.uk/download/pdf/40007609.pdf