May 2-8, 2024 | Issue 18 - CICYBER
Janthe Van Schaik, Mihai Marian Calinoiu, Prim Thanchanok Kanlayanarak,
Alya Fathia Fitri, Senior Editor
North Korean State-Sponsored Groups Utilize AI for Cyber Operations[1]
Date: May 2, 2024
Location: North Korea
Parties involved: Democratic People’s Republic of Korea (DPRK); DPRK cyber group Emerald Sleet; US; Microsoft Corporation; South Korea
The event: North Korean state-sponsored actors are targeting subject matter experts on DPRK affairs in spear-phishing campaigns using spoofed emails.[2] The DPRK cyber groups target policy advisors with social engineering techniques that exploit weak Domain-based Message Authentication, Reporting, and Conformance (DMARC) policies to impersonate trusted journalists, think tanks, and academics, making the emails appear to originate from a credible source.[3] Technology company Microsoft announced that the cyber group Emerald Sleet has started using artificial intelligence (AI) in its spear-phishing operations.[4]
Analysis & Implications:
North Korean state-sponsored actors will likely continue to impersonate journalists and subject matter experts in email spoofing campaigns to gather intelligence on countries the DPRK defines as hostile, such as South Korea. South Korea will almost certainly collaborate with the US to deter, detect, and defeat North Korean cyberattacks in the Indo-Pacific region. Government cybersecurity agencies worldwide will likely inform possible targets of the threat, with a roughly even chance of organizing awareness campaigns about spoofed emails and social engineering, and will likely provide organizations with resources to strengthen their DMARC policy fields. These measures will very likely decrease attack success rates and will almost certainly improve email security for targeted organizations.
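One concrete remediation is tightening the DMARC policy field itself: a record published with `p=none` instructs receiving servers to take no action on mail that fails authentication, which is the weakness these campaigns exploit. A minimal sketch in Python of checking that field (the record strings and mailbox names are hypothetical examples, not records from any real domain):

```python
# Minimal sketch: parse a DMARC TXT record and flag the weak "p=none"
# policy described in the advisory. Sample records are hypothetical.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value fields."""
    fields = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            tag, _, value = part.partition("=")
            fields[tag.strip()] = value.strip()
    return fields

def is_weak_policy(record: str) -> bool:
    """A record is weak if its policy tag is missing or set to 'none',
    meaning receivers take no action on mail that fails DMARC checks."""
    return parse_dmarc(record).get("p", "none") == "none"

weak = "v=DMARC1; p=none; rua=mailto:reports@example.org"
strong = "v=DMARC1; p=reject; rua=mailto:reports@example.org"

print(is_weak_policy(weak))    # True: spoofed mail would still be delivered
print(is_weak_policy(strong))  # False: failing mail is rejected outright
```

Moving the policy from `p=none` to `p=quarantine` or `p=reject` is the field change the advisory recommends; it directs receiving servers to sideline or refuse mail that fails authentication rather than deliver it.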
North Korean threat actors will likely use AI to build trust with targets and impersonate journalists and think tanks, prompting models with targets' public writings to mimic their language style and likely increasing credibility. Malicious actors like Emerald Sleet will likely incorporate voice phishing techniques and Natural Language Generation (NLG) software to build connections with South Korean experts by recreating authentic-sounding audio messages and public writings. The DPRK will likely refine these social engineering techniques to widen the target scope to foreign policy experts, using large language models and deepfake audio messages to facilitate the acquisition of confidential information.
AI-Generated Deepfakes Interfere with India's General Election
Date: May 5, 2024
Location: India
Parties involved: India; Governmental authorities; Law enforcement; Social media platform providers
The event: Artificial Intelligence (AI)-generated deepfake videos and audio clips are interfering with the Indian elections, aiming to spread false political narratives on social media. With this year's Indian general election spanning April 19 to June 1 and nearly one billion eligible voters, authorities are closely monitoring for misinformation throughout the electoral process.[5] Deepfakes are created by superimposing content onto existing video using Generative Adversarial Networks (GANs), which pit two neural networks, a generator and a discriminator, against each other to produce deepfake video or audio.[6]
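The generator-discriminator contest behind GANs can be illustrated with a toy one-dimensional example in pure Python. This is a pedagogical sketch of the adversarial training loop only (a single-parameter generator learning to match a target mean), not an image or audio model, and all parameter values are illustrative:

```python
# Toy 1-D GAN: the generator shifts noise by theta and tries to make its
# samples indistinguishable from "real" data centered at REAL_MEAN, while
# the discriminator learns to tell the two apart. Gradients are hand-derived
# for this tiny model; real deepfake pipelines use deep convolutional nets.
import math
import random

random.seed(7)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

REAL_MEAN = 4.0   # the "real data" distribution the generator must imitate
theta = 0.0       # generator parameter: g(z) = theta + z
w, b = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05         # learning rate for both players

for step in range(4000):
    x_real = REAL_MEAN + random.gauss(0, 1)  # sample from the real distribution
    x_fake = theta + random.gauss(0, 1)      # generator forges a sample

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator update (non-saturating loss): nudge theta so D(fake) rises.
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * (1 - d_fake) * w

print(round(theta, 2))  # theta has drifted toward REAL_MEAN as the two compete
```

The equilibrium is reached when the discriminator can no longer separate real from generated samples, which is exactly why detection of mature deepfakes is hard: the forger's objective is defined as defeating the detector.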
Analysis & Implications:
The use of AI-generated content known as deepfakes by groups interested in shaping public opinion and spreading misinformation about political opponents or social issues will almost certainly continue to rise during India's elections. India's difficulty in combating the growing volume of deepfakes through collaboration between governmental authorities, law enforcement, and social media platforms will very likely raise concerns and create confusion among the population, almost certainly among vulnerable groups such as the elderly. Governmental authorities tasked with cybersecurity and the integrity of the voting process will almost certainly launch awareness campaigns on detecting deepfakes and provide fact-checking resources accessible to anyone.
Research and development departments of Indian cybersecurity agencies, along with software engineering companies and social media platforms, will very likely collaborate on developing technologies to detect AI-generated content. The focus will very likely range from easily detectable deepfakes to the most advanced ones. Researchers will likely use machine learning or deep learning algorithms such as Convolutional Neural Networks (CNNs) to analyze videos frame by frame for anomalies in AI-generated footage, and will very likely develop algorithms for removing or watermarking deepfakes on social media. Malicious actors will very likely improve the technologies used to create deepfakes, making generated content harder for humans and dedicated software to recognize and creating a constant need to improve detection methods.
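The frame-by-frame principle can be sketched without any deep learning stack. The toy below substitutes a simple statistical heuristic, flagging frames whose pixel-level change from the previous frame is an outlier against the clip's typical motion, for a trained CNN, purely to illustrate the idea; real detectors learn far subtler artifacts. The frames here are hypothetical flat lists of pixel values:

```python
# Illustrative frame-by-frame anomaly check (a simplified stand-in for a
# CNN detector): flag frames whose change from the previous frame is far
# outside the clip's typical inter-frame difference.
from statistics import mean, stdev

def frame_diff(a, b):
    """Mean absolute pixel difference between two frames (flat pixel lists)."""
    return mean(abs(x - y) for x, y in zip(a, b))

def flag_anomalous_frames(frames, threshold=2.0):
    """Return indices of frames whose inter-frame change exceeds the clip's
    mean change by more than `threshold` standard deviations."""
    diffs = [frame_diff(frames[i - 1], frames[i]) for i in range(1, len(frames))]
    mu, sigma = mean(diffs), stdev(diffs)
    return [i + 1 for i, d in enumerate(diffs)
            if sigma > 0 and d > mu + threshold * sigma]

# Synthetic 8-pixel "frames" with smooth motion, then an abrupt,
# inconsistent final frame such as splicing might leave behind.
frames = [[p + t for p in range(8)] for t in range(10)]
frames[9] = [90] * 8
print(flag_anomalous_frames(frames))  # → [9]
```

A production detector would replace `frame_diff` with a CNN scoring each frame for generation artifacts, but the surrounding logic, scoring every frame and flagging statistical outliers, carries over directly.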
[1] Cyber Threat Actor, generated by a third-party database
[2] “NSA, FBI Alert on N. Korean Hackers Spoofing Emails from Trusted Sources”, The Hacker News, May 2024, https://thehackernews.com/2024/05/nsa-fbi-alert-on-n-korean-hackers.html?m=1
[3] “North Korean Actors Exploit Weak DMARC Security Policies to Mask Spearphishing Efforts”, US Department of Defense, 2024, https://media.defense.gov/2024/May/02/2003455483/-1/-1/0/CSA-NORTH-KOREAN-ACTORS-EXPLOIT-WEAK-DMARC.PDF
[4] “China tests US voter fault lines and ramps AI content to boost its geopolitical interests”, Microsoft, April 2024, https://blogs.microsoft.com/on-the-issues/2024/04/04/china-ai-influence-elections-mtac-cybersecurity/
[5] “Fake videos of Modi aides trigger political showdown in India election”, Reuters, May 2024, https://www.reuters.com/world/india/fake-videos-modi-aides-trigger-political-showdown-india-election-2024-05-05/
[6] “Manipulating reality: the intersection of deepfakes and the law”, Reuters, February 2024, https://www.reuters.com/legal/legalindustry/manipulating-reality-intersection-deepfakes-law-2024-02-01/