HACKERS EXPLOIT CLAUDE AI TOOL TO IMPLEMENT ADVANCED CYBER-EXTORTION, DATA THEFT, AND EMPLOYMENT FRAUD, AND CYBERCRIMINALS ATTACK ANDROID USERS THROUGH MULTILINGUAL SCAM ADVERTS
- Senior Editor
- 6 minutes ago
- 3 min read
August 28 - September 3, 2025 | Issue 33 - CICYBER Team
Isabelle Hilyer-Jones, Sue Friend, Amelia Bell, Lucy Gibson, Agathe Labadi, CICYBER
Elena Alice Rossetti, Senior Editor

Hackers’ Coding[1]
Date: August 28, 2025
Location: Global
Parties involved: US; American AI startup Anthropic; Western organizations including major enterprises; Fortune 500 tech companies; HR departments; adversarial states; state-backed actors; North Korea; North Korean scammers; North Korean operatives; hackers; threat actors
The event: Anthropic reported that hackers exploited its Claude AI tool to carry out advanced cyber-extortion, data theft, and employment fraud.[2]
Analysis & Implications:
Innovative adversarial prompt engineering and the exploitation of emergent AI vulnerabilities will likely outpace future Anthropic’s mitigation efforts. Hackers will likely use exploitative techniques like TokenBreak, a manipulative text code that bypasses safety models and makes malware appear harmless, to access restricted content topics with Claude, evading new safety and content moderation guardrails. Using TokenBreak will almost certainly give hackers access to prohibited content, such as malicious code generation or victim-tailored ransom estimates, enabling sustained Claude and Claude Code misuse for cybercrime objectives, including ransomware attacks and other fraudulent activities.
Threat actors will very likely exploit the same AI-operated strategies in future attacks against Western organizations, exposing the dangers of lacking international legal restrictions on AI use. State-backed actors, such as North Korean government-linked scammers, will very likely exploit Anthropic’s Claude AI framework to launch multilayered campaigns while benefiting from governmental protection or inaction and concealed digital identities. Fragmented international laws will likely cause multinational coordination for investigation and prosecution to lag behind the operational tempo of these attacks. The absence of binding AI governance regimes will very likely complicate judicial processes due to competing opinions and national strategies, likely protracting investigations and exposing Western organizations to attacks.
AI-assisted remote job fraud, particularly by isolated adversarial states like North Korea, will likely reshape corporate hiring trust models and insider threat profiles in advanced economies like the US. North Korean operatives will very likely use deepfake technology and real-time AI assistants during interviews and assessments, likely making detection by HR departments of firms like Fortune 500 tech companies increasingly unreliable. There is a roughly even chance that widespread discovery of fake hires will prompt companies to rely less on digital recruitment methods, likely forcing sensitive sectors, such as cyber security and defence, to reintroduce face-to-face hiring to avoid talent shortages. Future workforce composition and insider risk paradigms of major enterprises will likely fundamentally alter, as the threshold for trust and verification rises, and the burden on screening tools and governance mechanisms increases.
Date: August 31, 2025
Location: Global
Parties involved: social media platforms and communication services’ company Meta; social trading network and  financial analysis platform TradingView; trading apps; Advanced Persistent Threat (APT) actors and cybercriminals; Android users; victims
The event: Cybercriminals are targeting Android users through multilingual scam adverts on Meta’s advertising platform using the crypto-stealing trojan Brokewell disguised as a fake TradingView app.[3]
Analysis & Implications:
This APT will very likely exploit the compromised data and diversify its campaign before Meta removes the ads. The APT actors will almost certainly use acquired data such as keystrokes, authentication codes, and credentials to perform unauthorized cryptocurrency transactions to their personal cryptowallets. The APT actors will likely use remote control techniques to extract sensitive information from the device, such as text messages, media, and cookies, to sell on the dark web and finance hacking groups’ activities. Hackers will very likely diversify their techniques by impersonating different trading and cryptocurrency apps and using new credentials to create Meta ads to avoid detection and maintain anonymity, likely continuing the exploit with a façade.
Containing the spread of the malicious ads will very likely prove difficult due to the fragmentation of its targets and the wide reach of Meta algorithms. Meta will very likely struggle to investigate all malicious posts of this campaign, likely requiring an extended time to remove these posts, and very likely allowing the ads to reach new victims in the meantime. The diversified language use and multinational reach will likely increase mitigation difficulties, almost certainly requiring TradingView to make persistent cross-platform cautions on social media, through emails to subscribers, and in-app alerts regarding the scam campaign. The established large-scale presence of fake ads on Meta platforms will very likely allow hackers to bypass any legitimate TradingView warnings, as victims will almost certainly click on the fraudulent links without opening the official TradingView app first.
[1]Â Hackers, generated by a third party database
[2]Â AI firm says its technology weaponised by hackers, BBC, August 2025, https://www.bbc.co.uk/news/articles/crr24eqnnq9oÂ
[3]Â Brokewell Android malware delivered through fake TradingView ads, Bleeping Computer, August 2025, https://www.bleepingcomputer.com/news/security/brokewell-android-malware-delivered-through-fake-tradingview-ads/Â