
AI in Cybersecurity: Offensive AI, Defensive AI & the Crucial Data Foundation, Part 1 of 3

The AI Gauntlet: Navigating the Rise of AI-Powered Cyber Offense

The first in a series of three guest blog posts by Roy Chua, Founder and Principal at AvidThink

 

In a recent online discussion with Jaye Tillson, Field CTO at HPE Aruba Networking, and Laura Wilber, Senior Analyst at Enea, we explored how AI will actively reshape offense and defense in 2025 and impact security trends at the enterprise edge. Following that event, I’ve expanded our discussion into three blog posts for Enea.

In this first post, I’ll discuss how threat actors are weaponizing AI. We’ll look at today’s threat landscape and the initial contours of AI-assisted defensive strategies. In subsequent posts, I’ll explore AI for defense in more detail, the need for a strong data foundation, and the key role data quality plays in the effectiveness of AI solutions.

 

The Current Landscape: Threats Amplified by AI

AI has been used in cybersecurity for many years now, particularly for anomaly detection and traffic classification. However, the arrival of generative AI (GenAI) and Large Language Models (LLMs) represents a significant acceleration of its impact. While AI hasn’t yet invented new categories of attacks, it can be a potent force multiplier, enhancing the speed, scale, and sophistication of existing tactics.

The most immediate and visible impact is on social engineering. Phishing campaigns, a staple of cybercrime, are now more effective. GenAI tools can craft emails, SMS messages, and social media posts free of the grammatical errors and awkward non-native phrasing that once served as red flags. These models allow attackers, even those lacking language proficiency, to generate fluent, personalized, and contextually relevant messages tailored to specific targets. Recent research from security startup Hoxhunt suggests AI-crafted phishing emails now achieve higher engagement rates than traditionally crafted ones — a testament to AI’s personalization capabilities, using scraped public and social data to create bespoke spear-phishing attacks.

The associated rise of deepfake voice and video presents an even more chilling evolution in deception. Diffusion models and other GenAI technologies can generate hyper-realistic audio and video clones of individuals, including corporate executives. One recent example emerged in early 2024, when fraudsters used deepfake video and audio to impersonate the CFO and other senior executives of the multinational firm Arup during a video conference, tricking an employee into transferring $25 million. Such incidents underscore how AI-generated content can achieve lucrative outcomes by manipulating human trust.

AI is also sharpening attackers’ ability to perform reconnaissance and identify vulnerabilities. Nation-state actors (including groups linked to Iran, Russia, North Korea, and China) have been observed using LLMs to research target organizations. AI has proven useful in identifying potential weaknesses or known CVEs (Common Vulnerabilities and Exposures) in target systems, such as satellite communication systems or EDR products. Researchers have also shown that models like GPT-4 can autonomously exploit certain vulnerabilities given only a CVE description. This automation allows attackers to sift through huge volumes of information and code faster than humanly possible and to generate exploits on the fly before patches can be applied.

 

AI-Driven Offensive Tactics: Scaling Malice

One of the greatest cybersecurity threats posed by AI is not any individual attack but the ability to automate attacks at scale. Just as the business world has embraced GenAI tools, so have threat actors: mentions of malicious AI tools on dark web forums reportedly surged 200% in early 2024. Purpose-built malicious chatbots like “WormGPT” and “FraudGPT” have emerged, marketed for criminal activities such as phishing, malware creation, and vulnerability discovery — much like the ransomware kits already for sale on the dark web. Like those kits, these AI tools lower the barrier to entry, empowering less skilled individuals (“script kiddies”) to launch more sophisticated attacks at scale.

Meanwhile, attackers can use LLMs for malware development: generating code snippets for malicious functions (keyloggers, ransomware routines), translating malware between programming languages, and rapidly iterating on existing strains to evade detection — in effect, “vibe coding” malware. This accelerates the development and diversification of evasive malware, challenging today’s antivirus solutions.

Furthermore, attackers can employ adversarial machine learning concepts to probe and bypass defenses. For example, Microsoft observed Iranian state-sponsored actors asking an LLM how to disable antivirus software or delete system logs to cover their tracks during an intrusion. While sophisticated attacks to poison defensive AI models are a future concern (which we’ll explore later), the use of AI as an “evasion co-pilot” is happening now.

AI can also help attackers analyze network traffic for target identification. By processing large datasets containing network structures or communications patterns, AI can help pinpoint high-value assets and weak points within a target environment more efficiently.

 

The Defensive Response: AI vs. AI

The weaponization of AI by adversaries requires a commensurate response from those responsible for defense — enterprises, managed security providers, security vendors, and technology providers. We have entered the era of AI vs. AI cybersecurity, where defenders likewise use intelligent systems to counter threats. The sheer volume of data generated by modern enterprises — network flows, endpoint logs, cloud telemetry — already exceeds human capacity for timely analysis, so AI and machine learning are essential to help security analysts sift through the deluge.

Initial defensive applications include threat detection. Traditional signature-based methods have struggled against rapidly changing malware, so defenders are shifting toward behavior-based detection models powered by AI/ML. Unsupervised (or semi-supervised) learning can establish baselines of normal activity for users, devices, and networks. By continuously monitoring for deviations — anomalous login times, unusual data access patterns, unexpected process execution — AI/ML can flag potentially malicious activity, including insider threats and zero-day exploits, as sketched below. Most modern EDR, MDR, and XDR vendors tout self-learning algorithms and have expanded into detecting threats within encrypted traffic.
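To make the baselining idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest: it learns a baseline from historical per-session activity and flags new sessions that deviate from it. The feature set and values are illustrative assumptions on my part, not a production design or any vendor’s actual model.

```python
# Minimal sketch: behavior baselining with an Isolation Forest.
# Feature names and values are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "normal" activity per session:
# [login_hour, bytes_sent_mb, distinct_hosts_contacted, failed_logins]
baseline = np.array([
    [9, 12.4, 3, 0],
    [10, 8.1, 2, 0],
    [14, 15.9, 4, 1],
    [11, 9.7, 3, 0],
    [16, 11.2, 2, 0],
])

model = IsolationForest(contamination=0.05, random_state=42).fit(baseline)

# Score new sessions: a 3 a.m. login moving 900 MB to 40 hosts
# should stand out against the learned baseline.
new_sessions = np.array([
    [10, 10.5, 3, 0],   # looks routine
    [3, 900.0, 40, 7],  # odd hour, bulk transfer, many hosts
])

for features, verdict in zip(new_sessions, model.predict(new_sessions)):
    label = "ANOMALY" if verdict == -1 else "normal"
    print(f"{features} -> {label}")
```

In practice the same pattern scales to millions of sessions, with richer features per user, device, and network segment, which is exactly the data volume that makes AI assistance necessary.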

 

Speed and Visibility — Crucial Tools

Since AI-accelerated attacks unfold in seconds or minutes, far faster than human response times, speed of response is critical. This is driving the move toward autonomous response systems. The concept of systems that can automatically take containment actions, like isolating an infected machine or blocking malicious traffic, is gaining traction as a necessary countermeasure.
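As a hedged illustration of the concept, the sketch below shows a single automated containment pass: when a detection engine’s risk score for a host crosses a threshold, a quarantine action fires. The score source and quarantine function are placeholders invented for illustration; a real deployment would integrate with EDR, NAC, or firewall APIs.

```python
# Minimal sketch of an autonomous containment step.
# The score source and quarantine action are illustrative placeholders;
# real systems would call an EDR, NAC, or firewall API here.

QUARANTINE_THRESHOLD = 0.9  # risk score above which we auto-contain

def get_risk_scores() -> dict[str, float]:
    """Placeholder for a detection engine's per-host risk scores."""
    return {"10.0.0.15": 0.97, "10.0.0.22": 0.12}

def quarantine_host(ip: str) -> None:
    """Placeholder containment action (VLAN move, firewall rule, EDR isolate)."""
    print(f"[ACTION] isolating {ip} from the network")

def containment_pass() -> None:
    for ip, score in get_risk_scores().items():
        if score >= QUARANTINE_THRESHOLD:
            quarantine_host(ip)

if __name__ == "__main__":
    containment_pass()  # in production this would run continuously
```

The hard design questions are less about the loop itself than about the threshold and blast radius: how confident must the detector be before an action that could disrupt a legitimate workload is taken automatically?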

Likewise, visibility into what’s happening in the underlying systems, particularly remote network connections from potential threat actors, is critical. Inspecting and understanding the nature of the traffic, which may be encrypted, is a key part of an AI-enabled defense. Ultimately, defending against sophisticated, AI-enhanced threats requires a defense-in-depth strategy, in which AI capabilities augment security controls across multiple locations and layers: network, endpoint, cloud, and identity. Unified intelligence across the entire IT stack and multiple domains is key.
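Because payloads may be unreadable, encrypted-traffic analysis typically classifies flows from metadata alone. The sketch below, built on assumed flow features and labels rather than any vendor’s actual pipeline, trains a random forest on packet-size and timing statistics to distinguish traffic types without decryption.

```python
# Minimal sketch: classifying encrypted flows from metadata only.
# Features and labels are illustrative assumptions, not a vendor pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-flow features:
# [mean_pkt_size, pkt_size_stddev, mean_interarrival_ms, duration_s]
flows = np.array([
    [1200, 150, 20, 300],   # bulk video stream
    [1350, 90, 15, 600],    # bulk video stream
    [200, 40, 500, 5],      # beacon-like periodic traffic
    [180, 35, 450, 4],      # beacon-like periodic traffic
])
labels = ["streaming", "streaming", "beaconing", "beaconing"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(flows, labels)

# Classify a new flow without ever decrypting its payload:
new_flow = np.array([[190, 38, 480, 6]])
print(clf.predict(new_flow))  # -> ['beaconing']
```

This is the intuition behind DPI-derived traffic intelligence: even when content is opaque, the shape and rhythm of a flow carry enough signal to inform detection.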

 

Looking Ahead — Upcoming Posts

We’ll discuss defense and visibility more in subsequent posts, but the message is clear: AI is no longer an emerging technology in cybersecurity; it is a present and potent force on the battlefield. Attackers actively leverage its capabilities to make their operations faster, stealthier, more scalable, and more convincing. While today’s AI systems primarily augment human attackers, the trajectory points toward increasingly autonomous attack systems.

This escalating offensive capability demands an equally robust defensive posture powered by AI. But how should organizations build these AI shields? What strategies and technologies are proving effective?

 

Stay tuned for Part 2 of this series, where we will dive deeper into the world of AI for defense, exploring strategies and specific technologies such as anomaly detection and autonomous response, and introducing the importance of observability and data quality.

 

Discover how to maximize the quality, impact, and efficiency of AI in networking and cybersecurity solutions with Enea’s DPI technology.
