AI in Cybersecurity: Survey Highlights and the Key Role of Network Traffic Intelligence
Enea recently partnered with Cybersecurity Insiders for a worldwide survey of CISOs and their frontline teams to find out what they thought about the impact of artificial intelligence (AI) in cybersecurity. The results show that cybersecurity professionals believe the transformative power of AI is very real, with an overwhelming majority (92%) expecting it to have a significant impact on both offensive and defensive strategies and capabilities over the next few years.
To better understand this impact and the role of network traffic intelligence in AI-enhanced cybersecurity solutions, Laura Wilber, Sr. Industry Analyst at Enea, interviewed two of her colleagues about the survey findings: Amine Larabi, VP of Traffic Intelligence R&D, and Mitrasingh (Danny) Chetlall, VP of Product Management.
Laura: Danny, Amine, the majority of the respondents in this survey anticipate a major increase in the impact of AI on cybersecurity over the next five years, with three-quarters of respondents (74%) stating that AI is a “medium” to “top” priority for their organization. Do these expectations align with the demands and priorities you are seeing from the vendors who embed Enea traffic intelligence technology in their cybersecurity products?
How does the implementation of AI for cybersecurity rank among your organization’s cybersecurity priorities?
How do you anticipate the impact of AI on cybersecurity to evolve in the next five years?
Danny: Absolutely. In fact, I would say this has been a higher priority and one on the radar for far longer for vendors than for their enterprise customers. Which is logical – it’s their core business and therefore it’s essential for them to stay out in front of opportunities and threats in cybersecurity. And we have the good fortune to collaborate with vendors who are true visionaries in the field.
Amine: I agree. Our technology partners have long shown strong interest in how we at Enea use ML and AI to improve the accuracy and relevance of the network traffic data our DPI engine (Enea Qosmos ixEngine) produces, so that they in turn have a high-quality data foundation on which to innovate with AI. And I’ve never seen the level of innovation higher than it is today, as the era of AI really kicks into high gear.
Laura: I think the survey respondents would agree with you. They anticipate AI will bring many benefits, with improved threat detection in general, and intrusion detection and prevention systems (IDS/IPS) specifically, leading the lists of hoped-for benefits. Given the depth and breadth of Enea’s partnerships in cybersecurity, I know you are both tuned into the types of solutions that are the most mature in terms of AI enhancements. Do these expectations align with what you are seeing?
What do you see as the most significant benefits of incorporating AI into your cybersecurity operations?
Which cybersecurity domains do you think will benefit most from AI?
Amine: Yes, they are in line with what we are seeing. There is already significant innovation in AI, particularly in the field of IDS/IPS and anomaly-based threat detection. I think this is natural for two reasons: one, anomaly-based threat detection has become an important pillar of the zero-trust architectures that most of our partners employ, and two, the increasing sophistication of AI-enhanced cyberattacks makes the detection of subtle network anomalies a must.
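To make the idea of anomaly-based detection concrete, here is a minimal, self-contained sketch of one classic statistical approach: flagging network flows whose byte volume deviates sharply from a robust baseline (median and median absolute deviation). This is purely illustrative; real detection systems combine many features and models, and the function names, sample values, and threshold below are invented for the example, not taken from Enea Qosmos ixEngine.

```python
# Illustrative sketch of anomaly-based detection on per-flow byte counts.
# Uses median + MAD (median absolute deviation), which is robust to the
# very outliers we are trying to catch, unlike a plain mean/stdev z-score.
from statistics import median

def find_anomalies(flow_bytes, threshold=3.5):
    """Return indices of flows whose byte count deviates from the
    baseline by more than `threshold` robust standard deviations."""
    med = median(flow_bytes)
    mad = median(abs(b - med) for b in flow_bytes)
    if mad == 0:
        return []  # all flows identical: nothing stands out
    # 0.6745 scales MAD so the score is comparable to a z-score
    return [i for i, b in enumerate(flow_bytes)
            if 0.6745 * abs(b - med) / mad > threshold]

# Baseline traffic of roughly 1-2 KB per flow, plus one 500 KB outlier
# (e.g. a hypothetical exfiltration event).
flows = [1200, 1500, 1100, 1400, 1300, 500_000, 1250, 1350]
print(find_anomalies(flows))  # [5]
```

A robust statistic matters here: with a plain z-score, a single large outlier inflates the standard deviation enough to mask itself, which is why production systems rarely rely on mean/stdev alone.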
Danny: Amine’s observations are correct, and I would add that the importance of intrusion detection capabilities for our OEM partners – whether they focus on cybersecurity or networking – was a key driver for the development of our new threat detection SDK. It leverages core Suricata IDS functionalities, Enea Qosmos ixEngine’s extensive traffic visibility, and a unique SDK format to enable our partners to rapidly develop new or improved threat detection capabilities – whether as a discrete IDS/IPS function or as part of an encompassing network detection and response (NDR) or extended detection and response (XDR) solution.
Laura: Given the current threat landscape, I can see why the threat detection SDK is being so well-received. Survey respondents anticipate many benefits from AI, but they are also worried about threat actors’ offensive use of AI. In particular, they expect phishing and other types of social engineering attacks – which are often the launching pads for network intrusions – to become much more effective and dangerous with AI. Do you agree?
In your opinion, which types of attacks will become more dangerous in the near future because of AI input?
Danny: Yes, I do. And I think this is one of the reasons why interest in our security metadata has increased so significantly. Most people are aware of the Enea Qosmos ixEngine’s industry-leading role in application and protocol recognition. Fewer are familiar with the extensive security-related metadata we produce, such as man-in-the-middle indicators or detection of domains produced by domain generation algorithms (DGAs). Both the security-specific and general metadata that the Enea Qosmos ixEngine produces (5,900+ types to date) play an important role in detecting social engineering attacks as well as many other types of attacks.
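As a flavor of what DGA detection can involve, below is a minimal heuristic: scoring a domain label by its character entropy, since algorithmically generated domains tend to look random. This is one weak signal among many that a real classifier would combine (n-gram statistics, label length, TLD reputation, ML models, etc.); the function names and threshold are invented for illustration and do not describe Enea's implementation.

```python
# Illustrative DGA heuristic: high Shannon entropy in a domain label is
# a weak indicator that the name was machine-generated rather than
# chosen by a human. Real detectors combine many such features.
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a string."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose left-most label has unusually high entropy."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

print(looks_generated("google.com"))            # False
print(looks_generated("xj4k9qzv2mw7ltpb.com"))  # True
```

Entropy alone produces false positives (e.g. CDN hostnames are legitimately random-looking), which is why metadata like this is typically fed into a broader model rather than used as a standalone verdict.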
Amine: I concur, and I would add that the use of ML and AI to produce unique, high-value metadata has been an important priority for our R&D team. The task of identifying and foiling malicious masking and spoofing techniques will become increasingly challenging with offensive AI, so our ability to help our vendor partners fight back by delivering ground truth data, or "wire truth" data as we call it, will become critically important.
Laura: According to the survey, most cybersecurity professionals share your concern about offensive AI. An astounding one-quarter of them worry that a malicious AI capable of defeating most known cybersecurity defenses is already circulating in the world, while three-quarters are somewhat to significantly concerned about rogue AI. Do you think this high level of fear – this pessimism – is warranted?
How close do you think the world is to malicious/adversarial AI that can evade most known cybersecurity defenses?
Are you concerned about rogue AI (an application that becomes fully autonomous and behaves dangerously)?
Danny: I do think there is good reason to be very concerned about offensive AI and the potential advantage it may have over defensive AI. After all, malicious actors proceed without concern for regulatory or ethical constraints. That said, I think the fear about ‘rogue AI’ may be somewhat misplaced. In my opinion, the more immediate danger here comes from poor design. If an AI application – whether offensive or defensive – has been poorly constructed, there is high potential for unintended and unforeseen consequences, and as a result a strong chance of producing AI that acts in dangerously unpredictable ways.
Amine: I agree. General artificial intelligence – meaning a form of AI that can think, learn and act in truly human and autonomous ways – remains hypothetical. So the sci-fi "bad AI" that goes rogue by choice remains (at least for now) in the realm of science fiction, while going rogue due to poor design or bad data is a very real and present threat. In terms of data-related risks, the rise of ML and AI has greatly amplified the very old problem of GIGO (Garbage In, Garbage Out): AI can replicate GIGO at scale, with potentially serious consequences. In the world of large language models (LLMs) and deep learning, which rely on massive data sets, input quality has already become of utmost importance.
At Enea, data quality has always been one of our core concerns. It is what we have built our reputation on, and the reason industry-leading vendors choose us as a technical partner for traffic classification and analytics. With the arrival of AI and ML, this mission of ours has become even more important, and we are proud to see that the "wire truth" data we deliver provides an ultra-reliable foundation for an innovative new generation of AI-powered cybersecurity solutions.
Laura: Thank you both for these very useful insights. To end on a positive note for our vendor partners, the survey results show that while organizations are currently at a very early stage in their AI journey, most have big plans to close that gap, with two-thirds planning budget increases specifically for AI-powered security solutions. And as we all plunge into this period of innovation and growth, a key takeaway is that great AI solutions begin with great data.
The AI in Cybersecurity Report detailing the results and findings of the survey referenced in this article is available here:
A discussion of the survey results with panelists from Arista Networks, Enea and Zscaler is available as an on-demand video here:
And you may also be interested in the article “AI, Misinformation and the Future of Cybersecurity Issues Cybersecurity Leaders Should Be Thinking About Now,” which includes commentary from the AI webinar participants: