Blog

From Shadow IT to Shadow AI: Why DPI Remains Foundational

This is a guest blog post by Roy Chua, Founder and Principal at AvidThink  

As we round out 2025, I’ve been reflecting on the dominance of AI across events and conferences. Whether in Paris, London, Singapore, New York, Dallas, or Las Vegas, I’m resigned to the fact that all conferences are AI conferences now. At AWS re:Invent 2025, my last major event of the year, I ran into a bunch of CISOs and security experts who discussed the rise of Shadow AI. 

Enterprise security teams have wrestled with Shadow IT for over a decade now. The arrival of cloud computing, SaaS, and mobile apps encouraged employees to surreptitiously adopt unsanctioned (but cool and sometimes useful) applications — all they needed was a corporate credit card. Today, we can add another higher-risk behavior to Shadow IT: Shadow AI. 

In my discussions with the security execs, we collectively acknowledged that every enterprise has employees using unapproved AI tools outside the visibility and control of enterprise security teams. 

This expansion of Shadow AI use will drive a fresh look at current enterprise security architectures, particularly around observability. And to shine a light on this newly pernicious shadow practice, I expect we’ll again turn to a technology mainstay, deep packet inspection (DPI) — enhanced and upgraded for the AI era. 

The Emergence of Shadow AI

Shadow AI differs from Shadow IT in both extent and consequence. 

Generative AI services are increasingly embedded across everyday workflows, from summarizing meetings and writing code to analyzing data and automating numerous other tasks that previously required human action. AI platforms furnish users with myriad ways to interact with them: chat apps, AI browsers, browser plugins, APIs, mobile applications, and embedded assistants in today’s productivity apps. Many of these interactions happen outside IT’s purview and control. 

In many AI interactions (shadow or sanctioned), employees seeking the best results from their AI collaborators will provide substantial context in their prompts. The potential exposure of large swaths of proprietary corporate data — from sensitive code bases to legal contracts, even HR personnel reviews — has a greater impact than narrowly scoped SaaS applications. 

Diverse Corporate Goals for AI

Even without achieving artificial general intelligence (AGI) or super-intelligence, today’s many flavors of AI (predictive, generative, and agentic) show great potential and value across numerous corporate tasks. Yet many corporations are being cautious in how quickly they adopt generative and agentic AI. 

At one end of the spectrum, organizations want to block AI services outright, whether well-known chatbot services, consumer AI tools, or model-as-a-service APIs. At the other end, enterprises are encouraging and enabling the use of AI tools (usually on pre-approved platforms). In both situations, enterprise IT needs observability into AI use and the ability to enforce policy restrictions on data exposure, retention, and compliance. 

This isn’t that different from Shadow IT in the SaaS era. What has changed is not the policy logic, but the technical difficulty of enforcement. AI traffic increasingly blends into encrypted HTTPS sessions, shared cloud endpoints, and multiplexed APIs that evade customary port- or IP-based controls. Compared with legacy SaaS applications, AI services involve more symmetric data flows, multiple file and content uploads, structured prompts, embeddings, and downstream API calls, all of which have security implications that are not readily apparent on the surface. 

Given the rapid rate of change of the AI industry, pragmatic CISOs are not trying to detect every edge case or experimental tool. Instead, they are focused on establishing baseline control and auditability for major AI services. They want to know which AI platforms are in use, how they are accessed, and what categories of data are moving across the network. 

Why Network-Level Visibility Matters

For many enterprises, AI governance needs converge on familiar concerns: data sovereignty, privacy, regulatory compliance, and auditability. In regulated industries, uploading proprietary data, personal information, or intellectual property into external AI services can carry material legal and financial consequences. 

This is where network-level observability becomes important. The network remains the one control plane through which all AI interactions must pass, regardless of device, user, or application form factor. If properly implemented, network observability can provide fine-grained insights into AI-related traffic patterns, protocols, and transactions at the network level. It is the layer at which enforcement can be applied consistently, independent of user behavior or application design. 

DPI’s Continued (and Evolving) Role

Deep packet inspection has long served as a foundation of application-aware networking, enabling SD-WAN, next-generation firewalls, and SASE platforms to classify traffic, enforce policy, and improve performance. In some circles, DPI was assumed to be a mature solution, subject to little change. 

Shadow AI will challenge that assumption. 

Modern DPI engines are no longer limited to identifying static application signatures. They now combine protocol recognition, transactional metadata extraction, and behavioral analysis, capabilities essential for detecting AI-specific activity embedded within encrypted sessions. 

AI traffic introduces new requirements for application awareness. It is not enough to recognize that traffic is “cloud” or “HTTPS.” Security teams need to distinguish between AI chat interfaces, model APIs, code assistants, productivity copilots, and multi-agent platforms, each of which carries different risk profiles and policy implications. They also need visibility into transactions, including file uploads, prompt structures, API calls, and response patterns, without incurring the storage and privacy costs of full packet capture. More information about one such solution from Enea is available here. 

This is where DPI is evolving from a traffic classification tool into an AI observability platform. By extracting high-fidelity metadata related to AI applications and workflows, advanced DPI can enable real-time enforcement and forensic analysis, while remaining compatible with privacy and data minimization requirements. 
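To make the data-minimization point concrete, the sketch below shows one way an observability pipeline might represent a per-transaction metadata record: audit-relevant fields are kept while prompt and file contents are never retained. The record type and field names are my own illustrative assumptions, not a description of any specific product's schema.

```python
# Sketch: a minimal per-transaction metadata record a DPI-based
# observability layer might emit. Only sizes and coarse categories are
# retained, never payload content (data minimization). All field names
# here are hypothetical.

from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AITransactionRecord:
    timestamp: float     # epoch seconds of the transaction
    user_id: str         # pseudonymized user identifier
    service: str         # e.g. "model API", "code assistant"
    action: str          # e.g. "prompt", "file_upload", "api_call"
    bytes_up: int        # bytes sent to the AI service
    bytes_down: int      # bytes returned by the service
    data_category: str   # coarse label, e.g. "source_code"

def to_audit_event(rec: AITransactionRecord) -> dict:
    """Flatten a record for an audit or analytics pipeline. Note that
    no prompt or file content appears here, only sizes and labels."""
    return asdict(rec)
```

Records like this support both real-time policy enforcement (block or alert on a file upload to an unapproved model API) and after-the-fact forensics, while avoiding the storage and privacy burden of full packet capture.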

This evolution is also consistent with wider changes in enterprise IT architectures. As edge computing, digital sovereignty, and hybrid deployment models gain prominence, DPI provides a consistent mechanism for observability and policy enforcement across cloud, on-premises, and edge environments, regardless of where AI inference or data processing occurs. 

Significance for SASE and Security Vendors

For vendors building networking and security platforms, Shadow AI is not just the latest feature checklist item. It is an opportunity to differentiate while meeting pressing enterprise needs. 

SASE solutions that rely on coarse traffic steering or cloud-only inspection will struggle to deliver the level of AI visibility enterprises increasingly demand. Conversely, platforms that integrate deep, continuously updated DPI capabilities can adapt more rapidly as new AI services, protocols, and usage patterns emerge. 

This is important as AI adoption accelerates beyond chatbots into autonomous agents, embedded copilots, and machine-to-machine interactions. In those scenarios, the network becomes both an observability sensor and an enforcement point. Solutions enabled by DPI can monitor agent behavior, enforce trust boundaries, and detect anomalous or non-compliant activity in real time. 

The takeaway for solution vendors should be clear: AI-aware security requires AI-aware traffic intelligence. DPI must evolve in tandem with AI services, with ongoing protocol research, rapid application-signature updates, and integration into higher-level policy and analytics frameworks. 

Conclusion: DPI is a Strategic Enabler for the AI Era 

Shadow AI represents more than a temporary anomaly. It is the natural consequence of democratized AI access colliding with enterprise risk management. 

What enterprises are asking for is measured control — visibility without corporate overreach, enforcement without productivity paralysis, and governance that can scale with employee innovation. 

Deep packet inspection is proving to be one of the most adaptable tools in the modern security stack, enabling observability and control of AI services. As AI traffic grows more diverse, dynamic, and consequential, DPI provides the granular intelligence to make AI usage visible, auditable, and governable. Network security vendors looking to meet the needs of the next generation of productivity workers will want to ensure their DPI engines are well-suited to the new tasks demanded of them.