
The Enterprise AI Security Paradox

There is a question every AI platform company eventually faces when courting enterprise customers: Can you deploy this inside our network? The honest answer is more complicated than anyone in the room wants to hear.

We recently explored whether our platform could be deployed within an enterprise's Azure Virtual Network, achieving some meaningful degree of private, isolated operation. If we restrict the model layer to Azure-hosted models, it seems feasible at first glance. The core application server, the API gateway, the orchestration layer — all of that can live inside the customer's VNet boundary.

But here is the uncomfortable truth: the application server is just the torso. The limbs and appendages are everywhere.

The Dependency Sprawl

A modern AI platform does not run in isolation. It is a composite system assembled from dozens of external services, each with its own data handling characteristics and network requirements. When we mapped our own external dependency surface, the list was sobering:

  • Cloud storage and state services — Redis, Cosmos DB, blob storage, each holding different classes of data with different retention characteristics
  • Authentication services — identity providers, token services, session stores that may live outside the VNet boundary
  • Model inference endpoints — not just your primary model provider, but fallbacks, specialized models for different tasks, embedding services
  • Compute sandboxes — code execution environments hosted by third-party sandbox providers, where user-submitted code runs in ephemeral containers outside your network
  • Cross-cloud dependencies — calling certain models requires staging files in entirely different cloud providers' storage buckets, meaning data traverses cloud boundaries as a matter of routine
  • Specialized media and inference services — audio generation, image synthesis, video processing, each running on dedicated infrastructure operated by different vendors

Some of these services are stateful in ways that matter deeply for enterprise compliance. Data sent to them may be retained indefinitely — storage services by design, model services unless you have explicit zero-data-retention agreements in place. Others are ephemeral in principle: a sandbox that spins up, executes code, and tears down. But even ephemeral services break the VNet boundary. The data leaves. That is the fact that enterprise security teams care about, regardless of how briefly it leaves.

The Enumeration Problem

Suppose you accept the premise that some external dependencies are inevitable. The next reasonable step is to enumerate them — build a complete manifest of every service your platform communicates with, what data it sends, and what retention policies apply. In theory, this gives enterprise customers a clear picture of their data boundary.

In practice, this is where things get genuinely hard.

It is not enough to audit the codebase once and produce a list. You must continuously maintain that list as the codebase evolves. Every new feature, every integration, every dependency upgrade could introduce a new external call. In a traditional software engineering environment, this is already challenging. In the era of AI-assisted development, it becomes a categorically different problem.

When any engineer — or any AI coding assistant — can generate an HTTP request in a few keystrokes, every new line of code is a potential boundary breach. A well-intentioned feature addition might call out to a new API that nobody thought to add to the dependency manifest. The surface area is not static; it is expanding continuously and, increasingly, autonomously.
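One partial countermeasure is to make CI flag code that even imports network-capable libraries, so a human reviews every new candidate dependency. Here is a minimal sketch using Python's `ast` module; real tooling would need to cover far more patterns (dynamic imports, subprocesses, SDK clients), and the module list here is just a starting assumption.

```python
import ast

# Modules whose presence typically signals an outbound network call.
# Illustrative, not exhaustive.
NETWORK_MODULES = {"requests", "httpx", "urllib", "aiohttp", "socket"}


def find_network_imports(source: str) -> set:
    """Return the network-capable top-level modules imported by a source file."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root in NETWORK_MODULES:
                    found.add(root)
        elif isinstance(node, ast.ImportFrom) and node.module:
            root = node.module.split(".")[0]
            if root in NETWORK_MODULES:
                found.add(root)
    return found


snippet = "import requests\nfrom urllib.request import urlopen\n"
```

A check like this does not prove a new external call exists, only that one is now possible; that is exactly the prompt a dependency-manifest review needs.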

The Absolution Question

This leads to what I think of as the absolution question. Suppose we get the core platform running inside the enterprise VNet. The enterprise's own developers and users then configure it to call third-party services — using their own accounts, their own API keys, their own contractual relationships with those providers. Are we, as the platform vendor, absolved of responsibility for that data flow?

Legally, perhaps. Architecturally, it is a reasonable boundary. But I suspect this clean division does not reflect how most enterprise customers actually think about the problem.

In my experience, many enterprise customers are looking to be absolved themselves. They want a deployment model where the hard questions about data residency, retention, and boundary control are solved by the vendor, not merely deferred to the customer's IT team.

This is not a criticism. It is rational behavior. Enterprise IT leaders are managing enormous surface areas of risk across hundreds of vendors and thousands of integrations. They do not want to inherit a new set of data-flow puzzles from their AI platform vendor. They want the platform to arrive with those puzzles already solved.

Toward an Enterprise-Grade Firewall

The practical path forward, as I see it, requires something like a code-level firewall for enterprise deployments. Not a network firewall — those already exist and are necessary but insufficient. What we need is an application-layer mechanism that enforces a strict allowlist of permitted external communications.

Imagine an enterprise deployment mode where:

  • Every outbound HTTP request passes through a policy enforcement layer
  • Only connections to explicitly allowlisted endpoints are permitted
  • The allowlist is version-controlled, auditable, and locked to specific deployment versions
  • Any attempt to reach an undeclared endpoint is blocked and logged
  • CI/CD pipelines include static analysis that detects new external dependencies before they reach production
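The enforcement layer itself can be quite simple. The sketch below shows the shape of the idea in Python; in a real deployment the allowlist would be loaded from a version-controlled file pinned to the release, and enforcement would sit in the HTTP client or egress proxy rather than a standalone class.

```python
from urllib.parse import urlparse


class EgressPolicyError(Exception):
    """Raised when code attempts to reach an undeclared endpoint."""


class EgressPolicy:
    """Application-layer allowlist for outbound requests (illustrative)."""

    def __init__(self, allowed_hosts):
        self.allowed_hosts = frozenset(allowed_hosts)
        self.audit_log = []  # every decision is recorded, allowed or not

    def check(self, url: str) -> bool:
        host = urlparse(url).hostname
        allowed = host in self.allowed_hosts
        self.audit_log.append((url, "allowed" if allowed else "blocked"))
        if not allowed:
            raise EgressPolicyError(f"undeclared endpoint: {host}")
        return True


# Hypothetical allowlist for an Azure-only deployment.
policy = EgressPolicy({"models.azure.example", "storage.azure.example"})
```

The important design choice is that blocked attempts are logged, not silently dropped: each log entry is either evidence of a boundary violation or a signal that the manifest is out of date.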

This is, in essence, an application-level Great Firewall — not for censorship, but for data sovereignty. The technical implementation is not trivial, but it is tractable. The harder challenge is organizational: ensuring that the discipline of maintaining the allowlist survives contact with the velocity of modern AI-assisted development.

The Standards Gap

There is one more dimension to this problem that deserves attention. Enterprise security needs are infinitely varied. Every organization has its own regulatory context, its own risk tolerance, its own legacy infrastructure constraints. It would be irresponsible for any AI platform vendor to guess at what a specific enterprise customer requires and build a bespoke solution based on those guesses.

What the industry needs are shared standards for AI platform deployment boundaries. Not proprietary solutions invented in isolation, but frameworks that align with existing security standards and compliance regimes — SOC 2, ISO 27001, FedRAMP, and whatever emerges specifically for AI systems.

If something goes wrong in an enterprise AI deployment, the post-mortem should not reveal that the vendor and customer jointly invented a security architecture by gut feeling. It should show that both parties adhered to documented, industry-recognized standards for data boundary control. We are not there yet. But the demand from enterprise buyers will force the industry in this direction, likely faster than most platform companies are prepared for.

What Converges, What Remains Hard

The encouraging news is that the core problem converges. If you can enumerate your external dependencies and close that list — committing to a finite, auditable set of services — then the boundary control problem becomes tractable. Network policies, proxy configurations, and application-layer enforcement can lock it down. This is not unsolved; it is just engineering.

What remains hard is maintaining that closed list in a development environment where the barrier to introducing new external dependencies approaches zero. Every AI coding assistant, every quick integration, every "just call this API" shortcut is a potential crack in the boundary. The discipline required to maintain a closed dependency surface is at odds with the velocity that AI-assisted development promises.

That tension — between the speed of AI-era development and the rigor of enterprise security requirements — is the real paradox. And it is one our industry will be grappling with for years to come.