Governance You Can Actually Ship
Five Months
August 2, 2026. That's when the EU AI Act's general application date arrives and most high-risk AI systems must comply. The penalties are severe: up to €35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious violations.
Colorado's AI Act takes effect even sooner: June 30, 2026. It requires developers and deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination. Illinois, Texas, and several other states have similar measures in progress.
Compliance experts say organizations need a minimum of five months to prepare. Most haven't started. Over half lack a systematic inventory of the AI systems running in their environment. 40% have unclear risk classifications for the systems they do know about. Only 37% have any governance policies in place.
And this is just the official picture — the systems that IT sanctioned and procured. In our first article in this series, we described how 49% of employees use AI systems that IT hasn't approved. The regulatory exposure from systems you don't know about is impossible to quantify because, by definition, you can't audit what you can't see.
The Document Problem
The standard enterprise response to a compliance deadline is to produce a document. A governance framework. A risk assessment matrix. An AI ethics policy. An acceptable use guide.
I'm not dismissing this work — some of it is genuinely necessary. You need a risk classification methodology. You need a way to inventory and categorize your AI systems. You need policies that describe your governance principles.
The problem is that for most organizations, this is where it stops. The document goes into a SharePoint folder. Someone schedules a quarterly review. The AI systems running in production continue operating with whatever permissions and logging they were configured with on day one.
This is the same pattern from every compliance wave of the last twenty years. SOX produced binders full of controls that existed on paper and in auditor presentations. GDPR produced privacy policies that described data handling practices that weren't reflected in the actual data architecture. In both cases, the compliance artifact and the operational reality diverged almost immediately — and the gap kept widening because nobody owned the connection between the two.
AI governance is heading for the same outcome, and the consequences are steeper because AI systems don't just store and process data — they make decisions, generate outputs, and increasingly take autonomous action based on that data.
The Gap Between Policy and Architecture
In our last article, we walked through the Amazon Kiro incident. An AI agent with operator-level permissions deleted a production environment because nobody had built the infrastructure controls to prevent it. Amazon presumably had policies about production access. What they didn't have was architecture that enforced those policies in real time.
That gap — between what the governance document says and what the system actually does — is the central problem of enterprise AI governance in 2026. And I keep coming back to a simple test: if your AI governance can be violated without anything in your infrastructure noticing, you don't have governance. You have documentation.
Here's what violation looks like in practice:

- The policy says agents should operate with least-privilege access. The agent was provisioned with broad permissions during development because it was easier, and nobody reduced them for production.
- The policy says all AI interactions should be logged and auditable. Logging was configured for the sandbox environment and wasn't updated for production because nobody owned that transition.
- The policy says sensitive data shouldn't leave the corporate perimeter. The AI platform sends data to a third-party API for inference, and the data residency clause in the vendor contract doesn't match what the system actually does.
Each of these is a real pattern from real engagements. Each one represents a governance document that's technically accurate and operationally useless.
What "Governance by Design" Actually Means
We use this phrase — "governance by design" — and I want to be specific about what we mean because it's at risk of becoming another empty buzzword.
Governance by design means the governance rules are enforced by the infrastructure, not by human compliance checks. The same way a firewall enforces network policy by blocking unauthorized traffic rather than sending a memo asking people not to visit certain websites. The same way database permissions enforce access control by returning "access denied" rather than relying on a policy document that says "only authorized users should query the customer table."
The governance and the architecture are the same thing. You can't have one without the other.
In practice, this means every design decision about your AI deployment includes a corresponding governance decision. Where does the agent execute code? In a sandboxed environment with explicit resource and network boundaries. How does the agent access credentials? Through a secrets management layer that scopes access, rotates keys, and logs every retrieval. What data does the agent see? Only what its permission scope allows, enforced at the infrastructure level. What happens when the agent attempts a destructive or high-impact operation? A mandatory review gate fires before execution.
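A minimal sketch of what infrastructure-level scoping means in code. The names here (AgentScope, read_dataset) are illustrative, not any product's API: the point is that the permission check lives in the data-access path itself, so a policy violation is an exception at runtime, not a finding in next quarter's audit.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """Hypothetical least-privilege scope attached to a single agent."""
    agent_id: str
    allowed_datasets: frozenset

class ScopeViolation(Exception):
    """Raised at the moment of access, not discovered at audit time."""

def read_dataset(scope: AgentScope, dataset: str) -> str:
    # The permission check is part of the access path; there is no way
    # to fetch data without passing through it.
    if dataset not in scope.allowed_datasets:
        raise ScopeViolation(f"{scope.agent_id} may not read {dataset}")
    return f"rows from {dataset}"  # stand-in for the real data layer

logistics = AgentScope("logistics-agent", frozenset({"shipments", "routes"}))
rows = read_dataset(logistics, "routes")      # permitted
try:
    read_dataset(logistics, "hr_records")     # out of scope
    denied = False
except ScopeViolation:
    denied = True
```

The design choice worth noting: the deny path is the default. An agent with no scope entry can read nothing, which is the opposite of the "broad permissions by default" failure mode described above.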
How Fabric Does This
This is why we built Fabric. Our senior engineers spent years managing production infrastructure before they worked on AI systems, and the gap between standard DevOps discipline and how most AI agents get deployed was — honestly — alarming to them. Fabric exists to close that gap.
Governance ingestion. Fabric starts by ingesting your governance structure — your compliance requirements, your data classification policies, your permission hierarchies, your regulatory obligations. This isn't a one-time setup step. It's a living configuration that adapts as your governance evolves. When your compliance team updates a data residency requirement, that change propagates into how Fabric scopes agent access, not into a document that an engineer might read six months later.
Sandboxed code generation. When AI agents generate code in Fabric, that code executes in isolated, sandboxed environments with explicit resource boundaries. The sandbox is scoped to specific systems, specific data, and specific operations. An agent working on your logistics optimization can't accidentally (or intentionally) access your HR data because the infrastructure physically prevents it. This is the same principle behind container isolation in production Kubernetes environments — battle-tested patterns applied to a new context.
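Container isolation is the production-grade version of this idea. As a toy illustration of the principle (not how any particular product implements it), even the Python standard library can impose a kernel-enforced resource ceiling on a child process running untrusted code. This sketch is Unix-only and omits the filesystem and network isolation a real sandbox needs:

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, cpu_seconds: int = 2) -> subprocess.CompletedProcess:
    """Run untrusted code in a child process with a hard CPU-time ceiling.
    Real sandboxes layer filesystem, network, and memory isolation on top."""
    def apply_limits():
        # Enforced by the kernel: the child is killed if it exceeds the limit.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=10, preexec_fn=apply_limits,
    )

result = run_sandboxed("print(2 + 2)")
```

An infinite loop in the generated code hits the CPU limit and dies; it cannot degrade the host. That is governance expressed as an operating-system guarantee rather than a guideline.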
Private secrets management. Credentials, API keys, tokens, and certificates are managed inside your perimeter using the same patterns that infrastructure teams have relied on for years. Scoped access — each agent sees only the credentials it needs. Automated rotation — keys expire and regenerate on schedule. Full audit trails — every credential access is logged with context. Nothing flows through external systems. Nothing leaves your infrastructure.
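A sketch of the pattern (names are illustrative; a production system would back this with Vault- or KMS-style storage): scoping and audit logging sit inside the retrieval path, so an unlogged credential access is structurally impossible.

```python
from datetime import datetime, timezone

class SecretsVault:
    """Illustrative in-perimeter vault: scoped grants plus an audit trail."""
    def __init__(self, secrets: dict, grants: dict):
        self._secrets = secrets   # secret name -> value
        self._grants = grants     # agent id -> set of permitted secret names
        self.audit_log = []

    def get(self, agent_id: str, name: str) -> str:
        granted = name in self._grants.get(agent_id, set())
        # The log entry is written before the outcome branches,
        # so every attempt is recorded, granted or not.
        self.audit_log.append({
            "agent": agent_id, "secret": name, "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        if not granted:
            raise PermissionError(f"{agent_id} has no grant for {name}")
        return self._secrets[name]

vault = SecretsVault(
    secrets={"billing_api_key": "sk-demo-123"},
    grants={"billing-agent": {"billing_api_key"}},
)
key = vault.get("billing-agent", "billing_api_key")
```

Rotation isn't shown here, but it follows the same logic: expiry is a property of the stored secret, checked on every retrieval, not a calendar reminder.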
Local Model Context Protocols (MCPs). Your business context — the operational data, the business rules, the customer information that agents need to work effectively — stays in your environment. MCPs run locally, keeping the context under your compliance framework and your data sovereignty requirements. The agent's knowledge of your business never passes through infrastructure you don't control.
Continuous runtime enforcement. Every governance rule is enforced at runtime, continuously. Not checked periodically. Not audited quarterly. Running. Permission boundaries are active on every request. Scope constraints are enforced on every operation. Every agent action — what it accessed, what it generated, what it attempted — is logged automatically because the logging is part of the infrastructure, not an optional configuration someone has to remember to enable.
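One way to make "logging is part of the infrastructure" concrete in code (a sketch with made-up names, not any vendor's actual mechanism) is to put the log write and the permission check in the only code path that can invoke an action:

```python
import functools

ACTION_LOG = []  # in production this would be durable, structured audit storage

def enforced(permission: str):
    """Wrap an agent action so the check and the log entry cannot be skipped."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(agent: dict, *args, **kwargs):
            allowed = permission in agent["permissions"]
            ACTION_LOG.append(
                {"agent": agent["id"], "action": fn.__name__, "allowed": allowed}
            )
            if not allowed:
                raise PermissionError(f'{agent["id"]} lacks {permission}')
            return fn(agent, *args, **kwargs)
        return wrapper
    return decorator

@enforced("read:orders")
def list_orders(agent: dict) -> list:
    return ["order-1", "order-2"]

agent = {"id": "ops-agent", "permissions": {"read:orders"}}
orders = list_orders(agent)
```

There is no configuration flag to forget: if the action runs at all, the check ran and the log entry exists.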
Mandatory review gates. Destructive operations, high-impact decisions, actions that affect production systems — all require explicit approval before execution. The agent can propose the action. It can't execute it without a human checkpoint. This is the exact safeguard that was missing in the Kiro incident, implemented as infrastructure rather than policy.
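The shape of that safeguard, sketched with hypothetical names: the agent's only path toward execution is a proposal queue, and nothing in the queue runs until a human approves it.

```python
class ReviewGate:
    """Sketch of a mandatory human checkpoint for high-impact operations."""
    def __init__(self):
        self._pending = {}
        self._next_id = 0

    def propose(self, description: str, run) -> int:
        """The agent can only enqueue; it receives a ticket id, not execution."""
        ticket = self._next_id
        self._pending[ticket] = (description, run)
        self._next_id += 1
        return ticket

    def approve(self, ticket: int):
        """Only the human-facing approval path can trigger execution."""
        _, run = self._pending.pop(ticket)
        return run()

gate = ReviewGate()
ticket = gate.propose("drop staging database", lambda: "executed")
# At this point nothing has run; the destructive action is only queued.
result = gate.approve(ticket)  # the human checkpoint fires here
```

Because propose() and approve() are separate interfaces, giving the agent one and a human the other is an architectural fact, not a behavioral expectation.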
The Compliance Connection
The organizations we're working with that are ahead of the August deadline share a common characteristic: they're treating compliance as an output of their architecture, not a parallel workstream.
When your AI infrastructure enforces governance at the runtime level — with audit trails, scoped permissions, and documented decision points — the compliance artifacts generate themselves. The auditor asks "how do you ensure least-privilege access?" and the answer is "here's the infrastructure configuration, and here are the logs showing it's enforced on every request." Not "here's a policy document, and here's our quarterly review process."
We're working on this with a financial services client right now. Their compliance team had produced a comprehensive AI governance framework — a good document, thorough and well-reasoned. The problem was that their deployed AI systems didn't reflect any of it. Agents had broad data access because that was the default. Logging existed but wasn't configured for the production environment. Permission scopes hadn't been updated since the initial sandbox deployment. The governance framework described an ideal state that bore little resemblance to what was actually running.
We started by mapping their governance requirements into Fabric's configuration — translating policy statements into infrastructure enforcement. "Agents should only access data relevant to their function" became scoped permission boundaries enforced at the infrastructure level. "All agent interactions must be auditable" became continuous, automatic logging with context-rich audit trails. "High-impact operations require human approval" became mandatory review gates that fire before execution, with no override path. The compliance team's document didn't change. What changed was that the infrastructure now enforces what the document describes.
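In code terms, the translation step looks something like this (a simplified, hypothetical sketch, not Fabric's configuration format): each sentence of the written framework becomes a declarative rule that the runtime consults on every operation.

```python
# Hypothetical policy-as-config: each entry corresponds to a sentence
# in the written governance framework.
POLICY = {
    "least_privilege": {"default_dataset_access": "none"},
    "auditability":    {"logging": "always_on"},
    "human_approval":  {"review_gate_for": {"delete", "deploy", "payout"}},
}

def requires_review(operation: str) -> bool:
    """Consulted by the runtime before executing any operation."""
    return operation in POLICY["human_approval"]["review_gate_for"]

needs_gate = requires_review("deploy")
```

When the compliance team amends a rule, the change is a config edit that takes effect on the next request, not a document revision awaiting the next quarterly review.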
This doesn't eliminate the need for governance documentation. You still need risk assessments, classification methodologies, and policy frameworks. But the documentation describes what the architecture actually does rather than what the architecture should theoretically do. The gap between policy and reality closes because they're the same system.
The Series in Full
This series has followed a thread that started in our first series and gets more urgent with every month.
Enterprise copilots don't solve the problems that matter, so employees find tools that do — creating a shadow AI exposure that most organizations can't see and can't audit.
AI agents are reaching production without production-grade governance, because the teams deploying them don't think in infrastructure terms. The Kiro incident is a preview of where this ends without the right engineering discipline.
And governance frameworks that live in documents instead of architecture will fail the same way every previous compliance framework has failed: by describing a reality that doesn't match what the systems actually do.
The fix for all three problems is the same. Purpose-built AI infrastructure, designed by engineers who understand both the AI and the production environment it runs in, with governance that's enforced at the architecture level because it is the architecture.
That's what we built Fabric to be. And it's how our forward-deployed teams build every system we ship.
Schedule a conversation with an engineer who ships governance this way.