
The Most Expensive Software Nobody Uses

If You Were There, You Remember This Fight

Steve Jobs brings his vision for a revolutionary mobile device to market. Consumers want it. Corporate IT says absolutely not.

If you were anywhere near enterprise technology at the time, you remember the arguments. Apple was the design company — not the enterprise company. Everyone typing away on their BlackBerry or Palm Treo remembers coveting thy neighbor's iPhone, but IT had their reasons. Those were the enterprise devices. IT could manage them. Compliance could audit them. The fact that employees wanted something better was irrelevant to the procurement decision.

On the other side, employees had a device in their personal life that was years ahead of what IT issued for work. They couldn't check corporate email on it, couldn't access documents, couldn't do any of the things they could obviously do better on a modern phone. So they started using their iPhones anyway and hoped nobody noticed.

But Apple didn't ask global IT departments to relax their policies. Apple built enterprise-grade security and tooling directly into the iPhone — device management APIs, hardware encryption, enterprise provisioning — and relied on their best sales force: happy customers who sold the product up the chain, forcing corporate IT to take another look. MDM platforms matured around a device people genuinely wanted to use. IT didn't capitulate. Apple met their requirements and the users' requirements at the same time.

Anyone who lived through the BYOD era recognizes what's happening right now with enterprise AI. The dynamics are identical. IT is spending millions on sanctioned solutions that employees don't want to use, while the consumer alternatives that actually solve problems spread through the workforce unchecked. Except this time, the unsanctioned tools don't just store documents on an unmanaged device. They reason about your data, generate outputs based on it, and increasingly take autonomous actions — all outside your governance perimeter.

The Copilot Adoption Problem

The enterprise AI spending numbers are enormous. A 20,000-person organization pays north of $7 million annually for Microsoft 365 Copilot at list price. Microsoft is bundling it into enterprise agreements, and customers are reporting mandatory 25% cost increases on typical $10 million contracts to include AI capabilities they didn't ask for.

The adoption numbers don't match the spending.

Microsoft 365 Copilot has roughly 15 million paid seats out of 450 million-plus total — about 3.3% penetration. Among workers who have access, daily usage hovers around 30%. And here's the number that should concern every CTO who just signed a renewal: Copilot's share among U.S. paid AI subscribers dropped from 18.8% to 11.5% between July 2025 and January 2026. A 39% contraction in six months.
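These figures are easy to sanity-check. A quick sketch — the $30/user/month list price is an assumption based on Microsoft's published Copilot pricing; every other input comes from the numbers above:

```python
# Sanity-checking the figures cited above.
# Assumption: Microsoft 365 Copilot list price of $30/user/month
# (Microsoft's published pricing); all other inputs come from the text.
seat_cost = 20_000 * 30 * 12                 # annual spend for a 20,000-seat org
penetration = 15_000_000 / 450_000_000       # paid Copilot seats / total M365 seats
contraction = (0.188 - 0.115) / 0.188        # subscriber-share drop, Jul 2025 -> Jan 2026

print(f"${seat_cost:,}")       # $7,200,000 -- "north of $7 million"
print(f"{penetration:.1%}")    # 3.3%
print(f"{contraction:.0%}")    # 39%
```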

When researchers gave workers access to multiple AI platforms and let them choose, 76% chose ChatGPT. 18% chose Gemini. 8% chose Copilot.

To be clear, this isn't a Microsoft-specific problem — I'm using Copilot as the example because the data is public and the scale is massive. The pattern is the same across enterprise AI platforms: organizations buy licenses, employees don't use them, and the gap between what IT procures and what workers actually need keeps widening.

The Shadow AI Problem

Employees aren't sitting idle. They're doing what everyone did with their iPhones in 2008 — finding something that works and hoping nobody notices.

49% of workers admit to using AI systems that IT hasn't sanctioned. The number is almost certainly higher, because "admit" is doing a lot of work in that sentence. 69% of C-suite executives say they know it's happening and they're fine with it, which tells you something about how seriously organizations are treating the governance gap they're creating.

One in three employees is feeding enterprise data — research, client information, competitive analysis, proprietary business logic — into systems that IT can't audit, can't scope, and can't shut down without knowing they exist. Three out of four CISOs have discovered unsanctioned AI running in their environments, which means one in four hasn't found it yet.

The financial exposure is measurable. Shadow AI added $670,000 to the average cost of a data breach in 2025, according to IBM's annual report. That's the cost of the breach itself, not the cost of the lost intellectual property, the competitive intelligence that walked out through a consumer API, or the compliance violation that nobody knew was happening.

Why This Keeps Happening

The temptation is to blame employees for being reckless. That's the same argument IT made about iPhones, and it was wrong then too.

Employees use unsanctioned AI for the same reason they used unsanctioned phones: the sanctioned option doesn't solve their actual problem. A copilot that can summarize a Teams meeting or draft a generic email is fine for administrative tasks. But the operations analyst trying to reconcile three carrier contracts with seasonal volume commitments, or the finance team trying to model the impact of tariff changes across a seventeen-country supply chain — those problems need something purpose-built. When the official tool can't do the job, people find one that can.

We're hearing this directly from the technical leaders we work with. A CTO at a mid-market logistics company told us his team evaluated Copilot for six months. The verdict: useful for drafting emails, useless for anything involving their actual operational data. His supply chain analysts needed to model carrier rate optimization across contracts with conditional volume commitments — the kind of problem where the constraints are buried in legal documents and the business rules live in the heads of people who've been managing those relationships for fifteen years. Copilot doesn't know those constraints exist. It can't access the data. And even if it could, a general-purpose model isn't designed to reason about that kind of domain-specific complexity.

We wrote about this in our first series. The build vs. buy decision flipped when the economics of custom development changed. 76% of enterprises were defaulting to vendor solutions because building was too slow and too expensive. That math reversed. The same logic applies to AI infrastructure itself.

Generic copilots solve generic problems. They're the 80% solution — good enough for the tasks that don't differentiate your business. But the 20% that matters, the operational complexity that's specific to how your company actually works, gets ignored. That's the 20% your employees are trying to solve with ChatGPT.

The Real Cost of the Wrong Response

Most enterprises respond to shadow AI the way IT responded to the iPhone: with policy. Block the consumer tools. Write an acceptable use policy. Threaten consequences.

We're seeing this play out with our clients right now. CISOs are discovering unsanctioned AI, writing reports, and recommending access controls. Some organizations are blocking ChatGPT at the firewall. Others are mandating that all AI usage go through the enterprise copilot.

These responses share the same flaw as every IT lockdown policy from the BYOD era: they treat the symptom rather than the cause. If you block ChatGPT and your enterprise copilot still can't solve the analyst's carrier contract problem, that analyst will find another way. They always do. The history of enterprise technology is a long series of IT departments discovering, after the fact, that employees already found a workaround.

The organizations that actually resolved the BYOD problem didn't do it by blocking devices forever. Apple resolved it for them — by building enterprise-grade security and tooling into a device people actually wanted to use, and relying on happy customers to force IT's hand. Apple didn't ask IT to lower the bar. Apple met it.

That's the same answer here, except the requirements are more demanding. A phone stores files. An AI system reasons about your data, generates outputs based on it, and — increasingly — takes actions autonomously. The governance architecture needs to match that reality.

What the Right Answer Looks Like

The organizations we're working with that are getting ahead of this aren't choosing between "lock everything down" and "let people use whatever they want." They're building AI infrastructure that's actually worth using — infrastructure that solves the real operational problems, the ones the generic copilot can't touch — with governance designed into the architecture from day one.

This means AI infrastructure that understands your specific environment: your data residency requirements, your compliance framework, your permission boundaries. Infrastructure where the code runs in sandboxed environments under your rules, where secrets management keeps credentials inside your perimeter, where every interaction is logged and auditable because that's how the system was built — not because someone remembered to turn on a setting.
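To make "logged and auditable because that's how the system was built" concrete, here is a minimal sketch of the idea: every model call passes through a gateway that redacts obvious credentials and records the interaction by construction. All names here (`GovernedGateway`, `fake_model`) are hypothetical illustrations, not a real product API:

```python
import hashlib
import re
import time

# Hypothetical sketch: a gateway that every AI call must pass through,
# so redaction and audit logging are structural, not optional settings.
SECRET_PATTERN = re.compile(r"(?i)(api[_-]?key|password|token)\s*[:=]\s*\S+")

class GovernedGateway:
    def __init__(self, model_fn):
        self.model_fn = model_fn   # the underlying model call
        self.audit_log = []        # in production: an append-only store

    def ask(self, user, prompt):
        # Strip obvious credentials before the prompt leaves the perimeter.
        clean = SECRET_PATTERN.sub("[REDACTED]", prompt)
        response = self.model_fn(clean)
        # Every interaction is recorded by construction, not by policy.
        self.audit_log.append({
            "ts": time.time(),
            "user": user,
            "prompt_sha256": hashlib.sha256(clean.encode()).hexdigest(),
            "response_chars": len(response),
        })
        return response

def fake_model(prompt):
    # Stand-in for a real model endpoint.
    return f"echo: {prompt}"

gw = GovernedGateway(fake_model)
out = gw.ask("analyst@example.com", "summarize Q3 rates. api_key=abc123")
print(out)                 # the credential never reaches the model
print(len(gw.audit_log))   # 1
```

The design choice matters more than the code: because the gateway owns the only path to the model, auditability isn't something a user can forget to enable — which is the difference between governance as architecture and governance as a setting.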

We're building this kind of infrastructure for our clients right now. The approach starts with understanding what problems your people are actually trying to solve — the problems that drove them to unsanctioned tools in the first place — and then building AI infrastructure that handles those specific problems with real governance baked in. When the sanctioned system is genuinely better than the unsanctioned alternative and it's secure by design, the shadow AI problem solves itself. Employees don't sneak tools into the workplace because they enjoy the risk. They do it because they need to get work done.

This is the lesson that Apple taught enterprise IT. You don't win by being more locked down than the alternative. You win by building what enterprise IT demands into something people actually want to use — and letting your best sales force, happy customers, do the rest. The same principle applies here: governance that's built into the architecture of purpose-built infrastructure doesn't restrict your people — it enables them to solve real problems without creating the exposure that generic, unsanctioned alternatives bring.

Purpose-built beats generic for the same reason it always has. The 80/20 problem we described in our first article — vendors solve 80% while you build workarounds for the critical 20% — applies to AI infrastructure too. Your employees don't need a better copilot for summarizing meetings. They need infrastructure that can handle the specific, messy, context-dependent problems that make your business run. And they need it built with the same rigor you'd apply to any production system that touches sensitive data.

The iPhone won because Apple built what IT demanded into something users loved — and let happy customers sell it up the chain. The same thing will happen with enterprise AI. The decision facing CTOs right now is whether you'll build that infrastructure intentionally — purpose-built for your operations, governed from the ground up, actually capable of solving the problems your people face every day — or wait until the shadow AI problem builds it for you, with none of the governance and all of the risk. Your employees have already made their choice. The question is what you do about it.

See how TechFabric builds purpose-built AI infrastructure or schedule a conversation with an engineer who deploys this way.

