Moving Average Inc.

Shadow AI: The Tools Your Team Uses Without You

When employees go rogue with AI and enterprise vendors add it without asking

The fourth session of my AI roundtable was a deeper conversation that went places the larger sessions hadn't. We spent an hour on the problems nobody talks about at AI conferences: the AI tools your employees are already using without permission, the enterprise software that's quietly training on your data, and what happens to code quality and costs when AI is everywhere and nobody's governing it.

What follows is shared under the Chatham House Rule — the insights are shared freely, the names are not.

Executive AI Roundtable

Conversations like the one behind this essay happen every week. I host a closed-door roundtable for founders and C-level leaders navigating AI strategy — no vendors, no pitches, just operators comparing notes.

Request an Invitation →


Shadow AI and the Desire Path Problem

One founder brought up something he'd seen on a systems administrator subreddit: a discussion about shadow AI usage inside companies. When organizations introduce governance on which AI tools are and aren't acceptable, they force people who use non-approved AI to use it covertly. "They're using it on their phones," he said, "putting their data into the phone to get what they want so that they can do their jobs more efficiently."

The subreddit's conclusion was that the first step is to find out who's using AI and how they're using it, so you can provide them with safe tools to be effective.

I agree. If you're a large enterprise deploying AI to your employees, that's a far better strategy than blanket bans. Find out what they're using and what they're using it for, then pay for it: negotiate the terms you need to make it safe and protect your IP.

There's an apocryphal story about Stanford building their campus with no sidewalks. For the first year, they just let people walk wherever they wanted. After that year, they looked at the grass, and wherever somebody had worn a path, they put a sidewalk. Compare that to a university that pre-builds the sidewalks — you end up with well-worn paths through the grass because people decided the sidewalk wasn't useful for them and went off-roading through the middle of the quad.

The same approach works for AI. Figure out where people have already worn those paths and just legitimize them as much as you can. Don't pre-build the sidewalks. Your employees are already showing you where the demand is.

For many companies, the off-the-shelf enterprise offering for Claude or ChatGPT is adequate. By default, they don't train their models on your data. There's a degree of built-in auditing capability. Just get ahead of it — pay for it, get the IP protections you need, and consolidate your AI tools later as you learn more about what's actually useful.

When Your Tools Add AI Without Asking

There's a weirder version of the shadow AI problem, and it doesn't come from your employees. It comes from your vendors.

I see that Atlassian just added AI to Jira Cloud without asking anybody. What is this thing doing? Is it looking at my tickets all the time? How does this work?

There are useful, low-risk opportunities for AI in Jira. AI-powered search — where you type what you're looking for in plain English and it translates to JQL behind the scenes — would be genuinely useful. Or detecting duplicate tickets as you're writing one: "Hey, this sounds an awful lot like ticket number 456 that was just created yesterday. Are you sure it's not a dupe?" That's a great application of AI.
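To make the search idea concrete, here's a hypothetical translation — the JQL below is illustrative, not Atlassian's actual output:

```
"open bugs assigned to me from the last two weeks"
    ↓
assignee = currentUser() AND type = Bug
  AND status != Done AND created >= -2w
```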

What's not a great application is taking that data and training Atlassian's models on your IP.

And it gets worse when you look beyond software. Some CAD tools have added AI features that train on user input. You're paying for an expensive CAD tool, and now they're potentially exfiltrating your proprietary designs. Somebody designs a manifold for a rocket and labels it "manifold" — that design plus label is used to train the model, and then a competitor says "I need a rocket manifold" and gets a starting point that looks suspiciously familiar.

As one attendee put it: "Is this actually making the product better, or is this just your way of adding AI so that you're keeping up with the Joneses when it comes to industry trends?"

I wrote about this in AI Tools and Trade Secrets — the fix is quarterly tool audits. Have your CTO inventory every tool in the organization and check whether its AI features protect your data or expose it. But you can't audit what you don't know about, which brings us back to the shadow AI problem.

How protected is your company?

Take the 2-minute AI IP Risk Assessment to score your organization across four dimensions — IP protection, policy coverage, documentation readiness, and vendor risk.

Take the Assessment →

Building Firewalls Between AI and Your Secrets

The conversation shifted to a practical problem: how do you give AI access to do useful things — like creating GitHub issues or searching your repositories — without handing it your credentials?

One attendee had been trying to build a Claude Code skill that would automatically open GitHub issues whenever he encountered recurring bugs. But it required a GitHub token, and he felt uncomfortable putting it where Claude could access it. "I don't know if I want that accessible with more and more AI tools being used on my laptop," he said.

The solution I recommended: build a small tool that acts as a firewall between your AI and the API. Have Claude build an MCP server or a command-line tool that does the specific task. The tool itself has access to the credentials. Claude never does — it just calls the tool's interface. You can verify that the software does only what it's supposed to do, and Claude can't go willy-nilly deleting repositories.

In your Claude Code settings, you can create a secrets directory or file and set permissions so Claude can't read or write to it. The tool has the credentials. Claude has the tool. You inspect the tool. That's your firewall.
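In practice, that can look like a deny rule in your Claude Code settings file. This is a sketch — the `./secrets` path is just an example, and you should check the current settings documentation for the exact rule syntax in your version:

```json
{
  "permissions": {
    "deny": [
      "Read(./secrets/**)",
      "Write(./secrets/**)"
    ]
  }
}
```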

I have tons of these little command-line tools that Claude developed for me — each one does a narrow task, holds the credentials it needs, and nothing else. It's the same principle as scoped API tokens: give the minimum access needed for the job and nothing more.
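Here's a minimal sketch of that pattern in Python. The token path and the payload-validation rules are assumptions for illustration; the endpoint is GitHub's documented `POST /repos/{owner}/{repo}/issues` REST API:

```python
#!/usr/bin/env python3
"""Credential firewall: the only operation this tool exposes is 'create issue'.

The AI agent invokes the script; only the script can read the token file.
"""
import argparse
import json
import pathlib
import sys
import urllib.request

# Token lives outside the agent's readable paths (example location).
TOKEN_PATH = pathlib.Path.home() / ".secrets" / "github-token"


def build_issue_payload(title: str, body: str) -> dict:
    """Validate and shape the request body; reject anything beyond a plain issue."""
    if not title.strip():
        raise ValueError("issue title must not be empty")
    return {"title": title.strip(), "body": body}


def create_issue(repo: str, title: str, body: str) -> None:
    token = TOKEN_PATH.read_text().strip()  # the agent never sees this value
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues",
        data=json.dumps(build_issue_payload(title, body)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["html_url"])  # echo the new issue URL


if __name__ == "__main__" and len(sys.argv) > 1:  # guard so importing doesn't trigger the CLI
    parser = argparse.ArgumentParser(description="Create a GitHub issue (and nothing else)")
    parser.add_argument("repo", help="owner/name, e.g. myorg/myrepo")
    parser.add_argument("title")
    parser.add_argument("--body", default="")
    args = parser.parse_args()
    create_issue(args.repo, args.title, args.body)
```

The agent calls the script's narrow interface; the worst it can do is open an issue. It can't read the token, and it can't delete a repository.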

GitHub also has scoped access for tokens, although — I'll be honest — some of those permissions are hard to understand. You read the permission description and think, "Where's the documentation for this, guys?" When in doubt, wrapping it in a purpose-built tool is the safer approach. The principle applies regardless of your AI toolchain: never give an AI agent direct access to credentials. Put a narrow, inspectable tool between the agent and the API.

The Subsidy Window Is Still Open

All of this governance work assumes you're paying for AI. And right now, that's remarkably cheap — but it won't stay that way.

The APIs are definitely being subsidized somehow. We know these companies aren't covering their R&D expenses. Are they covering the compute costs? Maybe. But they wouldn't keep taking VC money if they were covering all their costs. And once these companies go public, they won't have such easy access to cash injections that subsidize operations.

What this adds up to: now is a great time to build as fast as you can. If you have product-market fit, let AI rip inside your organization as much as possible. It's amazing what you can get for $200 a month from Anthropic or OpenAI — probably the equivalent of at least one employee.

This connects to what I've written about the strategic case for internal AI adoption. The subsidy window won't last forever. The organizations building AI muscle now — learning what works, building their feedback loops, training their teams — will have a compounding advantage over those who wait.

One interesting hedge we discussed: local models. Google recently announced new techniques for shrinking models to run on less sophisticated hardware. You can already download models from Hugging Face that run on an iPhone and are reasonably performant. In a few years, you'll have multiple specialized models running locally — no API calls, no data exposure, no connectivity needed. That's great for privacy and cost, but it's a big threat to OpenAI and Anthropic's business models.

AI Found a Market Gap. Is It Real?

One founder asked AI to map his competitive landscape. It came back with what he called "a huge open space" — a crowded market where nobody was targeting a specific audience. I redacted the details at his request because he's still pursuing it.

He is still verifying the market exists rather than relying purely on the AI's research. AI can scan a market faster than any human, but it will also confidently point you at a mirage. The founder's next step was to check whether real people actually want what AI says they're missing.

Keep Humans in the Loop

The conversation ended where many of my roundtable conversations end up: the limits of AI autonomy.

We discussed the recent headlines about AI replacing radiologists. AI in the diagnostic loop is great — it detects things humans miss. But the articles rarely talk about false positive rates. And as science evolves, you want humans to understand the science, not just the AI.

The same applies to software. If you have all your code written by AI with no humans in the loop, and you don't have access to AI because there's an outage — or the cost goes way up — you've completely outsourced your thinking. You may have been doing what humans have traditionally done for seventy years of computing: writing absolute garbage architecture that is completely unsustainable long-term.

I've walked into companies where the product was supposedly ready for market, only to find it was smoke and mirrors. Push buttons, things light up on the display, but the software wasn't actually doing what it was supposed to do. I've seen products that crash every five minutes being described as needing "a few little tweaks."

These problems existed before AI. The risk is that AI makes them easier to create at scale. You end up with a mountain of broken bricks, feature requests layered on feature requests, and when you need to change something substantial, good luck. People who think they'll just have AI completely rewrite everything are delusional.

Companies claiming to be 100% AI-built will encounter scaling or quality problems they can't handle because they lack the technical expertise to look beyond the prompts and see what's going wrong. Keeping humans in the loop isn't conservative — it's the only way to build something that lasts.

What to Do This Quarter

  1. Audit shadow AI usage. Find out who's using AI, which tools, and what data they're putting into them. Don't ban it — understand it first.
  2. Pay for enterprise-tier AI accounts. Claude Team/Enterprise or ChatGPT Enterprise. They don't train on your data by default. Get the IP protections you need.
  3. Inventory every tool for silent AI features. Have your CTO check whether Jira, your CRM, your design tools, or your CAD software have quietly added AI — and whether those features protect your data or expose it.
  4. Build scoped tools as credential firewalls. Don't give AI direct access to your APIs. Build narrow MCP servers or CLI tools that hold the credentials and expose only the operations you want.

Resources From the Roundtable

  • Claude Code — AI coding agent discussed for building MCP servers and credential management
  • RevenueCat — subscription management platform discussed for mobile app monetization (with API frustrations noted)
  • Hugging Face — open-source model hub; discussed as a source for local models that run on phones and laptops
  • Schema.org — structured data schemas discussed for optimizing websites for AI search
  • MCP (Model Context Protocol) — protocol for building tools that AI agents can use safely
John M. P. Knox

Founder of Moving Average Inc. 25 years across MedTech, enterprise platforms, and semiconductors — from writing 64-bit code at AMD to guiding 15+ products to market. TinySeed LP and mentor. Hosts the Executive AI Roundtable.

Get the next essay

I write about AI strategy, IP, and leadership. No spam, unsubscribe anytime.

