A SaaS founder built an AI system that reads his calendar, pulls transcripts from every past meeting with a prospect, and sends him a daily briefing before his calls. After each call, it grades his performance, drafts a follow-up email, and posts everything to HubSpot. He replies "yes" and it sends.
A technical founder asked AI to build a mobile app following MVVM architecture. The AI agreed, generated the code, and got it wrong. When he asked the AI to review its own work, it admitted the architecture was sloppy.
I hosted the third AI roundtable this week, and the gap between these two stories is the gap most companies are stuck in right now: AI that automates brilliantly in one domain and produces unreliable work in another. What follows is shared under the Chatham House Rule.
Executive AI Roundtable
Conversations like the one behind this essay happen every week. I host a closed-door roundtable for founders and C-level leaders navigating AI strategy — no vendors, no pitches, just operators comparing notes.
Request an Invitation →
AI That Replaces the Sales Grind
The most impressive system at the table wasn't a product feature. It was a founder's internal sales workflow, stitched together with AI and a CRM.
Here's how it works. Every morning, the AI pulls his calendar and checks who's booked. For new prospects, it runs background research — company info, role, likely pain points. For follow-ups, it goes deeper: it pulls the transcript from the last conversation, identifies the prospect's specific concerns, and surfaces the threads worth picking back up.
After the call, the AI pings him with a call grade and a draft follow-up email. One reply sends it.
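The shape of that morning loop is simple enough to sketch. This is a hypothetical reconstruction, not his actual code: the calendar and CRM are plain dicts standing in for Google Calendar, HubSpot, and the LLM calls that do the real research and summarization.

```python
# Hypothetical sketch of the morning briefing loop. The calendar and
# CRM are plain dicts here; the real system wires these to Google
# Calendar, HubSpot, and an LLM for the research and transcript steps.

def briefing_for(meeting, crm):
    """Route a meeting to the right prep: research for new prospects,
    transcript review for follow-ups."""
    prospect = meeting["prospect"]
    past_calls = crm.get(prospect, [])
    if not past_calls:
        # New prospect: background research (company, role, pain points)
        return {"prospect": prospect, "type": "research"}
    # Follow-up: pull the last transcript and surface open threads
    last = past_calls[-1]
    return {"prospect": prospect, "type": "follow_up",
            "threads": last["open_threads"]}

def morning_briefings(todays_calendar, crm):
    return [briefing_for(m, crm) for m in todays_calendar]
```

The branching is the whole trick: new prospects get research, known prospects get continuity. Everything else is plumbing.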
The grading system itself was built through what he called meta-prompting. He starts with a clean chat to identify the right expert persona — a B2B SaaS sales expert, not a generic methodology. Then he uses a separate clean context to generate a prompt for that persona. Then he takes that prompt into a third context and has it build a rubric tailored to his ICP and what he's selling — not a generic MEDDIC or BANT framework, but something specific to the people he actually talks to. The whole thing took about fifteen minutes. "The quality of output, when you get to more experts, works," he said. The key was using clean contexts at each step so the AI's assumptions from one stage didn't contaminate the next.
The hardest part of building it? Not the AI — the Google OAuth flow. Getting calendar access to play nice took longer than building the actual intelligence layer. That tracks. The plumbing is usually harder than the AI.
Why does it work? In his words: "The sales motion itself is fundamentally broken because everything lives in 15 different systems." The AI isn't doing anything magical. It's connecting systems that should have been connected years ago — and doing the prep work that humans skip when they're busy.
This is the kind of internal AI deployment I keep pushing founders toward. Not AI as a product feature competing against OpenAI and Anthropic — AI as connective tissue between your existing tools, doing the tedious work nobody on your team does consistently.
When AI Writes Code, Check Its Architecture
A technical founder at the table had been using AI to build a Kotlin mobile app. "It can generate tons of code," he said, "but other people swear by it, so I'm very confused." The code generation was fast. The architecture was a mess.
He'd been specific — he wanted MVVM, a well-established pattern for mobile development. The AI gave him a plan, he approved it, and then it generated code that didn't follow the pattern it had just agreed to. "Things are not in certain places that they should be," he said. "This is kind of sloppy."
The telling moment: he asked the AI to review its own code. "This looks suspicious," he told it. "Is this following best practices when it comes to MVVM?" The AI's response: "You're right, this is not." It knew the pattern well enough to critique its own work — it just hadn't applied it when writing.
I see this frequently. AI code generation is fast and sloppy in the same way a junior developer is fast and sloppy. It produces volume without discipline. And like a junior developer, the output quality depends on the tools and guardrails around it.
The conversation shifted to tooling. A SaaS founder with engineers on staff weighed in: the built-in IDE AI assistants — IntelliJ's, for example — aren't good enough. "I don't think the IntelliJ one is great," he said. "It seems really expensive because they give you a certain amount of credits and you burn through those credits." He recommended agentic tools like Claude Code or Kilocode — anything that can read your full codebase and iterate autonomously.
The first fix for keeping generative AI in line: plan mode. Instead of letting the AI jump straight into code generation, you ask it to outline its architectural approach first. You review the plan, give feedback, and then let it execute. The plan is just markdown — cheap to produce, easy to course-correct. It's the difference between telling a contractor "build me a house" and reviewing the blueprints first.
I've seen the same pattern in my own work. When I give Claude Code an architectural task in planning mode, the output is measurably better — because the AI has to think about structure before it writes a single line. And when it starts drifting, you catch it in the plan, not after it's generated 500 lines of code you have to untangle.
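The gate itself is a small control-flow idea, sketched below. `generate` stands in for an agentic coding tool and `approve` for the human review step; the names and prompts are illustrative.

```python
# Sketch of the plan-then-execute gate behind "plan mode". The point
# is structural: the expensive code-generation step only runs after
# the cheap markdown plan has been signed off by a human.

def plan_then_execute(task, generate, approve, max_revisions=3):
    plan = generate(f"Outline an architectural plan (markdown only) for: {task}")
    for _ in range(max_revisions):
        if approve(plan):
            # Only now run the expensive step: implementation
            return generate(f"Implement exactly this approved plan:\n{plan}")
        plan = generate(f"Revise the plan per reviewer feedback:\n{plan}")
    raise RuntimeError("Plan never approved; no code generated")
```

Drift gets caught in the loop, where a revision costs a few hundred tokens of markdown rather than a refactor.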
Testing matters too. AI-generated code that passes your test suite is far more trustworthy than AI-generated code that merely looks right. If you don't have tests, your AI code is a liability. If you do, AI becomes a force multiplier.
The difference between the sales system that works and the code that doesn't comes down to the same thing: the feedback loop. You need to teach your AI, not through training, but by improving the context and tools available to it.
The founder who built the sales prep system invested in encoding his workflow as persistent instructions — rubrics, skills, rules that improve with each use. The technical founder who got sloppy MVVM output hadn't done that yet. I think of this as closing the feedback loop: if the AI gets something wrong, you capture the correction in a file it reads every session. In Claude Code, the CLAUDE.md Management plugin can automate this — after a session, it reviews what happened and proposes updates to your project context. Over time, the AI gets better at doing exactly what you want because it's all captured in your repository. The people getting the most from AI aren't better prompters. They're better at teaching.
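The mechanism behind that loop fits in a few lines. This is a simplified illustration, assuming a single context file; in Claude Code that file would be CLAUDE.md, and the helper names here are invented for the sketch.

```python
# Sketch of closing the feedback loop: a correction made once gets
# appended to a context file the AI reads at the start of every
# session. In Claude Code that file would be CLAUDE.md; the helper
# names here are illustrative, not a real plugin API.

from pathlib import Path

def capture_correction(context_file: Path, mistake: str, rule: str) -> None:
    """Turn a one-off correction into a persistent instruction."""
    entry = f"\n- When tempted to {mistake}: {rule}"
    context_file.write_text(context_file.read_text() + entry)

def session_context(context_file: Path) -> str:
    # Injected ahead of every session, so past corrections persist.
    return context_file.read_text()
```

The file is the memory. Each correction you capture is one the AI never needs to be told again.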
Your Free AI Tools Are Leaking Trade Secrets
The conversation turned serious when we got to data privacy. A technical founder mentioned that companies he's worked with now require explicit training — employees sign agreements not to use third-party AI tools with proprietary data.
The math is simple: if you paste proprietary code into a free AI tool that trains on user input, you've just destroyed your trade secret protection. AI-generated content can't be copyrighted. It can't be patented. The only IP protection most companies have for AI-assisted work is trade secret status — and trade secrets require that you actually kept the information secret.
Samsung learned this the hard way. On three separate occasions, employees pasted proprietary information into ChatGPT's free tier. A senior government official got caught doing something similar. These aren't edge cases anymore — they're a pattern.
The problem gets worse with tools you don't think of as AI. That CAD software your engineering team uses? It might have quietly added an AI feature that, by default, has permission to train on whatever your team feeds it. That's your product designs being ingested into a model that your competitors can query.
If you're not paying for the AI, you're almost certainly training the next model. Even paid tools require due diligence — you need to verify that training is turned off, that your data stays within your tenant, and that your terms of service actually protect you. One attendee pointed out that GitHub recently changed their data usage policies, and the implications for anyone who hadn't opted out were significant.
The enterprise-grade AI tools aren't just nicer versions of the free ones. They're the only versions where your trade secrets stay secret. As awareness grows, I expect more companies will accept the cost — because the alternative is losing the only IP protection that works for AI-generated output.
How protected is your company?
Take the 2-minute AI IP Risk Assessment to score your organization across four dimensions — IP protection, policy coverage, documentation readiness, and vendor risk.
Take the Assessment →
The Subsidy Window Is Open
Foundation model providers are burning cash to acquire users. One attendee shared that Claude's subscription gives you roughly 10x the value compared to equivalent API usage. OpenAI shut down Sora — their video generation product — because the costs were unsustainable even with Disney money on the table.
These companies are using AI to build AI, and they're moving faster than anyone trying to compete with them head-on. Don't build AI products that compete with foundation model providers. Use their subsidized tools to make your existing business faster, cheaper, and more capable. The sales prep system from this roundtable is a perfect example — it doesn't compete with Claude or GPT. It uses them as infrastructure.
This window won't stay open forever. API margins exist, but R&D and capital expenditure make the overall business unprofitable at current prices. When prices rise — and they will — the companies that already embedded AI into their workflows will have the advantage.
Relative to an employee, this stuff is incredibly inexpensive. A max-tier AI subscription costs less than a day of a junior developer's time. And unlike that developer, it doesn't sleep. I told the group I feel guilty when I go to bed without something running on Claude — like leaving a tap running.
The founder who built the sales prep system and the technical founder whose AI wrote sloppy MVVM code are both right. AI is transforming how work gets done — and it's producing unreliable work while it does it. The companies that figure out where to trust it, where to check it, and where to keep it away from their trade secrets will pull ahead. The ones still deciding whether to start will be paying full price to learn what their competitors figured out on the cheap.
Resources From the Roundtable
- Claude Code — Agentic coding tool with plan mode; discussed for code quality and website development
- Kilocode — Agentic coding extension for IDEs; recommended as an alternative to built-in IDE AI assistants
- HubSpot — CRM platform; used as the backend for the AI sales prep system
- IntelliJ IDEA — JetBrains IDE; discussed for its built-in AI assistant's limitations with Kotlin/MVVM
- Brave Search — Privacy-focused search engine; discussed for ephemeral AI chat that deletes conversations within 48 hours
- Sora — OpenAI's video generation product; shut down due to unsustainable compute costs