Moving Average Inc.

Close the Feedback Loop: AI Lessons From Roundtable Session 2

Why the founders getting the most from AI aren't better prompters — they're better at teaching their tools

John M. P. Knox

Founder

An engineering director built and deployed a prototype internal app to Azure—in 5-minute chunks between meetings. A solo founder has Claude Code improving his website positioning and messaging. I hosted the second AI roundtable this week, and the gap between these two wasn't talent or tools. It was how much they'd invested in teaching their AI how to work with them.

The theme that kept surfacing: the feedback loop is everything. The people pulling ahead aren't better prompters. They're the ones closing the loop—capturing corrections, building instructions, and making every session teach the next one.

What follows is shared under the Chatham House Rule.

Executive AI Roundtable

Conversations like the one behind this essay happen regularly. I host a closed-door roundtable for founders and leaders navigating AI strategy — no vendors, no pitches, just operators comparing notes.

Request an Invitation →

If I have to tell the agent twice, then I go put it in the rules. That works 99% of the time.

Building Apps Between Meetings

The engineering director showed up having just built and deployed a review app—start to finish—using Copilot in VS Code. The app turns a spreadsheet-based performance review process into a proper tool where you rate people across categories and levels. He built it in 5- and 10-minute chunks between meetings, poking his head into the chat to check progress and give the next instruction.

The most telling part wasn't the build. It was the deployment. When he told the agent to push it to Azure, it set up the infrastructure, deployed the code, then immediately started self-debugging. It hit a 500 error—no node modules. The agent checked the logs, identified the problem, copied the modules up, fixed it, then figured out how to do it programmatically going forward. The whole deploy took about an hour of intermittent attention.

His takeaway: the experience of watching the agent fail and recover is as valuable as the output. When his developers hit the same issues, he can say, "Yeah, I've been there"—and mean it. That credibility matters when you're leading an engineering org through AI adoption.

If You Have to Tell the Agent Twice, Put It in the Rules

One of the most practical patterns from the roundtable came from a principal engineer: "If I have to tell the agent twice, then I go put it in the rules. That works 99% of the time."

The engineering director had a similar approach with Copilot. His first instruction: every time he accepted a change, auto-commit with a one-line summary. Then he added global rules—everything needs to be infrastructure-as-code, start with local storage but plan for a database, and so on.

In Claude Code, the equivalent is the CLAUDE.md file—a persistent instruction set that loads every session. The solo founder is using Claude Code for product development and recently started using it for website positioning and messaging, too.
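For anyone who hasn't set one up yet, here's a minimal sketch of what such a file can look like, built from the rules mentioned above. The exact wording and section names are illustrative, not anyone's actual config:

```markdown
# CLAUDE.md — loaded at the start of every session

## Workflow rules
- After I accept a change, commit it with a one-line summary.
- All infrastructure must be defined as code; never configure cloud resources by hand.
- Start new features with local storage, but design the data layer so a database can be swapped in later.

## Corrections log
<!-- Anything I have to tell the agent twice gets promoted here. -->
- Use the project's logging helper instead of printing to the console.
```

The file reads like notes to a new teammate, which is the right mental model: every promoted correction is onboarding the agent once instead of re-explaining it every session.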

The engineering director's team took this further. They're modernizing a 20-year-old platform, and one of their architects built a full domain glossary—a dictionary of every service in their .NET solution, what it does, and the different names developers use for it. "Oh, we have our scheduler service; you might hear it referred to as scheduler, or this, or that," he explained. That glossary lives in their instructions so the agent speaks the team's language from the first prompt. They're also farming out well-defined bugs and vulnerabilities to agents, freeing senior developers to focus on the complex domain work that actually requires a human.
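A glossary like that doesn't need to be fancy. A sketch of the idea, as a table inside the instructions file (only the scheduler example came up at the roundtable; the second row is hypothetical):

```markdown
## Domain glossary
| Canonical name   | Also called           | What it does                          |
|------------------|-----------------------|---------------------------------------|
| SchedulerService | scheduler, job runner | Queues and dispatches recurring jobs  |
| BillingService   | invoicer, billing api | Generates invoices and payment events |
```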

The principle is the same regardless of the tool: invest in your instructions file. Every correction you make during a session that you don't capture is a correction you'll make again. The people getting the most out of AI coding tools aren't necessarily better prompters—they're better at closing the feedback loop.

There was one honest admission, though: even with good rules, sometimes the agent just won't listen. As that same engineer put it, there was still one night where he'd told it 20 times and it kept doing the wrong thing. Perfection isn't the goal. Fewer repeated mistakes is.

AI as a Thinking Partner, Not a Ghostwriter

Not all the AI use at the table was code. The engineering director uses AI heavily for management work—brainstorming how to frame a difficult message, checking tone before sending it to his team, building structured interview question sets that maintain a consistent style across hiring rounds. He even decided to have AI audit his entire interview flow and flag unintended signals after a candidate told him, "You asked a lot of questions about conflict. How is work?"

He hadn't realized how some questions landed until he saw them through a different lens.

But he was clear about a boundary: he still does the writing himself. AI suggests. He decides. "It mentally forms you as you write," he said. The act of composing a message forces you to think about what you actually mean. Skip that step, and you're just forwarding someone else's thoughts.

That boundary matters, because not everyone on his team shares it. A direct report and a peer were both throwing prompts into AI and forwarding the raw output—unedited walls of text. His reaction was blunt: "I can do that. What do I need you for then?"

The direct report improved after feedback. The peer—harder to address, because you can't give the same direct feedback up or sideways. A tactic from an online community worked well: when you receive an obviously AI-generated message, reply with "What are you expecting me to take away from this?" It forces the sender to actually think about what they sent. The director tried it. The response: "Oh, I didn't mean to send that to you." Which was revealing in its own way.

AI makes it trivially easy to produce text. That makes it more important than ever to actually think before you send. This connects to something I said at the first roundtable: the two most valuable skills in the AI era are reading comprehension and writing. The people who think clearly and communicate precisely will outperform those who just generate volume.

AI Writing Without Sounding Like a Slop-Bot

Buffer came up as a practical tool for social media automation. One participant had used it years ago for conference promotion—scheduling authentic, hand-written posts to hit at the right times. The posts worked because they were real. They were just scheduled.

I've taken this further by connecting Claude Code to Buffer's API through an MCP server. A slash command scans my website content, generates a social snippet in my voice, attaches preview images, and queues the post. The key investment: I had Claude analyze all my past writing and build a voice profile as a skill. It avoids AI tropes, matches my phrasing patterns, and mostly quotes from things I actually wrote.

The solo founder was interested in automating LinkedIn carousels—something that had worked well once but took too long to repeat. The trick he picked up: have the AI generate the carousel in HTML first, edit it manually, then export to PDF. Smarter than generating images directly, because you keep control over the layout.

The theme across all of this: automate the mechanics, keep the voice. Schedule posts, generate drafts, build templates. But review everything before it goes out. The moment your audience can tell it's AI-generated, you've lost the thing that makes social media worth doing in the first place.

Building an AI Executive Assistant

The solo founder raised a question that resonated with the whole table: what do you do about follow-ups? Specifically—chasing people who've ghosted you. It's the task every founder hates and nobody does consistently.

I shared my own approach: a Claude Code project that functions as an executive assistant. It pulls calendars through iCal feeds, monitors Gmail for sales outreach responses, manages a to-do list, and references a directory of markdown files—branding, personal info, quarterly goals, strategic priorities. I schedule morning and evening briefings through the Mac task scheduler (launchd), delivered via Pushover push notifications. The assistant checks email, flags responses in my sales pipeline, and tells me what needs attention. The main function, honestly, is accountability—it guilts me into doing the things I've been putting off. Like a human EA, but one that runs on a cron job.

For the solo founder's use case—he runs Zoho for CRM and email—the practical path is similar: connect to your email API, have the agent detect unanswered emails after a set period, and draft follow-ups in your voice. Not send them automatically—draft them. That lowers the activation energy enough to actually get the replies out. The engineering director pointed out that even Gmail has nudge features for this, but an AI agent can go further—it can detect the gap and draft something in your voice without you having to think about it.

The key for all of this: train the agent on your voice first. Otherwise, as the engineering director put it, the recipient is just going to think "this is an auto-generated message." Nobody wants to be on the receiving end of that.

AI assistants work best when they reduce the friction on tasks you already know you should be doing. Not when they replace your judgment about what matters.

If You're Ignoring SEO, Please Give It to AI

The solo founder admitted he hadn't paid much attention to Google Search Console. That's common—and expensive. Most founders don't think about their search presence until they realize they're invisible for the terms that matter most.

I shared how I connected Claude Code to Google Search Console and built an SEO auditor that pulls search data, identifies problems, and suggests fixes—bad titles, outdated sitemaps, content gaps. It keeps a log of experiments so it can track what's working over time and course-correct when something has the opposite effect.

If you rely on organic traffic at all, this is worth setting up. Even if outbound is your primary channel, prospects Google your category before they reply to cold emails. If they find your competitors and not you, you've lost credibility before the first conversation.

Another practical move: implement schema.org structured data on your site. There's a FAQ schema, an Article schema with author and date, and dozens of others. These give search engines and AI systems richer context about your content—Google might surface your exact FAQ answer for a relevant query. It's all JSON-LD that you embed in your pages, and Claude Code can add it to a Git-based site in minutes.
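For concreteness, this is what an FAQ entry looks like as JSON-LD; the question and answer text are made up for illustration:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is the executive AI roundtable?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A closed-door session where founders compare notes on AI strategy."
    }
  }]
}
</script>
```

The block is invisible to readers but machine-readable, which is exactly the audience it's for.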

Close the Feedback Loop

We didn't all use the same tools, and we didn't apply them the same way. But every success story at the table shared one thing: the compounding effect of closing the feedback loop between you and your AI.

Every participant who was getting real value had the same pattern: they don't just use AI—they teach it. Instructions files that grow over time. Logs that prevent repetition. Voice profiles that improve output quality. Session reviews that capture what was learned.

The CLAUDE.md Management plugin does this for Claude Code—after a session, it reviews what happened and proposes updates to your instructions. My social media skill keeps a post log so it doesn't repeat itself. My SEO auditor tracks experiments so it can measure results.

The people who treat AI as a tool that improves with use will pull further and further ahead of those who treat it as a magic box you throw prompts into. The gap compounds. Every session teaches the next one.

Resources From the Roundtable

  • Buffer — Social media scheduling with a generous free tier and API access for automation
  • Claude Code — Agentic coding; invest in CLAUDE.md to make it compound
  • CLAUDE.md Management Plugin — Auto-update your project context after every session
  • Google Search Console — Connect to Claude Code for automated SEO auditing
  • Schema.org — Structured data schemas for enhanced search results and AI discoverability
  • Pushover — Webhook-to-push-notification service for AI workflow alerts
  • Zoho — CRM and email suite with API access for automation
  • MCP (Model Context Protocol) — Connect Claude Code to external APIs and services

I work with a small number of founders on AI strategy—where it creates real leverage, and where it's a distraction. If you're ready to close the feedback loop, let's talk.
