Moving Average Inc.

Managing a Workforce of Humans and AI Agents

Job descriptions, onboarding, guardrails, feedback loops — it's the same discipline, applied to a different kind of worker

I hosted an in-person roundtable at MicroConf US 2026 in Portland this week, titled "Managing Cybernetic Organisms." The premise: hiring — whether AI, carbon-based, or a combination — too often leads to procrastination or micromanagement rather than productivity. I asked the table a simple question: Who here has a team right now?

The answers were all over the place. Some had teams of humans. Some had teams of agents. Most were trying to figure out the right mix. The theme that kept surfacing: managing AI agents requires the same organizational thinking as managing humans. Job descriptions, onboarding, guardrails, feedback loops — it's the same discipline, applied to a different kind of worker.

What follows is shared under the Chatham House Rule.

Executive AI Roundtable

Conversations like the one behind this essay happen regularly. I host a closed-door roundtable for founders and leaders navigating AI strategy — no vendors, no pitches, just operators comparing notes.

Request an Invitation →

Managing AI agents requires the same organizational thinking as managing humans.

The Hire-or-Automate Fork

One founder with a clinical background had just lost his marketing person. The question on his mind: hire a replacement, or try to automate marketing with AI? "Am I better off hiring somebody that's really good with AI, or myself with a clinical background trying to do it straight up with AI?"

Another founder faced the same fork. He'd considered hiring a developer, decided against it, and bet on doing the work himself with AI for the next six to twelve months. "I think we can do without him," he said. "I think we can maybe do it ourselves."

A third founder took a different path. He hired the developer — but restructured the role around AI. "I'm trying to make sure our system is robust enough that he can use AI really quickly, with tests running automatically, so the AI does as much of his work as possible."

A fourth offered his developer a raise instead of hiring a second person. "I said, hey, we can hire this person. Or I can pay you more money." The developer thought about it seriously, because the founder made it clear: the expectations would be higher.

Every role now has an AI layer. The only question is how thick that layer is. Some roles are 90% agent and 10% human review. Some are the reverse. But "hire a person" and "automate it" are no longer separate options — they're endpoints on the same slider.

When Delegation Goes Wrong

The most memorable story from the table: a founder building an AI-native marketing agency had his VA dive into Claude Code, eager to learn. Within two days, she'd accidentally deleted all the leads from one of his platforms.

"It was a wake-up call," he said. "These tools are extremely powerful. It's not something you can just kind of train up a junior person on."

The VA wasn't careless. She was smart, motivated, and had taught herself the tool. The problem was that the tool had full access to dangerous operations, and there were no guardrails to prevent a filter mistake from becoming a bulk delete. "She set the filters wrong, and I think it would have been caught by my QA agent if she knew to use it. But she didn't."

His solution: Paperclip, an open-source agent orchestration platform that lets you build a hierarchy of agents with fine-grained permissions — which agents can spin up subagents, which can perform destructive operations, and which are constrained to safe actions only. "I wanted guardrails for my human AND the agents," he said. "I can say, interact only with these agents — that's your scope."

I pushed back on one point: how often do you actually need to delete things? If it's once a week, maybe don't automate the dangerous operations at all. "Why spend time developing something that's really dangerous, and then somebody stumbles upon it and is like, what does this big red button do?"

The broader principle: build tools that can only do safe operations. Don't give an agent — or a new employee — the full API. Create a CLI or MCP that doesn't have a delete function. If the agent can't access it, it can't invoke it. One founder offered a different approach: "I want some agents to have more permissions than others — like fine-grained access for different employees."

If dangerous operations happen frequently, you may have no choice but to give your agents access to them; if so, consider whether you can build in an undo. If the operation only happens once a week, have the agent produce a CSV you can inspect, then feed it manually to a separate tool that performs the destructive step.
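The safe-tools principle can be sketched concretely. The code below is a minimal illustration, not anyone's actual stack: a hypothetical CRM wrapper (`SafeCrmTools`, `list_leads`, and the underlying client are all assumptions) that exposes reads and a reviewable CSV export, and simply has no delete method for an agent to invoke.

```python
import csv

class SafeCrmTools:
    """Tool surface handed to an agent: reads and exports only, no delete.
    The CRM client and its .list_leads() method are hypothetical."""

    def __init__(self, crm_client):
        self.crm = crm_client

    def list_leads(self, tag=None):
        """Safe read: return leads, optionally filtered by tag."""
        leads = self.crm.list_leads()
        return [l for l in leads if tag is None or tag in l.get("tags", [])]

    def export_deletion_candidates(self, tag, path="to_delete.csv"):
        """Instead of deleting, write a CSV a human can inspect and act on."""
        rows = self.list_leads(tag)
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "email"])
            writer.writeheader()
            for lead in rows:
                writer.writerow({"id": lead["id"], "email": lead["email"]})
        return path  # note: no delete method exists anywhere on this object
```

Because deletion never exists on the tool surface, a wrong filter can at worst produce a bad CSV, which a human reviews before anything destructive happens.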

This connects directly to what I've seen with shadow AI usage. When tools have unrestricted access, and employees adopt them without governance, the damage isn't hypothetical. It's leads deleted, data leaked, and costs spiraling. The fix is the same whether the worker is human or silicon: design the permissions and test the workflow before you hand over the keys.

Founder AI Addiction

A pattern came up that several founders recognized in themselves: spending so much time perfecting their AI setup that it becomes the product.

"I see a lot of founders with AI addiction," I observed. "They're spending so much time tweaking their AI thing as if that was their product. And it's not."

The table laughed because they all felt it. Someone quoted the classic programmer joke: "Why do something manually in five minutes when you can automate it in five hours?" That's the modern founder trap — building an increasingly elaborate AI apparatus when the business needs you to execute strategy, sales, and customer conversations.

Why do something manually in five minutes when you can automate it in five hours?

But the counterpoint was real too. One founder had reduced his campaign production time from twenty hours to thirty minutes using AI agents for the execution layer. "Better, actually," he said. "If I'm doing it manually, I'm inspecting the pivot tables or whatever, and it sucks. Now I check the QA output — thirty minutes, I'm done."

The difference between productive AI use and AI addiction: productive use makes you do the work you were procrastinating on. I shared my own example — I have a tool that pulls data from Google Search Console, makes a report, and surfaces what needs attention. Because it's so easy, I actually do SEO work every day instead of putting it off. The AI didn't replace the work. It reduced the friction enough that I stopped procrastinating.

AI as Sparring Partner

"Sparring partner" — multiple founders landed on the same concept independently at MicroConf. One described his writing process: "I'll write my ideas, and then I'm doing kind of a ping-pong where I throw it out, get criticism, and go back and forth." The creative work is still his. The AI just makes the iteration faster.

I shared some of my own sparring partners: a writing voice skill trained on my writing patterns, and a writing evaluation tool that catches structural issues. "John, you've got the weak middle again," it tells me — because I sometimes write chaotic middles where I lose the thread. Both tools function as feedback loops that compound over time.

Another founder described building evals for his marketing content. Everything passed the automated checks — but the output was still "complete garbage." The problem: "I didn't specifically tell it that the heading needs to make logical sense. That's my fault." He added a deterministic layer that checks for formatting issues — em dashes, structure — but realized the semantic quality still needed human eyes.

The eval is only as good as what you teach it to look for. Same lesson from deploying AI internally — someone human has to define what "good" looks like before any automation can enforce it.
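That deterministic layer can be sketched in a few lines. The specific rules below are illustrative assumptions, not the founder's actual eval: the check flags mechanical issues like em dashes or a missing heading, and deliberately says nothing about semantic quality, which still needs human eyes.

```python
import re

def deterministic_checks(draft: str) -> list[str]:
    """Formatting-level eval for markdown drafts.
    Catches mechanical issues only; semantic review happens elsewhere."""
    problems = []
    if "\u2014" in draft:  # em dash
        problems.append("contains em dashes")
    if not re.search(r"^#{1,2} ", draft, re.MULTILINE):
        problems.append("missing a top-level heading")
    for line in draft.splitlines():
        # shouty lines are usually a formatting accident
        if line and line == line.upper() and any(c.isalpha() for c in line):
            problems.append(f"all-caps line: {line[:40]!r}")
    return problems
```

A draft that passes this layer still goes to a human (or a semantic eval) before it ships; the deterministic pass just keeps obvious mechanical defects from wasting that reviewer's time.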

Onboarding Agents Like Humans

"I realize now when I onboard a new agent, it's like onboarding a human," one founder said. "I need to communicate strategy, vision, mission. I need to give it a clear job description — what am I expecting you to do? And I don't know how to write it perfectly." He estimated two to four weeks of inspecting output, correcting, and giving feedback before a new agent is reliable. "It feels like I have to do one-on-ones with my agents."

I realize now when I onboard a new agent, it's like onboarding a human.

The table agreed that this is where a lot of companies will need a new kind of role — someone who's good at building the tooling and context that makes AI effective. "The tooling and context, more than the model, is what makes AI powerful now," one said. "It's no longer about prompt engineering. It's context engineering — getting the right data, the right procedures, and the right tools so it produces good results."

One founder offered a theory of what humans will still do when execution becomes free. There are two interfaces that still need people. Human-to-human — sales, customer success, translating what the agents have produced back to another human. Human-to-AI — the strategic architect who designs how agents work with your tools, so they're efficient and deliver good results at speed. Both are indispensable. Both are fundamentally about judgment.

But who does that job? If someone is modifying your agent instructions or skill files, a mistake can impact your entire operation — and you won't know until the output goes sideways. "You need evals," one founder said. Another pushed further: "It's a recursive loop. You need someone who understands enough to transmit your vision across this cybernetic organization — so your humans know what you're thinking and can control the AI to do what you're thinking."

You need someone who understands enough to transmit your vision across this cybernetic organization.

The skill set hasn't changed — clear communication, good documentation, structured feedback. But the audience now includes machines that interpret your instructions literally and fail silently when those instructions are ambiguous. That's a challenge in a cybernetic org: transmitting your vision across a workforce where half the workers are embodied in fancy text files.

Hiring for a Cybernetic Workforce

Several hard-won hiring patterns surfaced toward the end of the session.

The unicorn trap. One founder runs an AI-native marketing agency and realized that the person he needed would require Claude Code proficiency, marketing skills, and founder-level judgment. "I'd be hiring me," he said. "Those people are hard to hire." Rob Walling's advice came up: "Don't try to hire unicorns." One founder had spent years looking for employees who thought like founders — people who could do marketing, sales, and strategy. Once he stopped looking for that, things got better. "People like that come to you. You can't find them through job postings."

Hire for slope, not intercept. The table preferred junior people with high learning velocity over senior people who are comfortable but unmotivated. One attendee from a software agency noticed this shift in their own industry: "The kind of profile I'm looking for — systems thinking, product building, broad perspective — that's more like a founder than an employee." The irony: their boss sent them to MicroConf specifically to retain them.

Screen for people who can't stop doing the work. I shared a story from my AMD days: the interview team would ask candidates about their hobbies. If someone said, "I built my own PC," they were probably a fit. "This is someone who just enjoys doing the kind of work we do here." Today, that proof looks different — a website, a small product, a half-finished side project on GitHub. You're looking for candidates who can't help doing the work — the ones who'd keep building even if you weren't paying them.

Design the job for the right level. Rob Walling's framework — task-level, project-level, and owner-level thinkers — resonated. One founder hires college students as BDRs and designs the role explicitly for task-level thinkers. "After a couple days, they're ramped up and ready to go, because I have very detailed SOPs." He'd made the mistake before of looking for owner-level thinkers for task-level roles. "They were unhappy because I didn't design the job around that."

Write job descriptions that don't suck. Don't post for "a junior developer" — that's meaningless. Be specific about what the person will actually do, who they'll learn from, and what they'll get out of it. "You'll be thrown into the deep end with AI tools, work on marketing, learn directly from the founder, own this thing, see the results of your own hard work." Same principle as writing agent instructions: vague inputs produce vague outputs, whether the worker is human or silicon.

Ask better reference questions. A founder shared advice he'd gotten the day before from a veteran MicroConf speaker: when calling references, don't ask "what do you think of this person?" Instead, ask two questions. First: "Construct the perfect environment for this person to thrive." You'll learn what they need to succeed. Second: "Construct this person's personal hell." They'll laugh, drop their guard, and then tell you the real weaknesses — without the corporate filter of "what's their biggest weakness?"

Screen for writing and reading skills. I made the case at the table: writing and reading matter more in the AI era, not less. If someone can't write clearly, how are they going to communicate with an AI agent? And if they can't read carefully, they won't catch it when the agent quietly deletes the database.

Someone brought up Joel Spolsky's old hiring principle: hire people with strong written communication because they make good programmers. That applies even more directly now. The people who write clearly will manage AI effectively. The people who read carefully will catch the mistakes. This connects to what came up in the feedback loop roundtable — reading comprehension and writing are the two most valuable skills in the AI era.

The Org Chart Has Changed

Every conversation at that table circled back to the same realization: the organizational skills that matter haven't changed. Clear communication. Structured onboarding. Defined permissions. Feedback loops. Evals (what we called QA, or verification and validation, before AI came along). These are the same disciplines that made human teams work well for decades.

What's changed is the workforce. One attendee at a software agency told us his designers now make pull requests directly in code — they have extracted their design systems from Figma into the codebase and are implementing their own designs using AI tools. "Designers making PRs is kind of mind-blowing," he said. When execution gets that cheap, everyone in the organization can build. The bottleneck shifts entirely to judgment, taste, and the quality of your instructions.

The founders who treat agent management as an organizational design problem — not a technical one — will build the most effective teams. The ones who think it's just an engineering problem will keep deleting their own leads.

Resources From the Roundtable

  • Paperclip — Open-source agent orchestration with hierarchical permissions, budget controls, and audit logs
  • Claude Code — Agentic coding tool used by multiple attendees for engineering, marketing, and operations
  • Context7 — Up-to-date, version-specific library documentation for AI coding tools; "anytime you need to make an API call, it does it almost perfectly because it just checks the docs"
  • MCP (Model Context Protocol) — Open standard for connecting AI agents to external tools and data sources
  • Who: The A Method for Hiring by Geoff Smart and Randy Street — Framework for structured hiring interviews, including bracketing candidates by what they love and hate doing
  • Who Not How by Dan Sullivan and Benjamin Hardy — Reframes delegation around finding the right person instead of figuring out the process yourself
John M. P. Knox

Founder of Moving Average Inc. 25 years across MedTech, enterprise platforms, and semiconductors — from writing 64-bit code at AMD to guiding 15+ products to market. TinySeed LP and mentor. Hosts the Executive AI Roundtable.
