I keep meeting CEOs who think AI is going to let them fire their team and run a one-person company. They're wrong — and the way they're wrong is going to cost them their best people.
The world is already full of one-person companies. With a few exceptions, they aren't very big. They don't grow fast once they reach a certain revenue level, and they demand constant attention from that one person.
Maybe AI will change that. Maybe one day an agent or collection of agents will take over every role from the CFO to the janitor, and maybe that AI will be just as good as — or better than — its human counterparts. Not today.
Why Set-It-and-Forget-It Doesn't Work
AI has tradeoffs. The primary one is quality control. The more AI does for you, the more output it produces. The more output it produces, the more work is required to evaluate and correct its quality.
You might think you can use even more AI to evaluate the quality of these AI systems. It's a reasonable instinct, but putting it into practice becomes a real engineering problem. How do you define quality? How do you ensure that the evaluation of quality is independent of the system producing it?
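One way to make "define quality independently" concrete is to write the rubric down as its own artifact, separate from whatever system generates the output. The sketch below is a minimal, hypothetical example (the rubric values and banned phrases are mine, for illustration only), not a substitute for real evaluation infrastructure:

```python
# Illustrative sketch: quality defined as an explicit, versioned rubric
# that lives outside the producing system. "Quality" is then whatever
# the rubric says, not whatever the generating model happens to prefer.

RUBRIC = {
    "min_words": 50,               # enough substance to be useful
    "max_words": 400,              # no padding
    "banned_phrases": [            # hypothetical house-style tells
        "in today's fast-paced world",
        "it's not just",
    ],
}

def evaluate(text: str, rubric: dict = RUBRIC) -> list[str]:
    """Return a list of rubric violations; an empty list means it passes."""
    violations = []
    words = text.split()
    if len(words) < rubric["min_words"]:
        violations.append(f"too short: {len(words)} words")
    if len(words) > rubric["max_words"]:
        violations.append(f"too long: {len(words)} words")
    lowered = text.lower()
    for phrase in rubric["banned_phrases"]:
        if phrase in lowered:
            violations.append(f"banned phrase: {phrase!r}")
    return violations
```

The point of the design is independence: the producer can change models or prompts freely, while the evaluator answers only to the rubric.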
You might also assume Anthropic and OpenAI are working on this. They are — but because they're building a general-purpose tool, their definition of quality must differ from yours in significant ways. The model's quality will improve, but the results will still seem milquetoast. That's why everyone complains about "AI Slop." It's not that Claude and ChatGPT don't write well — those models are fantastic writers. The issue is the banality of the writing style, stemming from the ubiquity of AI-generated writing.
Without an investment in context and tooling, AI writing output is undifferentiated. The writing uses the same structures and rhetorical tricks over and over again. Notice how flat these next two sentences land: It's not that AI writing is bad. It's that the writing is boring. You've seen that structure everywhere because AI loves it.
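"Tooling" here can be almost embarrassingly simple. As a toy example, a few lines of Python can flag that exact contrast-pair construction in a draft (the pattern is mine and illustrative; a real style check would cover many more tells):

```python
import re

# Hypothetical example: flag one overused rhetorical structure,
# the "It's not that X. It's that Y." contrast pair.
CONTRAST_PAIR = re.compile(
    r"it'?s not that [^.]+\.\s+it'?s that [^.]+\.",
    re.IGNORECASE,
)

def flag_contrast_pairs(text: str) -> list[str]:
    """Return each matched 'It's not that X. It's that Y.' span."""
    return [m.group(0) for m in CONTRAST_PAIR.finditer(text)]
```

Running it over the two sentences above would flag them; running it over prose without the tic returns nothing.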
To deliver good business results, AI needs context and supervision to produce a competitive offering. Someone has to do that work — and it's more work than the demos make it look.
Consider McDonald's. The franchises don't hire thousands of high school students and set them to making Big Macs without training. That would result in a lot of food poisoning and very few recognizable Big Mac sandwiches. Even though it doesn't take a genius to make hamburgers, the operation still requires training and supervision.
You might think this suggests an equilibrium where AI could work without humans. Once you've established a system for making your "hamburgers" and monitoring quality, you might not need humans anymore. I've seen evidence of this in the real world among the CEOs at my weekly AI roundtable — a large initial investment in context and tooling can let AI agents take over the work for a stretch.
But your market isn't static, especially in the age of AI. Just as the menu and experience at McDonald's isn't frozen in time, your AI will need to be periodically monitored and adjusted to keep your business competitive. A set-it-and-forget-it AI agent can't run a sustainable business.
So: foundation models produce good output that's unoriginal without additional context and tooling. Quality and competition force ongoing human involvement. AI won't replace humans in your business.
The Promotion Hidden Inside AI Adoption
What AI does — used wisely — is magnify and accelerate what a team can do. It takes tedious tasks from humans, freeing them up for other work. To a large extent, that work represents a promotion. A coder no longer types much code; they supervise the execution of the coding work. A marketer might spend less time researching the competition or analyzing campaign results, and more time supervising both.
Each role becomes less about execution details and more about ensuring the output aligns with company goals and maintains the desired level of quality.
I saw a sharp version of this at a recent roundtable on managing humans and AI agents. One attendee — a software agency lead — said his designers now make pull requests directly in code. They've extracted their design system from Figma into the codebase and use AI tools to implement their own designs.
When execution gets that cheap, the bottleneck shifts to judgment, taste, and the quality of your instructions.
Who Won't Make the Transition
These promotions will be challenging for many employees. The majority of AI-related firings among the CEOs I know happened because the employee couldn't or wouldn't adapt. They didn't want to use AI tools, or they couldn't develop the skills to work with the new kinds of challenges that AI work creates.
But many CEOs see the opposite story, too. Some employees love the power and freedom AI gives them. Engineers who enjoy the act of building tend to enjoy it just as much when they're directing the build instead of typing it. Some designers are having a blast using AI to render the exact layout and interactions they want. Salespeople love having a private AI sales coach to analyze where they can improve their pitch.
I don't want this to sound utopian. Many people see the way they work as an honored craft. They like the process and the details, and AI will, in some fields, reduce or replace the aspects of work they have tied to their sense of worth. This isn't unique to the AI revolution — SaaS replaced many categories of carefully crafted desktop apps, and that transition wasn't gentle on the developers who loved building those apps.
Demand for two roles will grow sharply. Quality engineers — a discipline that's quietly fallen out of fashion in recent years — will have highly sought-after skills in the AI era. The pace of AI-enabled work and the complexity it introduces will make it much harder to maintain a balance among execution speed, correctness, uptime, scalability, and data protection. We're already seeing the consequences when nobody plays this role: the PocketOS production-database deletion is the kind of failure quality engineering exists to prevent.
Data scientists will face a similar demand. The signal that keeps a company moving at AI speed without flying off the rails has to come from somewhere.
Managers will have a different problem. Many aren't equipped to make the specialized judgments their workers do. The manager of a design team, for instance, may not have the recent hands-on experience to evaluate an AI's design output. Manager roles will get pinched between strategic leadership above and individual contributor work below — both of which AI is pushing into the expert/supervisor zone.
Where to Start
- Stop framing AI as headcount reduction. The CEOs running this play are losing their best people while the work still needs to be done.
- Identify the supervisory work AI just created. Every team that adopts AI gains a new layer: review, correction, alignment to business goals. Staff for it deliberately or live with the slop.
- Hire and train for quality engineering and data analysis. These roles fell out of fashion; AI restores their importance.
- Have an honest conversation with the people whose work AI is changing. Some will love the new shape of the role. Some won't. Both responses are valid — but you have to know which is which.
For workers who can adapt, AI may make their roles more valuable and more rewarding. The workers and managers who don't adapt will be left competing for a shrinking pool of jobs that don't yet require AI skills. It will be a tumultuous time, but also one with growing opportunities for entrepreneurs and people who enjoy the cutting edge.