AI infrastructure is red hot. Agents. Cursor. Claude Code. Clawd. MCP. Skills. The air is thick with buzzwords, and I see founders spending days or even weeks setting up elaborate automations for their businesses. Often, those setups are obsolete within weeks, sometimes days.
Like Inbox Zero, keeping up with AI engineering is expensive. But there is another approach. OpenAI, Anthropic, Google, and others are burning money on AI research and infrastructure at an incredible rate. Don't try to compete with them. You can't. Instead, let that money work for you.
Here's my approach—and the philosophy behind it. (For a strategic perspective on why internal AI adoption matters more than AI product features, see The Real AI Advantage Isn't in Your Product.)
Delegate Aggressively, Measure Ruthlessly
The delegation I suggest starts with delegating to the top foundation model companies: OpenAI, Anthropic, and Google. I alternate between Claude and ChatGPT, and I pay for both (Claude Pro and ChatGPT Plus) to get access to the better models. As new capabilities are announced, try them for tasks that might be delegable. I use them for coding, getting feedback on my plans, creating marketing materials, doing research, and cleaning up talk transcripts.
Give work to AI wherever you can get away with it. Your job is to delegate as much as possible.
At the same time, rigorously measure your time and ROI. These aren't contradictory: experiment aggressively, then double down on what actually moves the needle. You want to move faster, not waste time on AI applications that require too much cleanup. When you find something that gives you time back, keep delegating those tasks.
The rule: if my skills and perspective don't have a unique-to-me impact on the work, and I don't have security or IP concerns, I'll let an AI try it. If the output is sub-par or too slow, I move on. Maybe it will do better next month.
Inspect Before You Trust
LLMs (large language models like Claude and ChatGPT) need guardrails and quality control. The key is that I always inspect the quality of the output when experimenting. Bad output gets thrown out. Good output earns less oversight over time.
It's important to consider risk and reward trade-offs before removing oversight completely—if you ever do. My goal is much like hiring employees: expanding my productivity while maintaining control of my unique contribution.
For instance, I let AI edit this essay. It re-ordered sections to improve flow, suggested better phrasings, and added em-dashes. That's an hour I spent on distribution instead.
Legal, IP, and Security
Your security and legal teams should review what data touches AI tools. The security, IP, and privacy implications of AI are complicated, at best.
You probably want to configure paid plans that control how your data is used (e.g., retention, training new models) and who has access to it.
If your IP is valuable to your business, you must review the current case law on the copyrightability and patentability of AI-generated code.
Finally, you'll want to consider the potential liability if AI makes mistakes that impact your customers or employees.
Don't Build AI Infrastructure
Here's where I differ from common advice: don't spend much time building AI infrastructure or using tools beyond what Anthropic and OpenAI provide with their paid plans. Yes, you can achieve amazing results by experimenting with the rapidly evolving state of the art in AI engineering. However, you're watering your lawn before a hurricane.
The foundational models and the scaffolding shipped with them improve at an incredible rate. The half-life of AI best practices is measured in months. If you build a fancy orchestration layer today, you might find the work obviated by a new Claude feature next week. Very few businesses should prioritize custom AI tooling over selling and building product.
On Twitter and other social media, the current trend is rhapsodizing about Clawd, an agentic system that just came out. It takes hours for an expert to set up, poses a number of security risks, and probably will be productized very soon. But for most founders, that's not the highest-leverage use of your time. Stay focused on the easy wins and trust that the makers of foundational models are spending millions of dollars a day improving their $20-a-month tools.
I want to build my business, not perfect AI infrastructure that will be outdated in a few weeks.
Infrastructure Exceptions
Sometimes it does make sense to invest in AI infrastructure. People can and do build valuable AI tools that keep working even as the underlying model becomes outdated. If all of the following are true, then invest the time to automate an AI workflow:
- You've demonstrated the reliability of the AI workflow in your business
- The AI workflow has a positive ROI in terms of time and money
- You won't regret the automation if much better AI models or tooling are released in a month
- The AI workflow will trigger often enough to justify the investment in setting it up and maintaining it
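The last two questions boil down to a break-even calculation: does the time the workflow saves, minus its upkeep, ever repay the setup cost? Here's a back-of-the-envelope sketch; the function name and all the numbers are hypothetical, not a prescription:

```python
def payback_weeks(setup_hours, maintenance_hours_per_week,
                  minutes_saved_per_run, runs_per_week):
    """Weeks until an automation's time savings cover its setup cost.

    Returns None if weekly savings never exceed weekly maintenance,
    i.e. the automation never pays for itself.
    """
    weekly_savings_hours = minutes_saved_per_run * runs_per_week / 60
    net_weekly = weekly_savings_hours - maintenance_hours_per_week
    if net_weekly <= 0:
        return None
    return setup_hours / net_weekly

# Hypothetical workflow: 8 hours to set up, 30 minutes of upkeep a week,
# saves 15 minutes per run, triggers 10 times a week.
print(payback_weeks(8, 0.5, 15, 10))  # 8 / (2.5 - 0.5) = 4.0 weeks
```

If the payback period is longer than your confidence that the workflow will survive the next round of model releases, skip the automation.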
What if AI is My Passion?
If AI is your passion or core competency—if building an AI-native organization is your strategic advantage—then go for it.
Just make sure you're not using it to procrastinate on sales calls, customer conversations, or marketing.
Deploying AI to Your Team
Expanding AI workflows to your team will take leadership and some tough conversations. Using AI at work isn't just an operational change; people will correctly see it as a threat to how work gets done in their chosen field.
CEOs have already told me about their engineers who decry the loss of craftsmanship, who resist AI coding tools. I expect to hear more.
The challenge is organizational, not technical. Getting a team to actually change how they work is a change management and communication problem. You'll need to model the behavior yourself, demonstrate empathy, celebrate early wins publicly, and give people permission to experiment without fear of looking foolish.
You'll also need to pay for tools. Some organizations give each employee a several-hundred-dollar monthly budget; others pay for enterprise accounts starting from around $25 a seat. Either approach works. Cultural factors and security requirements will influence your choice.
At some point, AI skills will be seen as a universal requirement, like being able to use a web browser. Until that day, AI delegation skills will be something you need to hire for. Unfortunately, as work expectations change (and they're changing rapidly), you may be faced with firing an employee who resists using AI.
The Bottom Line
Summing up:
- Pay for the best foundational AI models like Claude or ChatGPT
- Experiment and measure results
- Delegate as much as possible to AI
- Implement guardrails and quality controls on AI
- Don't over-invest in custom AI tooling
- Integrating AI with your team is an exercise in change management
The tools will keep getting better. Your job is to keep up and to make sure your team does too.