When I started my internship at AMD, I was given a cube with a desktop computer, a monitor, a phone, a stapler, some pens, and a notebook. I knew how to use this equipment without training. Generative AI is not like a stapler, even though it can speak and understand written English. ChatGPT or Claude might look similar to a messaging app, but your employees are unlikely to know what to do with them without training.
Even CEOs who have been following the development of generative AI have misconceptions about how it works. For instance, many people assume that the AI is learning right there on the spot. When they start a new session, they're frustrated to find that, aside from perhaps a few basic facts about them, the AI doesn't remember what it did during the last session.
Generative AI is a very new technology, and both the tools and the best practices for deploying it are still evolving. This is a tactical guide to deploying it within a business. It's not about the what or the why (for the strategic case, see The Real AI Advantage Isn't in Your Product); it's about the how.
Getting this right separates companies operating at significantly higher productivity from those concluding that generative AI is nothing but hype. The companies that do this well contact more sales prospects, write more code, reduce churn, and improve SEO. I've helped numerous founders find that value — and used AI extensively in my own work for research, SEO, and sales.
At $30 per employee per month, the ROI bar is low: save an hour a week and you've paid for it.
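To make that break-even claim concrete, here's the arithmetic as a short Python sketch. The $50/hour fully loaded employee cost is a hypothetical figure for illustration; substitute your own numbers.

```python
# Back-of-the-envelope ROI check for a $30/seat/month AI subscription.
# The $50/hour fully loaded cost is a hypothetical mid-range figure.
seat_cost_per_month = 30.0      # subscription price per employee
hourly_cost = 50.0              # fully loaded employee cost per hour
hours_saved_per_week = 1.0
weeks_per_month = 52 / 12       # about 4.33

monthly_value = hours_saved_per_week * weeks_per_month * hourly_cost
roi = (monthly_value - seat_cost_per_month) / seat_cost_per_month

print(f"Monthly value of time saved: ${monthly_value:.2f}")
print(f"Net ROI: {roi:.0%}")
# At a $50/hour loaded rate, one saved hour per week returns
# roughly seven times the seat cost.
```

Even at half that hourly rate, a single saved hour per week still covers the subscription several times over.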
Executive AI Roundtable
I host a weekly closed-door conversation for founders and C-level leaders navigating AI deployment. No vendors, no pitches — just operators comparing notes on what's actually working.
Request an Invitation →
Who Owns AI Deployment
Start with a pilot program, not an organization-wide rollout. AI technology is not one-size-fits-all, and there is no established playbook. At this point in the development of generative AI, incorporating it into a business process works like an investment — it requires research and experimentation before you start to receive dividends.
Here is who generally owns the various aspects of deploying AI:
- The CEO: drives AI adoption, selects the pilot program, and ensures coordination across the organization
- The head of IT: configures accounts, manages subscriptions, and enforces tool policies
- The head of technology: monitors the business impact of AI and adjusts the implementation
- The head of HR: owns AI training — compliance, general knowledge, and role-specific
- The head of legal: protects corporate IP that interacts with AI and manages other legal risks
The CEO must own this initiative. Generative AI doesn't fall cleanly into IT, R&D, or HR — it has multiple stakeholders and needs coordination from the top. Without that, nobody owns it, and nothing happens.
Beyond that, the CEO has two execution choices. The leadership-driven approach means someone on the senior leadership team takes direct responsibility for deploying AI, or hires a consultant to implement the plan. The champion-driven approach means selecting an internal AI advocate to own the rollout. If a champion has already emerged in your organization, run with it. Either approach works with adequate trust and support from senior leadership; without that, deploying AI is a tricky business.
Either way, the CEO's job is to reconcile corporate objectives with where to deploy AI first, get buy-in from the leadership team, and develop consensus on how results will be measured. Add AI where it matters most, but don't put a deadline at risk. Like any new technology, generative AI has a learning curve. You have to invest before the payoff.
Which AI Tools to Buy
As of April 2026, buy Claude. If you prefer OpenAI, buy ChatGPT. Skip the rest. Both tools are excellent value against the API pricing, and both can accelerate everything from sales and marketing to IT and engineering. Which plan you need depends on your size.
Claude Team starts at about $30/seat/month, ChatGPT Business is about $25/seat/month.
Small Businesses
If you're a small business and don't have HIPAA concerns, get Claude Team. Start with the cheap plan, and upgrade anyone who hits the limits.
If you prefer ChatGPT, get their Business plan.
The Workaround for the Five-Seat Minimum
Several of the startups I work with are small. This is often an advantage, but in the case of Claude Team, it's a problem: Anthropic requires you to buy a minimum of five seats.
There's a workaround: buy the five seats even if you only have two or three employees. One CEO I work with has used this solution successfully, and the extra accounts still come in handy when the team needs more tokens.
Warning: Don't do the opposite. I'm hearing stories about founders getting banned for sharing one account with multiple people.
Medium and Large Businesses
Medium and large businesses, and those that need HIPAA or other compliance features, will need the Claude Enterprise plan. While the base price looks similar, starting at about $25 per seat per month, these plans will cost significantly more because all AI usage is billed at API rates. That is, no usage is included with the monthly fee.
However, the enterprise plans do allow more fine-grained control of permissions, data retention, audit logs, compliance, and so on. They offer all of the enterprise functionality your IT and legal departments would expect, and like the team plans, by default your data won't be used to train the next models.
If your organization is doing a small trial, the team plan might be enough.
For more on protecting your IP when using AI, see Protect Your IP When Employees Use AI.
What to Avoid
You may notice that many of your existing providers have started offering AI tools inside or alongside their products. In some cases, these tools are silently collecting your data or "usage patterns" (whatever that means) to train their next-generation AI tools. You probably don't want that, at least not without thinking hard about what IP you're sharing with them.
In addition, some of these tools don't stack up against the top foundation models. I've talked to several CEOs who tried AI and concluded it doesn't work. In every case, they were using underpowered tools — a vendor's bolt-on AI feature or last year's free tier. The flagship models from Anthropic and OpenAI outperform most specialized AI tools. If your first experience with AI was disappointing, the model was probably the problem, not the technology.
The same goes for the free tools offered by even the top-tier AI providers. They're usually using your data for training, and they're often not giving you the best models. This isn't always apparent, because the same model can be offered at multiple capability tiers and in multiple modes of use. For instance, the same AI model with planning mode enabled often produces much better results than it does without planning, at a greater cost, of course.
If you're not paying for it with money, you're probably paying for lesser models with your confidential data.
AI Usage Policy
The basic AI policy you give your employees needs to protect your IP, your confidential data, and your customers' data. I suggest modifying this sample policy as the framework for HR and legal to build from:
- Only the approved Claude account with corporate login should be used to process internal data or IP, or to generate code
- No free or personal paid AI accounts should be used for business purposes, including AI incorporated into other software, unless they use the approved AI logins
- No business data or IP, including code, should be entered into search engines
- All AI outputs are to be treated as trade secrets covered by your employment or service agreements and NDAs
- Company AI tools are for business purposes only, and all usage may be logged, stored, and audited
- No AI tools may be directly involved with the invention of next-generation products except as approved in writing by the IP team
- No customer data may be entered into an AI chat or interactive tool. Customer data may be processed by production software — even software that was written with AI assistance — as long as that running software doesn't call an AI API
- All production code generated by AI must go through the full testing and validation process
- Employees using AI to generate work product are responsible for testing and reviewing the outputs
For a deeper dive into AI IP protection, see the five-step checklist and the IP ownership analysis.
How protected is your company?
Take the 2-minute AI IP Risk Assessment to score your organization across four dimensions — IP protection, policy coverage, documentation readiness, and vendor risk.
Take the Assessment →Training Your Team
Training is part of good governance and part of ensuring your AI investment pays off. Without training, you give your employees a license to carry on with business as usual. Good training leaves them knowing what you want and how to do that work with AI.
Remember: there are few skills that don't require training. We take for granted that we can drive cars or operate computers. We didn't arrive in the universe knowing how to do that. We were trained informally or formally, in some cases so long ago we don't remember. And those are mature technologies. Generative AI is so new it's still evolving.
There are three kinds of training you'll want for AI: compliance and legal, general knowledge, and role-specific. Each requires a different approach, format, and time investment.
Compliance and legal training covers your AI policy: what you can and cannot do with AI tools. Your employees and contractors need to know the rules, the reasoning, and who to ask when they have questions. This is the simplest layer — a 30-minute session during onboarding, backed by a written policy document. A wiki page or a PDF may suffice, depending on how your HR department handles these tasks.
General knowledge training teaches people how AI actually works and how to use it effectively. If your pilot program targets eager engineers, you might use the free training provided by your AI vendor, like Anthropic, or send them to an AI Engineering conference. For a broader audience, you should seek a vendor or consultant who can provide tailored training. Start with a 2-hour hands-on session where participants use AI on a real task from their role — not slides, not a demo, actual work. If your AI champion is an exceptional communicator, they might deliver this training themselves.
Role-specific training takes the most work and has the highest payoff. If you want to integrate AI into your sales team, there is no one-size-fits-all approach. Every organization uses different tools and different workflows. You will have to research and build the AI workflows in collaboration with the team that will use them. This is a job for experts, and training must be hands-on — one hour of pairing on a real task beats a full day of training slides.
There's empirical evidence that training matters more than access. MIT Sloan ran a field experiment with 250 employees at a technology consulting firm, randomly assigning them to use ChatGPT or not. The study measured creativity — as rated by supervisors and external evaluators. ChatGPT users were rated as more creative, but only if they had strong metacognitive skills: planning, self-monitoring, and revising their approach. Without those skills, AI had little to no effect. As the lead researcher put it: "Generative AI isn't a plug-and-play solution for creativity. Employees must know how to engage with AI — to drive the tool rather than letting it drive them." The study is about creativity specifically, not productivity in general — but the pattern matches what I see across every AI deployment: skill beats subscription. And the good news is that those skills are teachable. That's exactly what training is for.
Role-specific training bridges the gap between theory and implementation, and I've seen it work on a variety of technologies. For example, in the early 2000s, I participated in two weeks of Linux training. The first week focused on general Linux skills, where we learned the fundamentals of the technology and how to use it as a typical user. The second week of training was specific to our team; the trainer spent time learning what we did and developed a specialized curriculum to give us hands-on experience writing and debugging low-level kernel drivers.
On the Monday following the training, we immediately put that role-specific training to use. There was no theory-to-implementation chasm to cross because we had already experimented with the technology and asked the trainer questions. Those two weeks probably saved us a quarter or more of the time spent stumbling around with a new technology.
I see this in AI too. One team trained their SDRs to monitor an AI agent that drafts, personalizes, and handles replies for the first touch of cold outreach. The humans shifted to handling complex replies and catching the AI's blind spots.
Your AI champion might handle role-specific training, but for larger organizations in particular, you should consider hiring an AI consultant to do this work.
The 12-Week AI Pilot
Week 0
Your IT team buys your AI subscriptions and configures accounts for the pilot participants. HR delivers your AI policy training.
Your expectation: pilot participants take stock of the generative AI tools they're already using, and prepare to switch to the approved tools or request approval for existing tool usage.
Week 1
General knowledge training with hands-on experimentation. Your pilot participants gain access to your AI subscriptions (e.g., Claude) as they take your general-knowledge AI training.
Your expectation: pilot participants understand the abilities and limitations of their AI tools, and feel confident using AI in the workplace.
If it's not working: speak with each participant on a 1-on-1 basis to determine where the problem lies. Often, some employees have objections to using AI that go unaddressed. See The AI People Problem for the resistance spectrum and how to handle each profile.
Week 2
Begin experimenting with role-specific AI implementations alongside hands-on role-specific training.
Your expectation: participants have hands-on experience with the generative AI tools they will use in their roles. They feel confident and ready to return to their normal work and use AI on a day-to-day basis.
If it's not working: meet with the trainers and evaluate whether the training is at an appropriate level for the pilot employees. Is there an assumption about their work that didn't hold? If there are no obvious answers there, speak individually with each participant and try to understand why they don't feel confident using AI.
Week 3 - 4
By the third week, your pilot team should have identified at least two workflows that can be accelerated with AI. This is when you begin testing these new workflows and recording the results. Your AI champion or consultant should be available to work with the pilot team, checking in individually as needed and several times as a team.
Your expectation: all participants begin using AI alongside their work. For instance, software developers might give coding tasks to Claude Code while closely monitoring the output, revising the code, and improving how Claude Code is configured in their repository.
If it's not working: call a team meeting to identify where the problem lies. Is the problem in the tooling? The training? An unexpected issue integrating AI into real workflows? In a group setting it should be possible to identify possible barriers and solutions.
Week 5 - 8
This is the measurement phase. Your pilot team has been using AI for a month — long enough to get past the novelty and start developing real habits. Now you need to find out if those habits are producing results.
There are two traps to watch for. One trap is assuming that results are immediate and that if AI usage doesn't increase productivity, it won't work at all. The other trap is fooling yourself that AI is helping when it only feels like it. You need actual measurements, not vibes.
What to measure depends on the workflows you targeted, but look for:
- Time savings: Are tasks that used to take hours now taking minutes? Have your pilot participants track a few representative tasks before and after AI.
- Output quality: Is the work better, worse, or the same? Have someone outside the pilot team evaluate output quality blind.
- Usage patterns: Are people actually using the tools daily, or did they try it for a week and stop? Login frequency tells you more than any survey.
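As an illustration of the before-and-after tracking in the first bullet, here's a minimal Python sketch. The task names and timings are invented examples; a real pilot would pull these from the participants' own tracked tasks.

```python
# Summarize before/after timings for representative tasks in a pilot.
# All task names and durations below are hypothetical examples.
timings_minutes = {
    # task: (minutes before AI, minutes with AI)
    "draft outreach email": (25, 8),
    "summarize support ticket": (15, 4),
    "write unit tests": (60, 30),
}

for task, (before, after) in timings_minutes.items():
    saved = before - after
    print(f"{task}: {before} -> {after} min ({saved / before:.0%} saved)")

total_before = sum(b for b, _ in timings_minutes.values())
total_after = sum(a for _, a in timings_minutes.values())
print(f"Overall time saved: {(total_before - total_after) / total_before:.0%}")
```

Even a spreadsheet works fine here; the point is to record a handful of representative tasks before the pilot starts so the comparison isn't reconstructed from memory.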
For example, if you're using AI to decrease churn, how does it compare to your previous interventions? Have your churn rates gone down or stayed steady? Does AI-powered churn automation have a better or worse return on investment?
If you're using AI to write software, are you shipping features faster? How are customer reviews trending? What about support costs? The results should be obvious. One CEO I mentor now ships projects in a few weeks that once took months — but obvious-feeling isn't the same as measured. Track cycle time, support ticket volume, and defect rates so you can tell the difference between "it feels fast" and "it is fast."
It's important, as you measure the results, to consider downstream effects as well. Sometimes, improving business processes in one area shifts the stress to another. For instance, improving sales often results in increased churn if the sales promise isn't fulfilled by the product.
Your expectation: the pilot team has found several areas where generative AI produces positive results for the company, and the use of AI has become a habit.
If it's not working: first examine if the pilot study results are uneven. That is, are some pilot employees using AI effectively while others are not? If so, there may be opportunities for a knowledge share. It is also possible that some employees are resisting AI. If the results are universally poor, you will need to determine if this is a people issue, a training issue, or a technology issue. This should start with 1-on-1 conversations with the pilot employees and a consultation with your AI experts.
Week 9 - 12
By week 9, your AI champion or consultant should adjust the implementation based on what you've learned. It is highly unlikely that your first approach to AI will work perfectly. Even if all AI workflows seem successful, there is likely room for improvement. Double down on what's working and either fix or abandon what isn't.
This is also when you start building the case for expansion. Document the wins — specific numbers, specific workflows, specific people. You'll need these when you pitch the next department.
Your expectation: your team has produced well-documented workflows incorporating AI. Those workflows should have had a measurable positive impact on business metrics.
If it's not working: revisit the pilot's history. Did a change in your AI implementation make something worse? Have expectations shifted from the start of the project? Has the added productivity in one business function created stress in another function? Finally, examine whether AI has negatively impacted your employees' sense of purpose or self-worth.
Expanding Beyond the Pilot
As your pilot program gains traction and, most importantly, measurable success, you should start marketing its outcomes internally. The more people learn about the improvements AI brings to the business, the easier it will be to recruit AI champions across the organization.
Because AI requires an upfront investment, you want demand in place before your next rollout. You're not going to be able to force AI on a department; it's just too difficult to integrate into a workflow without insider assistance.
The good news is that, in my experience, when people see the impact of AI, ambitious people naturally want to start using it as well. If you pave the path to AI adoption and share what early adopters find along it, others will start walking it too. At some point, AI will reach critical mass and become inevitable across the organization, as long as the organization provides the tools, the training, and some help from experts.
Common AI Deployment Mistakes
Failure modes fall into a few categories: rollout missteps, people problems, IP leaks, and the twin traps of over- and under-automating.
The Big Bang Announcement
"From this point forward, we're AI First." The CEO gets on stage. An hour of vague presentations. Nobody quite understands what it means. Will there be layoffs? The engineers look like they're sucking on lemons. The AI experts in the room feel insulted. Everyone else assumes the worst.
This is how most AI rollouts stall before they start. A surprise announcement that sounds simultaneously vague and ambitious. Deploy to a pilot group first, build internal evidence, and let results do the selling.
AI Resistance
Even in very small organizations, you will find good employees who refuse to use AI. This may stem from concerns about AI taking jobs, ethical concerns about AI in society, or a belief in handcrafted work products. People respond along a spectrum — excited, interested, lost, indifferent, hostile — and the hostile ones provide cover for the indifferent.
What works: train leadership first, then pair early adopters with the willing-but-uncertain. One hour of pairing on a real task beats a full day of training slides. I've written about the resistance spectrum and how to handle each profile in The AI People Problem.
IP Leaks via Free Tools
Your employees are already using AI — they're just using it on their phones, on personal accounts, with free tools that train on their input. Samsung employees pasted proprietary information into ChatGPT's free tier on three separate occasions. This isn't an edge case; it's a pattern. Your AI policy and enterprise-tier accounts exist to prevent this. I covered this in depth in Shadow AI: The Tools Your Team Uses Without You — including the vendor-side problem, where your existing software quietly adds AI features that train on your data. See Protect Your IP When Employees Use AI for the full IP checklist.
Death by Slides
PowerPoint and Apple Keynote are terrible mediums for AI training. Never read slides aloud as training, and never assume that handing your employees a deck to read on their own counts either. It doesn't.
Your employees need guided, hands-on experience. They need labs, not conceptual information. If you wouldn't train someone to drive a car by showing them slides about steering wheels, don't train them to use AI by showing them slides about prompting.
AI Addiction
I've watched CEOs disappear into their AI tools. They're not sleeping. They're not delegating. They're not focused on revenue. They're pair-programming with Claude at 2 AM instead of waiting a few hours for their employees to take over.
The pull is real — it's genuinely satisfying to accomplish so much, so quickly. But a CEO's job is to connect activities to financial outcomes, and these AI addicts aren't posting great revenue numbers. The work is performative.
Operating AI isn't delegation — it's work that should be delegated. "AI Engineer" is going to be a real job title within companies that are getting the most out of AI. Your job as CEO isn't to run the prompts.
Runaway AI
On the other end of the spectrum, I've seen CEOs trying to automate too much. Their product is a pile of AI-generated garbage code. Their sales and marketing are run by an AI agent that is slowly going off the rails unnoticed.
People dream of massive one-person businesses, but those will be quite rare. As businesses scale, more things go wrong. That's simply the nature of most businesses, and that is why a successful AI-driven business will almost certainly hire employees even if their job consists of making sure their part of the AI system hasn't gone off the rails.
AI Deployment Checklist
If you take nothing else from this guide:
- Buy enterprise-tier AI. Claude Team or Enterprise, ChatGPT Business or Enterprise. Skip the free tier. Get the IP protections you need.
- Publish your AI policy and train everyone on it. 30 minutes is enough for the basics.
- Run a 12-week pilot with 5-10 people. Weeks 0-2 for tools and training, weeks 3-4 for workflow testing, weeks 5-8 for measurement, weeks 9-12 for adjustment.
- Measure everything. Time savings, output quality, usage patterns. Documented wins are your ammunition when you pitch the next department.
- Train before you scale. In MIT Sloan's field experiment, employees who had access to ChatGPT but lacked metacognitive skills saw little to no gains in creativity. Tool access without skill wastes the subscription.
Need Help?
If you'd like help deploying generative AI in your organization, drop me a line.