Why AI Adoption Is a Culture Problem, Not a Tech Problem
There was a time when employees asked each other general-knowledge questions instead of consulting Google. It got so bad that some genius built a sarcastic site, "Let Me Google That For You," so you could poke fun at your colleagues for bugging you.
However, being behind the curve on Google was a minor offense. The AI revolution is moving much faster; I'm already hearing about jobs lost — not to AI productivity gains, but because the employee refused to adopt AI in their workflow.
We're at a very strange point in AI innovation. Generative AI is brand new. GPT-3.5, which really kicked off the first wave of excitement for LLMs, was released in 2022. The business world and the workforce are largely not taking advantage of the technology yet. Sure, large corporations occasionally blame AI for their layoffs, but that's largely a fig leaf to cover for poor financials.
Meanwhile, cutting edge startup founders are so immersed in AI, they shed little droplets of AI wherever they walk.
We're so early in the generative AI wave that the majority of otherwise qualified job applicants will have little-to-no practical AI experience. A large fraction of them may feel skeptical of, or even hostile toward, AI. After all, the CEO of one of the most prominent AI companies rattles on endlessly about the dangers of AI and how many jobs it will replace.
Sam Altman's flippant lack of empathy for workers — while worrying about hypothetical SkyNet scenarios — creates a headwind for any organization wanting to embrace AI.
The antidote is culture. When I say AI culture, what I really mean is a culture where learning and trying things is expected. You need an organization that is willing to expand its horizons and risk failure and frustration. And you need people who get excited about doing their job better.
A calcified, "by the books" attitude is not compatible with an organization looking to improve productivity and quality with AI. There are few playbooks written for AI, and none of them have collected dust. By the time they do, they will be just as out of date as your copy of Windows 95 for Dummies.
In other words, the folks who respond to every innovation attempt with a story from the Revolutionary War aren't going to help you adopt AI. "We tried that in the winter of 1779, and it didn't work!"
AI applications that didn't work well in 2025 are working quite well here in 2026!
How to Interview and Hire for AI Skills
Among the founders I've invested in and the CEOs attending my AI roundtables, I hear the same story over and over again: "I had to let them go because they couldn't work with AI."
That's an expensive lesson. You can avoid it by screening for AI skills and attitudes during the interview process. Here are a few samples of what I'd ask:
Attitude and curiosity. Ask candidates to describe a time they used AI to solve a problem. You're not looking for a specific tool — you're looking for someone who tried something, evaluated the result, and iterated. The candidates who light up when talking about it are the ones you want. The ones who shrug or pivot to a rehearsed answer about "staying current" are telling you something.
Practical skill. Give them a real task during the interview, preferably something relevant, and ask them to use AI tools to complete it. For instance, request research to inform a decision, a writing sample, a code review, or a data analysis. Watch how they prompt, how they evaluate output, and whether they blindly accept what the model gives them. The best candidates treat AI like a sharp but unreliable colleague: they verify, they push back, they ask for tests, they iterate.
Openness to change. Ask how they'd feel if the tools they use today were replaced in six months. AI moves fast enough that the specific tools will not matter in six months — what matters is whether someone can adapt without treating every change as a personal affront.
Training Employees on AI When No One Else Will
You can't expect your existing employees — or even your new hires — to have good AI skills. I'm not aware of any university teaching practical AI delegation skills. There aren't many informal classes on it, and the ones that exist are already out of date.
You need to take matters into your own hands. Here's what actually works:
Start with leadership. If your VP of Engineering can't articulate how AI fits into the development workflow, nobody on the engineering team will take it seriously. Train your senior leaders first. Don't make them suffer through a video lecture or PowerPoint; give them hands-on training from an expert. Ask them to use the tools on real work for a week and report back. Their fluency and comfort (or lack of it) sets the ceiling for the rest of the organization.
Pair people up. The fastest way to build AI skills is to sit next to someone who's already good at it. Pair your early adopters with the willing-but-uncertain — the interested and lost categories from the resistance spectrum later in this article. One hour of pairing on a real task beats a full day of training slides.
Make it about their work, not "AI." Nobody wants to attend a generic AI workshop. They want to know how AI can help them do in-depth research, write better proposals, craft clear copy, debug code faster, or analyze customer data in half the time. Tailor training to specific roles and specific tasks. The moment someone sees AI save them an hour on something tedious they do every week, the resistance evaporates.
Accept that training is ongoing. The tools change every few months. A training program that runs once and declares victory will quickly fall behind the curve. Build in regular touchpoints — weekly show-and-tells, a Slack channel for sharing wins, monthly office hours with your AI champions. Send your most advanced AI users to conferences and events to keep your organization up to date.
Setting an AI Strategy Without Losing Focus
Generative AI can do a lot. It isn't enough for a leader to say "Use AI! Be more productive!" It's up to you to tell people where the biggest opportunities for AI live. Unless you have very entrepreneurial employees, most will feel lost or irritated by vague attempts to become "AI First."
There are many ways to approach innovation in an organization, but all of them require protection from the rest of the organization. If you've read The Innovator's Dilemma, you know that the people (and budgets) doing strategic work are often sucked into ordinary day-to-day work, and for that reason never show results.
If you believe you or your senior leaders have the technical chops to devise your own AI strategy, then do it. Pick the areas of your business operations where you feel AI has the best chance of making an impact.
If you prefer a grass-roots approach to AI, then pick the people who are self-propelled and technically savvy, and let them find places to add AI to the business.
In either approach, the people implementing AI need a mandate to do so. Let them report to the CTO or CEO directly, give them their own budget, and let them sell their AI tooling to the people who would use it. It doesn't matter if the managers or directors or VPs want the AI tooling; they aren't the ones who will use it.
What matters is adoption, impact, and quality.
Overcoming Resistance to AI Adoption
Maybe you've been there before. There's a huge announcement, a "town hall," where the CEO gets up on stage and tells everyone about the future of work. "We're announcing a new north-star initiative today, something that will take the company into the future. From this point forward, we're AI First."
And then there's an hour of vague presentations cribbed from Fast Company, Forbes, and a few whitepapers from the big three consultancies. Nobody quite understands what it means. Will there be layoffs? What does AI in the shipping department even look like? Why does everyone on the engineering team look like they're sucking on lemons?
This is the "big bang" approach to introducing AI, or any new approach to work. There's a surprise announcement that sounds simultaneously vague and ambitious. The approach creates several problems:
- The people with the most "hands-on" experience in your business are taken by surprise, and may feel resentful
- The announcement is vague (because it isn't informed by practical realities), so many people assume the worst interpretation
- AI experts in your audience may feel either insulted, or like you don't know what you're talking about
In essence, this approach triggers a shockwave of potential resistance to your plan.
AI is a sensitive topic. Just as technology has always done, AI will raise the demand for some roles while destroying the demand for other roles. This leads to fear. And whenever you're approaching a sensitive topic, you need to start with respect.
Respect is a complex attitude. I've had executives respond to my concerns by expressing their confidence in my abilities — what they meant as reassurance came across as a dismissive attitude. Real respect means acknowledging someone's perspective and investing in addressing it, not dismissing it with flattery. People have legitimate concerns about the impact of AI on their roles; avoiding those concerns leads to big trouble.
Good employees are happy to do tough things as long as they feel respected and supported. Of course, not all employees are good — not when it comes to change or innovation.
Profiles of Resistance
People respond to new technology along a spectrum. Here's what I've observed across the organizations I've worked with:
- Excited — eager to try and learn something new; has a positive anticipation of the challenge
- Interested — willing to learn, but wanting more guidance
- Lost — uncertain where to start, or afraid of making a mistake or losing their identity
- Indifferent — refusing to acknowledge the change or participate in it
- Hostile — willing to sabotage innovative efforts
I'm already seeing this play out in AI. Some CEOs tell me they have early adopters so excited to use AI that the CEO worries they're not getting their day-to-day work done. At the other end of the spectrum, CEOs have told me that they have let employees go because those employees refuse to adapt to more productive AI workflows.
These problematic employees often aren't bad people. They feel like they have the best of intentions. A designer wants to create a pixel-perfect design guide. A developer wants to write code with craftsmanship instead of using an AI "shortcut."
Getting Through the Resistance
Getting past the resistance to AI will take work. Some leaders will find that they have had employees in the last two categories for years: people who barely pull their weight, or who actively pull in the opposite direction from your objectives.
First things first: malcontents will have to go. Anyone opposing the business's objectives provides cover for the indifferent employees, and their continued behavior signals either a lack of commitment to innovation, or the complete incompetence of leadership.
And both groups 4 and 5 will undermine the people with genuine excitement for helping the organization adopt AI.
People in group 4 may see the light after the malcontents of group 5 leave. As CEO, it's your call if you drop the indifferent as well.
If your organization has more than twenty people, you almost certainly have someone in category 4 or 5. You'll face some tough decisions.
The remaining employees in categories 1, 2, and 3 will need training, rapid feedback, and resources to help them adapt. Mostly, these people want confidence that they will still be valued once AI is adopted, clarity about what is expected of them, reassurance that those expectations are reasonable, and help getting there.
What Actually Works
Your three allies in overcoming resistance are curiosity, fun, and relevancy.
If you're like many leaders I know, you might hate the idea of fun at work. That's ok; not everything in business will make you comfortable. And no, we're not talking about "mandatory fun" — pizza parties in your poorly-lit break room. We're talking about actual amusement. The right application of curiosity, fun, and relevancy can pull employees in just like an old VHS of The Princess Bride can occupy a class full of middle schoolers for a couple of hours.
Meetings are a no-go. Meetings rarely induce curiosity or fun, and they're often irrelevant for half the attendees. What does work: small innovation workshops where participants collaborate on a shared goal, AMA events where employees can ask tough questions, and one-on-one conversations where you focus on listening. I covered specific training formats — pairing, role-specific exercises, ongoing touchpoints — in the Training section above.
Why AI Adoption Fails Without Real Investment
Another reason I have seen innovative projects fail is inadequate organizational support. AI has come a long way from where it was in 2022, but it's a mistake to think an organization can adopt it like a new version of Microsoft Word. AI is not an IT problem; it changes how everyone works. It will mean real work for everyone to learn and get to the payoff.
But you won't get that payoff without some effort. Employees sense the difference between a project that is performative and one that has real investment. You can see performative innovation lampooned in LinkedIn videos and Dilbert cartoons: initiatives where the sole investment is a milquetoast speech and a slogan.
One of the most well-supported technology rollouts I've seen was in the R&D phase of the first x86-64 CPUs. The successful commercialization and launch of K8 required a massive amount of internal and external software to be created. AMD invested heavily in tooling and training, to pave the way for the product launch. My team had multiple off-site training sessions customized to our plans. We expanded the team, bought expensive hardware, books, and so on.
Everywhere you looked was evidence that the organization had full confidence in the project. From the manager level up, all leaders were able to speak intelligently about the technology and our plans. There was no hint of hedging or a fall-back plan if we didn't deliver.
I'm sure you're familiar with examples of low-investment initiatives; they consist largely of meetings. If the initiative goes poorly, which is likely, the meetings vanish from the calendars without mention. Leadership's knowledge of the innovation is spotty at best, and leaders meet sharp questions or criticism with hostility rather than reflective answers.
Status meetings don't lead to results. There are no shortcuts to effort. If you invest less in AI training than your HR team does when explaining a benefits package, people will sense it. You need visible investment, and a plan that explains how you will get your organization from little-to-no AI knowledge and experience to being able to use AI as a productivity tool.
As I covered in the training section, start with your senior leadership team — you can't make good decisions or understand progress without grasping the basics of the technology. Budget plenty of dedicated time for your AI champions, too. Unless you have a tiny organization, AI isn't a one-day-a-week endeavor.
As your top talent in AI emerges, you will want those people spending most of their time finding ways to use AI to improve quality and productivity. In a larger organization, this means they must transition off their current responsibilities — hand off their projects and focus full-time on AI integration.
Measuring Whether AI Alignment Is Working
Among the CEOs I know, AI success looks like productivity. A project that might have taken a few months instead takes a few weeks. An SDR team that once produced hundreds of leads produces thousands of leads. A new bug is closed in hours rather than weeks.
From my vantage point, "time savings" and better margins are a side effect of AI, not the impact. The big companies that blame their layoffs on AI productivity? I think it's largely BS. The CEOs I'm talking to are moving faster, not leaner.
And it makes sense. When car manufacturers started using assembly lines, they didn't reduce the size of their factories. No, they did the opposite: they built more and more factories!
Why would you reduce the size of your organization if each person can do more work with higher margins? You wouldn't unless the organization was broken.
Besides, competition won't let you slow down. The only moat to AI adoption is twenty dollars a month and leadership skills. The leaders who get AI aligned with their organizational goals are going full throttle to overtake the market before their competition catches on.
So, what does AI alignment look like? It looks like you're able to hit your objectives much faster than before.
Quantifying AI Impact
Benchmark before you start and measure after. Give the integration enough time to show results — and make sure an improvement in one metric isn't canceled out by an opposing one.
If you're increasing sales with AI, monitor churn rates and support load too. You don't want an AI-powered sales team to get credit for growing the pipeline when all they've really done is sign up more poor-fit customers.
Pick holistic measurements whenever possible. If your engineers write more code but the same number of features ship because there's a QA bottleneck, you haven't made a meaningful improvement in business performance.
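One lightweight way to keep measurements honest is to record a baseline, then evaluate each headline gain against a paired guardrail metric. The sketch below is illustrative only — the metric names, numbers, and tolerance thresholds are my assumptions, not from any specific tool or the examples above:

```python
# Hypothetical sketch: judge each AI-driven gain against a paired
# "guardrail" metric so one number can't improve at another's expense.

BASELINE = {"leads_per_week": 250, "churn_rate": 0.030, "features_shipped": 4}
AFTER = {"leads_per_week": 900, "churn_rate": 0.055, "features_shipped": 4}

# Each headline metric is paired with a guardrail metric and the worst
# relative regression we're willing to tolerate in that guardrail.
PAIRS = [
    ("leads_per_week", "churn_rate", 0.10),    # pipeline growth vs. customer fit
    ("features_shipped", "churn_rate", 0.10),  # output vs. quality
]

def evaluate(baseline, after, pairs):
    """Return (metric, verdict) rows; a gain only counts if its
    guardrail metric regressed less than the allowed tolerance."""
    results = []
    for headline, guardrail, tolerance in pairs:
        gain = (after[headline] - baseline[headline]) / baseline[headline]
        regression = (after[guardrail] - baseline[guardrail]) / baseline[guardrail]
        if gain <= 0:
            verdict = "no improvement"
        elif regression > tolerance:
            verdict = f"gain canceled: {guardrail} regressed {regression:.0%}"
        else:
            verdict = f"real gain: +{gain:.0%}"
        results.append((headline, verdict))
    return results

for metric, verdict in evaluate(BASELINE, AFTER, PAIRS):
    print(f"{metric}: {verdict}")
```

With these made-up numbers, the lead growth is flagged because churn regressed far past tolerance, and the flat feature count is reported as no improvement — exactly the pipeline-vs-fit and output-vs-bottleneck traps described above.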
And the goal of adopting AI isn't AI itself. You want to create repeatable systems that run over months or years to handle valuable workstreams — not just ad-hoc prompting scattered across the organization. The best of these systems get better with time. If all of your AI usage is one-off prompts, you're not taking full advantage of what AI can do.
AI Governance: Protecting Your IP and Controlling Risk
AI adoption without governance is a liability. I've talked to CEOs who discovered — after the fact — that employees had been pasting customer data into free-tier AI tools for months. Those free tools train on your inputs. Essentially, you're publishing your IP in a textbook for training AI.
There are three areas where you need guardrails before you scale AI across your organization.
Approved Tools and Data Classification
Decide what's off-limits before your employees decide for you. Customer data, proprietary algorithms, unreleased product designs — none of that belongs in a free AI tool. Marketing copy drafts and boilerplate code? Probably fine. Draw the line explicitly, because your employees will draw it inconsistently if you don't.
Then pay for enterprise-tier AI accounts. Free and personal plans may use your data for model training by default. Business and enterprise plans — ChatGPT Enterprise, Claude Team/Enterprise, and others — contractually guarantee they won't. Don't assume a paid subscription protects you; read the data-use terms.
IP Ownership and Employment Agreements
Here's the gap most companies haven't noticed: work-made-for-hire doctrine doesn't cover AI-generated output. The D.C. Circuit confirmed in 2025 that AI can't be an author, employee, or party to a contract. If nobody human contributed enough creative input, the output has no copyright at all — it's public domain.
Your employment and contractor agreements need explicit language assigning rights in AI-assisted work to the company. And if you plan to copyright or patent anything, you need to document what a human actually contributed. For code, that means separate git commits for AI and human work. For written content, track which sections a human wrote or substantially edited.
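One lightweight way to make that documentation auditable is a commit-message trailer. The `AI-Assisted:` trailer name below is my own convention, not a git standard — this sketch just shows how a log tagged that way can be split into human and AI-assisted work:

```python
# Hypothetical sketch: classify commit messages by an "AI-Assisted: yes"
# trailer line (a made-up convention) so you can later report which
# commits a human actually authored.

def classify_commits(messages):
    """Split commit messages into (human, ai_assisted) lists based on
    whether an 'AI-Assisted: yes' trailer line is present."""
    human, ai_assisted = [], []
    for msg in messages:
        trailers = [line.strip().lower() for line in msg.splitlines()]
        if "ai-assisted: yes" in trailers:
            ai_assisted.append(msg)
        else:
            human.append(msg)
    return human, ai_assisted

# Example log; in practice you'd feed in real commit messages from git.
log = [
    "Add payment retry logic\n\nAI-Assisted: yes",
    "Rewrite retry backoff by hand after review",
    "Generate API client\n\nAI-Assisted: yes",
]

human, ai = classify_commits(log)
print(f"{len(human)} human commit(s), {len(ai)} AI-assisted commit(s)")
```

Recent versions of git can attach such trailers at commit time (e.g. via `git commit --trailer`), and whatever convention you pick matters less than applying it consistently so the provenance record holds up later.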
I cover this in depth — including trade secret strategy, the prior-use defense, and a five-step checklist — in Protect Your IP When Employees Use AI. For the case law behind these rules, see AI Work Made for Hire: Who Owns Employee AI-Generated Content?.
Quarterly Tool Audits
Software tools add AI features constantly. Your project management tool, your design tool, your CRM may all be feeding data to AI models now — and you might not know it. Have your CTO inventory every tool in the organization quarterly and check whether its AI features protect your data or expose it.
The Work Ahead
Aligning your organization to AI is a culture problem, a hiring problem, and a management problem — in that order. The technology is the easy part. Twenty dollars a month gets you access to the same models powering billion-dollar companies. The hard part is getting your people to actually use them, building the muscle to evaluate what AI produces, and creating an environment where experimentation is expected rather than tolerated.
Start with your leadership team. If they can't use the tools, they can't evaluate progress, set strategy, or earn the credibility to push the organization forward. Then hire for curiosity and adaptability over specific tool knowledge. Train relentlessly — and accept that the training never really ends, because the tools won't stop improving.
The organizations that get this right won't just be more productive. They'll be faster, more ambitious, and harder to compete with. The ones that don't will wonder why their competitors seem to have twice the team at half the cost.
For a practical framework on delegating work to AI and measuring results, see How to Adopt AI Internally. For the strategic case for operational AI over product AI, see The Real AI Advantage Isn't in Your Product.
