Should HR Drive AI Adoption?

TL;DR
There’s a specific meeting senior HR leaders are getting dragged into more and more right now. It starts as, “We’re behind on AI.” It ends as, “So why aren’t people using it, and what is HR doing about it?”
Episode 8 covers how HR leaders can drive AI adoption without causing widespread panic - what they’re dealing with behind closed doors as CEOs push for “AI adoption” without understanding what adoption actually means, how risk shows up, and who ends up holding the bag when something goes wrong.
The Situation
Daniel’s calendar invite landed while Kelly was still replaying the board member’s face in her head. That brief moment when a room decides whether you’re credible or not.
Subject: Concerns regarding lack of AI adoption
Attendees: All C-Suite
Message: “Kelly, good work today in the Board Meeting. But I have major concerns regarding how we are not using AI at all in the company. Let’s discuss it together.”
She read it twice and felt the familiar irritation rise, sharp and immediate. She was a huge proponent of being AI-forward but Daniel’s version of urgency usually meant someone else’s cleanup.
Maya saw the invite too and asked, “What does he want from you?”
Kelly said dryly, “He wants someone to make the company look modern by next Tuesday.”
Maya had a faint smile on her face as she said, “So he wants HR.”
Kelly didn’t answer. She was already mentally scrolling through the risks. Confidential information out in public. Bias. Employees using tools they don’t understand. Questions about layoffs. Managers panicking and turning “AI adoption” into another performance weapon.
The Spiral
Daniel started the meeting without any delay. “We’re not using AI,” he said. “Not meaningfully. And the board is asking questions. Everyone else is moving.”
Jessica, CRO, leaned back and said, “Everyone in Sales uses it already. They just don’t announce it in Slack.”
Parker, CTO, said nonchalantly, “Engineering uses it too. Probably more than Sales. We’re not posting screenshots of it.”
Daniel held up a hand. “That’s the problem. It’s hidden. It’s inconsistent. It’s not measurable. We need adoption.”
Kelly waited for the right time. If she spoke too early, she’d become the obstacle. If she spoke too late, the decision would already be made.
Daniel turned to her. “This touches people’s behavior. It touches change. It touches performance. You’re closest to that.”
Kelly kept her voice even. “I agree - HR should be driving adoption. But it’s not straightforward. If we push this without guardrails, you don’t get productivity. You get people pasting confidential information into public tools. You get managers using AI outputs to justify decisions they can’t explain. You get employees thinking they’re being monitored. I’m sure nobody wants that.”
Daniel stared at her. “So what, we do nothing?”
Lena, CFO, finally looked up. “We also can’t pretend it’s not happening. If it’s happening in the dark, it’s riskier. I’m sure we don’t want confidential information on ChatGPT.”
The room went quiet in that way it does when people realize they’re arguing about different things. Daniel wanted motion. Lena wanted defensibility. Parker wanted autonomy. Jessica wanted speed. And Kelly wanted everybody on the same page.
The Pivot
Kelly opened her notebook. “Let me ask you something,” she said, looking at Daniel. “What would make you believe we’re ‘using AI’?”
Daniel didn’t hesitate. “Faster output. Fewer headcount requests. More leverage.”
Kelly nodded. “So outcomes. Not vibes. Right?”
He frowned slightly. “Obviously.”
“Then we treat it like any other operational change,” she said. “We pick where it matters. We make it safe. We make it normal. And we stop pretending you can mandate curiosity.”
Daniel bristled. “I can mandate expectations.”
Kelly didn’t flinch. “You can. But if the expectation is vague, people will perform it. They’ll act busy. They’ll hide shortcuts. They’ll do the thing that looks like adoption and creates the most risk.”
Samir’s pen clicked once, satisfied.
Kelly laid out her points one by one:
- If you want real adoption, people first need to see how AI benefits them and that it isn’t a threat to their careers. The question on everyone’s mind is whether they’re going to be replaced by AI - we have to create a safe space if we want to drive real behavioral change.
- Next, we need approved tools. Secure environments. Clear rules. Otherwise people will just keep doing it quietly and you won’t know what’s happening until something leaks.
- Lena, we need a cost model. If we’re buying licenses, we do it intentionally.
- We need to recommend where they should be using AI.
- And before I say the last point, I have a question for all of you. How much are all of you using AI in your daily lives?
The room went quiet pretty quickly. Kelly was the first to speak. “I’m not finger-pointing, but we cannot expect company-wide adoption if we as leaders aren’t walking the talk.”
Daniel looked back at Kelly. “So what are you proposing?”
Kelly took a breath. “A short AI use policy, written in plain English. Approved tools. Clear do-not-use rules. Training for managers on what not to do with it. A handful of use cases by function. And a way to report mistakes without someone getting publicly humiliated for it.”
Daniel’s eyes narrowed. “Report mistakes?”
Kelly answered carefully. “Because people will make mistakes. They already are. If we create fear around it, they’ll hide incidents. If they hide incidents, we find out when it’s expensive.”
Lena nodded once. “That part matters.”
Daniel looked unconvinced. “This sounds slow.”
“It’s not slow,” Kelly said. “It’s controlled. If you want speed, this is how you get it without stepping on a legal landmine.”
The Reframe
Kelly stayed late that night writing a one-page plan - it had six parts.
- Approved tools and where they can be used.
- Examples of acceptable use by role, so nobody had to guess.
- What cannot be entered into any AI system, full stop.
- A manager standard: AI can assist work, it cannot replace judgment, and it cannot be used as the rationale for people decisions without evidence.
- Recommended champions from each department.
- AI Enablement - a weekly session to highlight wins.
The Meeting
The next afternoon, Daniel called her into his office. No calendar invite this time. That usually meant he wanted control.
He held up her one-pager. “We’re not a legacy company. We’re a product startup. I want people moving.”
Kelly sat down, slower than she needed to. She wanted him to see she wasn’t rushed.
“Do you want people moving,” she asked, “or do you want people rushing?”
Daniel’s expression hardened. “Don’t play semantics.”
“I’m not,” she said. “Because the fastest way to kill adoption is to turn it into performance theater. People will use AI to look productive, not to be productive. Then you’ll get garbage work at a higher speed.”
He stared at her. “So what would you do?”
Kelly didn’t raise her voice. “I would choose three areas where the company is bleeding time. Customer support knowledge search. Sales enablement drafts. Engineering documentation. We roll out approved tools there with real training and clear boundaries. We measure output. We learn. Then we expand.”
Daniel shook his head. “That’s too incremental.”
Kelly’s tone stayed calm. “Incremental is what keeps you from announcing a big AI push and then having to explain why someone pasted a customer contract into a chatbot.”
Daniel paused. “The board wants to hear we’re moving.”
Kelly met his eyes. “Then we tell them the truth. We’re moving in a way that won’t backfire. You can say that, or you can say nothing. But you don’t get to say we’re behind because HR is cautious.”
Daniel’s jaw tightened. “You’re making this adversarial.”
Kelly shook her head. “I’m making it real.”
The Aftermath
Maya found Kelly sitting at her desk with her laptop open and her hands just resting on the keyboard.
“Hey Maya, I need you to do something,” Kelly said. “Quietly. Ask HRBPs to listen this week. Not for adoption. For fear. If managers start treating AI usage like a litmus test, we’ll see it in the way people talk.”
Maya’s face tightened. “You think they’ll do that?”
Kelly didn’t answer immediately. “I think someone will.”
The Pattern
If leadership confuses urgency with clarity, there’s a high probability that your AI adoption plans won’t work. If you don’t define what good use looks like, people create their own version. And the version they create will optimize for safety and optics, not value.
Kelly Recommends
If you’re trying to push AI adoption inside a company, the checklist that Kelly shared is a great place to start! HR cannot be the cheerleader and the cleanup crew at the same time.
And if you are looking for the Performance Management platform that AI-forward companies swear by, you’ve got Klaar.
Dear Kelly: For the next edition, Kelly is collecting the real-world HR stories that deserve to be told - the messy, painfully familiar ones. Drop yours here. What should we cover?
