A manager at a 500+ person utility company told me about her last AI rollout. Thirty-five employees went through a formal Copilot training. Real syllabus, real exercises, real certificates at the end. Then she sent a follow-up survey to see what stuck. 30 to 40 percent answered.
"It's really hard for employees to participate," she said. "Even to respond to a survey is like pulling teeth."
Most AI training stalls at 90 days. People sat through the content, some of them remember bits of it, and almost none of them are using it. Leadership is baffled: the training was good, the tools are paid for, and the team showed up. So they ask why nothing is happening.
The training taught the wrong thing.
The how vs. the when
Most AI training programs are built around the how: how to write a prompt, how to navigate Claude, how to set up a Copilot agent, how to chain a few tools together. The mechanics layer fits cleanly on a slide — button-clicking, interface tour, prompt structure.
The same manager named the actual gap when I raised it: "The biggest barrier is just figuring out when to use it, not even how. The how, people can kind of figure out themselves."
She's right. A motivated adult can teach themselves the mechanics in 20 minutes of fumbling. The interface is a chat box. The instructions are written in plain English. The mechanical layer is genuinely easy.
The hard part is when. Which moments in my actual work does this help me? Where am I going to waste 45 minutes wrestling with a tool to do something I could have done in 10 minutes by hand? When does AI cost more than it saves? Which of my outputs can I trust it for, and which ones absolutely cannot? Knowing when decides whether AI shows up in someone's Tuesday morning or sits dormant on their toolbar.
Almost no training teaches when.
Why "when" is harder to teach than "how"
The how of AI is universal. Everybody writes prompts the same way. Everybody navigates Claude the same way. A trainer can stand at the front of a room and demonstrate it once, and the demo is correct for all 35 people watching.
The when is situational. It depends on the specific work in front of the person. A claims processor's "when" looks nothing like a marketing coordinator's "when," which looks nothing like a procurement analyst's "when." The trainer doesn't know any of those workflows in detail. The participant does, but doesn't yet know enough about AI to spot the openings.
So the training defaults to the layer the trainer can actually teach. Everyone gets the same prompt-engineering tips. Everyone gets the same generic use cases ("summarize a meeting!" "draft an email!"). Everyone gets the same certificate. Then they go back to their desks, and the question that was always going to determine adoption — where does this fit in what I already do? — sits unanswered.
A university CIO described attending a different firm's AI workshop: "It was a bit optimistic to say in three hours you're gonna have a working chief of staff." Emergency office hours had to be added the following week. The mechanical demo landed. Application required the office hours.
What "when" training actually looks like
Application-layer training requires 3 things most tool-demo training skips.
1. Use-case discovery before prompt mechanics. Every participant should walk in and spend the first session mapping their own workflow: what they do every day, what drains them, what repeats. The mapping happens before they touch a tool. The trainer's job is to help them spot the moments where AI fits. The mechanics come after, taught against real candidate use cases the participant brought with them.
2. Peer-led examples on top of expert demos. A senior expert demonstrating "I built this clever automation" lands as theater. A peer 2 desks over saying "I tried using it for the part of my job I hate and here's what happened" lands as permission. Application training has to surface and circulate the real, ugly, in-progress examples from inside the room.
3. Follow-on coaching after one-shot certification. Application is a habit. A 2-hour session can plant the seed for one. The habit grows in the weeks that follow, when the participant is back at their desk trying to apply the lesson to actual work. Programs that produce real adoption build in a follow-on rhythm — office hours, monthly check-ins, a peer cohort, somewhere to bring the question "I tried it on this and it sort of worked, what now?" Application happens in that rhythm. Without a place for it, application never happens at all.
The reluctant convert moment
A self-described "very old school" VP at a government organization went through training and hated most of it. "It was pretty painful for me to do that. Once I started understanding, I was behind the class the whole way."
Then, weeks later, on a follow-up call: "I have been using AI and it has just tremendously helped me with my workload research."
Real success looks like a grumpy person finding her use case weeks after the formal training, against her own initial resistance. The training delivered a floor of competence. Application showed up later, when she had time, real work in front of her, and someone to ask.
A forum founder serving a 100-member executive community described the same shape from the other side. Members struggle to activate knowledge from presentations. They sit through good content, leave inspired, and 6 weeks later their workflow looks identical. The content was correct. The bridge from content to application was missing.
The question to ask any AI training vendor
Before you sign anything, ask the vendor one question: what does the application phase look like, and how long does it last?
If the answer is "we cover application in module 4" or "we have a great use case section," you're hearing mechanics dressed up. The application phase happens after the formal sessions end: the coaching rhythm, the cohort, the office hours, the place where someone brings their actual half-finished attempt and gets help. Adoption is won or lost in that phase.
If the vendor doesn't have one, you are buying a syllabus. The syllabus has its place. It will not produce usage. And when the follow-up survey goes out 90 days later and 30% of your team responds, you will know why.
CitizenWorks designs AI training programs around application. Workshops start with workflow mapping. Strategy Circles provide the ongoing rhythm. Transformation Partner engagements embed application coaching across 6 to 12 months. The how is a 20-minute lesson. The when is the work.