AI is making people more productive. That part is real. A developer writes code in a fraction of the time. A marketing coordinator drafts campaigns in an afternoon instead of a week. An operations manager automates a reporting process that used to eat two days a month.

The gains are showing up. What most organizations have not figured out is who actually benefits from them.

Steve Yegge, a veteran engineer and author, recently wrote about what he calls the "AI Vampire" — the phenomenon of AI-driven productivity burning people out. His argument: when AI makes someone 10x more productive, companies capture all that value by expecting 10x more output. The employee gets exhaustion. The company gets a temporary sprint that ends in turnover.

He's writing about developers. But the same dynamic is playing out in every department I walk into.

The unanswered question

I've run AI adoption programs across government agencies, insurance companies, creative firms, and healthcare organizations. The pattern I see most often has nothing to do with the technology and everything to do with a question that leadership has not answered: What do we want people to do with the time they save?

When that question has no answer, employees fill in the blank themselves. And the two most common answers they land on are both bad.

The first: "I'll do more." They absorb extra work, take on new tasks, and run faster because they assume that's what's expected. This is Yegge's AI Vampire. The employee captures zero value. The company captures all of it — until that person burns out or leaves.

The second: "I'll hide it." They automate parts of their job and keep it to themselves, afraid that being visibly faster means being visibly expendable. I've seen this one up close — an employee at an insurance company secretly built AI tools that saved his team significant time, because company policy blocked AI use. The rational move was silence. The organization captured zero value.

Both outcomes are adoption failures. And both stem from the same root cause: leadership skipped the conversation about what productivity gains are actually for.

Productivity without purpose creates fear

An employee at a government-adjacent organization asked me point-blank after a training: "Should I be afraid of losing my job?" The trigger wasn't some major AI overhaul. It was a Calendly demo. Task-level automation — booking appointments — made her question whether she was needed at all.

That reaction is rational. If a manager can't answer "what do we want you to do with the eight hours you save," employees will assume the answer is "we want to pay for fewer of you." And they're not always wrong to assume that.

This is where Yegge's framing misses something. He's focused on the company-vs-employee tug of war over productivity value, and he's right about the dynamic. But in most organizations I work with, the problem isn't that companies are consciously extracting value. The problem is that nobody thought about it at all.

Leaders bought AI tools. They told teams to use them. They expected adoption. They did not build any framework for what happens after someone gets faster. No one made commitments about how saved time would be redirected — toward higher-value work, toward professional development, toward projects that had been stuck in the backlog for years.

Without that framework, every AI productivity gain creates anxiety instead of momentum.

What the framework looks like

Before you train your team on AI, leadership needs to answer three questions out loud:

1. When someone automates part of their role, what do we want them to spend that time on? Be specific. "Higher-value work" is not an answer. Name the projects, the initiatives, the skill-building. If you can't name them, you're not ready to train.

2. What happens to headcount when teams get more efficient? If the plan is reduction, say so — people deserve honesty and they'll figure it out anyway. If the plan is reinvestment, say that too, and put it in writing. Ambiguity is where trust dies.

3. How will we recognize and reward people who find efficiencies? If the person who automates a two-day process gets nothing but more work, you've built a system that punishes initiative. The best adopters will either stop trying or leave.

These are leadership commitments, not training topics. They belong in a conversation with executives before anyone opens an AI tool. That's why the AI Pulse Check starts with a listening tour — you diagnose the environment before you prescribe the training, because the environment determines whether training sticks.

AI productivity is a leadership problem, not a tools problem

Yegge's solution is a shorter workday — three to four hours. That might work in software engineering, where output is measurable and individual contributors have leverage.

For most organizations, the answer is more structural. The teams I work with don't need shorter days. They need leadership that has thought through what AI adoption actually means for the people doing the work — before handing them the tools and expecting results.

The companies that stall on AI adoption almost always stall for the same reason: they treated AI as a technology rollout instead of a change management challenge. And at the center of every change management challenge is a question of trust.

Answer the question. Tell your people what happens when they get faster. Then train them.