Culture & Change

The Hardest Part of an AI Project Isn't the AI

Technology is rarely the bottleneck. Resistance, misaligned incentives, and poor communication are. A guide to managing the human side of transformation.

Published 8 January 2026 · 6 min read

After enough engagements, a pattern becomes undeniable: the technical risk of an AI project is never the thing that kills it. The model ships. The integration works. Then the humans don’t adopt it and the project is declared a failure.

The three forms of resistance

1. Fear of replacement

Spoken: “Interesting technology, let’s plan a workshop.” Unspoken: “If this works, does my job exist next year?” Until that fear is addressed directly and honestly, every adoption effort will be quietly sabotaged.

What works: explicit conversations with the affected team before the project starts. Tell them exactly what the tool is for, what it isn’t for, what changes in their day, and what doesn’t. People accept change they can predict; they resist change they can only guess at.

2. Loss of status

A process owned by a team is a source of power. When AI takes over the reconciliation, the report, the triage — even if nobody loved doing it — someone loses the leverage of being the person who does it.

What works: give the team a bigger scope. If you automate their reporting, make them the analysts now. If you automate their data entry, make them the quality reviewers. Never leave a team with a smaller role than they had before; redirect them upward.

3. Workflow inertia

Even when people want to use the new tool, their Monday already has 47 things in it. Adopting a new thing means dropping an old thing, and nobody has time to decide what to drop.

What works: be the ones who decide what to retire. Deploy the forecasting tool and write the change-management note that says “the 8am Tuesday status meeting is cancelled; this tool replaces it.” Without that, the old meeting stays and the new tool becomes extra work.

The communication mistake we see most

Engineering teams describe AI tools in terms of what they are (“a RAG system over your documents”). Adoption happens when you describe them in terms of what they replace (“this is instead of pinging the legal team every time you need a clause”).

People don’t adopt tools. They adopt changes to their day.

A simple test before launch

Ask three end users, separately, these three questions:

  1. What is this tool for?
  2. When in your week are you supposed to use it?
  3. What are you supposed to stop doing because of it?

If any of the three hesitates on any question, you’re not ready to launch. The model might be perfect. The rollout isn’t.