AI Strategy

Why 80% of AI Projects Fail (And How to Be in the 20%)

The gap between AI ambition and AI results isn't a technology problem. It's a strategy problem. After dozens of engagements, here's what we've learned from the projects that actually delivered — and what sank the rest.

Published 1 April 2026 · 8 min read

Most AI programmes don’t fail because the model was wrong. They fail because the question was wrong.

The pattern we see every quarter

A board gets excited about AI. A consultancy is hired. Six months later there’s a proof-of-concept that impressed the steering committee, looked great in a demo, and never made it to production. No KPI moved. No revenue was influenced. Budget burned, trust dented.

This happens in ~80% of corporate AI projects. The reasons are boringly consistent.

1. The problem was technical, the success metric was vague

If the kick-off meeting ends with “we want to use AI to improve customer experience”, you’ve already lost. The project will drift, because there’s no shared definition of done.

What works instead: Every engagement starts with a sharp, measurable outcome — “reduce Tier 1 support volume by 30% in 90 days”, “cut manual invoice processing from 12 min to 2 min”. The AI part is secondary; the business target is the anchor.

2. Nobody owned the change

Shipping a model isn’t shipping value. Value happens when a human changes their behaviour — an ops lead trusts the forecast, a salesperson uses the lead score, a manager decommissions the old report. If there’s no internal owner with skin in the game, the model gets ignored.

3. The data audit happened after the pilot

Data quality is the ceiling. You can’t out-model missing labels, siloed systems, or a CRM that three different teams maintain differently. We now refuse to quote implementation work before a two-week data audit — it saves clients hundreds of hours of wasted modelling.
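A data audit doesn't have to be elaborate to be decisive. As a purely illustrative sketch (field names and records invented), even a few lines can surface the missing labels and cross-system duplicates that sink a pilot:

```python
# Illustrative pre-pilot data audit: count missing labels and duplicate IDs
# before quoting any modelling work. Records and fields are hypothetical.
records = [
    {"id": 1, "label": "churn", "source": "crm_a"},
    {"id": 2, "label": None,    "source": "crm_b"},
    {"id": 2, "label": "stay",  "source": "crm_a"},  # same id, different system
    {"id": 3, "label": None,    "source": "crm_b"},
]

missing = sum(r["label"] is None for r in records)
ids = [r["id"] for r in records]
duplicates = len(ids) - len(set(ids))

print(f"missing labels: {missing}/{len(records)}")  # missing labels: 2/4
print(f"duplicate ids:  {duplicates}")              # duplicate ids:  1
```

If half the labels are missing before a single model is trained, that fact alone reshapes the quote.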

4. They skipped the boring 80%

RAG, agents, and fine-tuning get the attention. What the project actually depends on is less glamorous: how is the data refreshed? Who owns the prompts? How do you catch regressions? Where are the access controls? Every successful deployment we’ve done has had more glue code than model code.
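Catching regressions, for instance, can start as nothing more than replaying a small "golden set" of prompts before each release. The sketch below is one hedged way to do it — the `answer` function stands in for whatever model call a real stack uses, and all names are invented:

```python
# Minimal regression check: replay approved prompts and flag any answer
# that drops an expected keyword. Everything here is illustrative.
GOLDEN_SET = [
    {"prompt": "What is our refund window?", "expected_keyword": "30 days"},
    {"prompt": "Which plan includes SSO?", "expected_keyword": "Enterprise"},
]

def answer(prompt: str) -> str:
    """Stand-in for the real model call; returns canned replies."""
    canned = {
        "What is our refund window?": "Refunds are accepted within 30 days.",
        "Which plan includes SSO?": "SSO is included in the Enterprise plan.",
    }
    return canned[prompt]

def run_regression(golden_set) -> list[str]:
    """Return a list of failures; an empty list means safe to ship."""
    failures = []
    for case in golden_set:
        got = answer(case["prompt"])
        if case["expected_keyword"] not in got:
            failures.append(f"{case['prompt']!r} missing {case['expected_keyword']!r}")
    return failures

print(run_regression(GOLDEN_SET))  # [] — no drift against the golden set
```

Wire a check like this into CI and prompt edits stop being silent risks.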

How to be in the 20%

  • Start with the P&L, not the tech stack. Ask “if this works, what line on the P&L moves and by how much?” If you can’t answer, don’t start.
  • Commit to a measurable outcome before you commit to a model. Define the metric, the baseline, and the target in writing, before any code is written.
  • Pick one decision-maker and one workflow. “AI for the company” is not a project. “AI inside the weekly pipeline review for the sales director” is.
  • Ship something embarrassingly small first. If week 3 has no live output, you’re already in slide territory.
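The metric-baseline-target discipline above fits in a few lines of code as well as in a memo. A sketch, using the invoice-processing example from earlier (the class and field names are ours, not a real framework):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SuccessMetric:
    """One measurable outcome, agreed in writing before modelling starts."""
    name: str
    baseline: float
    target: float
    deadline_days: int

    def hit(self, observed: float) -> bool:
        # Lower-is-better metric: the target sits below the baseline.
        return observed <= self.target

# From the article: cut manual invoice processing from 12 min to 2 min.
invoice_time = SuccessMetric(
    name="manual invoice processing time (min)",
    baseline=12.0,
    target=2.0,
    deadline_days=90,
)

print(invoice_time.hit(1.8))  # True — target met
print(invoice_time.hit(5.0))  # False — improved, but not done
```

The point isn't the code; it's that "done" becomes a boolean anyone can check, not a feeling in a steering committee.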

AI is the easiest it’s ever been to build. It’s also the easiest it’s ever been to waste money on. The 20% of projects that deliver aren’t the ones with the best models. They’re the ones with the clearest questions.