Here's a project I watched from the inside. A logistics company with 1,200 employees decided to "do AI." The VP of Operations had seen a demo at a conference — an intelligent routing system that could cut delivery times by 18%. He came back with a mandate: build something like that by Q2.

The team spent six weeks evaluating vendors. They picked one. They spent another four weeks on data access agreements and security reviews. By the time engineering actually started building, they realized nobody had documented how routing decisions were actually made. The dispatchers used a combination of a 10-year-old Excel model, personal knowledge of driver preferences, and phone calls to warehouse managers. None of that was in any system.

The project died at month five. Total spend: $220K. Lines of production code: zero.

This is not unusual. It's the norm.

The wrong problem gets picked first

When executives greenlight an AI initiative, they almost always pick the problem that sounds most impressive in a board presentation. Intelligent routing. Demand forecasting. Customer churn prediction. These are real problems with real value — but they're also the hardest to implement because they sit at the intersection of multiple systems, teams, and data sources.

The better move is boring. Find the process that burns the most labor hours per month. It's usually something nobody talks about in strategy meetings: invoice reconciliation, document classification, compliance report generation, data entry from one system to another. These processes are painful, repetitive, well-bounded, and — here's what matters — the people who do them can tell you exactly how they work.

Gartner's 2024 data puts the share of AI pilots that never reach production at 78%. The average proof-of-concept costs between $150K and $300K. For the 22% that do make it, the average time from PoC to production deployment is 9 months. Those numbers should make anyone pause before chasing the flashy use case.

You can't automate what nobody has written down

This is the single biggest killer. Teams jump straight to technology selection without understanding the process they're trying to automate. They assume the process is documented somewhere. It isn't.

In most organizations, the real process lives in the heads of 4-8 people who've been doing it for years. They've built workarounds on top of workarounds. They know that when the system shows "approved" it actually means "pending second review" because someone configured a status code wrong in 2019 and nobody ever fixed it. They know you have to call Maria in accounting on Tuesdays because the automated report misses transactions from the weekend.

None of this is visible to the AI team. They build a system based on what they think happens. It doesn't match reality. The system produces wrong outputs. People stop trusting it. Project dies.

Before writing a single line of code, you need to sit with the people who do the work and map the process step by step. Every input, every decision point, every exception, every workaround. This takes 2-4 weeks. Most teams skip it because it feels slow. But skipping it is what turns a 3-month project into a 12-month failure.
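One way to force that rigor is to capture the map as structured data rather than a prose document, so gaps show up as empty lists instead of unasked questions. A minimal sketch (the step names and fields are invented examples, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    inputs: list[str]
    decision: str                  # the rule actually applied, in the doer's own words
    exceptions: list[str] = field(default_factory=list)
    workarounds: list[str] = field(default_factory=list)

# Hypothetical map of an invoice-reconciliation process:
process = [
    Step(
        name="Match invoice to PO",
        inputs=["invoice PDF", "PO export"],
        decision="amounts match within $1, else escalate",
        exceptions=["multi-PO invoices"],
        workarounds=["call Maria on Tuesdays for weekend transactions"],
    ),
    Step(
        name="Post to ledger",
        inputs=["matched invoice"],
        decision="post to GL code from the PO",
    ),
]

# A step with zero documented exceptions is usually a step you haven't
# watched anyone perform yet, not a genuinely clean step:
undocumented = [s.name for s in process if not s.exceptions]
print(undocumented)
```

The point isn't the data structure; it's that an empty `exceptions` list is visible, while a missing paragraph in a Word doc is not.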

The proof-of-concept trap

PoCs are seductive. You build a demo in a sandbox environment with clean data. It works beautifully. The executive sponsor is thrilled. The vendor is thrilled. Everyone agrees to move to "Phase 2."

Then Phase 2 hits reality. The demo used a curated dataset of 500 records. Production has 2 million records with inconsistent formatting, missing fields, and edge cases nobody anticipated. The demo ran on a standalone API. Production needs to integrate with SAP, Salesforce, a legacy mainframe, and two internal tools that don't have APIs. The demo processed one document type. Production has 14 document types with different layouts across 3 regional offices.

The gap between a working demo and a production system is where most projects die. The fix isn't to skip the PoC — it's to build the PoC against real data, real systems, and real edge cases from day one. Yes, it's slower to start. But you find the hard problems in week 3 instead of month 6.
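One cheap way to surface those production surprises in week 3 is a profiling pass over a raw sample before the PoC is even scoped: count how often the fields your design assumes are actually missing or blank, and how many distinct record shapes exist. A sketch under assumed field names (the invoice fields are hypothetical):

```python
from collections import Counter

def profile_records(records, required_fields):
    """Profile a raw sample: how often each assumed field is absent or
    blank, and how many distinct record "shapes" (key sets) appear."""
    missing = Counter()
    shapes = Counter()
    for rec in records:
        shapes[frozenset(rec.keys())] += 1
        for f in required_fields:
            value = rec.get(f)
            if value is None or str(value).strip() == "":
                missing[f] += 1
    return missing, shapes

# Pull this sample from the production source, not a curated export:
sample = [
    {"invoice_id": "A-1", "amount": "100.00", "date": "2024-01-03"},
    {"invoice_id": "A-2", "amount": "", "date": "03/01/2024"},  # blank amount, different date format
    {"invoice_id": "A-3", "total": "99.5"},                     # different schema entirely
]
missing, shapes = profile_records(sample, ["invoice_id", "amount", "date"])
print(missing)                              # which assumed fields are unreliable
print(len(shapes), "distinct record shapes")
```

Fifteen lines of profiling won't fix the data, but it converts "edge cases nobody anticipated" into a list you can show the sponsor before Phase 2 is promised.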

What actually works

Start with the most painful manual process in your operation. Not the most strategic. Not the most impressive. The most painful. The one where people spend hours doing work they hate, where errors are common, and where the outcome is well-defined.

Map that process completely before you touch any technology. Talk to the people who do it. Watch them do it. Document every step, every exception, every workaround. You'll find that the real process is 40-60% more complex than what management thinks it is.

Then build a narrow system that solves that one thing. Not a platform. Not an "AI-powered solution." A specific tool that takes a specific input and produces a specific output that a specific person can verify. Deploy it alongside the existing process — not as a replacement, as a helper. Let people check its work. Fix the errors. Iterate.
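Made concrete, "deploy it as a helper" can be as simple as attaching a suggestion and an audit trail to each item, so the person who owns the process stays in the loop and every disagreement is logged as a fix-it case. A hypothetical shape (the document labels and reviewer role are stand-ins, not a real API):

```python
def helper_output(doc_id, suggested_label, confidence):
    """Package the system's suggestion for human verification,
    rather than silently replacing the human's decision."""
    return {
        "doc_id": doc_id,
        "suggested": suggested_label,
        "confidence": confidence,   # lets reviewers triage low-confidence items first
        "verified_by": None,        # filled in by the human reviewer
        "correct": None,
    }

def verify(item, reviewer, actual_label):
    """Record the human's verdict; disagreements become iteration data."""
    item["verified_by"] = reviewer
    item["correct"] = (item["suggested"] == actual_label)
    return item

# The system suggests; the person who does the work verifies:
item = helper_output("INV-0042", "utilities", 0.71)
verify(item, reviewer="dispatcher", actual_label="utilities")
```

The log of `correct=False` cases is what drives the "fix the errors, iterate" loop, and the `verified_by` field is what keeps trust intact while the system earns it.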

This is not glamorous. It doesn't make for a good conference talk. But it's how you get from zero to one working AI system in production. And once you have one, the second is faster because you've built the organizational muscle: you know how to document processes, how to handle data quality, how to manage the transition from manual to automated.

If you want to see what this looks like in practice, our solutions page breaks down the approach by industry and process type.


Here's the blunt version: most companies don't need an "AI strategy." They don't need an AI Center of Excellence. They don't need to hire a Chief AI Officer. They need to pick one broken process, understand it completely, and build a system that fixes it. Everything else is a distraction that makes consultants rich and leaves operations teams exactly where they started.