The AI Productivity Illusion: Why Your PoCs Are Just Moving the Problem

dehakuran.com · March 2026 · 7 min read

I've spent the last year watching this play out across teams I work with.

The first document you generated with AI, you read carefully. The second, you skimmed. By the third, you were scanning paragraphs. Now you're sending documents without more than a glance — or better yet, asking AI to summarise the document you just asked AI to create.

Here's what nobody talks about: the person receiving your beautifully crafted document is doing the same thing. They feed it into AI to evaluate it, extract key points, draft a response. And the person after them does the same. And the one after that.

You now have a chain where AI writes, AI reads, AI responds — and humans are barely involved beyond clicking "send." Each link reduces human engagement. And because nobody is truly engaging with the substance, any flaw — a wrong assumption, a subtle error, a misaligned priority — propagates through the entire chain unchecked.

"What we're calling productivity is not productivity. It's a displacement of inefficiency from one end of the chain to the other."

The effort didn't disappear. It piled up at the point where someone finally has to act — and discovers the foundations are shaky.


The PoC Trap

This is exactly why proofs of concept feel so impressive and productisation feels so painful.

A PoC lives in a vacuum. It shows you the five-day-to-five-minute magic trick. Everyone applauds. But the moment you embed it in a real workflow — with real dependencies, real quality gates, real accountability — everything breaks. Because the PoC never fixed the underlying process. It just made the broken process faster.

And faster broken is still broken.


The Answer the Industry Keeps Selling

When leaders demand real impact, the consulting decks and vendor pitches all converge on the same word: augmentation. Make your people faster. Give them copilots. Boost their output.

Sometimes that's genuinely the right answer. A radiologist using AI to flag anomalies before reviewing scans — that's real augmentation. The core work demands deep human judgment, and AI makes that judgment better informed. Nobody is skimming an X-ray. A researcher synthesising hundreds of papers, a developer catching subtle bugs during code review, an architect stress-testing a design against edge cases — these are domains where AI changes the quality of human thinking, not just its speed.

But that's not what's happening in most organisations. What's happening is that we're taking broken processes — processes full of handoffs, redundant reviews, documents that exist because someone once asked for them — and making every broken step faster. We're not solving inefficiency. We're accelerating it.

"Augmentation works when a human is genuinely thinking. It fails when the human was already on autopilot — because all you've done is give autopilot a turbo button."

Augmentation Is the Exception. Redesign Is the Rule.

A simple example first.

Think about expense reports. An employee fills out a form, attaches receipts, submits it. A manager reviews and approves. Finance checks it against policy. Something's wrong — it bounces back. Fix, resubmit, re-review. Weeks pass.

The augmentation approach: give everyone an AI assistant that fills out the form faster, writes a better justification, helps the manager review quicker. You've sped up every step. It looks like productivity on a dashboard.

The redesign approach: connect the corporate credit card directly to the finance system. Transactions are automatically categorised, policy compliance is checked in real time at the point of purchase, exceptions are flagged before they happen. The expense report — the entire artefact — disappears. There's nothing to write, nothing to review, nothing to bounce back.

You didn't make the broken thing faster. You eliminated the broken thing.
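To make the contrast concrete, here is a minimal sketch of that redesign in Python. Everything in it — the category names, the limits, the function — is invented for illustration; the point is that compliance is checked at the moment of the transaction, so no report artefact ever exists.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    merchant_category: str
    amount: float
    currency: str

# Hypothetical per-category policy limits — illustrative values only.
POLICY_LIMITS = {"meals": 75.0, "lodging": 250.0, "transport": 150.0}

def check_at_point_of_purchase(tx: Transaction) -> tuple[bool, str]:
    """Validate a card transaction against policy before it settles.

    Returns (approved, reason). Exceptions are flagged here, in real
    time — nothing to write up, review, or bounce back later.
    """
    limit = POLICY_LIMITS.get(tx.merchant_category)
    if limit is None:
        return False, f"unknown category '{tx.merchant_category}': route to a human"
    if tx.amount > limit:
        return False, f"{tx.amount:.2f} {tx.currency} exceeds {limit:.2f} limit"
    return True, "auto-approved and categorised"
```

The human only ever sees the exceptions — the rest of the flow never generates work at all.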

Now a harder example.

Think about contract review. A legal team receives a vendor contract. A junior associate spends days reading it against a checklist of standard terms. They draft a redline. A senior associate reviews the redline. Outside counsel weighs in. Comments go back to procurement. Procurement negotiates. Another draft. Another review cycle. The process exists because contracts are long, language is ambiguous, and risk is real.

The augmentation approach: give the junior associate an AI tool that highlights non-standard clauses and drafts redlines. They're faster. The review cycle compresses from weeks to days. But you still have the same chain of handoffs, the same escalation ladder, the same documents bouncing between people who each add a thin layer of judgment.

The redesign approach: pre-negotiate a library of modular, machine-readable contract terms with your key vendors. When a new engagement starts, both parties select from pre-approved modules. The system validates compatibility, flags genuine exceptions that require human negotiation, and auto-executes the rest. The eighty percent of contract language that was always boilerplate never gets written, never gets reviewed, never gets redlined. Human lawyers focus exclusively on the twenty percent that actually requires judgment — novel terms, unusual risk, strategic trade-offs.
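The same idea can be sketched in a few lines of Python. The module names and conflict rules below are entirely hypothetical — a real clause library would be far richer — but the mechanism is the one described above: validate a selection of pre-approved modules, auto-execute the boilerplate, and route only genuine exceptions to a human.

```python
# Hypothetical pre-approved, machine-readable clause library.
# Module names and conflict rules are invented for illustration.
APPROVED_MODULES = {
    "liability_cap_12m": {"conflicts": {"liability_unlimited"}},
    "liability_unlimited": {"conflicts": {"liability_cap_12m"}},
    "data_residency_eu": {"conflicts": set()},
    "net_30_payment": {"conflicts": set()},
}

def assemble_contract(selected: set[str]) -> dict:
    """Validate a selection of contract modules.

    Known, compatible modules auto-execute; anything unknown or
    conflicting is flagged for human negotiation.
    """
    unknown = selected - APPROVED_MODULES.keys()
    conflicts = [
        (a, b)
        for a in selected
        for b in APPROVED_MODULES.get(a, {}).get("conflicts", set())
        if b in selected
    ]
    if unknown or conflicts:
        return {"status": "needs_human_review",
                "unknown": unknown, "conflicts": conflicts}
    return {"status": "auto_execute", "modules": sorted(selected)}
```

The eighty percent that is boilerplate flows straight through; the lawyers' queue contains only the cases that genuinely need judgment.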

"The difference isn't speed. It's that the work humans do is now worth doing."

We've Been Here Before

This pattern should feel familiar. We've already lived through it with software itself.

We invented software to make human-machine interaction smoother. A screen, a button, a form — machines couldn't understand us, so we built interfaces to bridge the gap. But today, the vast majority of software isn't for humans at all. It's APIs calling APIs, middleware connecting middleware, systems orchestrating systems. The human was never meant to be in the loop — and yet we still wrap machine-to-machine interactions in dashboards and approval screens and manual checkpoints, because that's what software has always looked like.

We're building human interfaces for inhuman processes. And now AI is writing much of that software. So we have AI generating code for systems that no human meaningfully interacts with, embedded in processes that no human fully understands.

That's not augmentation. That's a different paradigm entirely — and we're still pretending it's the old one.


Where Do We Go From Here?

Most AI initiatives today are expensive ways to preserve processes that shouldn't exist. We're using the most transformative technology of our generation to write emails faster and summarise meetings nobody wanted to attend.

If you're a leader demanding impact, the path isn't "augment everything." It's having the courage to look at a process your organisation has run for fifteen years and ask: why does this exist at all?

The real value of AI isn't helping humans do the same work quicker. It's redesigning systems so the work doesn't need to be done in the first place.

That's not a PoC. That's a paradigm shift. And it's the only thing that will still matter in two years.

AI Strategy · Process Redesign · Enterprise

Deha Kuran

AI Executive, Engineer, and Evangelist. Head of AI Business Operations at Philips.

Follow the thinking on LinkedIn →