Move Fast
The Amazon story
Customer Obsession Meets AI
At Amazon, AI isn't a department. It's every department. Fulfillment centers run on it. AWS builds tools around it. Every team, in one form or another, is using AI to move faster. Andy Jassy has said as much publicly. This isn't hype; it's operational reality.
When the CEO of one of the world's largest technology companies says AI is transforming how they work, people listen. But what does “transforming” actually look like on the ground? It looks like speed. Unprecedented, intoxicating speed. The kind of speed that makes you feel like you're finally working the way you were meant to. The kind that makes you forget why you had guardrails in the first place.
The Fire Drill
Here's a story about that speed.
Someone on my team used AI to summarize team feedback for an internal business report. After ten years of writing these reports, trust me: I'll take the help. But the AI mixed in signals from another workstream, and the resulting summary painted a picture of customers in pain that didn't match reality.
Amazon takes customer issues seriously. Very seriously. And "customers in pain" gets attention fast. Leadership engaged. Resources started mobilizing. We were soon heading down the path of solving a problem for a customer who had never reported one.
Then the correction: Sorry, that wasn't accurate. The summary was wrong.
The customer never knew. It was entirely internal — a simple business report with bad context. A wasted cycle. A few red faces in a meeting room. That's all it was.
And that's exactly why the system works. Amazon's culture of escalation caught it. Humans were in the loop. The process did what processes are supposed to do.
The Lesson
The point isn't that something went wrong; it's that the human-in-the-loop system worked. Amazon's process caught the error before it ever reached a customer. That's what good organizational design looks like.
But it does show why we need to be thoughtful about risk when using AI. A routine internal report — the kind I've written hundreds of — nearly triggered an unnecessary escalation because AI-generated content was plausible enough to pass a first read.
Now imagine that same dynamic in systems where there aren't ten years of institutional muscle memory to catch mistakes. Medical diagnosis. Criminal sentencing. Military targeting. The speed that makes AI useful is the same speed that demands we keep humans in the loop.
Speed Is the Product
Here's the uncomfortable truth: organizations don't reward caution. They reward velocity. AI is the ultimate velocity tool. Asking people to slow down and verify is asking them to be less competitive. That's not a technology problem. That's a human one.
Every metric that matters — time to market, tickets resolved, decisions made — gets better when you go faster. Nobody gets promoted for the disaster they prevented. Nobody gets a performance review that says "took an extra two hours to verify AI output, avoided a false escalation." The incentives all point in one direction: ship it.
And AI is the most powerful shipping tool ever built.
The Question Nobody Asks
How many decisions at your company were informed by AI-generated content this week? How many were verified?
You don't know. Nobody does. That's the point.
We've built systems that produce confident, well-structured, authoritative-sounding content at machine speed. We've put those systems in the hands of people who are rewarded for moving fast. And we've done almost nothing to build the verification infrastructure that matches the generation speed.
My team's fire drill was contained because Amazon's process caught it, exactly the way it's designed to. But it was also a reminder: not of what AI does wrong, but of why we need to stay thoughtful about how much we trust it at speed.