The Alignment Problem No One's Talking About
You know that feeling. You need one specific detail — the maximum number of supported X, the latest sales deck, who was at that customer meeting and what was actually said — and you can't find it. That's the alignment tax. Every business pays it.
How much time do you spend every day on this? Low-value, but required. Searching, asking, re-asking.
And the cost isn't just your time. How much does it cost your company when a sales rep promises a feature shipping "in Q4" — of an unspecified year? (You know what I mean, PM friends.) That's the alignment tax.
So surely one of the big platforms — ServiceNow, Microsoft, Salesforce — has fixed this with AI by now?
I haven't seen it. Have you?
Why no one has solved this
The simple reason: every company has a wildly different mix of processes, tools, data, and people. There's no one-size-fits-all solution, and there likely won't be for a long time.
AI can accelerate your work — but only if you know how to use it effectively. And there's a trap most people fall into when they start: they move fast, and the mistakes compound even faster.
I found this out firsthand.
The day I watched AI rewrite our strategy
I was building out our Product Roadmap with Claude Code — my current favorite AI tool for this kind of work. Most people think of Claude Code as something for developers. I'm about the farthest thing from a developer. I can barely spell "Pie-thon."
But Claude Code isn't just for coding. It's a logical, action-oriented model. You give it an objective, it gets to work. It can operate directly on your machine — inside a project folder you define, with permission required before going anywhere else. Everything it produces stays local. I found a way to make context persist across sessions. For a founder who needs to move fast without losing the thread, it's a game-changer.
At least, it is once you learn to control it.
Here's what I quickly discovered: Claude's bias toward action is a double-edged sword. In its own words, it has a "training bias toward composition over curation, no explicit retrieval step, no penalty signal for sprawl, and ambiguity in source of truth." Translation: left to its own devices, it will generate, generalize, and drift.
And it drifted. Quietly. Consequentially.
At some point, Claude renamed our core Evolutionary Framework. The original name captured the framework's intent — our whole operational model, the first steps we coach every customer through. The new name sounded plausible. But the meaning had shifted, and one generated document later, the intent was completely lost.
I caught it. But if I hadn't? Every document built on top of that one would have been built on a broken foundation.
What I did about it
I knew this problem would only compound as the team grew. So instead of patching it, I treated it as foundational. I needed a System of Systems — one that could do three things:
- Maintain a single source of truth — one place where the concepts, terms, and frameworks are defined, canonical, and authoritative
- Detect and correct drift automatically — catch the moment something diverges from that source, before it propagates
- Scale without bottlenecks — work with the team's natural flow, not against it; invisible when it's working
I followed the same process we coach customers through: start with the Business Specification, define the core concepts and taxonomy, build validation rules against them, automate drift detection, and propagate changes downstream when the source of truth updates.
It took me about a day and a half.
By the end, I had a working document alignment system — one that keeps AI output anchored to what we actually mean, surfaces drift before it compounds, and gets out of the way the rest of the time.
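The article doesn't describe the implementation, but the loop it outlines — canonical definitions, validation against them, drift flagged before it propagates — can be sketched in a few lines. Everything below is hypothetical (the glossary contents, the `KNOWN_DRIFT` variants, the `flag_drift` helper); it is not Lucy Labs' actual system, just a minimal illustration of the idea.

```python
# Hypothetical single source of truth: canonical names and their intent.
# (Illustrative only -- not an actual Business Specification.)
CANONICAL = {
    "Evolutionary Framework": "the operational model every customer is coached through first",
}

# Hypothetical drifted variants to watch for. A real system might surface
# these with fuzzy matching; here they are hard-coded for clarity.
KNOWN_DRIFT = {
    "Evolutionary Model": "Evolutionary Framework",
    "Evolution Framework": "Evolutionary Framework",
}

def flag_drift(document: str) -> list[str]:
    """Return a warning for each drifted term found in a document."""
    warnings = []
    for variant, canonical in KNOWN_DRIFT.items():
        if variant in document and canonical in CANONICAL:
            warnings.append(f'drift: "{variant}" should be "{canonical}"')
    return warnings

# The quiet rename gets flagged before the next document inherits it.
print(flag_drift("Our Evolutionary Model guides every engagement."))
```

The point of the design isn't the string matching — it's that validation runs against one authoritative glossary, so "correct" is defined in exactly one place.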
The four principles that made it work
1. Start with a foundation worth building on
With AI, a little thinking up front saves enormous time downstream. AI does exactly what you ask. It's improving at inferring intent — but it can't read your mind, and it shouldn't have to.
The foundation for us was the Business Specification: a single source of truth for everything humans and AI do at the company. Every term defined. Every framework named with intent. If that document isn't clear and precise, nothing built on top of it will be either.
Every time AI produced output I didn't expect, I went back to the Business Specification. I refined the definition. I clarified the intent. Eventually, the output stopped changing — that's how I knew I was finally communicating effectively.
2. Your worst enemy is compounding
The faster you move, the faster mistakes multiply. Have you ever played telephone? Now imagine playing it at 20x speed.
You have to catch drift early — not eventually. The renamed Evolutionary Framework was a small change on the surface. In the next document, it became a different framework with a different purpose. Left unchecked, every document after that would have been wrong. By document five, we'd have been building on junk.
The system doesn't just flag errors. It flags early errors, before they have time to cascade.
3. Build systems for the work you repeat — not for everything
AI works well with rules. It's your execution arm. It doesn't know what to do or why — that's your job. But when you give it a clear system and a clear foundation, it becomes very good at staying inside the lines.
The key is knowing when to build a system and when not to.
Planning a one-off vacation and you forgot to tell AI you want to fly from California? Don't build a system — just be clearer next time. But if your team researches customers and builds sales decks every single day? Build the system. The ROI compounds just like the mistakes do.
4. Great systems are invisible
This is the one people don't expect: if you build this system well and your teammates say "thank you" when you roll it out, you probably did it wrong.
Genuinely great systems go completely unnoticed. People are quick to complain when something breaks, and slow to notice when things just… work. A really good alignment system means no one ever hits the moment where AI has quietly renamed your strategy.
But here's what that invisibility requires: you have to earn the trust of the people using it before they see it in action. Demonstrate the value to them individually — in their daily work, in their context — before asking for behavioral change. That's not optional. That's the whole game.
It's also why catching that renamed Evolutionary Framework mattered so much. Our first two steps for every customer are: earn trust, demonstrate value. If AI had silently redefined those — if that drift had gone undetected — we'd have been coaching people toward a framework that no longer meant what we thought it did.
The bigger point
Every team that starts working seriously with AI hits this wall. The tools are fast. The output is confident. And the mistakes compound quietly, in ways that are invisible until they're expensive.
The fix isn't to slow down. It's to build the right foundation — one source of truth, clear concepts, a system that catches drift before it spreads.
We built this at Lucy Labs because we needed it ourselves. Now we help other teams build it too.
If your team is moving fast with AI but you're starting to feel the drift — misaligned terms, inconsistent definitions, outputs that are confident but slightly wrong — that's the problem this solves.
Let's talk about building it for your team →