Your AI Coding Agent Isn't the Problem. Your Instructions Are.
Most engineering teams blame the model when AI goes wrong. Wrong output, hallucinated code, a result that looks right but doesn't do what anyone asked for. The assumption is that the technology fell short.
Here's the uncomfortable truth: the technology is usually fine. The instructions aren't.
That's the core argument behind Liatrio's Spec-Driven Development (SDD) workflow, presented by Robert Kelly, VP of Innovation, and Damien Storm, Lead AI Enablement Engineer. Their thesis: AI coding agents don't fail because the model is bad. They fail because the input is bad. And no model, however powerful, can build the right thing from the wrong brief.
Speed Without Direction Is Just Fast Failure
AI amplifies everything you give it. Give it clarity and it builds fast and accurately. Give it ambiguity and it fills the gaps from its training data, at AI speed. By the time you notice the problem, you've got a codebase that's plausible, confident, and completely wrong.
Underneath most of these failures is a concept called context rot. Think of the model's context window like a workbench: it's a finite surface area with tools and documents spread across it. The longer a session runs, the more critical information falls off the edge. Guidelines stop being followed. Architectural decisions get ignored. Even models advertising million-token windows start degrading in practice after around 100,000 tokens. More isn't always better. Deliberately managing what's in the window is what separates reliable AI workflows from frustrating ones.
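The workbench idea can be made concrete. The sketch below is an illustrative context-budget trimmer, not Liatrio's implementation: it pins critical context (specs, architectural decisions) so it can never fall off the edge, and drops the oldest unpinned history once a conservative token budget is exceeded. Token counts are approximated by word count; a real agent would use the model's tokenizer.

```python
EFFECTIVE_BUDGET = 100_000  # degradation often starts well before the advertised limit


def approx_tokens(text: str) -> int:
    # Rough stand-in for a real tokenizer.
    return len(text.split())


def trim_context(messages, budget=EFFECTIVE_BUDGET):
    """Return the messages that fit within the budget.

    Pinned messages are always kept; unpinned history is dropped
    oldest-first once the budget is exceeded. Each message is a dict:
    {"text": str, "pinned": bool}.
    """
    pinned = [m for m in messages if m["pinned"]]
    unpinned = [m for m in messages if not m["pinned"]]
    used = sum(approx_tokens(m["text"]) for m in pinned)

    kept = []
    # Walk newest-first so the most recent exchanges survive.
    for m in reversed(unpinned):
        cost = approx_tokens(m["text"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    kept.reverse()
    return pinned + kept
```

The design choice worth noting is the explicit budget below the advertised limit: managing the window deliberately, rather than filling it, is the whole point.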
What SDD Does Differently
Existing tools like GitHub's Spec Kit and AWS's Kiro move in the right direction, but they tend to lock teams into opinionated stacks that don't adapt well to enterprise environments. Liatrio's SDD workflow is built to be transparent and portable: four markdown prompts that any team can read, customize, and plug into their existing process.
- Spec: You bring a right-sized piece of work and the AI generates a detailed plan, asking clarifying questions until the direction is locked.
- Tasks: The spec becomes a structured breakdown of work with proof artifacts the AI must produce to verify success.
- Build: Only now does the AI write code, with verification loops keeping it on track.
- Verify: Everything traces back to the original spec.
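The four phases above can be sketched as a data flow. The names and fields below are illustrative, not Liatrio's actual prompt schema; the point is the structural invariant the workflow enforces: every task carries a proof artifact, and verification means checking each artifact back against the spec.

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    description: str
    proof_artifact: str               # e.g. a test the AI must produce to verify success
    artifacts_produced: set = field(default_factory=set)

    def is_proven(self) -> bool:
        return self.proof_artifact in self.artifacts_produced


@dataclass
class Spec:
    goal: str
    tasks: list = field(default_factory=list)


def verify(spec: Spec) -> list:
    """Return the tasks whose required proof artifact was never produced."""
    return [t.description for t in spec.tasks if not t.is_proven()]


# Usage: a build that produced only one of two required artifacts.
spec = Spec(goal="Add rate limiting", tasks=[
    Task("Limit requests per client", "test_rate_limit_passes"),
    Task("Return 429 on breach", "test_429_response"),
])
spec.tasks[0].artifacts_produced.add("test_rate_limit_passes")
print(verify(spec))  # ['Return 429 on breach']
```

A non-empty result from `verify` is the signal to loop back to the build phase rather than ship, which is what keeps everything traceable to the original spec.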
Before any of this, there's a Phase Zero: making the repo AI-ready. Documentation, guardrails, pre-commit hooks, test-driven development. Good developer experience benefits the AI too. Liatrio calls this engineering for agent experience, and it's fast becoming a discipline in its own right.
The Takeaway
The question was never whether AI can code faster than humans. It can. The real question is whether that speed is pointed somewhere useful. Liatrio's answer is structured, reviewable, and transparent, and it starts with a simple shift in where you put the effort.
Related Resources