What Separates Good AI Dev Teams From Great Ones
February 19, 2026
Steve Yegge published something this month that engineering teams everywhere recognized themselves in. He called it The AI Vampire. His observation is hard to argue with: the developers leaning on AI coding tools most heavily are also, somehow, the most burned out. Drowning in review queues. Running faster just to stay in place.
He's right that it's happening. What he doesn't answer is why some teams escape it entirely.
Because every engineering team has access to the same tools right now. Cursor. Claude Code. Copilot. The models are commodities. The tooling is table stakes. And yet some teams are pulling ahead while others are buried. The gap isn't closing. It's widening.
The difference isn't which AI tools you're using. It's what happens between the idea and the first prompt.
Where Most Teams Are Losing Time They Don't Know They're Losing

The typical AI coding workflow at most teams looks something like this. Someone has an idea. It gets written up loosely in Notion or dropped into a Jira ticket. An engineer reads it, fills in the gaps with their own interpretation, and prompts an AI agent to build it. The agent fills in whatever gaps the engineer left. The result goes into review. A senior engineer figures out what the agent intended, finds where it guessed wrong, and sends it back. This happens two or three more times before anything ships.
Every handoff in that chain drops context. The original idea loses fidelity in the writeup. The writeup loses fidelity in the ticket. The ticket loses fidelity in the prompt. And the agent, working from a prompt that's already three degrees removed from the original intent, fills every remaining gap with a statistical guess.
The code that comes out looks reasonable. It compiles. It kind of does the thing. But it isn't quite right, and figuring out exactly how it isn't quite right costs your most experienced engineers the most time.
This is where the review burden that Yegge describes actually comes from. Not from AI being bad at writing code. From the gap between what someone meant and what the agent was actually told.
What Great Teams Do Differently
The teams getting the most out of AI coding have figured out that the leverage isn't in the IDE. It's in what happens before the IDE opens.
They treat the handoff from idea to agent as the most important moment in the development process, not an afterthought. They make sure the agent has everything it needs to make good decisions before any generation starts. That means real acceptance criteria, not implied ones. Explicit constraints. Edge cases covered. And critically, context grounded in how their actual codebase works, not just what they want built in the abstract.
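To make "everything it needs" concrete, here is a minimal Python sketch of what an agent-ready spec might look like as structured data rather than loose prose. The `Spec` class, its field names, and the rate-limiting example are all invented for illustration; they are not Devplan's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """A hypothetical agent-ready spec: every gap an agent would
    otherwise fill with a guess is written down before generation."""
    goal: str
    acceptance_criteria: list = field(default_factory=list)  # explicit, testable outcomes
    constraints: list = field(default_factory=list)          # hard rules the agent must not violate
    edge_cases: list = field(default_factory=list)           # behavior at the boundaries
    codebase_context: list = field(default_factory=list)     # how this system actually works

    def to_prompt(self) -> str:
        """Render the spec as one prompt, so nothing is left implied."""
        sections = [
            ("Goal", [self.goal]),
            ("Acceptance criteria", self.acceptance_criteria),
            ("Constraints", self.constraints),
            ("Edge cases", self.edge_cases),
            ("Codebase context", self.codebase_context),
        ]
        lines = []
        for title, items in sections:
            lines.append(f"## {title}")
            lines.extend(f"- {item}" for item in items)
        return "\n".join(lines)

spec = Spec(
    goal="Add rate limiting to the public API",
    acceptance_criteria=["429 returned after 100 requests/min per API key"],
    constraints=["Use the existing Redis client; no new infrastructure"],
    edge_cases=["Requests with a missing or unknown API key"],
    codebase_context=["Middleware lives in api/middleware/, registered in app.py"],
)
print(spec.to_prompt())
```

The point of the structure is the review side: when criteria are written down like this, a reviewer checks the diff against a list instead of reverse-engineering what the agent intended.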
When an agent has that kind of input, something changes. Output lands close to spec on the first pass. Review becomes a quick check against written criteria rather than an archaeology dig into what the AI intended. Senior engineers spend their time on work that actually needs their judgment. The team ships faster, not because generation got quicker, but because the code that gets generated is closer to what they wanted.
The challenge for most teams is that building that kind of spec is slow when you do it manually. Pulling in codebase context takes time. Covering edge cases requires thinking through the system deeply. Writing acceptance criteria precisely enough for an agent to use them requires a different kind of discipline than writing for a human developer.
Most teams skip it because it feels like overhead. That's the mistake.
From Rough Idea to Agent-Ready Spec, Without Leaving Your Browser
This is where Devplan lives. Above the IDE, between the idea and the agent, in the part of the workflow most tools don't touch.
The workflow starts with the idea, however rough it is. You bring the intent; the AI helps you refine it into something an agent can actually work from. It pulls in your codebase context automatically, so the spec gets built around how your system actually works, not generic best practice. Attach supporting materials (designs, notes, research) and that context gets folded in too.
What comes out isn't just a cleaner doc. It's a spec that knows your architecture, covers the edge cases, and gives your agents the constraints they need to make good decisions without guessing. When you're ready, hit run and orchestrate directly from the browser. No context switching. No copying things between tools and hoping nothing drops. The intent that went into the spec is the same intent that goes into the execution, because they live in the same place.
The Cost of Not Having This Layer
The risk of running AI coding workflows without a spec layer isn't obvious at first. It shows up gradually.
Review cycles run longer than they should. Senior engineers who should be doing high-leverage work spend more and more time in review queues reconstructing AI intent. Junior developers don't have a clear place to contribute and start becoming passive observers. Technical debt accumulates quietly because agents are guessing at gaps that were never specified. Shipping velocity plateaus or drops even as generation speed keeps climbing.
None of that announces itself as a crisis. It just makes everything incrementally slower and harder. And while it's happening, the teams who built the right process around AI coding are stacking advantages that compound.
That's the gap that's opening right now. In six months it will be obvious. Today it's still closeable.
What the Two Workflows Actually Look Like
| | Without a Spec Layer | Spec-Driven with Devplan |
| --- | --- | --- |
| Where ideas live | Notion, Jira, Slack threads | One place, idea through execution |
| Codebase context | In someone's head | Pulled in automatically |
| Spec quality | Vague, gaps filled by agent | Structured, grounded in your architecture |
| First-pass output | Technically OK, contextually wrong | Aligned with intent |
| Review cycle | Multiple rounds, senior eng pulled in | Single pass, criteria-based check |
| Cognitive load | Falls on senior reviewer | Falls on spec writer, any level |
| Technical debt | High, gaps filled with guesses | Low, gaps specified before generation |
| Execution handoff | Context dropped between tools | Spec and execution in the same place |
| Six-month trajectory | Slower and harder | Faster and cleaner |
The teams in the right column aren't using better AI models. They changed what happens in the space between having an idea and opening their coding tools.
The Move Great Teams Are Making Now
AI coding is the future of software development. That's not in question. The question is which teams are building the process that makes it actually work at scale, and which teams are going to spend the next year cleaning up the debt from not having one.
The teams pulling ahead now aren't working harder or spending more on tooling. They figured out that the leverage in AI development isn't in the generation. It's in what you hand the agent before it starts. Codebase context. Real acceptance criteria. Constraints that reflect how your system actually works. An execution environment where the intent that went into the spec is the same intent that comes out the other side.
That's the process that turns good AI dev teams into great ones. Which is what we built Devplan for. Try it here.
