Using Devplan in Practice

August 7, 2025



This is a walkthrough of how we use Devplan in real day-to-day development to achieve the results presented here. Right now, more than 90% of the code we ship runs through Devplan, making it the foundation of our ability to execute fast and get the true benefits of AI-enabled development.


Keep in mind the goals here are to create a repeatable, scalable system where AI can:

  • Get to a working solution independently

  • Execute tasks in parallel

  • Require minimal human oversight


In our experience as senior engineers, without Devplan the overhead of managing AI-assisted workflows can cancel out the gains; with it, the benefits of AI coding are tremendous.


1. Define Product & Technical Specs with Devplan Agents

Every project starts with Devplan’s agents helping to define requirements. They work with you to ask smart clarifying questions, flag ambiguity, and scope the work properly—grounded in knowledge of your codebase, past projects, and your company’s structure and goals.


This step is deceptively important. It seems basic, but the quality of the questions the AI asks here is critical. It often surfaces misalignments or assumptions that would cause a coding agent to fail or require multiple follow-ups. Without this clarity, you risk vague specs, restarts, and messy outcomes.


By the end of this step, you’ve got a clean, scoped project with resolved ambiguity. You can archive it to your backlog or move straight to execution.


2. Break the Project Down into Right-Sized Features

Devplan automatically breaks each scoped project into individual features or user stories. This is where AI prompts are generated—one per feature.


Your job here is light. Mostly you're validating:

  • Are the features correctly sized (ideally half-day to 5-day chunks)?

  • Are there too many or too few?

  • Do the acceptance criteria make sense?


Thanks to the planning in Step 1, this typically takes less than two minutes. Most of the ambiguity has already been resolved, and this step simply formalizes the work into bite-sized units that are ready to ship.


3. Run Prompts into Your AI IDE (Manual vs. Devplan CLI)

Once features and prompts are ready, it’s time to run them inside your IDE of choice—Claude, Cursor, Junie, etc. This is where execution happens, and also where things can get inefficient quickly.


Approach 1: Manual Execution (Without Devplan CLI)

Here’s what the manual process looks like, per feature:

  1. Download the generated prompt and format it for your IDE (CLAUDE.md, rules.json, guidelines.md, etc.).

  2. Clone your git repository or create a new worktree—especially important if you want to implement features in parallel.

  3. Open your IDE manually in the correct folder with the right context.

  4. Prompt the AI to begin coding the feature.
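Under stated assumptions (a Cursor-style rules folder; all paths, branch names, and the prompt contents below are illustrative), the manual steps above can be sketched as:

```shell
# Sketch of the manual per-feature setup described above.
# The temp directory stands in for your real repository.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"
# Step 2: create an isolated worktree so this feature can run in
# parallel with others without clobbering your main checkout.
git -C "$repo" worktree add -q "$repo-feature-x" -b feature-x
# Step 1: place the downloaded prompt where the IDE expects it
# (Cursor-style rules layout shown; adjust for your IDE).
mkdir -p "$repo-feature-x/.cursor/rules"
printf 'Implement feature X\n' > "$repo-feature-x/.cursor/rules/feature-x.md"
# Steps 3-4: open the IDE in that folder and prompt the agent, e.g.:
#   cursor "$repo-feature-x"
```

Multiplied across 6–10 features a day, each of these small steps is a place to forget something.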


Doing this once isn’t a big deal. But doing it 6–10 times per day becomes a drag. It’s repetitive, error-prone, and easy to put off, especially if you forget to clean up worktrees or misplace prompts.


Approach 2 (recommended): Automated Execution with Devplan CLI

With Devplan CLI, all of that overhead disappears. You can spin up a feature-ready workspace with one command:

devplan clone -c XX -p YYYY -y -i cursor -f ZZZZ


This one-liner:

  • Creates a scoped cloned folder for the feature

  • Launches your IDE in the correct context

  • Automatically references the correct prompt file


After that, you just tell your AI agent: “Implement current feature.”


Before the CLI, we lost real time and energy just getting into a feature, switching between terminal, prompts, and IDEs. Parallel execution felt clunky, and small errors like forgetting a worktree setup often led to broken states or rework. With the CLI, feature execution is fast, consistent, and repeatable.


More importantly, this automation is what makes scale possible. We can run multiple features in parallel, and delegate reliably to AI.
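As a sketch of what that parallelism can look like (the feature IDs below are hypothetical placeholders, and the helper only echoes the command it would run; swap the printf for the real `devplan` binary from the one-liner above to actually launch workspaces):

```shell
# Hypothetical sketch: kicking off several features in parallel.
# F101..F103 are placeholder feature IDs.
run_feature() {
  # Echoes the CLI invocation instead of running it, for illustration.
  printf 'devplan clone -c XX -p YYYY -y -i cursor -f %s\n' "$1"
}
for feature in F101 F102 F103; do
  run_feature "$feature" &   # each feature gets its own scoped workspace
done
wait
```

Each invocation is independent, so nothing stops you from running them concurrently and telling each agent to implement its current feature.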


4. Review and Polish the Output

This is the last human step before shipping. But the amount of work here drops dramatically when the planning and prompting are done well, which they will be if you followed the earlier steps.


Once the AI has written the code:

  • Manually review the output

  • Fix issues or edge cases

  • Test to ensure it meets your standards


Without this system, we wouldn’t be able to run or complete nearly as many AI-generated features per day. Devplan is what turns isolated AI prompts into a real production workflow.


We estimate that using Devplan makes our AI-assisted development planning process 8–10x faster than manually managing specs, prompts, repos, and execution, and makes the overall coding execution 2–3x faster. But more importantly, it makes the entire workflow scalable.



Requirements Adjustments


There is a fairly common flow worth covering separately, one that also highlights the power of using proper tools. When an AI coding agent goes sideways and you need to course-correct it, it is often easier to restart from scratch with corrected requirements. The flow described above lets you do a full restart in a matter of minutes, if not seconds, depending on how complex the adjustment is.


The way to do it: go back to Step 1 and update the PRD (if the change is product-related) or the tech design doc (if it is technical), working with the AI agents to fold your new ask into the requirements. Then go to the Build Plan (Step 2) and regenerate features and prompts with a single click. Finally, use the CLI to restart with the updated requirements. That’s it. It usually takes under two minutes from realizing the AI did something you want to adjust to the AI restarting with the corrected prompt.


For example, I once worked on implementing a remote MCP server and my AI IDE decided not to use an SDK at all. When I noticed, I updated the Technical Requirements with a request to use the Python SDK for MCP, regenerated the prompts, and restarted. It took less than a minute.


Another important reason to centralize requirements: every change will persist, even if you blow up the repo or switch to a different AI IDE. For example, you could edit a requirement directly in a rule file, but that change won’t carry over to the next feature. And if you try a different AI IDE, you’ll likely need to manually migrate those changes or risk losing them altogether if you roll back the repo. (That’s happened to me more than once before I switched to this centralized flow.)


Conclusion


There are a lot of people and articles (e.g. this) suggesting that AI may be a net loss for productivity. And indeed, if used without discipline or good tooling, that may be true. Good professional engineers are already quite efficient, and for them it is critical to have efficient processes and tools that minimize overhead while empowering AI to take big parts of a task to near-completion. Every minute of overhead and every extra context switch matters. It will take time to figure out how to work with AI coding at scale, but when done well, AI can make engineers more productive and the job itself more fun.

Build better products faster.

We’re on a mission to transform how ambitious teams turn vision into software faster than ever before.

Maximum business efficiency with minimal effort.

Go end-to-end from quick queries to deep-dive analyses to beautiful interactive data apps – all in one collaborative, AI-powered workspace.
