To be honest, the official superpowers/brainstorming skill already does TDD so well, I don't see that much of a need for this. TDD is definitely the way to go with agentic development.
How? I saw superpowers/brainstorming but never saw TDD code produced
It’s supposed to do this, but I’ve found it doesn’t always do it
Just tell it to use TDD
There is another skill for TDD. You can activate it manually or tell the harness to
Two questions
1) Do you not feel self-conscious or weird about calling this "EvanFlow"? Seems like a lot of people these days are naming their AI tools/skills/whatever after themselves which seems self-absorbed. Either that or they hope that if their thing takes off like OpenClaw did then they'll grab the fame that comes along with it.
2) Why does your TDD flow miss the refactor step of TDD?
I feel like 1 is a self-correcting problem. If this goes nowhere it will soon be forgotten.
I can think of one example that did go somewhere: Linux.
Feels like a bonus to me.
Linus did not name it Linux himself: https://en.wikipedia.org/wiki/Linux#Naming
He merely laundered it through a coworker.
ReiserFS is another one that comes to mind.
And djb (the djb) also wrote djbdns.
There are plenty of examples, usually when it coincides with someone’s first project.
TanStack was started by a guy named Tanner
Debian is a portmanteau of Debra (Ian's girlfriend) and Ian.
I don't mind it. It's just a name
Debian is an even better example
Re: 1, he should have called it Daughter.
No Code, surely?
I initially thought it was a pun on Pearl Jam's classic "Even Flow", then I read your comment and noticed the username... Sad.
I was really hoping this was something I could find on CPAN from the author username perlJam.
"Evenflo is a hundred year old infant feeding brand." Probably named to market its baby bottles and accessories.
Everybody who grew up listening to Pearl Jam had seen or used an Evenflo pacifier, baby bottle, or car seat. That's one reason the song already sounded so familiar.
Let the guy have something. Free and open source developers work tirelessly for free for years supporting software that billion dollar companies use to make huge profits.
We don't question when scientists name stuff after themselves so why question this? At least he gets some recognition for his work.
1) Do you feel weird asking a question like this? What constructive benefit does it add to any dialogue?
Sometimes it’s helpful to ask oneself what’s the benefit of an answer. I cannot think of any for your question and the way you worded it is a bit cringe. People name things after themselves all the time. It does not matter in the slightest.
1): you have things backwards, the EvanFlow is not something i came up with but rather something i discovered similar to the dao. i am named Evan after the EvanFlow not the other way around.
2): you're right and dmitry called this out below too. shipped a fix that puts REFACTOR per-cycle, instead of being a deferred "after all tests pass" step. the old step 4 was iterate-shaped not TDD-shaped.
Jesus mate, talk about loaded questions.
“Who are you? How dare you create anything”
EvanFlow - thoughts arrive like butterflies?
Oh, he don't know, so he chases them away
Oooohhhh
Someday soon he'll begin his life again
Seeeethinnggg tests failing not complete... again
If you’re just looking for the TDD part - https://github.com/nizos/tdd-guard - is the only project I’ve come across that actually enforces it with hooks and blocks edits rather than relying on a prompt that gets context rotted away.
Creator of TDD Guard here, thanks for the mention!
TDD Guard was built when Claude Code was the only one to offer hooks. Plugins didn't exist and the models were weaker, so the validation context and instructions took more work to get right. This is why it ended up requiring test reporters for different languages.
I have started a new project that does the same TDD enforcement, also through hooks, but without reporters. It works with any test runner and is vendor-agnostic: it works with Claude Code, Codex, and GitHub Copilot. The validator also sees recent session history, which helps it handle cases like refactoring better.
The TDD instructions are still pretty basic compared to TDD Guard's, which have been dogfooded for a year. One thing I noticed while testing across agents is that some follow TDD a lot better than others; Codex struggled the most with the basic instructions.
Feedback welcome:
https://github.com/nizos/conduct
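For readers who haven't used hooks: the sketch below shows the general shape of the mechanism both projects build on, not either project's actual logic. It assumes Claude Code's PreToolUse hook contract (tool call details arrive as JSON on stdin; exiting with code 2 blocks the call and surfaces stderr to the model) plus a hypothetical `.last_test_run` marker file written by a test wrapper.

```python
#!/usr/bin/env python3
# Toy PreToolUse hook: refuse implementation edits unless the most recent
# test run was red. Illustrative only; .last_test_run and the "red" marker
# are assumptions, not part of TDD Guard or conduct.
import json
import sys
from pathlib import Path

payload = json.load(sys.stdin)            # Claude Code sends tool call info here
tool = payload.get("tool_name", "")
target = payload.get("tool_input", {}).get("file_path", "")

# Gate only implementation files; writing tests is always allowed.
impl_edit = tool in ("Edit", "Write") and target.endswith(".py") and "test" not in target

marker = Path(".last_test_run")           # hypothetical file a test wrapper writes
saw_red = marker.exists() and marker.read_text().strip() == "red"

if impl_edit and not saw_red:
    print("Blocked: write a failing test before touching the implementation.",
          file=sys.stderr)
    sys.exit(2)                           # exit 2 blocks the tool call in Claude Code
sys.exit(0)                               # anything else: allow the edit
```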
Built this as an opinionated Claude Code development flow based on evidence-based practices and what has been working for me while developing professional code.
EvanFlow is a single TDD-driven loop. Say "let's evanflow this" and it walks brainstorm → plan → execute → tdd → iterate → STOP. Real checkpoints at design and plan approval. Never auto-commits, never auto-stages, never proposes integration - every git op is your call.
The three things that actually changed how I work:
1. Vertical-slice TDD. One failing test → minimal impl → next test. Watch each test fail before writing the impl that passes it; see the sketch after this list. (Sounds obvious. Almost no agent does it by default. ~62% of LLM-generated test assertions are wrong per HumanEval research, so test-writing discipline matters even more than implementation discipline.)
2. Embedded grilling at decision points. Before locking a plan: what breaks if a user does X? What's the rollback? What's explicitly out of scope? Catches design flaws while they're still cheap.
3. Iterate-until-clean (hard cap of 5 rounds). Re-read the diff for dead code, naming, the deletion test, and assertion correctness, plus a Five Failure Modes pass (hallucinated actions, scope creep, cascading errors, context loss, tool misuse). For UI: screenshot via headless Chromium.
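A minimal sketch of one such cycle with pytest; slugify() is a made-up example, not something from EvanFlow:

```python
# --- cycle 1, RED: write the test first and run it; it must fail ---
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# --- cycle 1, GREEN: the minimal impl that passes, nothing speculative ---
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# --- cycle 2, RED: the next failing test drives the next sliver of behavior ---
def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"
# ...and only after watching this one fail does slugify() grow punctuation handling.
```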
For bigger plans with 3+ independent units sharing types, it forks into a parallel coder/overseer orchestration. Integration tests at touchpoints ARE the cohesion contract.
Three install paths: Claude Code plugin marketplace, npx skills add, manual copy. MIT.
I’ve thought of going down the TDD model for LLMs as a way of providing constraints on their behavior. I would think that “vertical slice” TDD would encourage the LLM to start tailoring the tests to the implementation rather than establishing the invariants up front, though. I was considering “horizontal” TDD to force the agent to implement constraints before coding to them.
yeah went back and forth on exactly this trade-off, you're right that vertical can produce tests tailored to the impl. horizontal forces invariants up front but the failure mode flips: you're tailoring tests to the architecture you imagined before any feedback from working code. so it's invariants-vs-behaviors, both have a tailoring failure mode just on different axes. compromise i landed on: vertical + an explicit anti-tailoring grill check at each cycle. definitely gonna tweak it more as i keep refining.
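To make the two shapes concrete, here is a hedged sketch of the contrast. slugify() is the same hypothetical helper as above; Hypothesis is a real property-testing library, though the invariant chosen here is illustrative, not a recommended set.

```python
from hypothesis import given, strategies as st

def slugify(text: str) -> str:              # minimal impl from the earlier sketch
    return text.lower().replace(" ", "-")

# Vertical/behavior test: pins one concrete example. Cheap feedback, but an
# agent can tailor it to whatever the implementation already happens to do.
def test_slugify_example():
    assert slugify("Hello World") == "hello-world"

# Horizontal/invariant test: states a property before any implementation
# feedback. Harder to tailor, but you can just as easily pin the wrong
# invariant from an architecture you only imagined.
@given(st.text())
def test_slugify_is_idempotent(s):
    assert slugify(slugify(s)) == slugify(s)
```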
What if you don’t ask for code yet. Prompt only for tests with maybe a minimal interface context that tests can code against?
Please don’t post AI-generated comments :(
Just write it yourself. I promise it’s worth it
He's even being cheeky by intentionally replacing the em-dash by a regular dash, haha
It's quite well done really, but the cadence...
No x. No y. No z. Just abc.
Its like nails on a chalkboard...
sometimes you gotta hit em with the ol' linkedin one two hehe
Curious, in the repo you mention
> Several rules come from 2025-2026 industry research on agentic coding failure modes
What are some of the papers you read?
With no disrespect intended because this is also how I would do it (but I wouldn't publish and name it after myself!) - they didn't read the research. They had the AI that actually created this do that for them.
fair to call out but half true. i did send claude off to look up specific stats on failure modes (62% assertion correctness, etc), but the design decisions came from my own reading of anthropic's reports, the columbia daplab paper i cited, and a mix of matt pocock's lectures + my own anecdotal experience running this loop on real projects.
> execute → tdd
How are these separate steps?
TDD is how you execute, not something you tack on afterwards.
yeah that is a little confusing, tdd is actually a substep of execution. it was listed separately in the diagram because not every task uses TDD (config, generated types, and the like skip it), so the skill is invoked conditionally during execution rather than always. but the arrow notation made it look sequential when it's actually nested. updated the README diagram to show that. thanks for the nudge.
The refactor-per-cycle fix lands in the right place. The harder problem shows up when EvanFlow forks into parallel coder/overseer mode: unit tests pass per agent, but the seams break at merge. Your note that "integration tests at touchpoints ARE the cohesion contract" is exactly right, but enforcement is what makes it stick. Each parallel branch needs its own failing test that can't be masked by another branch's green run. Worktree isolation handles this cleanly since each agent's environment is separate. Without that, vertical-slice TDD in parallel collapses to "tests pass somewhere."
On jtfrench's unanswered question about dumb zone evasion: context length is what drives the drift. Agents go off-track when a loop runs long enough that early design context falls out. Resetting at each RED-GREEN-REFACTOR boundary keeps cycles short enough to avoid it. The hard cap of 5 iterate rounds is the same instinct applied at the macro level.
We ran into the parallel integration seam problem building tonone, a 23-agent Claude Code plugin where each domain agent works in its own worktree and integration tests are the merge contract.
https://github.com/tonone-ai/tonone if curious.
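A sketch of the worktree-isolation idea, assuming a pytest suite and hypothetical branch names; this is the general pattern, not tonone's implementation:

```python
import subprocess

BRANCHES = ["agent/auth", "agent/billing"]   # hypothetical parallel agent branches

def green_in_isolation(branch: str) -> bool:
    tree = f"../wt-{branch.replace('/', '-')}"
    # Each branch gets a throwaway worktree: its own files, shared git history,
    # so a sibling's passing suite can't mask this branch's failure.
    subprocess.run(["git", "worktree", "add", "--force", tree, branch], check=True)
    try:
        return subprocess.run(["pytest"], cwd=tree).returncode == 0
    finally:
        subprocess.run(["git", "worktree", "remove", "--force", tree], check=True)

# Merge is only on the table once every branch is green on its own;
# integration tests at the seams then run against the merged tree.
if all(green_in_isolation(b) for b in BRANCHES):
    print("all branches green in isolation; run seam integration tests next")
```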
The refactor step is the silent casualty in AI-assisted TDD. Once the test is green, Claude optimizes for moving to the next test, not for cleaning up the impl that just passed. An "iterate-until-clean" pass at the end is a different thing: you're refactoring cold code, not refactoring with a freshly-written test as the safety net.
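The difference is easy to show in miniature. This is a generic pytest sketch (word_count is a made-up example, not anything from EvanFlow):

```python
# GREEN: the first implementation that made the test pass.
def word_count_first_pass(text: str) -> int:
    n = 0
    for chunk in text.split(" "):
        if chunk != "":
            n += 1
    return n

# REFACTOR, same cycle: simplify immediately and re-run the SAME test, so the
# freshly exercised assertion guards the cleanup. Deferring this to a later
# "iterate" pass means refactoring cold code with no cycle-local safety net.
def word_count(text: str) -> int:
    return len(text.split())

def test_word_count():
    # The test is unchanged across GREEN and REFACTOR; only the impl moves.
    assert word_count("the  quick brown fox") == 4
```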
When I first used agentic coding I was already doing strict TDD and I just tried using it for the refactor step.
It sucked so hard I thought the idea of agentic coding was just a joke. I've tried it periodically and it literally never stopped sucking.
I figure if it can't do that part, it isn't worth using it for any part.
Ever since then, whenever people tell me it's gotten better, I've tried it out and nope, still sucks.
I still get gaslit about how well it works by people who just discovered TDD, though, and watch them get impressed as it powers through CRUD boilerplate, blissfully unaware that boilerplate spew is an antipattern.
mmm good point! just shipped a fix that puts RED → GREEN → REFACTOR per cycle with the fresh test as safety net just like beck intended. macro/cross-cycle refactor lives in iterate now as its own separate thing so the two don't conflate. thanks for the catch : )
superpowers/brainstorming is doing TDD as well.
How does this handle “dumb zone” evasion while looping?
... thoughts arrive like butterflies
Oh he don't know, so he chases them away
Oh someday yet, he'll begin his life again
Life again, life again
https://www.evenflo.com/
TDD in 2026? Besides, TDD's main benefit is to come up with a decent architecture for your system… LLMs can already do that if instructed. I don't see the point of TDD
I've always been hesitant to prescribe TDD to _everything_ until agentic coding agents came along. TDD is a great way to keep them on track.