TDD Considered Harmful?
Warning: Snark alert.
I read something recently – it might have been on social media, but I don’t recall exactly where – a story about how test-driven development (TDD) had caused a software development team to let a defect slip into their product.
The feature in question was a funds transfer operation. A customer could use the application to move funds from one account to another.
The thing is, a customer might try to transfer funds from and to the same account. The application is not supposed to allow that. Yet it was possible to do it in production.
I have a feeling I’m missing part of the story. Transferring funds from and to the same account would result in the same balance, wouldn’t it? Wasted motion, maybe, but otherwise hardly catastrophic. So maybe there’s more to the story than I’m remembering. Surely, there must be more. Maybe there was a funds transfer fee, or something.
Fortunately, the details of the defect are not the point of this post.
The person sharing the story believed the cause of the defect was TDD. You see, there was no example for that case at the unit level. Therefore, it was never checked at that level. Apparently, it wasn’t checked at the integration or functional levels, either. Therefore, TDD doesn’t work.
They didn’t “do TDD wrong,” as they were careful to follow the red-green-refactor pattern. Well, most of the time, anyway. At least, when it was necessary. No sense in going to extremes. So, it must have been TDD as such that caused the defect. That’s just basic logic, right?
I didn’t have a chance to talk to the team, so I couldn’t ask them: What methods did you use to clarify the desired functionality? Did you use Specification by Example? Did you walk through the functionality with key stakeholders in some sort of design workshop? Did you use Mob Programming? Pair Programming? The Three Amigos? A formal Backlog refinement workshop? What about Feature Mapping or Story Mapping? Did you collaboratively build a matrix of different possible inputs and the expected outputs? Did you have a professional Business Analyst review the specifications with the team?
There are lots of ways to check and double-check the requirements, both ahead of time and frequently throughout the development process. If they’ve narrowed down the cause definitively to TDD, then it goes without saying that they did at least one or two of those things, or something equivalent.
Let’s say the “requirements” neglected to mention that case. Does the team include any professional software testers? They would have noticed any missing edge cases. That’s pretty basic stuff for a tester. They tend to think of cases like that one immediately, when they first start to reason about a feature.
Let’s say there are no testing specialists on the team. Does anyone on the team use a banking application in their own lives? If so, they might have some idea of what a funds transfer is supposed to do, even absent specific “requirements” from a Product Owner or Business Analyst. They would have said, “Hey, what about…?” but apparently no one did so. So I guess no one on the team uses the banking system. They pay for everything in cash or bitcoin. Maybe that’s a Millennial thing.
It seems to me the programmers would have stumbled across that case in the normal course of designing the microtests to drive the code. Example 1: Account A to Account B, sufficient balance. Example 2: Account A to Account B, insufficient balance. Exam– “Hey, what about…?”
Pretty routine scenario for programmers, especially when practices like pairing and mobbing are used. Still pretty routine even when working solo. Edge cases sort of jump out at you when you’re designing sufficiently fine-grained microtest examples. And of course, programmers don’t often design “unit” tests so big that they can’t see the trees for the forest, right?
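To make that concrete, here’s a minimal sketch of what enumerating those microtest examples might look like, in pytest-style Python. The `Account` class and `transfer` function are hypothetical stand-ins invented for illustration, not the team’s actual code; the point is only that listing examples one at a time tends to surface the same-account case on its own.

```python
import pytest


# Hypothetical domain objects, invented purely for illustration.
class Account:
    def __init__(self, account_id, balance):
        self.account_id = account_id
        self.balance = balance


class InsufficientFunds(Exception):
    pass


class SameAccountTransfer(Exception):
    pass


def transfer(source, target, amount):
    """Move funds between accounts (a sketch, not production code)."""
    if source.account_id == target.account_id:
        raise SameAccountTransfer("Source and target must differ.")
    if source.balance < amount:
        raise InsufficientFunds("Not enough funds in the source account.")
    source.balance -= amount
    target.balance += amount


# Example 1: Account A to Account B, sufficient balance.
def test_transfer_with_sufficient_balance_moves_funds():
    a, b = Account("A", 100), Account("B", 0)
    transfer(a, b, 40)
    assert (a.balance, b.balance) == (60, 40)


# Example 2: Account A to Account B, insufficient balance.
def test_transfer_with_insufficient_balance_is_rejected():
    a, b = Account("A", 10), Account("B", 0)
    with pytest.raises(InsufficientFunds):
        transfer(a, b, 40)


# Example 3: ...and here is where “Hey, what about…?” happens.
def test_transfer_to_same_account_is_rejected():
    a = Account("A", 100)
    with pytest.raises(SameAccountTransfer):
        transfer(a, a, 40)
```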
Friends, this kind of thing is getting old. Please stop blaming your tools for the outcomes you achieve.