An Aggressive But Realistic Delivery Date?
I recently received an email asking about release planning. The sender wanted help understanding how to move ideas through the flow to create a mature backlog. The note went on to ask how to properly “estimate, prioritize and reach an aggressive but realistic delivery date”.
My immediate thought was: this is agile. Total project story points divided by team velocity yields the duration of the project, and the delivery date then depends only on when you start and how well you manage risks and dependencies. If you want an “aggressive” plan, or what I’ve come to understand as an “overcommitted” plan, you should just dust off your Gantt charts and stop pretending that you’re agile.
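To make that arithmetic concrete, here is a minimal sketch of the calculation; the backlog size, velocity, sprint length, and start date below are all hypothetical:

```python
import math
from datetime import date, timedelta

# Hypothetical inputs; real values come from your backlog and sprint history.
total_story_points = 240   # estimated size of the whole backlog
team_velocity = 30         # points the team completes per sprint
sprint_length_days = 14    # two-week sprints

# Duration in sprints, rounded up: a partial sprint still takes a full sprint.
sprints_needed = math.ceil(total_story_points / team_velocity)

start_date = date(2024, 1, 8)  # hypothetical start
delivery_date = start_date + timedelta(days=sprints_needed * sprint_length_days)

print(f"{sprints_needed} sprints, delivery on {delivery_date}")
# -> 8 sprints, delivery on 2024-04-29
```

Everything after that is risk and dependency management, not estimation.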
Before I dashed off a sharp email, I chatted with an associate and came to a different understanding of “aggressive planning”. He made the point that teams may not be aware of unused capacity, and that establishing an accurate team velocity is a “trust but verify” process: trust the current velocity, but periodically verify its accuracy. After a team establishes a sustainable and consistent delivery velocity, you should run an experiment. Increase the number of story points planned for a sprint by some amount. If the team successfully delivers the sprint, use that higher total to plan successive sprints. If the team sustains that pace, reset the team’s velocity to the new number. Then run the experiment again.
This cycle of experiments continues until the team can’t keep up. At that point you have verified the team’s velocity as the last consistently maintained pace. This final velocity is likely higher (more aggressive) than the starting number, so the project’s duration will be shorter than the one calculated with the untested velocity. But the new velocity is verified; consider the date realistic.
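Sketched in Python, the experiment loop looks something like this; the `run_sprint` callback is a hypothetical stand-in for actually running a sprint at the planned load and judging the outcome in retrospective:

```python
def verified_velocity(baseline, run_sprint, step=2):
    """Trust-but-verify loop for team velocity.

    `run_sprint(planned_points)` is a hypothetical callback: it represents
    running one sprint at the planned load and returns True only if the team
    delivered it sustainably (quality intact, no overtime).
    """
    velocity = baseline
    planned = baseline + step      # probe slightly above the current pace
    while run_sprint(planned):     # team sustained the higher load
        velocity = planned         # reset velocity to the verified number
        planned += step            # and run the experiment again
    return velocity                # last consistently maintained pace
```

With a baseline of 30 points and a team that can actually sustain 34, the loop stops at the first failed probe and returns 34 as the verified velocity.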
Comments (4)
Chris Marisic
Yikes, I would never want to work in that organization! That’s not even a death march, that’s a death marathon!
People are not machines. The internal combustion engine runs at 20% efficiency; even a nuclear reactor can barely reach 40% efficiency. You believe somehow humans can run at maximum efficiency (which is really 25%, 30% at best) for a permanent series of intervals? That’s how you get burnout, and it is the definition of UNSUSTAINABLE.
That “unused capacity” is what allows the ebb and flow of software development to succeed on time. Some aspects overrun, some finish ahead of schedule; it balances out. That capacity is actually called float/slack: https://en.wikipedia.org/wiki/Float_(project_management)
Consuming float is the only reason for project failure (not on time, not on budget). Choosing to WILLFULLY consume float is the single biggest risk you can ever bring upon a project that is correctly staffed to start with. The biggest risk possible is a project that is understaffed AND you steal the float; that is a recipe for guaranteed failure.
Chris, thanks for your interest and observations. I agree with your comment that slack and float are necessary elements of a sustainable pace. I often refer to those concepts as “tolerance”, the plus or minus next to a dimension on a mechanical drawing. Those measures accommodate variations in the manufacturing process, in the same way that uncommitted capacity accommodates variations in the pace of development.
But here I’m speaking to the growth in capacity that results from the natural productivity improvements an Agile team should accrue as it advances in its disciplines and skills. Because these gains occur gradually over time, the team may not be aware that it can achieve a larger success. This technique is a means to test for the team’s true capacity. To provide stronger guidance: add one or two stories to a few sprints. If that consumes the team and proves unsustainable, roll back to the original planning capacity.
Chris Marisic
So basically take what the team is willing to commit to and pile on more stuff?
People don’t want to be associated with failure. How are they going to meet your arbitrary amount of additional work on top of what they already stated was their capacity? They’re going to cut corners. They’re going to test less. They’re going to be more willing to accept a suboptimal experience. “It sucks” will be acceptable as long as “it works”.
That’s how developers deal with overcapacity. If you already have a system that is severely lacking in design and planning, the very last thing you need is to ensure that what does exist is of the lowest possible quality.
John Mason
Chris,
Sorry to take so long to get back to you.
As I understand your point, you believe that all teams always operate at optimum capacity, so directing them to test their assumptions about that capacity is asking for burnout, reduced quality, dissatisfaction, etc.
So I have to ask: how do these self-optimizing teams discover and then use the increases in capacity that result from the continual process improvement that is one of the central agile disciplines?
I suspect they apply the suggested approach. They add a story or two to a sprint and gauge the result. If they are successful (same quality, no overtime, all tests written, etc.), they adopt the new velocity; if not, they drop back.