This is the second article in the mini-series “Why Do We Need Those Estimates”.

One of the reasons teams and managers need estimates is to know how far they are in the project and when it will be finished. The reasoning is simple:

  • We know how far we are by dividing total elapsed time by total estimated time.
  • We know when we will be finished by adding up all the remaining estimates.
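The arithmetic behind these two answers is simple enough to sketch. A minimal example in Python, with made-up numbers (all figures here are hypothetical):

```python
# Hypothetical numbers: 200 hours estimated in total, 120 hours elapsed.
total_estimated = 200.0   # sum of all estimates, in hours
total_elapsed = 120.0     # hours spent so far
remaining = [8.0, 16.0, 24.0, 32.0]  # estimates of the unfinished tasks

progress = total_elapsed / total_estimated   # "how far are we?"
time_to_finish = sum(remaining)              # "when will we be done?"

print(f"progress: {progress:.0%}")        # progress: 60%
print(f"remaining: {time_to_finish} h")   # remaining: 80.0 h
```

The rest of the article is about why these two innocent-looking lines of arithmetic are much less trustworthy than they appear.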

Indirect, Abstract, Coarse-grained Estimates

Answering those questions can work pretty well with indirect, abstract, coarse-grained estimates. Like story points on rather large-grained stories. Or what Kanban prefers: Different classes of work with SLAs attached to them.

Those estimation techniques are often designed so that your estimation errors can cancel each other out. Story points, for instance, deliberately give you a very vague value. Every value is in fact a range of estimates: 5 story points really means “bigger than 3 but smaller than 8”. So, you might have one 5 that is larger than expected and one that is smaller, but both are still in the range.
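To see how such errors can cancel out, here is a small simulation (my own illustration, not data from the article). It assumes, for the sake of the sketch, that each story’s true size deviates symmetrically from its nominal 5 points while staying inside the 3-to-8 range:

```python
import random

random.seed(1)

NOMINAL = 5  # story points assigned to each story
# Assumption for this sketch: the true size deviates symmetrically
# around the nominal value, staying inside the "bigger than 3,
# smaller than 8" range that 5 points stands for.
true_sizes = [NOMINAL + random.uniform(-2, 2) for _ in range(100)]

worst = max(abs(s - NOMINAL) for s in true_sizes)
avg = abs(sum(true_sizes) - NOMINAL * 100) / 100

print(f"worst single-story error: {worst:.2f} points")
print(f"average error across the backlog: {avg:.2f} points")
```

Any single story can be off by almost two points, but over a whole backlog the over- and underestimates largely wash out, so the average error per story ends up much smaller.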

With those kinds of estimates, you don’t get concrete answers. You get an interval, like “we will likely finish all the high-priority tasks between June and July”, or “we will finish 70-85% of the high-priority tasks before August” - assuming the backlog does not change, which we cannot safely assume. It is harder to work with those kinds of answers, but they are more honest than a concrete date or percentage. And they still might be wrong.

Also, this only works well when you count fully completed work items and do not calculate remaining estimates.

Direct, Concrete, Fine-grained Estimates

Answering those questions might also work when you use direct, concrete, fine-grained estimates, like hours required to complete a task. But it becomes harder to get meaningful data, and there are more pitfalls. Your data looks more precise, but that is often just an illusion. Also, I have not seen a team where I was confident that they really got this right.

It should work like this: You divide a larger work item (like a user story) into a series of tasks that all have to be done. Then you estimate how long it will take you to perform each task. As a result you also know how long it will take you to finish the larger work item: It is just the sum of all task estimates.

When people are working on the tasks, they have to update the remaining estimates, so we have a chance to correct our initial estimation errors. Say we estimated a task at 4 hours, but after two hours we find out that it will take much longer. We simply re-estimate all the remaining work and set a new remaining estimate of 1 day and 6 hours.
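The mechanics described above could look roughly like this - a sketch with hypothetical task names and hours, assuming an 8-hour day:

```python
# Hypothetical tasks for one user story, estimated in hours.
tasks = {
    "write migration": 4.0,
    "adapt service layer": 6.0,
    "update UI": 2.0,
}

# The story estimate is just the sum of the task estimates.
story_estimate = sum(tasks.values())
print(story_estimate)  # 12.0

# After two hours of work we learn that "write migration" is much
# bigger than we thought: re-estimate the remaining work at
# 1 day and 6 hours (14 hours, with an 8-hour day).
remaining = dict(tasks)
remaining["write migration"] = 14.0

print(sum(remaining.values()))  # 22.0
```

Note that nothing in this bookkeeping tells you whether the new 14-hour guess is any better than the original 4-hour guess was.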

It turns out that we (humans - all humans) are very bad at those things. We are bad at correctly identifying all the tasks that need to be done. We are bad at defining the tasks independently from all the other tasks. We are bad at estimating them. And our cognitive biases (Hindsight Bias, Planning Fallacy, Optimism Bias and others) make it very hard for us to learn from our mistakes and to get better.

So you are essentially pulling numbers out of thin air, no matter how much effort you put into the estimation. But since those numbers are so precise (the project will require 4721 man-hours), people will trust them more than the indirect, abstract estimates from above. Which is not really justified. The direct, concrete estimates will often be less reliable than the indirect, abstract estimates, which are designed so that your estimation errors can cancel themselves out (at least in theory).

Also, mixing original estimates, remaining estimates and elapsed time does not make sense, as in “Done Percentage = Elapsed Time / (Elapsed Time + Remaining Estimate)”. You cannot compare them: The elapsed time is a measured fact, the remaining estimate is an educated guess made with current data and knowledge, and the original estimate is an educated guess made with outdated data and knowledge.

Is it even worth the trouble?

When you have a truly emergent backlog, one that obeys the iceberg rule, you cannot really answer those two questions from above anyway, because you simply do not know for certain how much work lies ahead.

You might say: “Well, we always gather all the requirements at the beginning of our projects”. But, given the changing nature of requirements (27% of your requirements will have changed within the first year), you do not really gain any advantage. Since you simply cannot know which and how much of your requirements will change, you are essentially in the same situation as above. You have just hidden the fact that you don’t know how much time lies ahead. But you have added some distracting details to your documents. And you have added some possible sources of errors and mistakes: You might forget to update a requirement in all documents. Somebody might find an old requirements document. You will have to re-work finished software when the requirements change.

Defects make everything worse (like always). When the quality of your software is low, your estimates are less meaningful. You just don’t know how often your work will be interrupted by defects and how much time you’ll spend fixing them.

And then there’s the agile idea that you’ll always work on the most important things, and simply stop when the software is good enough. How meaningful is “time remaining” when the customer can stop after any given sprint or release, telling you: “Now the software is good enough for us. Investing more would only give us diminishing returns.”? There is no “time remaining” in this case. And so there is also no “80% complete”.

Working Software

Detailed estimates are more important when you are not able to produce anything you can show to your customers and users early on. When you cannot show working software, you need other methods to tell your users where you are in your project. Like, “We are 80% done, according to our estimates”. When you always have working software, deployed to a production-like environment, everybody knows how far you are in the project. Everybody knows what is still missing. They can try it out.

So, the better you become at actually producing software (and actually producing quality), the less you need to rely on direct, concrete, fine-grained estimates.


Time-based estimates on (small) development tasks come with a lot of baggage. They take a lot of time and effort to produce. They become outdated very quickly. They create an illusion of precision. They create an illusion of accuracy.

When you are a developer, or a manager or a customer of a development team, ask yourself: Is there a better way to get the data we need?

What this better way looks like depends on what you do and how your organization works. Estimating larger chunks (like user stories or epics) might work for you. Or using indirect, abstract estimates, like story points. Or defining different types of work with SLAs attached to them (like in Kanban). Maybe you need to re-organize your backlog, so that there are different detail levels and features can emerge. Or you could benefit from some #NoEstimates ideas.

There is no easy solution that will work for everybody. Maybe for your team, the solution is even to keep using detailed, task-level, time-based estimates. But I have seen several teams stop using them. And they were still able to answer the two questions from the beginning - when they needed to.