Developing Software Together

The TDD Trap


Test Driven Development, or TDD, seems to be very popular among some software developers, but many of us remain skeptical and do not use it.

Those who use it say that TDD helps them to write better code faster. Those who do not use it often say they have tried it and it does not work.

How can a technique be very helpful to some people and not work at all for others? I think part of the reason is what I used to call the “mock objects trap”, but now call the “TDD trap”…

Intro (TL;DR)

My name is David and I am coaching and teaching software development teams as a freelancer. In this video series, I want to show you things you can try to become better at developing software - alone and as a team.

You can view the videos on your own. Or you could do so together with others, maybe as an introduction to a learning lunch at your company or to a meetup.

Today I want to talk about test driven development. But I do not want to talk about how exactly it works or how to get started yet - I want to leave that for a future video.

Today I want to talk about how some people love it and some do not even want to try it again.

Those who love it say that it helps them write better code and that they are faster when doing TDD. And because they are faster, they also develop their software cheaper.

Those who do not like it say that they have tried it and it clearly does not work. Writing the tests beforehand slows them down. The existing tests prevent them from making changes. Keeping the tests green over time is a major effort for them.

And I think that, at least some of those people, have fallen into the TDD trap.

Question / Problem

I have seen the end result of this trap at different clients, and from the stories I hear, it usually starts somewhat like this:

Somebody decides that the team needs more tests. Maybe it’s a manager who sets a KPI or some team member who keeps insisting that some of the bugs would not happen if they had more regression tests.

Most probably, not everyone on the team fully buys into that idea. Everyone starts to write tests, but at least some of the team members only do so because they have to.

So, the team starts writing tests, but they never get any training - neither formal nor informal, neither internal nor external - because writing tests seems easy. One can learn JUnit in a few hours.

Some people try TDD. But they, too, do so without formal or informal training, because TDD seems easy. Red, Green, Refactor.

But people do not take extra care to write good tests. They do not put effort into designing their tests,

A) because they have never learned what good tests look like and
B) because testing is still an afterthought for most.

They try TDD, but it does not really work. People feel that it slows them down; writing the tests before the production code is super hard. And refactor? They don’t see anything to refactor after most “Green” steps, so they stop thinking about that step at all.

But they produce a lot of tests. A lot of bad tests.

And then, after some months, they want to change something. But the tests get in their way. After every small change, dozens of tests are red. Even after valid changes.

They conclude that “TDD does not work here”. This team has fallen into the TDD trap.


The underlying cause of the problem here is that neither writing unit tests nor TDD is easy. Both are, like the game “Othello”, “a minute to learn, a lifetime to master” kinds of activities.

And, to be honest, there’s a second underlying problem here. But that one would be totally off-topic for this video. Maybe for a later one ;)

Back to the first cause: TDD is rather easy to “learn”. I can explain the basic rules to you in 10 minutes, and then you can start to test-drive some code.

And this is exactly what I do at the beginning of the first day when I teach TDD trainings. So, what do I do with my training attendees for the rest of the two days?

After the ten-minute explanation, people are able to do the red-green-refactor cycle.

[Diagram: the TDD cycle with annotations about where to do design]
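In code, the first few turns of the cycle can look like this minimal sketch (Python with plain asserts instead of a test framework; the leap-year example is just an illustration, not part of any particular curriculum):

```python
# RED: write a failing test first. At this point it would fail, because
# is_leap_year does not exist yet.
# GREEN: write the simplest thing that could possibly work.
def is_leap_year(year):
    return False

assert not is_leap_year(2019)

# RED: the next test exposes the naive implementation...
# GREEN: ...and forces a small generalization.
def is_leap_year(year):
    return year % 4 == 0

assert not is_leap_year(2019)
assert is_leap_year(2020)

# REFACTOR: with all tests green, clean up names and structure.
# (Nothing to clean up in this tiny example - but do not skip the step.)
```

The point is the rhythm: the implementation only ever grows when a red test forces it to.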

But it feels unnatural for them. They get stuck a lot. They take steps that are too big and then have problems writing the next test. They write themselves into a corner, but they do not want to delete code or tests, so they do not know how to get out again. They write “bad” tests that make their lives harder down the road. And they forget the “refactor” step, so it’s actually red-green-red-green-… for them.

To practice TDD successfully, you need to know more than the basic rules.

You must learn to take really small steps, and to not write more code than what is absolutely necessary. Because whenever you write more code than necessary - even only a little more - the next step gets harder.

You must learn about triangulation and about which next test - out of the list of all possible tests - to write.
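Triangulation can be sketched like this - a hypothetical `to_roman` exercise in Python, with the earlier, deliberately faked versions shown as comments:

```python
# Test 1 (to_roman(1) == "I"): return a constant - the fake passes.
#   def to_roman(n): return "I"
# Test 2 (to_roman(2) == "II"): forces a first generalization.
#   def to_roman(n): return "I" * n
# Test 3 (to_roman(4) == "IV"): forces the real rule to emerge.
def to_roman(n):
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, symbol in numerals:
        while n >= value:
            result += symbol
            n -= value
    return result

assert to_roman(1) == "I"
assert to_roman(2) == "II"
assert to_roman(4) == "IV"
assert to_roman(9) == "IX"
```

Each new, more specific test “triangulates” the implementation towards the general rule - you generalize only when a second or third example forces you to.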

You must learn how to use the “specific / generic cycle” to drive your code and design.

[Diagram: the specific/generic rule]

You will rely on your individual judgement a lot, so you have to develop a “gut feeling” for good tests and good code.

And there is a lot more to learn.

Learning all these things takes time and a lot of practice. But with a little bit of guidance from someone who has done it before, you can learn some of them a little bit faster. And you can hopefully avoid some common pitfalls.


And the TDD trap is about those pitfalls.

It happens when people think that TDD is easy. They read about it and think that they can do it too.

They try it and it does not work for them. So, they conclude TDD does not work at all.

And it can be worse. Sometimes they create a mess while using TDD. They write bad tests and they keep all of them. And months later, when they want to refactor something or write a new feature, the bad tests get in their way.

Some of those people start to hate TDD. They write angry blog posts. Others just dismiss it, never use it again and joke about the people who are still using such an “obviously flawed” method.

But it does not have to be like that.

If you have had problems with TDD, or if you are just starting out, try to find people who are using it successfully. Go to user-groups or meetups. Go to conferences. Ask around. Ask on Twitter.

Try to find someone who has “been there” and tell them your story. Ask questions.

This can make learning TDD a much more “painless” experience for you.


Did you already have a bad experience with TDD? Do you think someone you know fell into the TDD trap? Please tell me in the comments!

Or, if you want a longer discussion, tell me on Twitter: I am @dtanzer there.

And do not forget to subscribe to this channel or follow me on Twitter, so you do not miss any updates.

Slow Down to Move Faster


Did you ever hear phrases like “Agile is not faster” or “You need to do un-intuitive things to become agile”? Did that make sense to you?

To me, in the beginning, those phrases made no sense at all. But now I think that they are the main reason why so many companies do not get “agile” right. They “do” agile only to become faster. But they do not “get” some of the most important underlying principles.

Today, I want to talk about a very important but also un-intuitive principle: You must slow down to move fast.

Intro (TL;DR)

My name is David and in this video series, I will talk about things you can do to become a better software developer.

I am a freelance consultant and coach - and in the last 12 years I have strived to constantly learn and improve my skills as a software developer, architect, and also as a tester, team coach and trainer.

Many of the things I learned - like today’s topic - were hard to grasp for me. Because some of the things we have to do to become better make no sense at all. Until you think hard about them, try them, and learn more - and suddenly, those things are perfectly reasonable and the opposite does not make sense anymore.

For example, “Slow down to move faster”.

Today, I want to talk about how agile software development is not “fast”. And how it can still be worthwhile, because it enables you to deliver more value in less time.

Which means that it is faster, although it is slower. Confused? Me too. Let me start again, this time a little bit slower.

Question / Problem

Many things we do in agile software development seem to be slower than in a “traditional project”.

We are doing some planning, some design, some architecture in every iteration. We even change our design and architecture on a regular basis. This definitely adds some overhead.

We take more care when writing code. We write a lot of tests. We run them often. We re-think our design decisions after every green test and refactor often.

We spend a lot of time automating things and testing that automation. We automate user acceptance testing, performance testing, regression testing, deployments and even releases to production.

We work in pairs or even in small groups. Five people working together, on the same feature, even the same line of code? That cannot be faster than five people working in parallel, not getting in each other’s way.

And yet, often it is.

How can that possibly be true? How can slowing down make us faster?


Let’s look at two teams. Team “Niagara” works in a waterfall-style, plan-then-do-then-integrate, command-and-control way. People do exactly what they are told or what was written down in the plan.

Team “Gazelle” is more agile. The team experiments a lot, they try to get good feedback early, they release often and take care to do things right and to automate everything.

We ask both teams to deliver 5 features for us: A, B, C, D and E.

Team Niagara first plans the project, then they sketch out their software architecture. They do some project preparation work, like setting up their repositories, build systems, CI server, IDEs and so on.

Then they start to implement the features. First feature A, then B, C, D and E. They integrate and test everything. Things went more or less as planned, so they release to their users.

[Chart: the progress of Team Niagara]

It took them 7 weeks.

Team Gazelle, on the other hand, starts with a little bit of project planning, software architecture and preparation work. Then they implement a very basic version of features A and B, and they integrate and test all the time. After a week, they release what they have to real users.

Users cannot really use the software productively yet, but they give feedback. And from that feedback, the team learns that users, right now, really need D and E.

So, the team works on D and E next, but they also have to do some project planning, preparation work and software architecture. A week later, they release again.

The users try it, and they like D and E, but they would need a more elaborate version of those two features to use them productively. The team listens to their feedback, and they work even more on D and E.

Also, during that week, users report some defects - they seem to be really trying the software! - so the team fixes those right away.

During the next review, where the team gathers feedback from other stakeholders, some “friendly” users agree to use the software in production now, since it has enough features to provide at least some value.

Also, a user suggests that a feature G would be awesome. And, someone from operations brings some usage statistics and they suggest that the current implementation of feature E will not scale.

Now the team addresses those issues. They completely re-do E and start with G. They fix defects and make some minor corrections to D. A week later, they release again.

And this cycle of implementation, release, feedback and re-planning continues until, after 10 weeks, they are done.

[Chart: the progress of Team Gazelle]

So, delivering the software took longer with the agile approach, and the team did not even deliver feature C at all! Was that really better?

Well, the story does not end here. With software, it never ends after the release.

Right after the rollout, team Niagara gets the first phone calls. The software does not scale. They have major slowdowns and production outages.

They have to work night-shifts to prepare a quick fix, and then work even more to create the real solution to the scaling problem. They have to re-architect feature E, which causes the problems. But some design that is in place for feature C makes re-doing E harder, so it takes even longer than the first time.

While they work on that, bug reports keep rolling in, so the team starts fixing those.

And people complain that the software does not really help them to do their job, because some major parts at the beginning of all workflows are missing. Management decides that the team needs to implement feature G.

But because of all the code already in place - the fixes for E, all the other features, and the hastily-done bugfixes - implementing feature G takes quite long.

After the next hotfix release, the team analyzes some metrics. They realize that only a tiny percentage of the users uses feature C. To make future maintenance easier, they decide to remove a part of C.

[Chart: the problems of Team Niagara]

While team Niagara did all that, team Gazelle moved on to the next piece of work, for another application. In parallel, they fixed some minor defects and collected feature requests for the next major release.


In the end, they delivered faster and cheaper, they are now ready to start working on the next major release and they were able to provide value in another application while they had some slack.

One could say that team Niagara made mistakes during requirements engineering and software architecture. They should have foreseen those things that made them slower in the end!

But there will always be unforeseen things. Even the best requirements engineer or software architect cannot anticipate everything that could ever happen.

So, by getting feedback early, being able to react quickly and working hard to have high quality code, design and architecture, the “agile” team was faster.


In a future episode, I want to talk in more detail about some aspects of this example. And I will try to answer your questions. So ask them right here in the comments or ping me on Twitter - I am @dtanzer there.

And subscribe to this channel so you don’t miss any updates!

Learning Lunch


How can you, as a team, start with continuous improvement when your company does not want to give you enough time? And if they do not give you a budget to spend?

A learning lunch - where, once a week, you use your lunch time to learn something new - can be a great starting point. And it requires only a small investment of your team members’ personal time and money.

Intro (TL;DR)

My name is David and in this video series, I will talk about things you can do to become a better software developer. I will keep the videos short so you can watch them while commuting or during a break. Either alone or with your team.

I have been a freelance consultant, coach, trainer and developer for more than 12 years now. And in those 12 years, I have learned a lot - mostly in my spare time.

And I am still learning. But now I have a family, and learning or working in my spare time is not feasible anymore. Also, I think nobody should be required to learn in their spare time.

So how can you learn and improve continuously - alone or as a team - without “wasting” your precious free time?

Today I want to show you one technique that could work for you: the “learning lunch”. Try to spend one lunch break every week or every two weeks learning new things together with your team. And convince your manager to “pay” for half of the time.

Question / Problem

If you want to get good at developing software - as a developer, but also as a team - you have to learn a lot. I mean, a lot! When I started collecting topics for this video series, I was baffled by the number of things I want to talk about. And I am still in the process of learning most of those things myself - even after more than twelve years of working in this industry.

So, there is a lot to learn. You’d better start now.

Just… The day has only so many hours. And many of us spend most of them at work. We could learn in our spare time. But that is not an option for many of us. We have children, friends, hobbies, better things to do. And we must sleep - a lot - so we can be productive again tomorrow.

That means we must find ways to learn on the job. If you are lucky, your employer gives you all the time you need for learning. Realistically, there will be some limits.

And in many companies, there is almost no time and no budget for learning at all. By the way, let’s talk about good and bad employers in a later episode.

So, your company does not give you enough time and money to learn all the things you would like to learn. That means, your mission is from now on to

  1. Prioritize what you want to learn
  2. Find ways to learn things in short bursts
  3. Learn on the job, but in a way that does not take “working time” away from you
  4. Change the attitude and policy of your company towards learning.

I will talk more about things you can do to fulfill this mission in later episodes. Today, I want to start with something simple.


You will just do the learning during your lunch time. Once a week, together with your colleagues.

“Wait”, you say, “that means I will do the learning in my spare time! You said that was not an option!”

And you are right. You will be spending some of your own time, learning things that you need for your job. But you will do it at a time where you are at (or near) your company anyway. And you will also personally benefit from the things you learn here: You will become a better developer, and your job will become more satisfying for you.

Yes, I do believe that your employer should give you all the time and resources required to learn everything you need to do your job well. But when they don’t? What are your options?

If you are interested in this job and you want to do it well, start learning some of those things on your own. If not, you should seriously reconsider whether you are at the right place right now. But I will talk about that in a later video.

If you decide to start learning on your own time, your lunch break is a very good time for that. You usually cannot do much else anyway - you are at work. And your colleagues are already there, so you can learn from each other.

All you need to start a learning lunch is some people who are willing to join you and a place where you can do it. And your cafeteria or a restaurant is probably not the right place.

You need a room that is reasonably quiet and where you do not disturb anyone else. Everyone should be able to sit comfortably, at a table. But the room should also not be too big - If most of the seats are empty, it will not feel like you are having lunch together.

You need a big screen or projector for presentations, and probably a good internet connection for the person presenting. You also need a whiteboard or a flip chart for the discussion afterwards - even better if you have both. Also have differently colored sticky notes and painter’s tape if you can.

Everybody should bring note taking paper and a pen. But make sure that nobody except the presenter brings a computer. You want to facilitate a discussion, not an hour where everyone is surfing the internet on their own. So, phones should also stay mostly in the pockets of their owners.

And you need food. Either you order or buy food together, or everyone brings whatever they like.

I would not recommend cooking together, even if you have a suitable kitchen. While cooking together can be a great team-building experience and can be fun, it will just take too much time. You want to spend your lunch break mostly with learning and discussing.

One person prepares a topic. But do not spend too much time preparing. This does not have to be a sleek presentation.

Just state a problem, challenge or question, and solve it together. Like,

  • I would like to explore whether trunk-based development or feature-branching is more suitable to our way of working.
  • I want to practice outside-in TDD by writing a tic-tac-toe game.
  • I want to show you a cool thing I’ve found yesterday and then explore how we could use it.

You could even just watch tutorial videos together (wink) and then discuss them.

You can use different techniques - techniques that are useful in other situations too and that I want to present in later videos - during the learning lunch, like:

  • Mob Programming
  • Prepare presentations
  • Create a card game
  • Write a story
  • Power-point karaoke
  • Chaos Cocktail-Party
  • Learning Matrix

After the learning lunch, take 5-10 minutes to prepare a conclusion. Write the topic on a flip chart page, and then write down what you did, a short summary of the discussions and, most importantly, what you have learned.

Add drawings, use differently colored markers, sticky notes, index cards - be creative. Make it beautiful.

Then, hang it on an office wall or in a team room, where everyone can see it. If there is already a flip-chart page from last week, take a picture of it and throw it away first.

After some time, when you have 8-12 pictures of learning results, show them to your manager. Explain to them what you have learned. Ask them to allow you to do a part of the learning lunch during working time. Ask them to pay for the food.

By this time, you will be able to show them the positive impact the learning lunch has on your team. So the manager has a good reason to accept your request.

(And if they do not accept: We will talk about good and bad employers in a later episode).


To recap: ideally, you should not have to use your precious free time to learn the things you need for your job.

But if your employer does not give you enough time to learn and improve? What are your options? Use your free time, but a part of your free time that you cannot spend on hobbies / family / sleep anyway.

Like, your lunch break. Prepare a small - and I mean small - task for your team, and then discuss, program, research, write together. While having lunch.

And prepare a poster, like a single flip-chart page, with your results. Keep pictures of these posters to remember later how much you have already learned in those lunch sessions.


Now go ahead and organize a learning lunch at your company. And then, tell me in the comments how it went. Or tell me on Twitter - I am @dtanzer there.

And subscribe to this channel or follow me on Twitter if you want to get more videos like this.

What does an Effort Estimate Mean?


When we give a size-, effort- or time estimate, like “5 Story Points”, “7 ideal engineering days”, “this will be finished before End of May”, what does that actually mean? And how useful is the number?

That depends on what question we are actually answering with that number…

But We’d have to Know the Future…

Some people argue that, in order to estimate accurately, you need a time machine: only when you know the future will you be able to make accurate predictions.

But that’s a classic straw man: While the statement is true, it completely misses the point. The estimates are not supposed to predict the future accurately. They are meant to be another data point for our decisions.

Other Factors

Our estimates will always be “wrong”. No matter how much time we invest.

We can never fully anticipate all the things that might go wrong during development. Or how often we will be interrupted by more urgent stuff. Or how our software architecture and design will have changed when we start. Or how third party systems will behave.

When we get better at producing high-quality software - get better at “crafting software” - some of those factors get smaller. But they never go away.

But We’d have to Specify Exactly…

This is something I hear from teams a lot. “Let’s write down what we discussed before we estimate, so that later, we will implement exactly what we estimated”. So, the team here wants a very exact, detailed specification before giving an estimate, to make sure the number is as “accurate” as possible.

That behaviour comes from our own perfectionism and from fear of the consequences of wrong estimates. Both are very real, and both probably highlight some major cultural problems in this team and company.

Specifying, in great detail, when producing an estimate will prevent them from working in a truly agile way.

But can they even estimate, without knowing the future, without knowing everything that might go wrong, and without even knowing exactly what they’ll have to do?

Some Version of That Feature

Everything we do has an expected benefit for at least some of the stakeholders. Hopefully without annoying some others.

When there is so much uncertainty - no clear, exact specification, all the other factors - an estimate cannot mean “We will deliver exactly that feature within roughly that time frame”.

But it can mean “We are pretty confident that we can deliver some software that will bring most of the expected benefit within roughly that time frame”.

When we learn to create features iteratively, we will have a first version of the feature ready long before the time is up. And then we can focus on adding more and more of the expected benefit.

Are there Other Ways?

Our estimates will always be “wrong”. No matter how much time we invest. So, is it even worth creating them?

Also, when we use the definition “some software that will bring most of the benefit”, there is always the danger that some people “misunderstand” us (sometimes even deliberately) and turn our estimates into commitments.

And the usefulness of our estimates also depends on what kind of software we produce and where we are in the life cycle of the project.

But I will write about that later. Today, I want you to review what an estimate means in your team. And discuss whether this definition is useful within your context and what you can improve. And if you are allowed to, please tell me about it!

Planning Poker - What Could Go Wrong?


A lot, it turns out. Or, maybe, not “go wrong”… But the result you get might not be the result you expected.

This article is part of the series Planning Software Development.

Suppose your team must estimate the effort of the work packages (User Stories, …) it is supposed to work on.

I know, sometimes the estimates are not needed at all, and sometimes the need for estimates hints at deeper dysfunctions. Let’s put that aside for now.

Your team must or wants to estimate, and you are doing “Planning Poker”.

Planning Poker

In Planning Poker, one person (often the Product Owner) presents a user story to the whole team. Then every team member (developers, testers, …) secretly selects a “poker card” that shows an estimate. Everyone turns their card over simultaneously. Then they discuss why the numbers differ so much. After that, they play another round.

Often, those poker cards do not contain all possible numbers, but some exponential sequence (Fibonacci, modified Fibonacci, powers of two), so that larger numbers carry more uncertainty.
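For illustration, here is the widely used “modified Fibonacci” deck sketched in Python (the exact values are convention, not a standard, and vary between teams):

```python
# A common planning poker deck ("modified Fibonacci").
deck = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

# The relative gap between neighbouring cards grows with the numbers;
# that is how the deck expresses "bigger estimate, bigger uncertainty".
for small, large in zip(deck[2:], deck[3:]):
    print(f"{small:>4} -> {large:>4}: next card is {large / small:.2f}x larger")
```

Because the gaps widen, a team cannot pretend to know whether a big story is a 34 or a 35 - the deck forces them to say “somewhere between 20 and 40”.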

The benefits of this method are that team members do not influence each other when picking a number (some agile consultants call it a Wideband Delphi technique for this reason). And still the numbers will be more accurate, because everyone’s opinion was heard during the discussion. And the whole team will commit to the estimates, because they created them together.

Those good things are happening… In the very best case. Which is probably not happening in your team.

Bored People

The PO discusses the user story - it’s the tenth one today. You have small stories, as you are supposed to. The Scrum Master says “Everyone, select your card. And turn your cards in 3… 2… 1… Now!”

The cards show 2, 2, 3, 3, 3, 5, 5, 5.

Someone on the team says “Let’s just make it a five and move on”. And everyone agrees “Yes, this is definitely a five!”.

Doing planning poker for lots and lots of small stories can become boring for everyone involved. People just want to be done - assign a number, any number, to those stories and get back to work. So, they become sloppy, like in the example above.

But, on the other hand, you should have small stories going into development. And, as we have already established, your team must or wants to estimate the work packages they will be developing.


Commitment

Commitment to an estimate is a real problem.

I listed it as a potential benefit above: when estimates or deadlines come from outside the team, people will happily ignore them at best, or become very sarcastic and stubborn at worst. They will only do exactly what they are told - even if it is stupid. So, the estimates must come from the team itself.

But commitment to an estimate creates more problems than it solves. When people are committed to an estimate - instead of to an outcome - they will push back very hard against changes that would prove their initial assumptions wrong.

Yes, this is again a hint there might be a problem with the engineering culture of this team or organization. So, the problem is not only caused by planning poker.

But I have seen this in several teams that used planning poker, so it is a real danger. And this problem is caused, in part, by the team commitment that planning poker creates.

Power Dynamics

People always follow their leader. Even if they do not know it. So, if there is a leader, that person’s opinion will dominate the estimates.

And I am not talking about somebody with formal authority. Someone who forces their opinion on others. That would be a major dysfunction in an agile team - one that you should address.

I am also not talking about team members who think they have no authority at all, who think they should submit to the group opinion, who do not dare to speak out. This is also a dysfunction in your team. It is harder to detect and address, and you must be careful when addressing it, but you should work on it.

I am talking about the person everyone trusts. The most senior or the most talented person on the team. The person who always helps and mentors others. Who is there for everyone else.

The cards show 2, 2, 3, 3, 3, 5, 5, 5. That person showed a 5.

There is a short discussion, where everyone is heard. Then there is a new round of estimating. Suddenly, all cards show 5.

If this happens once, it is OK. If this is happening often, examine your biases.

To Recap…

Planning Poker should, in theory, eliminate a lot of biases and make estimating quick and painless. But in reality, it often fails to achieve that.

The results that you are getting are probably not what you expected. They are often not an unbiased, thoughtful estimate where everybody’s voice was heard. And the shared commitment to the estimate - and not the outcome - creates its own problems.

I do not want to advise against Planning Poker. If you must estimate, it is still the most painless method I know of. And it is easy for inexperienced teams to get started with, so it quickly moves one obstacle - “But we need numbers!” - out of your way.

But after you have done it for some time, examine the problems I have outlined above. If you have them, see if you can solve them (iteratively, over multiple sprints, in retrospectives). And also examine whether you can do without work-package-level estimates completely.

Why We do not Want to Give an Answer


This article is part of the series Planning Software Development.

When we develop software, people often will ask us questions like:

  • When will it be done?
  • How long will it take?
  • How much will it cost?
  • What will be done by 2019-02-25?
  • Can [feature set] be done by 2019-02-25?
  • What can I get for 200 000€?
  • What team size do we need to finish before 2019-02-25?

Those are related but slightly different questions. And a good answer to them would be useful to some people, often even to the people who should give the answer - the development team themselves.

And we often do not want to answer those questions. And we have our reasons.

Our Answers will not be “Good”

Or at least not good enough for our own standards.

There are some inherent difficulties in estimating software development: Things about our work, things happening in our teams, that make estimating hard. And by hard I mean that our estimates will be “inaccurate”.

That is not a problem per se: All estimates (and also all forecasts, which are a type of estimate) are inaccurate. The value is still usable and useful, because it is based on the best information we have available right now.

But we do not want uncertainty. We are humans, after all. So, we do not want to produce data where we know that it is “wrong” from the start.

Our Answers May not be Useful

Sometimes, we do not want to give an estimate because we know we cannot produce it to a degree of certainty that would be useful.

We may be reluctant to answer “How long will [feature X] take?”, because there are too many factors that influence this number.

We might be tempted to answer “How much work will [feature X] be?”, because under some conditions, this question is easier to answer. But we have to know beforehand what exactly “Feature X” means. So if we answer this question far ahead of starting to work on “Feature X”, this may impede our agility, if we are not careful.

We can often produce the most meaningful answers to “How many work items can be done within the next month?”, because this is a capacity-based forecast. In a stable team that already has enough historical data to run some simulations, this number will be the most accurate of the three.
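To make the idea of a capacity-based forecast concrete, here is a minimal sketch of such a simulation (a Monte Carlo forecast over historical weekly throughput). The function name, parameters, and the throughput numbers are illustrative assumptions, not taken from any specific team or tool:

```python
import random

def forecast_items(throughput_history, weeks=4, runs=10_000, percentile=0.85):
    """Monte Carlo capacity forecast: how many work items in `weeks` weeks?

    Resamples past weekly throughput to simulate many possible futures,
    then reports the item count reached in at least `percentile` of runs.
    """
    totals = sorted(
        sum(random.choice(throughput_history) for _ in range(weeks))
        for _ in range(runs)
    )
    # Conservative end of the distribution: in `percentile` of the
    # simulated futures, the team finishes at least this many items.
    return totals[int((1 - percentile) * runs)]

# Example: completed items per week over the last ten weeks (made-up data).
history = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
print(forecast_items(history, weeks=4))
```

Note that this only answers “how many items?”, not “which items?” - and it assumes the team and the nature of the work stay roughly stable, which is exactly why it works best for teams that already have some history.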

But we will be most reluctant to give an answer when we think that the person asking does not need it at all.

Our Answers are often Ignored

Somebody asks “us” - the development team - a question. We answer with an estimate. When we suspect that our answer does not and cannot change the outcome of some meaningful decisions, we feel like we have wasted our time.

“Can [Release] be done before September 22nd?” - “No, probably not.” - “But sales promised that to one important customer. Try it anyway.”

“Please estimate those 20 user stories, so we can sequence and cut scope” - “OK, here are the estimates…” - And then nothing happens…

“Based on our estimates and actuals, we are way behind schedule. We should cut scope now so we do not face crunch time in two months.” - “No, cutting scope is not an option.”

I bet every developer, tester and other development team member knows situations like those. And after hearing something like that, they probably have been thinking: “Why am I even talking to you when you ignore me anyway?”…

Our Answers Might be Used Against Us

Whenever we give an estimate (or a forecast, which really is just a special kind of estimate), we risk that somebody thinks this is a promise. At least in dysfunctional organizations.

And it does not matter how many disclaimers we add.

“In the best case scenario…”, “Our simulation said the most likely release date is…”, “Measured in perfect engineering days…”, “…but you know that a story point is really a range.”

The micro-manager will not hear those. They will come back later, screaming “But you promised…” or “Why is our velocity down this sprint?”. If this is happening in your organization, you have some major cultural problems. You are seeing Agile Anti-Patterns. And in such a dysfunctional environment, we do not want to give any answers that might be used against us.

No Answers - The Solution?

For what it’s worth, do not answer questions when you do not need the answer. But…

I am talking about not wanting to give answers today. If we could give them, they might be useful. Even if they are not useful all the time, they may be useful in some phases of software development. At least to some people, and at least in some situations.

Even though our answers may not be “perfect” or even good enough for us, there are situations / times where we are able to give better answers than in other situations.

And when the answers are ignored or used against you, you are seeing some severe dysfunction in your organization. In such a situation, you probably should not answer those questions - but you are probably also not allowed to refuse an answer. So, now might be a good time to look for “Agile Anti-Patterns” and to start addressing them - you have bigger problems than just giving “good estimates”.

Planning Software Development


I already wrote quite a few things about estimation in this blog. But this is only a part of the picture.

Now, I want to collect some thoughts about software development. This page collects all the blog posts in this category.

Here they are:

Iteration, Detours and Feedback


Throughout my career, I tried to create some products (books, courses, video tutorials, software). Most of them failed, especially the ones in the early years. They failed in a really unspectacular way: People were not even interested in them.

And I think the ones that failed, failed because I wanted to finish them before showing them to anybody.

Now, I have two products that people are interested in: They are downloading them and buying them. I do not make much money from them, but still… They are infinitely more successful than the other ones.

Here’s what I did differently.


Over the last 11 years, I learned a lot about how to work in a software development team. And I tried to apply my learnings to my own way of working.

So, none of what you’ll read here is really new. You may have read it before, in a book or a blog.

But it was new to me at some time. And some of the things, I had to learn the hard way.


This was the most scary thing for me to try: You must get good feedback early.

And this means showing unfinished work to others - lots of others. But what if they don’t like it? What if their feedback is devastating? Obviously, they would like it more if I showed them the finished thing… If they saw the big picture.

Well, that “obviously” is not so obvious at all. In the past, I wasted a lot of time “finishing” stuff that, then, nobody would be interested in. Or not even finishing it, because at some point, I ran out of time.

With my React / Redux course (that later led to my React / Redux Book), I did it differently. I announced on Twitter that I would be giving a webinar (and recording it) without having any material prepared. As I went on, I was completely open about my progress and what would happen next. The webinar was sold-out, and the content I created later led to a published book.

I did something similar with my other book, Quick Glance At: Agile Anti-Patterns: I presented some ideas at a conference. The audience liked them, so I wrote them down.

I gave the very first draft (which was still quite rough around the edges) to almost 200 readers (and promised that they’d get the finished book for free). I got almost 20 emails and messages with valuable, thoughtful and respectful feedback. The finished book is now much better than anything I could have created completely on my own.

But beware: Releasing something unfinished to the public or some friendly users does not mean releasing low quality. In both cases, I tried to create a small slice of the final work in almost the final quality. Otherwise, I guess I would not have gotten good feedback - People would have only complained about the quality.


With both products, I tried to work in an iterative way, creating increments along the way.

For the Agile Anti-Patterns book, I did the following iterations, seeking feedback after each of them:

  • I presented a short list of anti-patterns in my keynote at the Lean - Agile - Scrum Conference Zürich
  • I created a draft ebook with those anti-patterns
  • I changed the structure and content based on feedback I got
  • I added more anti-patterns based on the feedback
  • I created a mobile-friendly, single-column version of the PDF
  • I published the final book on Amazon and Gumroad
  • I changed the distribution platform from Gumroad to Payhip after some problems
  • I added bonus material (Large Poster, small posters, illustrations, …)

After every iteration, I had some finished product that I could show to people and ask them what they think about it. And I proceeded based on their feedback.


What I wrote above is not entirely true: It just describes the happy path, for both products. I actually took some detours and saw some dead ends with both.

With the React / Redux book, for example, here is how it came to be, including the unsuccessful steps in between:

  • Successful: Host a webinar and record it. Attendees and some other people got the videos for free
  • Unsuccessful: Sell the videos
  • Unsuccessful: Give the first 5 videos to subscribers of a mailing list for free
  • Successful: Write a short ebook based on the 5 first videos, give it to subscribers of a mailing list for free
  • Successful: Write a longer ebook based on the whole course, sell it on Amazon and Gumroad
  • Unsuccessful: Up-sell from free ebook to complete ebook

…and some more mostly-unsuccessful things. So, only half of the things I tried were actually successful - and some of them only moderately or only on some metrics.

And I also had a large detour with the “Agile Anti-Patterns” book: Before I presented the anti-patterns at that conference, I already had a concept and almost 5 chapters for an “Agile Excellence” book, but I did not like them. So, after the positive feedback I got at the conference, I threw the 5 chapters away and started over.

The lesson here is: Be prepared to throw stuff away. Be prepared to go backwards when you face a dead end.

This was very hard for me to learn… I spent so much time on this thing, now I’m supposed to throw it away? I had to learn to overcome the Sunk Cost Fallacy.


So, I had to learn to stick to the main agile ideas.

  • Work together with your customers to create a product
  • Seek feedback early and often
  • Start small, iterate, and have a “working product” after every iteration
  • Take small steps

All four are scary. And just like when you are developing software in a team that is part of a company, all four help you reduce risk and create a better product.

And I think I could do even better. I must learn to take even smaller steps, and seek feedback even more often.

Maybe next time…

Agile - Two Steps Back


At conferences (and when talking to others), I often hear discussions about specific agile methodologies.

About the relative merits of extreme programming vs Scrum. About how SAFe is not actually agile (or why it is agile). About whether Kanban is agile or lean or both or none of the above.

About how Sprints and standup meetings and the Scrum Master role and the backlog and other things cause problems and are responsible for dysfunctions. Or about how those things work really well when you use them as intended.

And all of those discussions have some merit. Just… I think sometimes, it would be necessary to take two steps back and reconsider why we are doing this in the first place - and what we expect to happen.

Here are some of my thoughts around this topic:

What is an Estimate, Part 2


Yesterday, I got dragged into a #NoEstimates discussion on Twitter again. It was a mistake: I really dislike how those discussions usually unfold. With all their half-truths, over-simplifications, straw-man attacks and passive-aggressiveness, most of them basically turn into flame wars. I think we need a more nuanced discussion, if we even need that discussion at all.

But I could not stop thinking about that discussion. So, here’s another blog about Estimates and other stuff…

Estimates and #NoEstimates

In What is an Estimate, Anyway?, I wrote how I dislike the only definition of “Estimate” that I found in the context of #NoEstimates. As a quick reminder, here it is:

Estimates, in the context of #NoEstimates, are all estimates that can be (on purpose, or by accident) turned into a commitment regarding project work that is not being worked on at the moment when the estimate is made.

Vasco Duarte

And now, I think I know why I dislike it so much. It is not only circular and unhelpful. It invites people to shout “No true Scotsman!”. And it invites other kinds of flame wars.

An Estimate can mean many different things. It can, for example, mean: “We are pretty confident that we can implement some version of that thing in roughly the same time as some version of that other thing”. As long as everybody involved knows that this is what you mean, everything is fine. As soon as somebody with power misunderstands what the estimate means, you have a problem!

But you have that same problem with any kind of estimate or forecast. And probably even when you don’t do estimates or forecasts at all.

Say you are doing capacity-based forecasts, like some proponents of #NoEstimates would suggest (and which I think are a good idea!). And then somebody misunderstands the forecast and turns it into a commitment. Now, somebody could shout: “No true Scotsman! You were not doing proper #NoEstimates, because whatever you had was turned into a commitment, and thus was an estimate!”.

What Could We Do Differently?

Let’s look at the Wikipedia definition of an estimate again:

An Estimate [is an] approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available.

-- Wikipedia: "Estimation"

With this definition, we could acknowledge that we are using estimates, whether we estimate things or not. We are also using them when we forecast or “just work on the most valuable things first”.

Then we could talk about for which purposes those estimates are usable, and for which they are not. And we could have a discussion around that.

And if some people in our organization are turning those estimates into commitments, we could talk about how this is toxic for the team and the product. How we are facing the “Feature Factory” and the “Burnout by 1000 Baby Steps” Anti-Patterns in this team. When solving the problem, we may or may not end up with doing less estimation (the activity of estimating things). But there are still estimates.

If this was interesting to you, check out my book Quick Glance At: Agile Anti-Patterns. There, I describe situations that can go wrong in agile teams. And I am trying to help you find the root causes and start solving those problems.