How often do you review and validate your practices/process?

We currently drive changes to our process through the following mechanisms:

  • Weekly wrap-up meeting
  • Project post-mortem

We discuss what isn't working, what is working, etc. I use these settings to introduce new practices and to eliminate ones that aren't working. We usually make only one change at a time and try it for two weeks to a month.
To validate a change we look at:

  • our velocity and its standard deviation (work accomplished week over week – see the sketch below)
  • dialectic
  • our opinion/perception of the practice's effect and effectiveness
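
A minimal sketch of the velocity check above, assuming velocity is tracked as one number per week; the sample numbers and the before/during comparison are illustrative assumptions, not part of our actual tooling:

    import statistics

    def velocity_summary(weekly_points):
        # weekly_points: work accomplished per week (e.g. story points).
        # Returns mean velocity and its week-over-week standard deviation;
        # a change that raises the mean or shrinks the deviation is a
        # candidate to keep.
        mean = statistics.mean(weekly_points)
        stdev = statistics.stdev(weekly_points) if len(weekly_points) > 1 else 0.0
        return mean, stdev

    # Example: four weeks before a new practice vs. four weeks during the trial.
    for label, points in (("before", [21, 18, 24, 20]), ("during", [25, 27, 24, 26])):
        mean, stdev = velocity_summary(points)
        print(f"{label}: mean={mean:.1f}, std dev={stdev:.1f}")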

Those practices which seem to work for us we keep; everything else is removed. This has worked pretty well for us, but I'm always interested in better ways to do this.
How often/When do you review your development process?
How do you validate that the changes in your process are effective?

Solutions/Answers:

Answer 1:

How often/When do you review your development process?

All the time.

How do you validate that the changes in your process are effective?

By monitoring and evaluating everything all the time.

This may sound unbelievable, but where I work we do not stop reviewing. Apart from the formal release post-mortem regarding planned and realised issues (and why some were postponed or brought forward), and the usual HRM “performance reviews”, we do not have a formal schedule for evaluations.

We evaluate and learn. All the time. No schedule. Whenever we encounter something that we would rather not have happen again, we try to work out a way to prevent it happening again. Whenever something goes especially well, we try to figure out what it is that is making it go better than other times so we can replicate that in the future.

It is informal and very ad hoc, but very to the point and very effective as well. For one, any dissection of a situation is immediate, which means everything is still fresh for all involved. For another, if you weren’t involved in the situation and/or are not part of figuring out the solution, you do not have to waste time in scheduled meetings to which you may have nothing to contribute.

Personally, I like this continual (continuous?) attention to how we can improve our product, our processes and ourselves. You can’t do this in any ol’ team though. It requires:

  • A very clear idea of the priorities in all aspects of the product and its development.
  • A team of very open minded people.
  • An absence of egos/ego-tripping. Nobody is perfect, and prima donnas just get in everybody’s way.
  • No code ownership by individuals. No one is the sole master of a piece of code; everybody can work on anything. Though expertise is taken into account, knowledge transfer is equally important.
  • An atmosphere where it is recognized that errors happen and everybody can make a mistake or misjudge the impact/required time for an issue. This does not mean we like errors or mistakes. We certainly don’t like to see the same error by the same person more than once.

Answer 2:

I like your answer, but here’s another, for the sake of variety:

  • review 2 weeks and/or 3 cycles after the start of a process or practice – when your team has had just enough time to iron out the “we did this totally wrong” type of problems and is just starting to get a handle on it. “Start” can be a new phase (like waterfall phases), or the introduction of something new that should transcend phases – like a new continuous build system

  • review at critical mass – when you have a “statistically significant” amount of data to look at. I put that in quotes because I don’t really do statistical analysis here, but 3 iterations is too small. I mean somewhere between 10–20 repetitions of something, where you may have enough data to see some outliers or an average trend (see the sketch after this list). This can be really machine-relevant stuff like the time it takes to do a build, or it can be subjective stuff like the accuracy of individual estimates or the estimated time it took to fix a class of bugs.

  • review at completion – if the practice will be stopped take a look and see if it was bad or good, could have been better or had any great side effects that you didn’t anticipate.

  • review at staff change – whether you’re growing or shrinking, staff changes are a good time to touch base and review. If it’s a new person joining the team, maybe they have some great tricks you don’t. If it’s a team member leaving, capture that last knowledge. This may not necessarily be a team sport – it may be a manager/changing-staff thing – particularly if you think it’s a cherished process that is up for serious critique.

  • review because the game changed – either the product or the environment in which you build it changed – time to assess and see if you need to do anything differently.
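
As a rough illustration of the “critical mass” idea above, here is a minimal sketch that flags outliers and estimates a trend across a batch of repeated measurements; the two-sigma cutoff and the made-up build times are my own assumptions:

    import statistics

    def review_measurements(samples, sigma_cutoff=2.0):
        # samples: 10-20 repetitions of some metric, e.g. build times in minutes.
        # Returns values more than sigma_cutoff standard deviations from the
        # mean, plus the least-squares slope per repetition (a positive slope
        # means the metric is drifting upward, e.g. builds getting slower).
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        outliers = [x for x in samples if abs(x - mean) > sigma_cutoff * stdev]

        n = len(samples)
        x_mean = (n - 1) / 2
        slope = sum((i - x_mean) * (y - mean) for i, y in enumerate(samples))
        slope /= sum((i - x_mean) ** 2 for i in range(n))
        return outliers, slope

    # Example: 12 recorded build times in minutes (made-up numbers).
    builds = [14.1, 13.8, 14.5, 14.0, 13.9, 22.7, 14.3, 14.6, 14.8, 15.1, 15.0, 15.4]
    outliers, slope = review_measurements(builds)
    print(f"outliers: {outliers}, trend: {slope:+.2f} min per repetition")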

Can’t say I do all those things all the time. Too much navel-gazing will drive you nutty. But these are all decent touchstones to be used with consideration.

Answer 3:

I am going to play devil’s advocate here, because I strongly believe in changing for the better rather than changing for the sake of it. I also believe every team slowly moves towards an equilibrium – it can hang there for eternity.

Have you measured the costs/benefits of changing processes constantly? I know of a bunch of teams who have been working with an incremental development model and regular meet-ups for years, and it’s perfect for them. Yes, new team members join and old ones leave – but the process has stuck and worked wonders.

Another question I have is: how do you scale/adapt this for a busy team with tight deadlines? I can see this working in a small team with manageable deliverables – but have you tried it out at a larger team or company level?
