How often do you review and validate your practices/process?


We currently drive changes to our process through the following mechanisms:

Weekly wrap-up meeting
Project postmortem 

We discuss what is working, what isn't, and so on. I use these meetings to introduce new practices and to eliminate ones that aren't working. We usually make only one change at a time and try it for two weeks to a month.
To validate a change we look at:

our velocity and its standard deviation (work accomplished week over week; a sketch of this calculation follows this list)
dialectic (open discussion and debate within the team)
our opinion/perception of the practice's effect and effectiveness
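
By way of illustration, here is a minimal sketch of the velocity check in the first bullet. It assumes Python; the weekly numbers and the before/after comparison are invented for demonstration, not the poster's actual data or tooling.

```python
# Hedged sketch: compare weekly velocity before and after a process change.
from statistics import mean, stdev

before = [21, 18, 24, 20, 19, 22]  # story points per week, pre-change (invented)
after = [23, 25, 21, 26, 24, 25]   # story points per week, post-change (invented)

def summarize(label, velocities):
    """Print average weekly velocity and its week-over-week spread."""
    print(f"{label}: mean={mean(velocities):.1f}, stdev={stdev(velocities):.1f}")

summarize("before change", before)
summarize("after change", after)
```

A higher mean with a similar or smaller standard deviation suggests the change helped; a larger standard deviation alone may just mean the team became less predictable week to week.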

Those practices that seem to work for us we keep; everything else is removed. This has worked pretty well for us, but I'm always interested in better ways to do this.
How often/when do you review your development process?
How do you validate that changes to your process are effective?

Solutions/Answers:

Answer 1:

How often/when do you review your development process?

All the time.

How do you validate that changes to your process are effective?

By monitoring and evaluating everything all the time.

This may sound unbelievable, but where I work we do not stop reviewing. Apart from the formal release post-mortem with regard to planned and realised issues (and why some were postponed or brought forward), and the usual HRM “performance reviews”, we do not have a formal schedule for evaluations.

We evaluate and learn. All the time. No schedule. Whenever we encounter something that we would rather not have happen again, we try to work out a way to prevent it happening again. Whenever something goes especially well, we try to figure out what it is that is making it go better than other times so we can replicate that in the future.

It is informal and very ad hoc, but very much to the point and very effective as well. For one, any dissection of a situation is immediate, which means everything is still fresh for all involved. For another, if you weren’t involved in the situation and/or are not part of figuring out the solution, you do not have to waste time in scheduled meetings to which you may have nothing to contribute.

Personally, I like this continual (continuous?) attention to how we can improve our product, our processes and ourselves. You can’t do this in any ol’ team though. It requires:

  • A very clear idea of the priorities in all aspects of the product and its development.
  • A team of very open-minded people.
  • An absence of egos/ego-tripping. Nobody is perfect, and prima donnas just get in everybody’s way.
  • No code ownership by individuals. No one is the sole master of a piece of code, everybody can work on anything. Though expertise is taken into account, knowledge transfer is equally important.
  • An atmosphere where it is recognized that errors happen and everybody can make a mistake or misjudge the impact/required time for an issue. This does not mean we like errors or mistakes. We certainly don’t like to see the same error by the same person more than once.

Answer 2:

I like your answer, but here’s another, for the sake of variety:

  • review 2 weeks and/or 3 cycles after the start of a process or practice – when your team has had just enough time to iron out the “we did this totally wrong” type of problems and is just starting to get a handle on it. “Start” can be a new phase (like waterfall phases), or the instantiation of something new that should transcend phases – like a new continuous build system

  • review at critical mass – when you have a “statistically significant” amount of data to look at. I had to put that in quotes, because I don’t really do statistical analysis here, but 3 iterations is too small. I mean somewhere between 10 and 20 repetitions of something, where you may have enough data to see some outliers or an average trend (see the sketch at the end of this answer). This can be really machine-relevant stuff like the time it takes to do a build, or it can be subjective stuff like the accuracy of individual estimates or the time it took to fix a class of bugs.

  • review at completion – if the practice will be stopped, take a look and see if it was bad or good, could have been better, or had any great side effects that you didn’t anticipate.

  • review at staff change – whether you’re growing or shrinking, staff changes are a good time to touch base and review. If it’s a new person joining the team, maybe they have some great tricks you don’t. If it’s a team member leaving, capture that last knowledge. This may not necessarily be a team sport – it may be a conversation between the manager and the changing staff – particularly if you think it’s a cherished process that is up for serious critique.

  • review because the game changed – either the product or the environment in which you build it changed – time to assess and see if you need to do anything differently.

Can’t say I do all those things all the time. Too much navel-gazing will drive you nutty. But these are all decent touchstones to be used with consideration.
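
As a rough illustration of the “critical mass” bullet above, here is a hedged sketch; the build times and the two-standard-deviation outlier rule are assumptions for demonstration, not anything the answer prescribes.

```python
# Hedged sketch: with 10-20 data points (hypothetical build times, in
# minutes) you can flag outliers and eyeball a trend without any formal
# statistical analysis.
from statistics import mean, stdev

build_minutes = [12.1, 11.8, 12.4, 13.0, 12.2, 18.5,  # 18.5 is a spike
                 12.6, 12.9, 13.1, 13.4, 13.2, 13.8]

avg, spread = mean(build_minutes), stdev(build_minutes)
outliers = [t for t in build_minutes if abs(t - avg) > 2 * spread]
print(f"mean={avg:.1f} min, stdev={spread:.1f}, outliers={outliers}")

# Crude trend check: compare the first half of the series to the second.
half = len(build_minutes) // 2
drift = mean(build_minutes[half:]) - mean(build_minutes[:half])
print(f"drift between halves: {drift:+.1f} min")
```

The same shape of check works for subjective series too, such as the accuracy of individual estimates per iteration.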

Answer 3:

I am gonna play devil’s advocate here, because I strongly believe in changing for the better rather than changing for the sake of it. I also believe every team slowly moves towards an equilibrium – it can hang there for eternity.

Have you measured the costs/benefits of changing processes constantly? I know of a bunch of teams who have been working with an incremental development model and regular meet-ups for years, and it’s perfect for them. Yes, new team members join and old ones leave – but the process has stuck and worked wonders.

Another question I have is how you scale or adapt this for a busy team with tight deadlines. I can see this working in a small team with manageable deliverables – but have you tried it out at a larger team or company level?


Shared QA responsibilities on an Agile team


For many years our IT development group subscribed to the waterfall software development methodology with segregated pods of programmers specializing in database development, logic layer and presentation layer development. Of course we also had a small quality assurance group which handled all testing responsibilities. Requirements, tasks and bugs were handed back and forth electronically, very little conversation took place, and the process ground painfully on.
In 2010 we made the monumental shift to Agile with Scrum, and after some serious growing pains we've truly begun to excel as a whole team. Communication is constant, people are growing and learning from each other every day, and our releases are more stable and much better aligned with true business priorities.
One area we're still trying to grow in is sharing software QA responsibilities among all development team members, to get away from the toss-it-over-the-fence attitude that still seems to persist when it comes time to test a feature.
Does anyone have any advice on how to grow a development team towards collective application quality ownership? We currently practice XP, so we've begun to do more pair programming between developers and testers when creating unit and integration tests. But it is still challenging to get the whole team to proactively think about testing strategies for each sprint. Inevitably one or two "Write Test Cases" tasks are thrown into the sprint backlog with no forethought as to what will be tested, how it will best be achieved, and how it will be organized so the entire team knows what testing has been completed, what's currently being tested, and what's left to test.
Sorry for the long-winded question, but any advice would be much appreciated.

Solutions/Answers:

Answer 1:

I think your question really has two aspects to it:

1) How does a traditional software QA/QE group fit into an Agile development environment?

In my experience, you’re correct. Separating the testing team from the development team creates some issues. The “throw it over the fence” method doesn’t work very well in Agile. What we’ve done is make the testing team just part of the Agile Team. If your quality team is more technical, maybe easing them into development is a possibility. Maybe one member of the quality team is interested in becoming the Scrum Master. Maybe some members of the quality team are looked at as specialized testing resources on the Team. I think it’s important in any case that they are full-fledged Team members, though; they come to all of the standard Scrum meetings, help with estimation, contribute to the Retrospective, etc.

2) How is the testing responsibility spread throughout an Agile Team?

This has to do with your Team's Definition of Done. The testing effort for a Story should be part of the acceptance criteria. Testing (both automated and manual) should be part of the tasks toward completion of the Story. That way, the dedicated testers can be assigned pieces of the Story that are an actual part of its completion. Of course, this means that development tasks need to be granular enough that there are testing tasks available throughout the Sprint. For testing resources, maybe the first couple of days of a Sprint are spent maintaining automated tests or grooming the bug database. We've decided that bugs found on a Story during the Sprint in which it's being developed need to be fixed for the Story to be accepted; bugs reported after the Story is accepted become part of the product backlog and are prioritized.
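
To make the per-Story testing visibility concrete, here is a minimal, hypothetical sketch of one approach: tagging automated tests with the Story they verify. The `story` marker, the Story ID, and the toy function under test are all invented for illustration; a custom pytest marker like this would normally be registered in `pytest.ini`.

```python
import pytest

def invoice_total(unit_price: float, quantity: int, tax_rate: float = 0.08) -> float:
    """Toy function under test, invented for this example."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return round(unit_price * quantity * (1 + tax_rate), 2)

# Tagging tests by Story makes "which Stories have automated coverage"
# queryable, e.g. `pytest -m story` or a per-Sprint marker report.
@pytest.mark.story("PROJ-142")  # hypothetical Story ID
def test_total_includes_tax():
    assert invoice_total(100.00, 1) == 108.00

@pytest.mark.story("PROJ-142")
def test_negative_quantity_rejected():
    with pytest.raises(ValueError):
        invoice_total(100.00, -1)
```

Whether the tag lives in a test marker, a directory layout, or a tracker field matters less than the team being able to see at a glance which Stories have tests and which still need testing tasks.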

Hopefully this gives you one possible answer. As always with Agile, you’ll find that what works for your Team will be unique in some respects.
