There are lots of ‘health check models’ out there. Most of them attempt to summarise and visualise how teams are doing and to identify areas for improvement. They are sometimes run by managers, sometimes by coaches, and sometimes by teams themselves in workshops.
Over the years, I have discarded approaches that ‘score’ teams (whether by numbers, letters, or traffic lights), simply because real life is not that simple. A team can be doing well and poorly in the same area at the same time. If, for example, a team is good at some kinds of communication but poor at others, would you score them 5 out of 10? Would you give them an amber traffic light? Or a C grade? Although such visual cues look pretty and appear clear, I have never found these simplified scores to help me in any real way.
What I have found very useful is taking some time to stand back and evaluate the situation, asking questions like: “How often is the team disrupted? By whom? How significant are the disruptions?”, “Is the team running any analytics or follow-up activity after their work is delivered?”, and “When does the team start to get involved in discussing requirements?”
Let me make this very clear: this is not a method of rating the team. It is not meant as a comparative tool to judge the success of individuals or teams, and it should not be used to rank teams. It originated as a way for me, as a coach, to note down my sense of how a team is doing and which areas I should be focusing on, and as a record of how the team evolves over time. It also helps me form the basis of the feedback that I give to teams.
Do you ever stand back and look at how your team is doing? I’ve recently made my health check template open source and you are welcome to use this as a basis for your review. I’d love to hear your feedback and especially suggestions of how this could be improved.
Thank you to the team at GDS for their recent feedback on the health check template.