In response to the question, teams often arrive armed with charts, slides and metrics that showcase the journey’s progress. This arsenal of information is usually assembled with budgetary justifications in mind.
But clients really just want to know whether the transformation is enabling faster delivery of high-quality software, and whether the business is adapting to change without losing go-to-market predictability.
I’d like to discuss why a fiscally motivated set of metrics is misleading and ultimately detrimental to the transformation journey.
Conventionally, transformation is expected to help delivery teams improve quality, productivity and responsiveness to business demands. Organizations also commonly subscribe to The Four Key Metrics when measuring the effectiveness of their software delivery pipeline.
Readily available indicators like velocity, lead time, cycle time, and burn-up and burn-down metrics are observed in isolation to measure productivity and progress. We recognize these as a dysfunctional metric set that convolutes the larger aim of agile transformation. Such measures showcase lopsided progress and do not even address the proverbial “are we there yet” question.
Velocity is the average work a team completes during an iteration. In our experience, many organizations misidentify velocity as a prime indicator of productivity.
This metric only denotes the magnitude of work, not its quality. For instance, teams might churn out many stories of low quality, sometimes completely missing the business objective.
Velocity is an ideal internal metric: it gauges a team’s performance against its own previous iterations and helps decide how much work the team should accept for upcoming iterations. Major velocity fluctuations between iterations mean estimates have to be realigned based on feature complexity.
Velocity indicators are reliable when juxtaposed with lead time, change failure rate and escaped defects.
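To make the arithmetic concrete, here is a minimal sketch of how velocity is typically derived: the average story points completed per iteration, with a recent-iterations window for capacity planning. All numbers and names below are hypothetical, not from the article.

```python
# Hypothetical data: story points completed in the last five iterations.
completed_points = [21, 18, 24, 20, 22]

def velocity(points_per_iteration):
    """Average completed work per iteration."""
    return sum(points_per_iteration) / len(points_per_iteration)

def suggested_capacity(points_per_iteration, window=3):
    """Plan the next iteration using only the most recent iterations."""
    recent = points_per_iteration[-window:]
    return sum(recent) / len(recent)

print(velocity(completed_points))            # -> 21.0
print(suggested_capacity(completed_points))  # -> 22.0
```

Note that nothing in this calculation says anything about quality or business value, which is precisely why velocity alone misleads.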
Burn-up and burn-down charts can be deceiving when read out of context. The burn-down chart shows remaining work against remaining time, while the burn-up chart shows how close the team is to completing the scope.
The optics of these charts can discourage teams from accepting additional scope, dampening morale. They can lead teams to believe they are on a never-ending journey with never-ending scope.
We advise leadership not to have ‘progress conversations’ with team members in the context of burn-up or burn-down charts. This way, team members know their success is not tied to such indicators.
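For readers unfamiliar with the mechanics, a burn-down chart boils down to comparing observed remaining work against an ideal linear burn. The sketch below uses invented figures purely to illustrate where the “behind schedule” optics come from.

```python
# Hypothetical iteration: 40 points of scope over 10 working days.
TOTAL_POINTS = 40
ITERATION_DAYS = 10

def ideal_remaining(day):
    """Ideal linear burn: work evenly spread across the iteration."""
    return TOTAL_POINTS - day * (TOTAL_POINTS / ITERATION_DAYS)

# Observed remaining points at the end of each day (day 0 = start).
actual_remaining = [40, 38, 36, 36, 30, 28, 25, 20, 12, 5, 0]

for day, remaining in enumerate(actual_remaining):
    trend = "behind" if remaining > ideal_remaining(day) else "on/ahead"
    print(f"day {day:2d}: {remaining:2d} pts remaining ({trend})")
```

Any scope added mid-iteration pushes the actual line above the ideal line, which is why the chart punishes teams for accepting new work.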
Cycle time is the time taken to perform ‘actual work’ on a task, while lead time is the time between defining a task and completing it. Cycle time is always shorter than lead time.
These metrics are used to show how quickly work is being delivered, as teams usually try to measure the gap between cycle time and lead time. The indicators are meant to identify bottlenecks between developing a feature and getting it to end users.
When a team is trying to meet aggressive deadlines, cycle time can be crunched to create a seemingly optimal ratio between it and lead time. The ‘crunch’ comes at the cost of quality and business goals.
We advise teams not to focus too heavily on cycle time and to focus instead on lead time (one of The Four Key Metrics). This helps ensure features reach production quickly and with the desired quality.
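The distinction between the two measures can be sketched with hypothetical timestamps for a single task (all dates below are invented for illustration):

```python
from datetime import datetime

# Lead time runs from the moment a task is defined to its delivery;
# cycle time runs from when actual work starts to delivery.
created   = datetime(2023, 3, 1)   # task defined
started   = datetime(2023, 3, 8)   # actual work begins
delivered = datetime(2023, 3, 12)  # task completed

lead_time  = delivered - created   # 11 days
cycle_time = delivered - started   # 4 days

print(lead_time.days, cycle_time.days)  # -> 11 4
assert cycle_time <= lead_time  # cycle time is always the shorter span
```

The seven-day gap between the two is the queueing delay, and it is that gap, not the raw cycle time, that usually points at the real bottleneck.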
With respect to the metrics described above, I’m reminded of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” As productivity is measured with such metrics, the Hawthorne effect kicks in and teams bend the rules to ensure they meet the targets.
The optics of rapid progress merely create problems that will surface in the future.
When so misleading, why so popular?
These metrics are popular because they are readily available in common tracking tools. In the early days of agile adoption, for instance, teams leveraged velocity as the de facto way of measuring team productivity.
Agile transformations require enormous effort and patience. Instead of chasing quick wins and leveraging dysfunctional metrics to show success, teams should focus on monitoring outcomes such as improvements in efficiency, customer experience and more.
While we wouldn’t completely disregard The Four Key Metrics, we do associate every software project with a few direct or indirect business-outcome metrics.
For example, an online flight booking application, to better understand customer engagement and business success, might measure successful booking rate, customer conversion rate and the average time required to complete a booking.
We’d recommend such quantifiable metrics be juxtaposed with The Four Key metrics to recognize genuine progress.
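The booking example could be quantified along the following lines. All field names and figures here are invented to show the arithmetic, not taken from a real application.

```python
# Hypothetical raw counts for a flight-booking application over a period.
sessions = 12_000             # visits in the period
booking_attempts = 3_000      # users who started a booking
successful_bookings = 2_550   # bookings completed and paid

successful_booking_rate  = successful_bookings / booking_attempts  # 0.85
customer_conversion_rate = successful_bookings / sessions          # 0.2125

print(f"successful booking rate: {successful_booking_rate:.1%}")    # 85.0%
print(f"customer conversion rate: {customer_conversion_rate:.2%}")  # 21.25%
```

Unlike velocity or burn charts, a shift in these numbers reflects a change customers actually experience, which is what makes them useful alongside The Four Key Metrics.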
Fitness metrics to guide the transformation journey
An organization should track the outcomes of a project or initiative, not micro-level engineering metrics. It’s the former that provides relevant and effective answers to the question, “are you there yet?”
A version of this article was published in Dataquest.