By Hamzah Tariq on Aug 23, 2019 8:53:00 AM
We’ve interviewed Dave Nicolette, a consultant specializing in improving software development and delivery methods and author of ‘Software Development Metrics’. We dive into what factors to consider when selecting metrics, examples of useful metrics for Waterfall and Agile development teams, as well as common mistakes in applying metrics. He writes about software development and delivery on his blog.
Content and Timings
- Introduction (0:00)
- About David (0:21)
- Factors When Selecting Metrics (3:22)
- Metrics for Agile Teams (6:37)
- Metrics for Hybrid Waterfall-Agile Teams (7:37)
- Optimizing Kanban Processes (8:43)
- Rolling-up Metrics for Development Management (10:15)
- Common Mistakes with Metrics (11:47)
- Recommended Resources (14:30)
David is a consultant specializing in improving software development and delivery methods. With more than 30 years' experience working in IT, his career has spanned both technical and management roles. He regularly speaks at conferences, and is the author of ‘Software Development Metrics’. David, thank you so much for taking time out of your day to join us. Why don’t you say a bit about yourself?
I’ve been involved with software for a while and I still enjoy it, though I’ve been working as a team coach and organizational coach, technical coach, for a few years. I enjoy that.
Picking up on the book ‘Software Development Metrics’, what made you want to write the book?
It’s an interesting question, because I don’t actually find metrics to be an interesting topic, but I think it is a necessary thing. It’s part of the necessary overhead for delivering. If we can detect emerging delivery risks early, then we can deal with them. If we don’t discover them until late, then we’re just blindsided and projects can fail. It’s important to measure the right things and be sure we’re on track.
Secondly, I think it’s important to measure improvement efforts, because otherwise we know we’re changing things and we know whether they feel good, but we don’t know if they’re real improvements if we can’t quantify that. I noticed several years ago that a lot of managers and team leads and people like that didn’t really know what to measure. I started to take an interest in that, and I started to give some presentations about it, and I was very surprised at the response because quite often it would be standing room only and people wouldn’t want to leave at the end of the session. They had more and more questions. It was as if people really had a thirst for figuring out what to measure and how. I looked at some of the books that were out there and websites that were out there, and they tended to be either theoretical or optimistic.
Metrics for measuring and monitoring software development have been around for decades, but a lot of people still don’t use them effectively. Why do you think that is?
I often see a pattern: when people adopt a new, unfamiliar process or method, they try to use the metrics that are recommended with that process. There are a couple of issues that I see. One is that they may be using the process in name only, or they’re trying to use it but they’re not used to it yet, and the metrics don’t quite work because they’re not quite doing the process right.
The other issue is that people tend to use the measurements that they’re accustomed to. They’ve always measured in a certain way, and now they’re adopting a new process. They keep measuring the same things as before, but now they’re doing the work in a different way. There’s a mismatch between the way the work flows and the way it’s being measured. They have numbers and they rely on the numbers, but the numbers are not telling the truth because they don’t line up with the way the work actually flows.
Factors When Selecting Software Development Metrics
Picking the right metrics is paramount. What are some of the factors that we should consider when selecting metrics?
Look at the way work actually flows in your organization and measure that. In the course of developing this material I came up with a model that looks at three factors to judge which metrics are appropriate. The first factor is the approach to delivery. The basic idea there is, if you try to identify all the risks and all the costs in advance, identify all the tasks, lay out a master plan, and then follow that plan, that’s what I’m calling traditional.
What I call adaptive is a little different. You define a business capability, you’ve got your customer needs, and you set a direction for moving toward that, and you steer the work according to feedback from your customer, from your stakeholders. You don’t start with a comprehensive plan, you start with a direction and an idea of how to proceed. Then you solicit feedback frequently so you can make course corrections. That’s the first factor I would look at: traditional versus adaptive, and that I think has the biggest impact on which metrics will work.
The second factor to look at is the process model. I don’t have to tell you that there are a million different processes and nobody does anything in a pure way, but if you boil it down I think there’s basically four reference models we can consider, or process models. One is linear, you can imagine what that is. The canonical steps that go through from requirements through support. The next one would be iterative, in which you revisit the requirements multiple times and do something with them. The third one that I identify is time-boxed. It’s really popular nowadays with processes like Scrum and so on. The fourth one is continuous flow. This is becoming popular now with the Kanban method, and it’s also being adapted into Scrum teams quite a lot. We’re really interested in keeping the work moving smoothly.
Now, a real process is going to be a hybrid of these, but it’s been my observation that any real process will lean more toward one of those models than the others, and that’ll give us some hints about what kind of metrics will fit that situation. The third factor, which probably has the least impact, is whether you’re doing discrete projects or continuous delivery. Some people call that a continuous beta, and some people just don’t have projects: you have teams organized around product lines or value streams, and they continually support them. Between those two there are some differences in what you can measure. So I look at those three factors, and based on that you can come up with a pretty good starter set of things to measure, and then you can adapt as you go from there.
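Dave’s three-factor model can be sketched as a simple lookup. The specific metric suggestions below are illustrative, drawn loosely from points made elsewhere in this interview, not a definitive catalogue from the book:

```python
def starter_metrics(approach, process_model):
    """Suggest a starter set of metrics from two of the three factors.

    Illustrative mapping only: lean metrics (cycle time, throughput)
    fit any process model; velocity fits time-boxed processes; scope
    tracking fits traditional, plan-driven delivery.
    """
    metrics = {"cycle time", "throughput"}  # lean metrics are process-independent
    if approach == "traditional":
        metrics.add("percentage of scope complete")
    else:  # adaptive: steer by feedback, track the trend instead
        metrics.add("burn chart")
    if process_model == "time-boxed":
        metrics.add("velocity")
    return metrics

# A Scrum-like team doing adaptive, time-boxed work:
suggested = starter_metrics("adaptive", "time-boxed")
```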
Metrics for Agile Teams
Let’s take a couple of example scenarios. If we have an agile team working in short sprints who are struggling to ship a new product, what kind of metrics should they consider to identify areas for improvement?
If they’re using Scrum basically correctly, they could probably depend on the canonical metrics that go with that, like velocity and your burn chart. You might look for hangover, incomplete work at the end of the sprint. You might look for a lot of variation in story size, when you finish a story in one day and the next story takes eight days. When it comes to metrics as such they could use, as I said, velocity and so on, and you can always use lean-based metrics because they’re not really dependent on the process model. What they might consider is looking at cycle times. They could look at the mean cycle times as well as the variation in cycle times and get some hints about where to go for root-cause analysis. Metrics don’t tell you what’s wrong, but they can raise a flag.
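The mean-and-variation check Dave describes is easy to compute. A minimal sketch, assuming cycle times are recorded in days per completed story; a high coefficient of variation is a flag for root-cause analysis, not a diagnosis:

```python
from statistics import mean, pstdev

def cycle_time_summary(cycle_times_days):
    """Summarize cycle times for completed stories (in days)."""
    avg = mean(cycle_times_days)
    sd = pstdev(cycle_times_days)
    return {
        "mean": avg,
        "stdev": sd,
        # High variation relative to the mean hints at inconsistent
        # story sizing or an unstable process.
        "coefficient_of_variation": sd / avg if avg else 0.0,
    }

# One story took a day, another eight: wide variation, worth a look.
summary = cycle_time_summary([1, 2, 8, 1, 3, 7])
```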
Metrics for Hybrid Waterfall-Agile Teams
What about a hybrid waterfall agile team working on a long term project, wanting to know what it’s possible to deliver by a certain date?
To know what’s possible to deliver you can use the usual things like a burn chart, burn up or burn down as you prefer, to see according to their demonstrated delivery, their velocity, I’ll call it that, how much scope they can deliver by a given date. Conversely you could see by what date approximately they could deliver a given amount of scope. It depends on what’s flexible. In this kind of a project, usually neither is flexible, but at least you can get an early warning of delivery risk. If it looks like the trend line is way out of bounds with the plan, well now you’ve got a problem.
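The projection Dave describes is a straight-line extrapolation from demonstrated velocity. A minimal sketch; the point values and sprint velocities are made up for illustration, and a real burn-up chart would also show best-case and worst-case bands:

```python
import math

def sprints_to_finish(total_scope_points, completed_points, velocity_history):
    """Project remaining sprints from the team's demonstrated velocity."""
    avg_velocity = sum(velocity_history) / len(velocity_history)
    remaining = total_scope_points - completed_points
    # Round up: a partial sprint still costs a whole sprint.
    return math.ceil(remaining / avg_velocity)

# 200 points of known scope, 80 done, recent velocities of 18-22 points:
remaining_sprints = sprints_to_finish(200, 80, [18, 22, 20])
```

If the projected finish lands well past the planned date, that trend line is the early warning of delivery risk.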
One thing that might surprise some people is the idea that agile methods can be used with traditional development. We need to decouple the word “agile” from “adaptive,” because quite often it is used in a traditional context.
Optimizing Kanban Processes
What are some metrics relevant to those working in a bug queue? Say they’re wanting to optimize their working practices to stay on top of incoming bugs.
For that I usually like to use lean metrics, mainly cycle time, because you want to be somewhat predictable in your service time, so when a bug report comes in people have an idea of when they can expect to see it fixed. How do you do that? Well, you can use empirical information from past performance with fixing bugs, and your mean cycle time will give you approximately how long it takes to fix one.
I like to use the Kanban method for these kinds of teams because it defines classes of service. You’ll find that not every type of bug takes the same amount of time to fix. Based on your history, pick out different categories. You can identify the characteristics of particular kinds of bug reports that tend to fall together, and you can track cycle times separately for each of those classes of service. If someone calls in and says, “Well, we’ve got this problem,” you can say, “That looks like it’s in category two. Whatever that means, that typically takes us between four and eight hours.” That can give them a little bit of a warm, fuzzy feeling about when they’re going to see it fixed. I think that using empirical data and tracking cycle time is the simplest, most practical way toward that workflow.
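The classes-of-service idea boils down to grouping historical fix times by category and quoting a typical range. A minimal sketch, assuming fix times are recorded in hours with a class label (the category names and numbers are made up for illustration):

```python
from collections import defaultdict
from statistics import quantiles

def service_class_ranges(fixed_bugs):
    """Per class of service, report a typical cycle-time range
    (25th to 75th percentile of historical fix times, in hours)."""
    by_class = defaultdict(list)
    for service_class, hours in fixed_bugs:
        by_class[service_class].append(hours)
    ranges = {}
    for service_class, times in by_class.items():
        q = quantiles(times, n=4)  # quartiles of the history
        ranges[service_class] = (q[0], q[2])
    return ranges

history = [
    ("category-2", 4), ("category-2", 5), ("category-2", 8),
    ("category-2", 6), ("category-1", 1), ("category-1", 2),
    ("category-1", 2), ("category-1", 3),
]
ranges = service_class_ranges(history)
```

With more history per class, the quoted range stabilizes and becomes a usable service-level expectation.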
Rolling-up Metrics for Development Management
What about a CTO who wants to monitor how teams are performing, and ensure code is of high quality? How can metrics be rolled-up for those in more senior positions?
To see how the teams are performing, you need measurements that are comparable across teams and comparable across projects. The lean-based metrics lend themselves to comparison across teams, across projects, and across development and support. If you’re tracking throughput and cycle time, plus another one I haven’t mentioned yet, process cycle efficiency, those roll up nicely. Some other metrics don’t roll up so well. Some of the agile metrics, particularly velocity, are really different for each team. Percentage of scope complete may not roll up very well either.
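Process cycle efficiency is commonly defined as value-add time divided by total elapsed cycle time. A minimal sketch of why it rolls up well: you pool the underlying times across teams rather than averaging each team's percentage (the team figures below are made up for illustration):

```python
def process_cycle_efficiency(value_add_hours, total_cycle_hours):
    """PCE = time spent actually working an item / total elapsed time."""
    return value_add_hours / total_cycle_hours

teams = [
    {"value_add": 16, "total": 80},  # hypothetical team A: 20% efficient
    {"value_add": 10, "total": 40},  # hypothetical team B: 25% efficient
]

# Roll up by summing the raw times, so larger workloads weigh more.
rolled_up = process_cycle_efficiency(
    sum(t["value_add"] for t in teams),
    sum(t["total"] for t in teams),
)
```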
The other question was about code quality. I think that if we can let that be a team responsibility, then they can use metrics, but they don’t need to report outward from the team. Usually things they can get out of static code analysis tools will help them spot potential quality issues, but I wouldn’t share detailed things like static analysis results and code coverage outside the team, because then team members will feel like they’re going to be judged on those numbers and they’ll start gaming them. Those kinds of metrics are really for the team’s own use.
Common Mistakes with Software Development Metrics
What are some mistakes you often see people make when applying software development metrics?
People seem to make a couple of mistakes over and over. The first, which I think I mentioned earlier, is that people apply metrics that don’t fit the context. Maybe a company wants to ‘go agile’, and so they start tracking agile metrics, whatever is recommended with Scrum or SAFe or something like that, but they haven’t really fully adopted the new methods. They’re still in transition, and so the numbers don’t mean what they’re supposed to mean. For instance, they may track velocity, but they might not have feature teams. They might have component teams, and the work items those teams complete are not vertical slices of functionality. Whatever they’re tracking as velocity isn’t really velocity, and you start getting surprised by delivery issues.
You can also have the opposite situation, where teams are working in an agile way but management is still clinging to traditional metrics. They will demand that the teams report percentage of scope complete to date, but when you’re doing adaptive development you don’t have 100 percent of the scope defined. You have a direction. You may be 50% complete this week based on what you know of the scope. Next week you might be 40% complete because you’ve learned something, and then management says, “Well, what’s going on? You’re going backwards,” because they don’t understand what the numbers mean. What I see happen in that case is it drives the teams away from adaptive development and causes them to try to get more requirements defined upfront.
The second mistake I see a lot is that people either overlook or underestimate the effects of measurement on behavior. We can be measuring something for some objective reason, but it causes people to behave differently because they’re afraid their performance review will be bad if they don’t meet their numbers. I think we have to be very conscious of that. We don’t want to drive undesired behaviors because we’re measuring things a certain way. That not only hurts morale, it also makes the measurements useless, because they’re no longer real.
Those are really the two main mistakes: people don’t match the metrics to their actual process, or they neglect the behavioral effects of the metrics.
What are some resources you can recommend for those interested in learning more about managing projects and improving processes?
There’s an older book I like called ‘Software by Numbers’. That’s a classic. And one of David Anderson’s earlier books, ‘Agile Management for Software Engineering’, has a lot of really good information on applying economic thinking to different kinds of process models. He covers things like feature-driven development and extreme programming. Another person whose work I like is Don Reinertsen. He combines really deep expertise in statistics with deep expertise in economics and applies that to software, and he can demonstrate mathematically why we don’t want to over-allocate teams: loading everybody up to 100% actually slows the work down. What’s counterintuitive to a lot of managers is that if you load your teams to 70% capacity, they’ll actually deliver better throughput, but it’s very hard for a lot of managers to see somebody not busy. It’s really hard for them to get there.
Really appreciate your time today Dave, some great stuff here. Thank you.
Well I enjoyed the conversation, thanks.