A real-life tale about using software development metrics, data, and charts as a helpful tool for managers.
We’ve all been there. Whether it comes from your client or from your boss, we’ve all faced the awkward statement: “I don’t think we are doing a great job” or “the team is not as productive as it should be.” You feel like your team is working fine (or not), and you feel it in your gut. That feeling is probably right. If there’s one thing I’ve learned after many years in IT, it’s that there’s nothing more reliable than your gut when it comes to figuring out whether you are on the right path. But wouldn’t it be great if you could demonstrate that feeling empirically and support it with rock-solid data? Of course it would! And that’s where meaningful software development metrics come in.
Tons of articles and papers have been written about metrics, so we won’t just list metrics, explain their purpose, and show how to calculate them. Instead, we will share a real-life experience. This comes from a real project, and we’ll demonstrate how custom software development metrics helped us manage execution in a controlled way, resulting in better overall performance and a more predictable project.
Setting the scene
We were hired to build a web platform that supports the business of an online tour operator. The client sells trips to multiple destinations leveraging social experiences. The goal is to create a community of loyal travelers that gets together to organize adventures all across the world.
When setting up the project, we defined the KPIs we wanted to track to measure our team’s performance. This is sometimes overlooked in the early stages of a project. Sometimes all your energy is focused on getting things rolling as fast as possible, and skipping KPI setup in favor of speed is a mistake many Project Managers make. You need to make sure the way you will measure productivity is set from the get-go. This way you can:
- Define the metrics to use and, just as importantly, which ones NOT to use.
- Start collecting data from day one.
By using a Scrum-oriented tool for managing tasks (JIRA, Redmine, or Trello, to name a few), you will gain access to standard agile metrics. These include Sprint Burndown charts, Velocity, and Cumulative Flow diagrams. But we won’t be focusing on these out-of-the-box metrics. Instead, we will elaborate on two custom metrics we defined for specific things we were looking to track.
- Estimate Precision –> determine how accurate our team’s estimates are.
- Learning Curve –> determine how long it takes a new team member to ramp up fully.
This was especially important in our case because the client was extremely date-oriented. As a manager, I wanted to make sure the schedules we committed to as a team were realistic and achievable. Therefore, we needed to understand the trend of estimate gaps, to ensure our team was getting better at predictions. We came up with a report showing the distribution of tickets per developer (John, Mary, Peter). The metric is the deviation between the original estimate and the actual effort per task, on a scale of -20% to +20%.
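The deviation metric itself is simple arithmetic. Here is a minimal sketch of how it could be computed; the ticket data (estimated vs. actual hours) is hypothetical sample data for illustration, since in practice you would export it from your tracker:

```python
from collections import defaultdict

def deviation_pct(estimate, actual):
    """Signed deviation between actual effort and the original estimate.
    Positive means the task took longer than estimated."""
    return (actual - estimate) / estimate * 100

# Hypothetical sample tickets (hours); real data would come from the tracker.
tickets = [
    {"dev": "Peter", "estimate": 8, "actual": 7.5},
    {"dev": "Peter", "estimate": 5, "actual": 5},
    {"dev": "Mary",  "estimate": 6, "actual": 8},
    {"dev": "John",  "estimate": 4, "actual": 6},
]

# Average deviation per developer, then for the whole team.
per_dev = defaultdict(list)
for t in tickets:
    per_dev[t["dev"]].append(deviation_pct(t["estimate"], t["actual"]))

for dev, deviations in sorted(per_dev.items()):
    print(f"{dev}: {sum(deviations) / len(deviations):+.1f}%")

team = [deviation_pct(t["estimate"], t["actual"]) for t in tickets]
print(f"Team average: {sum(team) / len(team):+.1f}%")
```

Averaging the signed (not absolute) deviations is a deliberate choice: it shows whether the team systematically under- or over-estimates, which is exactly the trend we wanted to watch.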
Peter was really predictable. In most cases he was hitting his estimates or even beating them.
Caution alert: it’s not necessarily good to have an engineer who regularly delivers work under the original estimate. It may be a sign of someone padding estimates heavily and being too conservative.
At this point we had an average deviation for the whole team of +22%.
Compared to Peter (our baseline), Mary and John had room to improve. So we took the obvious action: having Peter double-check Mary and John’s estimates. This ultimately led to a large improvement in our prediction abilities. We reduced the average deviation to +13%, which is great!
Another challenge with this project was reducing ramp-up time. Even though we try to avoid rotating people across projects, it happens sometimes. In this case, a team of five developers was assigned for a period of around six months. We knew from experience with projects like this that there was a high chance we’d have to bring in someone new midway through. So we defined a rule: if the new person didn’t reach the pace of their teammates within one Sprint, we would simply re-assign them to something else. Then we would try to bring back the already ramped-up member (i.e., if the original team member had been sent to help put out a fire somewhere else).
Again, we needed to measure this as precisely as possible and define a measurement for learning-curve time. Tickets were classified on a complexity scale from 1 to 5 and, as early as the very first Sprint, we defined a baseline. Basically, we calculated the number of points solved per developer (tickets closed, weighted by ticket complexity). The following chart shows the metric we used to see how “ready” a new dev was in comparison to the rest. You can see that Jack joined the team in Sprint 4, replacing John. Jack was able to pick it up pretty quickly and finished with a similar point total to the other devs. He even managed to increase his productivity and became one of the most effective members of the team. Good for him!
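The points calculation above can be sketched in a few lines. The Sprint data below is hypothetical sample data, and I’m reading “tickets closed x complexity” as summing the 1–5 complexity score of each closed ticket:

```python
def sprint_points(closed_ticket_complexities):
    """Points solved by a developer in one Sprint: the sum of the
    1-5 complexity scores of the tickets they closed."""
    return sum(closed_ticket_complexities)

# Hypothetical history: complexity of each ticket closed, per dev per Sprint.
history = {
    "Peter": {4: [3, 2, 5], 5: [4, 3, 3]},
    "Mary":  {4: [2, 3, 3], 5: [3, 2, 4]},
    "Jack":  {4: [1, 2],    5: [3, 3, 2]},  # Jack joined in Sprint 4
}

# Baseline: average points of the established devs in Sprint 4.
established = [n for n in history if n != "Jack"]
baseline = sum(sprint_points(history[n][4]) for n in established) / len(established)

# How "ready" is the new dev relative to that baseline?
for sprint in (4, 5):
    pts = sprint_points(history["Jack"][sprint])
    print(f"Sprint {sprint}: Jack at {pts / baseline:.0%} of the team baseline")
```

Comparing the new member’s points against a baseline from the established devs, rather than an absolute target, keeps the ramp-up rule fair regardless of how hard a given Sprint’s tickets happen to be.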
When thinking about software development metrics it’s very important to follow a clear management goal. You must be disciplined in determining what you want to accomplish by measuring only certain aspects of your operations.
Be it “increase team predictability,” “avoid long learning curves,” or any other ultimate purpose, the data you collect and report must accurately represent your goals. You want to avoid what I call the “dashboard disease”: a nice report full of charts and stats that isn’t actually useful. Most of the projects I’ve executed needed no more than three core software development metrics to ensure success.
So, next time you open your management tool and see all those beautiful (yet unused) trend charts, pie charts, and bar graphs, take a few moments to think. Select only the metrics that deliver valuable data and are tied to your core business objectives.