In almost every engagement I have with customers seeking guidance on how to effectively run an Agile development shop, the question of work-in-progress tracking comes up. The question comes in different “flavors”, though, often signaling where the organization is in its Agile journey.
Project tracking vs. work-in-progress tracking
Organizations that were born agile have mostly never planned in a project-centric manner. They typically fund a product team and track the value it delivers to customers and how satisfied those customers are.
However, many large organizations have spent years planning their investments in a project-centric manner. For them, the key to successful delivery is a project that adheres to the famous “On Time, In Scope, and Within Budget” triangle. Their challenge, once they move to Agile planning, is how to reconcile the two worlds.
According to the Agile manifesto, working software is the primary measure of progress. The word “project” isn’t even mentioned. Does that mean we stop tracking projects? That we throw away everything we know how to do? How do we fund new work efforts? How do we know if we are going over budget? How do we know if we are running a profitable organization?
The answer to these questions is that Agile is a philosophy focused on better ways of building software, and nothing else. Once you mix it with other concepts, confusion and conflict arise. It is even more important not to confuse “Product Lifecycle Management – PLM”, “Project and Portfolio Management – PPM” and “Application Lifecycle Management – ALM”. All three are needed, and they are closely related, but they are not the same thing.
Successful organizations focus on the flow of data, information and feedback loops across these three worlds. It is crucial not to let your PLM or PPM dictate how you run your ALM (Process, People and Products).
Since “working software is the primary measure of progress”, let us consider how we might measure such progress. We would first need to define what working software is. You should come to an agreement, as an organization, on what makes the software you built a working one.
Let us consider the following (made-up) definition: “Working software is a high-quality product that allows us to continuously deliver value to our customers”. This could be a good starting point. However, even with such a simple definition, there is room for debate about what defines a high-quality product. I personally like this definition by A.V. Feigenbaum: “Quality is a customer determination, not an engineer’s determination, not a marketing determination, nor a general management determination. It is based on the customer’s actual experience with the product or service, measured against his or her requirements — stated or unstated, conscious or merely sensed, technically operational or entirely subjective — and always representing a moving target in a competitive market.”
Hence, to track progress we need indicators that show us quality measures while the product is being developed, and others for when it is in our customers’ hands. More importantly, we need to know whether we are continuously delivering value to our customers, and how long it takes us to ship an idea out the door.
Here are some KPIs that, used collectively, can give us good insight into the quality of our software:
- Code analysis
- Code coverage
- Functional test coverage
- Test pass rate
- Build pass rate
- Cumulative flow diagram
- Lead time
- Cycle time
- Live site incident trends
- Customer feedback trends (5, 4, 3 … stars)
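To make the first few KPIs concrete, here is a minimal sketch of how test and build pass rates could be computed from run results. The `Run` record and the sample numbers are hypothetical, not something pulled from a VSTS API:

```python
from dataclasses import dataclass

# Hypothetical per-run result summary pulled from a build/test system.
@dataclass
class Run:
    total: int
    passed: int

def pass_rate(runs):
    """Overall pass rate across a set of runs, as a percentage."""
    total = sum(r.total for r in runs)
    passed = sum(r.passed for r in runs)
    return 100.0 * passed / total if total else 0.0

test_runs = [Run(total=120, passed=114), Run(total=125, passed=125)]
build_runs = [Run(total=30, passed=27)]

print(f"Test pass rate:  {pass_rate(test_runs):.1f}%")   # 97.6%
print(f"Build pass rate: {pass_rate(build_runs):.1f}%")  # 90.0%
```

The point, as with all of these KPIs, is the trend across sprints rather than any single number.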
In most ALM efforts I am involved in, we use either VSTS or TFS as the tool to manage the application lifecycle. I often get asked what a dashboard looks like for my team. Usually it is a set of dashboards, not just one. The idea is to get a holistic view of your software development investment. I often align my dashboards with the DevOps pillars: Plan, Dev & Test, Release and Learn. In VSTS you can create as many dashboards as you need: https://www.visualstudio.com/en-us/docs/report/dashboards .
(Charts used in this blog are directly taken from the VSTS documentation website: https://www.visualstudio.com/en-us/docs/overview )
In the Plan dashboard, I often use the following widgets. However, I never use these charts or KPIs to draw conclusions in isolation; they are meant to spark conversation within the team about how well we are doing:
Cumulative flow diagram: https://www.visualstudio.com/en-us/docs/report/guidance/cumulative-flow#configure-widget . I am looking to see how lean our continuous delivery is, and for spikes that signal inconsistent loads in our cycle.
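Underneath, a cumulative flow diagram is just a daily count of work items per state. Here is a sketch of that aggregation over a hypothetical, made-up state-transition log (the widget does this for you; this only illustrates the data behind the chart):

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical state-transition log: (work item id, new state, date entered).
transitions = [
    (1, "New",    date(2017, 5, 1)),
    (1, "Active", date(2017, 5, 2)),
    (2, "New",    date(2017, 5, 2)),
    (1, "Closed", date(2017, 5, 4)),
    (2, "Active", date(2017, 5, 4)),
]

def state_on(item_id, day):
    """Latest state the item entered on or before the given day, if any."""
    history = [(d, s) for i, s, d in transitions if i == item_id and d <= day]
    return max(history)[1] if history else None

def cumulative_flow(days):
    """For each day, count how many items sit in each state."""
    ids = {i for i, _, _ in transitions}
    return {day: Counter(filter(None, (state_on(i, day) for i in ids)))
            for day in days}

days = [date(2017, 5, 1) + timedelta(n) for n in range(4)]
for day, counts in cumulative_flow(days).items():
    print(day, dict(counts))
```

Stacking those per-day counts is what produces the bands of the diagram; a band that keeps widening is the spike in load you want the team to talk about.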
Lead time: https://www.visualstudio.com/en-us/docs/report/widget-catalog#lead-time-team-scoped Lead time is defined as the time between work item creation and work item completion. It helps you understand how long it takes for work requested by a customer to be delivered.
Cycle time: https://www.visualstudio.com/en-us/docs/report/widget-catalog#cycle-time-team-scoped Cycle time is defined as the time a work item spends in development before it is closed. Cycle time helps you analyze the time it takes to deliver work from your backlog.
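The difference between the two metrics comes down to which timestamps you subtract. A minimal sketch, using made-up dates and a hypothetical work item record (the widgets compute this for you from work item history):

```python
from datetime import datetime

# Hypothetical work item with the three timestamps the two metrics need.
item = {
    "created":   datetime(2017, 6, 1, 9, 0),   # customer request captured
    "activated": datetime(2017, 6, 8, 10, 0),  # development starts
    "closed":    datetime(2017, 6, 15, 17, 0), # work completed
}

lead_time = item["closed"] - item["created"]     # creation -> completion
cycle_time = item["closed"] - item["activated"]  # dev start -> completion

print(f"Lead time:  {lead_time.days} days")   # 14 days
print(f"Cycle time: {cycle_time.days} days")  # 7 days
```

A large gap between the two (here, a week spent waiting before development starts) is itself a useful conversation starter about queueing in your backlog.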
Sprint capacity: If your team uses capacity planning (https://www.visualstudio.com/en-us/docs/work/scale/capacity-planning), this widget gives you a quick view of whether you are over- or under-allocated. The goal is to allocate your team’s precious time without stretching it too thin, leaving room for the small unplanned spikes you should expect.
Sprint overview: a simple, lightweight progress overview using either story points or the number of work items.
Sprint burndown: shows whether you are on track to finish all the work items you committed to within the current sprint:
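The burndown is simply remaining work per day compared against an ideal straight line from the committed total down to zero. A sketch with made-up numbers (a 10-day sprint, 40 committed points):

```python
# Hypothetical sprint: committed story points and points completed per day.
committed = 40
completed_per_day = [0, 5, 3, 8, 0, 6, 4, 5, 2, 4]  # 10-day sprint

# Actual remaining work at the end of each day.
remaining = []
left = committed
for done in completed_per_day:
    left -= done
    remaining.append(left)

# Ideal line: burn the committed total down evenly to zero.
ideal = [committed - committed * (d + 1) / len(completed_per_day)
         for d in range(len(completed_per_day))]

for day, (actual, target) in enumerate(zip(remaining, ideal), start=1):
    flag = "" if actual <= target else "  <- behind the ideal line"
    print(f"day {day:2}: {actual:2} points left (ideal {target:4.1f}){flag}")
```

In this made-up sprint the team ends with 3 points still open, which is exactly the conversation the widget is supposed to trigger before the sprint ends, not after.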
In the Dev & Test dashboard, I want to know how well our code base is progressing and how we are handling our technical debt. Hence, I usually add the following widgets:
Code tile: displays the number of recent changes in the code repository. I usually track my master branch. What I am looking for is that the number of commits roughly matches the number of pull requests for the work items we are working against.
Code coverage: displays the code coverage from a build definition of your choice. What I am looking for is not necessarily the value itself, but whether we progressed or regressed.
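Since the trend matters more than the absolute number, the check the team actually cares about is a simple delta between builds. A sketch over hypothetical coverage values (the percentages are made up):

```python
# Hypothetical coverage percentages from the last few builds of one definition.
coverage_history = [62.1, 62.4, 61.8, 63.0]

def coverage_trend(history):
    """Compare the latest build against the previous one: the direction
    of travel matters more than the absolute number."""
    if len(history) < 2:
        return "not enough data"
    delta = history[-1] - history[-2]
    if delta > 0:
        return f"improved by {delta:.1f} points"
    if delta < 0:
        return f"regressed by {-delta:.1f} points"
    return "unchanged"

print(coverage_trend(coverage_history))  # improved by 1.2 points
```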
Code complexity: there are several integration points in VSTS with well-known code quality tools; SonarQube, ReSharper and NDepend come to mind. Use the one you understand best. (I will look into these tools more deeply in future blog posts; I am not advising or endorsing any of them.) In the meantime, take a look at the following sample from NDepend, and consider the insight your team gets from such an overview.
Test plan execution summary: I want to know how many tests are passing, failing or not run at all. Hence, I will consider different angles:
Test status by test suites:
Test status by sprint:
Test status by testers:
Test case creation trends:
Build status history: shows me how often we break our build, along with spikes in build duration that signal a significant change in our code base or its unit tests.
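Both signals are easy to extract from a list of recent builds. Here is a sketch with made-up build records, flagging durations more than two standard deviations above the mean as spikes (the threshold is an arbitrary choice for illustration):

```python
from statistics import mean, stdev

# Hypothetical recent builds on master: (succeeded?, duration in minutes).
builds = [(True, 11.8), (True, 12.1), (False, 12.5), (True, 12.0),
          (True, 12.3), (True, 19.7), (True, 12.2)]

# How often we break the build.
break_rate = 100.0 * sum(not ok for ok, _ in builds) / len(builds)
print(f"Broken builds: {break_rate:.0f}%")

# Duration spikes: anything more than two standard deviations above the mean.
durations = [d for _, d in builds]
mu, sigma = mean(durations), stdev(durations)
spikes = [d for d in durations if d > mu + 2 * sigma]
print("Duration spikes worth a conversation:", spikes)
```

Here the 19.7-minute build stands out against a ~12-minute baseline; that is the prompt to ask what landed in that change.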
Requirements Quality: Once you associate your work items with automated tests, you can track the quality of your requirements and how they map to your release pipeline.
Test results trend: this chart displays the trend of test results, such as passed or failed tests, for a selected build definition:
In the Release dashboard, my focus is how well our release pipeline continuously allows us to get value into the hands of our customers.
I am usually watching the progress of our topics:
And watching the evolution of our production path pipeline:
More importantly I want to know how well our testing is progressing within each environment in our pipeline:
In the Learn dashboard, I want to integrate data from telemetry, customer feedback and any other source that gives me insight into how well the value we delivered is being used, or even perceived.
For instance, I may add telemetry points from Application Insights:
More important than dashboards or widgets is team communication. These data points should help us drive change and progress within the team; if they do not, we are not measuring the right things.