The most important things cannot be measured. The issues that are most important, long term, cannot be measured in advance.
—W. Edwards Deming
Working software is the primary measure of progress.
—Agile Manifesto
Metrics are agreed-upon measures used to evaluate how well the organization is progressing toward the business and technical objectives of the portfolio, large solution, program, and team.
Thanks to its work physics, timeboxes, and fast feedback, Agile is inherently more measurable than waterfall development, whose measures of progress were largely proxy-based. Moreover, with Agile, the “system always runs,” so the best measure comes directly from objective evaluation of the working system. Continuous delivery and DevOps practices provide even more opportunities to measure. All other measures—even the extensive set of Lean-Agile metrics outlined below—are subordinate to the overriding goal of focusing on rapid delivery of quality, working Solutions.
But metrics are indeed important in the enterprise context. To that end, SAFe provides guidance for various Metrics that can be applied for each level of the Framework. The links below navigate to the entries on this page.
- Lean Portfolio Metrics
- Portfolio Kanban Board
- Epic Burn-up Chart
- Epic Progress Measure
- Enterprise Balanced Scorecard
- Lean Portfolio Management Self-Assessment
- Value Stream Key Performance Indicators
Lean Portfolio Metrics
The Lean Portfolio Metrics set provided here is an example of a comprehensive but Lean set of metrics that can be used to assess internal and external progress for an entire Portfolio. In the spirit of “the simplest set of measures that can possibly work,” Figure 1 provides the leanest set that a few Lean-Agile portfolios are using effectively to evaluate the overall performance of their transformations.
Portfolio Kanban Board
The primary motivation of the Portfolio Kanban Board is to ensure that Epics and Enablers are reasoned about and analyzed prior to reaching a Program Increment (PI) boundary, are prioritized appropriately, and have established acceptance criteria to guide a high-fidelity implementation. Furthermore, the business and enabler epics can be tracked to understand which ones are being worked on and which have been completed.
Epic Burn-up Chart
The Epic Burn-up Chart tracks progress toward an epic’s completion. There are three measures:
- Initial epic estimate line (blue) – Estimated Story points from the lean business case
- Work completed line (red) – Actual story points rolled up from the epic’s child Features and stories
- Cumulative work completed line (green) – Cumulative story points completed and rolled up from the epic’s child features and stories
These are illustrated in Figure 2.
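As a rough illustration of how these three lines could be derived from story-level data (the field names and roll-up shown here are assumptions, not a SAFe-prescribed schema), a minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class Story:
    points: int
    accepted_in: int | None  # iteration in which the story was accepted; None if still open

def burn_up(initial_estimate: int, stories: list[Story], iterations: int):
    """Per-iteration values for the three burn-up measures."""
    series, cumulative = [], 0
    for it in range(1, iterations + 1):
        completed = sum(s.points for s in stories if s.accepted_in == it)
        cumulative += completed
        series.append({
            "iteration": it,
            "initial_estimate": initial_estimate,  # blue line (flat)
            "work_completed": completed,           # red line
            "cumulative_completed": cumulative,    # green line
        })
    return series

print(burn_up(100, [Story(8, 1), Story(5, 1), Story(13, 2), Story(8, None)], 3))
```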
Epic Progress Measure
The Epic Progress Measure provides an at-a-glance view of the status of all epics in a portfolio.
- Epic X – Represents the name of the epic; business epics are blue (below) and enabler epics are red
- Bar length – Represents the total current estimated story points for an epic’s child features/stories; the dark green shaded area represents the actual story points completed; the light green shaded area depicts the total story points that are “in progress”
- Vertical red line – Represents the initial epic estimate, in story points, from the lean business case
- 0000 / 0000 – The first number represents the current estimated story points, rolled up from the epic’s child features/stories; the second number represents the initial epic estimate (same as the red line)
These measures are depicted in Figure 3.
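A minimal sketch of how one row of this measure might be computed; the field names are hypothetical, and the roll-up from the epic’s child features/stories is assumed to have already happened:

```python
def epic_progress(name, done_pts, in_progress_pts, current_estimate, initial_estimate):
    """One epic's row: the bar segments plus the '0000 / 0000' label."""
    not_started = max(current_estimate - done_pts - in_progress_pts, 0)
    return {
        "epic": name,
        "dark_green": done_pts,          # story points completed
        "light_green": in_progress_pts,  # story points in progress
        "unshaded": not_started,         # estimated but not yet started
        "red_line_at": initial_estimate, # initial estimate from the lean business case
        "label": f"{current_estimate} / {initial_estimate}",
    }

print(epic_progress("Epic X", done_pts=320, in_progress_pts=80,
                    current_estimate=520, initial_estimate=400))
```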
Enterprise Balanced Scorecard
The Enterprise Balanced Scorecard provides four perspectives with which to measure performance for each portfolio—although the popularity of this approach has been declining over time in favor of Lean Portfolio Metrics (see Figure 1). These perspectives are:
- Efficiency
- Value Delivery
- Quality
- Agility
These measures are then mapped into an executive dashboard, as illustrated in Figures 4 and 5.
For more on this approach, see Agile Software Requirements [1], Chapter 22.
Lean Portfolio Management Self-Assessment
The Lean Portfolio Management (LPM) team continuously assesses and improves their processes. Often this is done using a structured, periodic Self-Assessment. When the LPM team completes the spreadsheet below, it will automatically produce a radar chart like that shown in Figure 6, which highlights relative strengths and weaknesses.
Figure 6. Portfolio self-assessment radar chart
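The spreadsheet logic amounts to averaging the question scores within each assessment category to produce one radar axis per category. A rough equivalent, with hypothetical category names and scores:

```python
from statistics import mean

# Hypothetical assessment categories and 1-5 scores for their questions
responses = {
    "Strategy and Investment Funding": [3, 4, 2],
    "Agile Portfolio Operations":      [4, 4, 3],
    "Lean Governance":                 [2, 3, 3],
}

# One radar axis per category: the average of its question scores
radar = {category: round(mean(scores), 1) for category, scores in responses.items()}
print(radar)  # {'Strategy and Investment Funding': 3.0, ...}
```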
Large Solution Metrics
Solution Kanban Board
The primary motivation of the Solution Kanban Board is to ensure that Capabilities and Enablers are reasoned about and analyzed prior to reaching a PI boundary and are prioritized appropriately, and that acceptance criteria have been established to guide a high-fidelity implementation. Furthermore, the capabilities can be tracked to understand which ones are being worked on and which have been completed.
Solution Train Predictability Measure
To assess the overall predictability of the Solution Train, the individual predictability measures for each Agile Release Train (ART) can be aggregated to create an overall Solution Train Predictability Measure, as illustrated in Figure 7.
Solution Train Performance Metrics
To assess the overall performance of the Solution Train, the individual performance measures for each ART can be aggregated to create an overall set of Solution Train Performance Metrics, as illustrated in Figure 8.
Program Metrics
Feature Progress Report
The Feature Progress Report tracks the status of features and enablers during PI execution. It indicates which features are on track or behind at any point in time. The chart has two bars:
- Plan – Represents the total number of stories planned for a feature.
- Actual – Represents the number of stories completed for a feature. The bar is shaded red or green, depending on whether the feature is on track or not.
Figure 9 gives an example of a feature progress report.
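The article does not define exactly when a feature counts as “on track”; the sketch below assumes a simple pacing rule (completion proportional to elapsed iterations), which is a local choice rather than SAFe guidance:

```python
def feature_status(planned_stories: int, done_stories: int,
                   elapsed_iterations: int, total_iterations: int) -> str:
    """Shade the 'Actual' bar green if completion keeps pace with the PI, red otherwise."""
    expected = planned_stories * elapsed_iterations / total_iterations
    return "green" if done_stories >= expected else "red"

print(feature_status(planned_stories=10, done_stories=4,
                     elapsed_iterations=3, total_iterations=5))  # expects 6 -> 'red'
```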
Program Kanban Board
The primary motivation of the Program Kanban Board is to ensure that features are reasoned about and analyzed prior to reaching a PI boundary and are prioritized appropriately, and that acceptance criteria have been established to guide a high-fidelity implementation. Furthermore, the features can be tracked to understand which ones are being worked on and which have been completed.
Program Predictability Measure
To assess the overall predictability of the release train, the “Team PI Performance Report” is aggregated, for all teams on the train, to calculate the Program Predictability Measure, as illustrated in Figure 10. The Team PI Performance report compares actual business value achieved to planned business value (see Figure 22).
For more on this approach, see Agile Software Requirements [1], Chapter 15.
Program Performance Metrics
The end of each PI is a natural and significant measuring point. Figure 11 is an example set of Performance Metrics for a program.
PI Burn-down Chart
The PI Burn-down Chart shows the progress being made toward the program increment timebox. Use it to track the work planned for a PI against the work that has been accepted.
- The horizontal axis of the PI burn-down chart shows the iterations within the PI
- The vertical axis shows the amount of work (story points) remaining at the start of each iteration
Figure 12 exemplifies a train’s burn-down measure. Although the PI burn-down shows the progress being made toward the program increment timebox, it does not reveal which features may or may not be delivered during the PI. The Feature Progress Report provides that information (refer to Figure 9).
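A minimal sketch of the burn-down arithmetic, assuming a fixed scope (in practice, scope changes during the PI would also move the remaining-work line):

```python
def pi_burn_down(total_planned: int, accepted_per_iteration: list[int]) -> list[tuple[int, int]]:
    """Story points remaining at the start of each iteration in the PI."""
    remaining = total_planned
    points = []
    for i, accepted in enumerate(accepted_per_iteration, start=1):
        points.append((i, remaining))  # (iteration, work remaining at its start)
        remaining -= accepted
    points.append((len(accepted_per_iteration) + 1, remaining))  # end of PI
    return points

print(pi_burn_down(100, [20, 15, 25, 10, 20]))
# [(1, 100), (2, 80), (3, 65), (4, 40), (5, 30), (6, 10)]
```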
Cumulative Flow Diagram
A Cumulative Flow Diagram (CFD) is made up of a series of lines or areas representing the amount of work in different steps of progression in a Value Stream. For example, typical steps of the Program Kanban include:
- Validating on Staging
- Deploying to Production
In the cumulative flow diagram in Figure 13, the number of features in each stage of development is plotted for each day in the chart.
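A CFD reduces to counting items per Kanban state per day; each state becomes one band on the chart. A sketch, using hypothetical snapshot data and state names:

```python
from collections import Counter
from datetime import date

# Hypothetical daily snapshots: (day, feature_id, kanban_state)
snapshots = [
    (date(2016, 4, 1), "F1", "Implementing"),
    (date(2016, 4, 1), "F2", "Validating on Staging"),
    (date(2016, 4, 2), "F1", "Validating on Staging"),
    (date(2016, 4, 2), "F2", "Deploying to Production"),
]

def cumulative_flow(snapshots):
    """Count features per Kanban state for each day."""
    per_day = {}
    for day, _feature, state in snapshots:
        per_day.setdefault(day, Counter())[state] += 1
    return per_day

print(cumulative_flow(snapshots))
```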
Agile Release Train Self-Assessment
As program execution is a core value of SAFe, the Agile Release Train (ART) continuously works to improve its performance. A Self-Assessment form (below) can be used for this purpose at PI boundaries or any time the team wants to pause and assess their organization and practices. Trending this data over time is a key performance indicator for the program. Figure 14 gives an example of the results of a self-assessment radar chart.
Continuous Delivery Pipeline Efficiency
This metric looks at the different steps of the Continuous Delivery Pipeline, as manifested in the program or solution Kanban, and evaluates the efficiency of each step as the relation between touch time and wait time (Figure 15). This metric can be used as a basis for value stream mapping. Some of the information can be taken from tools, especially around continuous integration and deployment, while other data may need to be logged manually in a spreadsheet. For manually logged data, provide an estimate of the average touch and wait times.
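Assuming efficiency is measured as the common flow-efficiency ratio of touch time to total elapsed time (touch plus wait), which is a value-stream-mapping convention rather than a formula the article prescribes, a sketch with made-up numbers:

```python
# Hypothetical per-step touch and wait times, in hours (manual estimates are fine)
steps = {
    "Build and Integrate":  {"touch": 2.0, "wait": 6.0},
    "Validate on Staging":  {"touch": 4.0, "wait": 20.0},
    "Deploy to Production": {"touch": 1.0, "wait": 40.0},
}

for name, t in steps.items():
    # Flow efficiency of the step: value-adding time over total elapsed time
    efficiency = t["touch"] / (t["touch"] + t["wait"])
    print(f"{name}: {efficiency:.0%}")
```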
Deployments and Releases per Timebox
This metric shows whether the program is making progress toward deploying and releasing more frequently. It can be shown on a program increment basis, as in Figure 16, or zoomed in to see how releases are handled mid-PI, as Figure 17 shows.
Recovery over Time
This measure shows how often we have had to roll back (physically or by turning off feature toggles). It overlays this data with the points in time when we deployed or released to production, to show how these relate to the need for rollbacks (Figure 18).
Innovation Accounting and Leading Indicators
One of the major goals of the Continuous Delivery Pipeline is to allow the organization to run experiments quickly through to end customers and validate hypotheses. As such, both Minimal Marketable Features (MMFs) and Minimum Viable Products (MVPs) must define the leading indicators for the business outcomes of their hypotheses (see the Epic article for more details). This allows us to measure innovation using real innovation accounting, as opposed to vanity metrics.
In most enterprises, such leading indicators will recur, so it is important to monitor them continuously and to overlay the results with the timing of releases of the different features.
The example below (Figure 19) shows some metrics gathered on the SAFe website itself, serving as leading indicators for its development efforts.
Hypotheses Tested over Time
A major goal of hypothesis-driven development is to create small experiments that are validated as soon as possible by customers or customer proxies. The metric in Figure 20 shows how many hypotheses have been validated in a PI and how many of them failed. A high failure rate is actually a good thing in an environment of quick testing (see ref 3 for more information), as it allows us to validate quickly and to identify and focus on the good ideas.
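A sketch of the counting behind this chart, with a hypothetical experiment log:

```python
from collections import Counter

# Hypothetical experiment log: (program increment, outcome)
experiments = [
    ("PI-1", "validated"), ("PI-1", "failed"), ("PI-1", "failed"),
    ("PI-2", "validated"), ("PI-2", "validated"), ("PI-2", "failed"),
]

outcomes = Counter(experiments)
for pi in ("PI-1", "PI-2"):
    tested = outcomes[(pi, "validated")] + outcomes[(pi, "failed")]
    print(pi, "- hypotheses tested:", tested, "| failed:", outcomes[(pi, "failed")])
```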
Team Metrics
Iteration Metrics
The end of each iteration is the time for each Agile Team to collect whatever Iteration Metrics they have agreed upon. This occurs in the quantitative part of the team retrospective. One such team’s metrics set is illustrated in Figure 21.
Team Kanban Board
A team’s Kanban process evolution is iterative. After the team defines the initial process steps (e.g., define, analyze, review, build, integrate, test) and WIP limits, then executes for a while, bottlenecks should surface. If they don’t, the team refines the states or further reduces the WIP until it becomes obvious which state is “starving” or too full, helping the team adjust toward more optimal flow.
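One way such a bottleneck check could look, using a hypothetical board snapshot of item counts against WIP limits; the detection rule here is an assumption, not SAFe guidance:

```python
# Hypothetical board snapshot: state -> (items currently in state, WIP limit)
board = {
    "define":  (4, 4),
    "analyze": (0, 3),
    "build":   (5, 5),
    "test":    (2, 4),
}

for state, (count, limit) in board.items():
    if count == 0:
        print(f"{state}: starving (nothing to pull)")
    elif count >= limit:
        print(f"{state}: at its WIP limit (possible bottleneck)")
```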
Team PI Performance Report
During the PI System Demo, the business owners, customers, Agile Teams, and other key stakeholders collaboratively rate the actual business value achieved for each team’s PI objectives as shown in Figure 22.
Reliable trains should generally operate in the 80%–100% range; this allows the business and its outside stakeholders to plan effectively. Below are some important notes about how the report works (a small calculation sketch follows the list):
- The Planned total (BV) does not include stretch objectives, which helps protect the reliability of the train
- The Actual total (Actual BV) includes stretch objectives
- The Achievement % is calculated by dividing the Actual BV total by the Planned BV total
- A team can achieve greater than 100% (as a result of stretch objectives achieved)
- The effort required for stretch objectives is included in the Iteration plan’s load (i.e., it is not extra work the team does on weekends)
- Individual team totals are rolled up into the Program Predictability Measure (see Figure 10)
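A sketch of the achievement and roll-up arithmetic described above; the team names and business-value numbers are made up:

```python
def achievement_pct(planned_bv: float, actual_bv: float) -> float:
    """Actual BV / Planned BV. Planned excludes stretch objectives while
    actual includes them, so a team can legitimately exceed 100%."""
    return 100 * actual_bv / planned_bv

# Hypothetical teams on one train: name -> (planned BV, actual BV)
teams = {"Team A": (40, 36), "Team B": (35, 38), "Team C": (50, 41)}

for name, (planned, actual) in teams.items():
    print(name, f"{achievement_pct(planned, actual):.0f}%")

# Program Predictability Measure: the same ratio rolled up over the whole train
total_planned = sum(p for p, _ in teams.values())
total_actual = sum(a for _, a in teams.values())
print("Program:", f"{achievement_pct(total_planned, total_actual):.0f}%")
```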
SAFe Team Self-Assessment
Agile Teams continuously assess and improve their processes, often via a structured, periodic Self-Assessment. This gives the team time to reflect on and discuss the key practices that help yield results. One such assessment is a simple SAFe Team practices assessment, provided as a spreadsheet. When the team completes the spreadsheet, it will automatically produce a radar chart such as that shown in Figure 23, which highlights relative strengths and weaknesses.
[1] Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Addison-Wesley, 2011.
[2] Leffingwell, Dean. Scaling Software Agility: Best Practices for Large Enterprises. Addison-Wesley, 2007.
Last update: 5 April 2016