What is the state-of-the-art?

Agile methodologies have gained popularity and become mainstream in software development, thanks to their ability to generate high customer satisfaction. Estimation and metrics consultancies receive more and more ‘Agile assignments’ and are expected to come up with valuable advice for agile project environments. But Agile methodologies differ from traditional waterfall methodologies, especially in the way estimation and metrics gathering are done. A survey of the literature and the internet on these subjects gives insight into the differences, broadens our knowledge and may strengthen our abilities. New approaches were found for applying existing techniques and instruments in Agile environments. The results of this survey encompass three articles; this first paper shows the state of the art of metrics in agile software development.

Software Metrics in general

Software metrics, or software measurement, is a concept from the software industry. It can be described as follows: assigning values to properties of products or processes according to defined principles, in order to carry out statistical or comparative analyses on behalf of a previously defined purpose.

Software metrics usually support three areas of interest:

  • Planning, forecasting
  • Monitoring & control
  • Performance improvement, benchmarking

There are many software metrics in use for different situations. Putnam and Myers beautifully point out what it is all about with their ‘5 core metrics’ for software development: one develops a product of acceptable quality with a certain effort in a certain time. The relationship between product, quality, effort and time is determined by the productivity, which therefore must also be measured. Let us take a closer look at these core metrics.

Product (size)
Measures the quantitative properties that determine the size of the product; this can be done beforehand or afterwards.
Examples: LOC, Function points, number of Use Cases, number of Features, User Stories.

Quality
The product has to possess a certain quality, usually determined by means of the number of defects per period of time, the Mean Time To Defect (MTTD) and the defect density (the number of defects per KLOC).

Effort
Measured in person-months that were solely dedicated to this project.

Duration
This is the turnaround time, without interruptions, between the start date and the end date of the project (e.g. in months).

Productivity
This is a property of the process, i.e. the software development process in the organization. It should be emphasized that this is not the productivity of one or more persons who take care of the realization! Productivity determines the relationship between product (size), effort and time in the so-called Software Equation, which in its basic form reads:

Productivity = product size (with a known quality) / (effort × time)
(refer to the book for the full explanation)
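
To make the relationship concrete, here is a minimal sketch in Python with invented figures; the numbers below are illustrative only and are not taken from Putnam and Myers.

    # Basic form of the Software Equation: productivity = size / (effort * time).
    # All figures below are invented for illustration.
    size = 600      # product size, e.g. in function points
    effort = 50     # effort in person-months
    duration = 10   # project duration in months

    productivity = size / (effort * duration)
    print(f"productivity: {productivity:.2f} size units per (person-month x month)")

Conversely, with a known productivity and a desired product size, the equation constrains the feasible combinations of effort and time.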

These core metrics support the previously mentioned (main) areas of interest as follows.

  • Planning, forecasting: productivity and product size (at a desired quality) indirectly determine the cost, lead time and effort.
  • Monitoring & control: all kinds of measurements are used, such as the percentage of effort spent per delivered product part compared to the entire product size at a certain point in time, or the number of errors found during development compared to the expected number of errors. Measurements of this kind characterize the performance of the project team.
  • Performance improvement, benchmarking: by building a history of key information from an organization’s own finished projects, a reference point can be determined against which future projects can be compared. The organization’s own performance can also be compared with the performance of other companies (benchmarking); with this second comparison it can be determined whether the organization performs in line with the market.

Metrics in Agile projects

Agile methods have the same starting point as the waterfall method, namely: one develops a product of acceptable quality with a certain effort in a certain time.

The approach and processes may differ, but in principle it is another road that leads to the same destination. It is therefore not surprising that software metrics have their place in an agile environment. However, there are some notable differences from the waterfall approach. In the first place, it should be noted that agile methods have their greatest successes in small to medium-sized projects for commercial applications.

The focus of agile metrics is in line with this limited size: mainly on the team, the current iteration and the current release; less on the project, and perhaps not at all on a whole program of projects. In short, the focus is aimed internally. This is important to notice, because on this point agile clearly differs from the waterfall approach. A second striking difference between agile metrics and waterfall metrics is in the units of measure used. Agile metrics for product (size) and productivity are expressed in subjective units (story points, and story points per iteration or velocity) that apply only to one project and one team. This makes comparison between teams, projects and organizations impossible. Waterfall metrics, by contrast, are expressed in standardized units, specifically for benchmarking purposes.

The following matrix shows the core metrics of Putnam and Myers in both environments.

Core Metric     | Agile                                  | Waterfall
Product (size)  | features, stories                      | FP, CFP, UCP ***)
Quality         | defects/iteration, defects, MTTD       | defects/release, defects, MTTD
Effort          | story points *)                        | person-months
Time            | duration (months)                      | duration (months)
Productivity    | velocity (story points/iteration) **)  | hours/FP ****)

*) Story points are subjective and apply only to this project and this team.
**) Velocity is subjective and applies only to this project and this team; benchmarking is not possible.
***) FP and CFP are objective, international standards for expressing functional size.
****) Hours/FP is used by several estimation & metrics tools for benchmarking purposes.

A tour of the Web regarding the use of metrics in the Agile community yields a varied picture. Some consultants propose metrics that ‘stay close to the manifesto’. In this category, goals emerge such as ‘insight into the sponsor’s confidence’, ‘understanding of customer satisfaction’ and ‘motivation of the team’. The perspective from which ‘agilists’ look at metrics is well illustrated by the presentation on Agile metrics by Mike Griffith. Aside from the ‘agilists’, there are consultants who focus on the traditional metrics, albeit using agile-specific concepts and units such as features, iterations, story points and velocity.

A remarkable document in this context is the recent thesis of Hanna Kulas. This document pinpoints why Agile is so special and how certain product metrics can be incorporated into agile projects, as well as what will be measured and what benefits may be established. The application of these product metrics may lead to an increase in customer satisfaction as a result of higher product quality and lower development costs, which in turn result from an improved understanding of the software development process.

Remarkable, and characteristic of agile, is the absence of metrics for benchmarking or any other form of external comparison. The units of measure used for product (size) and productivity are subjective and apply exclusively to the project and team in question. From the perspective of an acquisition manager or sponsor, there is no possibility to compare development teams or tendering contractors on productivity. Selecting a contractor on rational grounds (productivity) is therefore virtually impossible.

Hereafter, the ‘agile’ metrics that have been encountered on the Web are summarized, together with their purpose and how they are measured. Notice that the majority of the metrics support ‘monitoring & control’. Apparently, with respect to this area of interest, the agile environment does not differ that much from the traditional waterfall environment.

‘Agile’ Metrics, results of a survey on the Web

The following tables show metrics recommended by various ‘agile consultants’, in many cases based on their own practice. The metrics are grouped around the areas of interest mentioned earlier in this document.

Metrics for Planning, forecasting

• Number of features
  Purpose: (1) insight into the size of the product (and of the entire release); (2) when a status is applied to features: insight into progress.
  How to measure: the product comprises features, which in turn comprise stories. Features are grouped as ‘to do’, ‘in progress’ and ‘accepted’.

• Number of planned stories per iteration/release
  Purpose: (1) insight into the size of the product (and of the entire release); (2) when a status is applied to stories: insight into progress.
  How to measure: the work is described in stories, which are quantified in story points. Stories are grouped as ‘to do’, ‘in progress’ and ‘accepted’.

• Number of accepted stories per iteration/release
  Purpose: to track the progress of the iteration/release.
  How to measure: formal registration of accepted stories.

• Team Velocity
  Purpose: refer to Monitoring & Control.

• LOC
  Purpose: indicates the amount of completed work (progress); input for the calculation of other metrics, e.g. defect density.
  How to measure: according to agreed rules on which LOCs to count.
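
As an illustration, here is a minimal Python sketch of how such planning counts could be derived from a simple backlog export. The field names and status values are assumptions for the example, not taken from any particular tool.

    from collections import Counter

    # Hypothetical backlog export: each story has a status and a story-point estimate.
    backlog = [
        {"story": "login form",     "status": "accepted",    "points": 3},
        {"story": "password reset", "status": "in progress", "points": 5},
        {"story": "audit trail",    "status": "to do",       "points": 8},
        {"story": "export to CSV",  "status": "to do",       "points": 5},
    ]

    # Number of stories per status group ('to do', 'in progress', 'accepted').
    status_counts = Counter(item["status"] for item in backlog)
    print(status_counts)

    # Accepted vs. planned story points give a first indication of progress.
    planned = sum(item["points"] for item in backlog)
    accepted = sum(item["points"] for item in backlog if item["status"] == "accepted")
    print(f"accepted {accepted} of {planned} planned story points")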

Metrics for Monitoring & Control (progress and performance)

• Iteration Burn-down
  Purpose: performance per iteration: ‘are we on track?’
  How to measure: effort remaining (in hours) for the current iteration (effort spent versus effort planned expresses the performance).

• Team Velocity per Iteration
  Purpose: to learn the historical velocity of a certain team; cannot be used to compare different teams.
  How to measure: number of realized story points per iteration within this release. Velocity is team- and project-specific.

• Release Burn-down
  Purpose: to track the progress of a release from iteration to iteration: ‘are we on track for the entire release?’
  How to measure: number of story points ‘to do’ after completion of an iteration within the release (extrapolation with a certain velocity shows the end date).

• Release Burn-up
  Purpose: how much ‘product’ can be delivered within the given time frame.
  How to measure: number of story points realized after completion of an iteration, relative to the total number of story points of the release. On a timeline this shows the progress of ‘scope completion’.

• Number of test cases per Iteration
  Purpose: to learn the amount of test work per iteration; to track the progress of testing.
  How to measure: number of test cases per iteration, recorded as ‘passed’, ‘failed’ and ‘to do’.

• Cycle Time (team’s capacity)
  Purpose: to determine bottlenecks in the process; the discipline with the lowest capacity is the bottleneck.
  How to measure: number of stories that can be handled per discipline within an iteration (e.g. analysis, UI design, coding, unit test, system test).

• Little’s Law (‘cycle times are proportional to queue length’)
  Purpose: insight into duration; completion time can be predicted based on queue length.
  How to measure: work in progress (number of stories) divided by the capacity of the process step.
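
To make a few of these calculations concrete, here is a minimal Python sketch of velocity, a release burn-down forecast and Little’s Law; all figures are invented for the example.

    # Velocity: realized story points per iteration (team- and project-specific).
    realized_points_per_iteration = [18, 22, 20]  # invented figures
    velocity = sum(realized_points_per_iteration) / len(realized_points_per_iteration)

    # Release burn-down forecast: extrapolate the remaining scope with the velocity.
    remaining_points = 120                        # story points still 'to do'
    iterations_left = remaining_points / velocity
    print(f"velocity {velocity:.1f} points/iteration, "
          f"about {iterations_left:.1f} iterations to go")

    # Little's Law: cycle time = work in progress / capacity of the process step.
    work_in_progress = 12                         # stories queued for a process step
    capacity = 4                                  # stories the step handles per iteration
    cycle_time = work_in_progress / capacity      # in iterations
    print(f"expected cycle time: {cycle_time:.1f} iterations")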

Metrics for Improvement (product quality and process improvement)

• Cumulative number of defects
  Purpose: to track the effectiveness of testing.
  How to measure: logging each defect in a defect management system.

• Number of test sessions
  Purpose: to track the testing effort and compare it to the cumulative number of defects.
  How to measure: extraction of the data from the defect repository.

• Defect density
  Purpose: to determine the quality of the software in terms of ‘lack of defects’.
  How to measure: the cumulative number of defects divided by the size in KLOC (thousands of LOC).

• Defect distribution per origin
  Purpose: to decide where to allocate quality assurance resources.
  How to measure: by logging the origin of defects in the defect repository and extracting the data with an automated tool.

• Defect distribution per type
  Purpose: to learn which types of defects are the most common and to help avoid them in the future.
  How to measure: by logging the type of defects in the defect repository and extracting the data with an automated tool.

• Defect Cycle Time
  Purpose: insight into the time needed to solve a defect (speed of defect resolution); the faster the resolution, the less code ‘on top’ of the defect will be produced.
  How to measure: resolution date of the defect (usually the closing date in the defect repository) minus the opening date.
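
As an illustration, a minimal Python sketch of defect density and defect cycle time; the defect data below are invented for the example.

    from datetime import date

    # Defect density: cumulative number of defects per KLOC (invented figures).
    total_defects = 45
    size_kloc = 30   # size of the code base in thousands of LOC
    print(f"defect density: {total_defects / size_kloc:.1f} defects/KLOC")

    # Defect cycle time: resolution date minus opening date, per defect.
    defects = [
        {"id": 1, "opened": date(2014, 3, 3), "resolved": date(2014, 3, 7)},
        {"id": 2, "opened": date(2014, 3, 5), "resolved": date(2014, 3, 6)},
    ]
    cycle_times = [(d["resolved"] - d["opened"]).days for d in defects]
    print(f"average defect cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")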

Note: ‘metrics’ were also found that measure ‘insight into the client’s trust’ and ‘insight into customer satisfaction’. However, for these metrics the agile community fails to make it plausible why these aspects specifically apply to agile; they are therefore not included here. In my own experience of more than 30 years, these aspects are measured in all kinds of project environments. More or less the same applies to the use of the Earned Value Method.

Conclusions

This tour of the Web and through publications leads to the following conclusions.

  1. The metrics used within the Agile methods and the units used (story points, velocity) are not standardized. This makes benchmarking problematic, if not impossible.
  2. When realized story points are expressed as LOCs, a correspondence may be established with standardized functional size units (FPs); even an organization’s or a team’s productivity may then be determined (a small illustrative calculation follows this list).
  3. Agile methods could benefit from incorporating product metrics: an increase in customer satisfaction as a result of higher product quality and lower development costs through an improved understanding of the software development process.
  4. Within the Agile community many metrics have been developed that are essentially the same as metrics within the waterfall method, albeit using agile-specific units and concepts.
  5. Agile methods do not recognize ‘functional size’. The ‘size’ of a release is expressed as a number of features or stories. The measure ‘story points’ does not yield a (functional) size, but rather an amount of required effort.
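
To illustrate conclusion 2, a minimal Python sketch; all ratios below are invented for the example, and real conversion ratios depend heavily on the programming language, the counting rules and the organization.

    # Linking story points to LOC, and LOC to function points via a
    # 'backfiring' ratio. All figures are invented for illustration.
    loc_per_story_point = 120      # observed in this team's own history (assumed)
    loc_per_function_point = 50    # backfiring ratio for the language used (assumed)

    realized_story_points = 200    # realized in a release (assumed)
    realized_loc = realized_story_points * loc_per_story_point
    size_fp = realized_loc / loc_per_function_point
    print(f"{realized_story_points} story points ~ {realized_loc} LOC ~ {size_fp:.0f} FP")

    # With the effort known, a standardized productivity figure follows.
    effort_hours = 3200            # assumed
    print(f"productivity: {effort_hours / size_fp:.1f} hours/FP")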

About the author

John Kammelar is Metrics Consultant at metrieken.nl, which helps organisations attain insight into and control over the costs and time of software development and management. They quantify functionality with Function Point Analysis; based on this analysis, a realistic estimation of time and costs can be made. This blog is part of a series of three articles resulting from an extensive survey. This first part shows the state of the art of metrics in agile software development. All articles can be downloaded from the metrieken.nl website.

A blog post represents the personal opinion of the author
and may not necessarily coincide with official Nesma policies.

2 Comments

  1. Harold says:

    John, thanks for this blog.

    However, I have some problems with the first table under the text: ‘The following matrix shows the core metrics of Putnam and Myers in both environments.’ You imply that the terms on the left are applied in agile projects, whereas the terms on the right are used in traditional (waterfall) projects. In my opinion, however, the terms on the left cannot be used for estimating or benchmarking, as they are not based on a standard method for size measurement. Although these ‘agile metrics’ are very useful in operational sprint planning, team commitment and communication, they are useless in project estimation, productivity measurement, benchmarking and therefore in contract management and outsourcing. The ‘waterfall metrics’ on the right still need to be used in these activities, regardless of whether the project is delivered in a traditional way or an agile way.

    Do you agree?

  2. René Notten says:

    Hi Harold,

    Because John doesn’t have a personal account, I will post his reaction to you.
    John: “The terms and units of measure in the ‘Putnam & Myers-table’ on the left focus on metrics used in agile environments.
    The purpose of those metrics is a different matter. Metrics using agile units of measure can be used within the scope of a project; however, I fully agree that they cannot be used for the purposes you mention (benchmarking, contract management, outsourcing).”
