Introduction

This is the third part of a blog series on a number of popular software size measurement methods and their usefulness for software project estimation. In this third part, the non-functional size measurement methods are covered. In the next and last blog on this topic, I will cover the hybrid methods.

Readers interested in this topic are strongly encouraged to read the first and second parts of this blog series before reading this third part.

For a particular software size measurement method to be useful for software project estimation, the following characteristics should apply:

  • The size measurement method can be carried out in an objective (independent of the person doing it), repeatable, verifiable and therefore defensible way;
  • Size in itself is only useful when historical data is available from projects measured with that size measurement method;
  • The size measurement method should be supported by parametric models and/or tools, such as QSM SLIM, SEER for Software, TruePlanning or COCOMO II, in order to accurately factor in project-specific characteristics that influence the estimate.

Of course, any software size measurement method that does not comply with one or more of these criteria may still be very useful to an organization, provided it draws up a procedure to ensure repeatable size measurements, collects and analyzes historical data and/or builds its own estimation models based on the data collected. For this blog, however, the focus is on the theoretical usefulness of each measurement method for software project estimation in the context of organizations that do not yet have a parametric estimation process in place. For organizations that are willing to implement a size measurement method in order to estimate software projects, this blog can serve as a guideline for selecting a specific method.

Although everything written in this blog is based on personal experience and on publicly available (referenced) documentation, I wish to stress that this blog only reflects my personal views and beliefs; I am not claiming that everything written here is an absolute truth. My personal beliefs are not necessarily those of the organizations I am affiliated with: Metri, Nesma and ISBSG.

I encourage everybody to submit comments and/or feedback on this blog.

 

Non-functional size measurement methods

While the functional size measurement methods measure only the size of the functional user requirements (what the software should offer to the user), the non-functional size measurement methods measure (often only part of) the non-functional user requirements, which can be technical user requirements and/or quality user requirements (how the software should work). The measurement of the delivered code is also considered a non-functional size measurement.

The most widely used non-functional size measure is source lines of code (SLOC).

Source lines of code (SLOC)

Source lines of code appeal to many people and organizations because, once the software system is ready, they can be counted automatically by source code counters. Often, the development tools already measure the lines of code during the project.

Many organizations use the SLOC measure as input for their software project estimation processes. This is remarkable, as the number of SLOC can only be measured after completion. This means that to use SLOC in software estimation, the SLOC have to be estimated, not measured. Usually a team of experts jointly estimates the number of source lines of code for the new project and/or uses analogy with previously completed projects to come up with an estimate of the size of the software to be delivered.

Although this seems like a good idea, this method brings large risks to the effort estimate. There is no ISO standard (or any other standard) for source lines of code. Different code counting tools return (sometimes completely) different results after counting the same code. Sometimes physical lines are measured, but often source statements are counted instead. Since one statement can easily be written on multiple lines, this alone highlights a big problem.
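The physical-lines-versus-statements problem is easy to demonstrate. The sketch below, a minimal illustration rather than any standard counting tool, counts the same two-statement snippet under both conventions; reformatting the code triples its "physical" size while the statement count stays the same.

```python
import ast

# The same two statements, formatted two different ways.
SNIPPET_A = """total = price * quantity * (1 + tax_rate)
print(total)
"""

SNIPPET_B = """total = (
    price
    * quantity
    * (1 + tax_rate)
)
print(total)
"""

def physical_sloc(source: str) -> int:
    """Count non-blank, non-comment physical lines (one common convention)."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

def logical_statements(source: str) -> int:
    """Count logical statements via the Python AST (another convention)."""
    return sum(isinstance(node, ast.stmt) for node in ast.walk(ast.parse(source)))

print(physical_sloc(SNIPPET_A), logical_statements(SNIPPET_A))  # 2 physical, 2 logical
print(physical_sloc(SNIPPET_B), logical_statements(SNIPPET_B))  # 6 physical, 2 logical
```

Two tools that each claim to count "lines of code" can thus legitimately report 2 and 6 for identical functionality, which is exactly why SLOC figures from different counters are not comparable.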

In addition, the number of source lines of code needed depends on factors like the technical environment, complexity and programmer capabilities. Source lines of code also don't really represent value to the users. Is it better to get more lines of code or fewer? More lines may mean more functionality, but if one were to pay a supplier a price per 1,000 source lines of code, one thing is for sure... the customer would get a lot of code! Ideally, code counters would exclude the code that is generated by the development tools, but in reality it is practically impossible for these tools to exclude the generated code from the measurement.

It is therefore usually very difficult to use the SLOC of a completed software project as an input for the next software project. Using experts to estimate the total number of source lines of code for a new project and then applying historical data based on source lines of code is extremely risky. Estimating this way builds a large uncertainty percentage into the main input parameter of the estimate.
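This uncertainty in the input does not stay the same size: a parametric model amplifies it. As a sketch (not a real calibration), take the classic COCOMO 81 organic-mode effort formula, effort = a * KSLOC^b with a = 2.4 and b = 1.05, and assume the experts' SLOC guess could plausibly be off by 50% either way:

```python
# Sketch: how uncertainty in a guessed SLOC input propagates through a
# COCOMO-style effort formula. The coefficients are the classic COCOMO 81
# organic-mode values; a real estimate would calibrate a and b to local
# historical data, and the +/-50% band is an assumed guessing error.

A, B = 2.4, 1.05  # classic COCOMO 81 organic-mode coefficients

def effort_person_months(ksloc: float) -> float:
    """Nominal effort in person-months for a size given in thousands of SLOC."""
    return A * ksloc ** B

estimate = 50.0                              # the experts' guess: 50 KSLOC
low, high = estimate * 0.5, estimate * 1.5   # assumed +/-50% guessing band

for k in (low, estimate, high):
    print(f"{k:6.1f} KSLOC -> {effort_person_months(k):6.1f} person-months")
```

With these assumptions the effort estimate ranges from roughly 70 to roughly 223 person-months, a spread of more than a factor of three before any other estimation risk is even considered. Since b > 1, underestimating the size underestimates the effort disproportionately.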

So, why do many organizations estimate their projects this way? Some publications, for instance 'Software defect origins and removal methods' by Capers Jones, state that estimating projects this way is a form of professional malpractice. It may work sometimes, but the risk involved is huge, and failing projects with huge overruns probably cannot be avoided.

In reality, this is exactly what is often overlooked. There are usually many things that go wrong in large projects, and in the end it is almost always possible to 'blame' some operational issue that appears during every software project... some technical problem, a product owner who was not involved enough, an OTAP environment that was not implemented fast enough, a customer that changed the requirements a lot during the project, and so on. In most software failures, however, analysis of the real cause shows that the project started with expectations that were too optimistic... the team too small, the duration too short, the costs too low. Expert estimates, and estimates that are not based on industry standards and experience data, usually start from optimistic expectations.

Organizations that understand the relationship between a realistic estimate and the outcome of the project will focus on implementing instruments that enable them to estimate the project as accurately as possible, and will, for instance, not reward overly optimistic estimates delivered by suppliers. However, organizations have to reach a certain level of maturity to understand this, and that level of maturity is still far away for many organizations in the industry... even for those that consider themselves quite mature!

Theoretically, source lines of code are not useful for software estimation at all. Some people report successful projects estimated this way, but from a theoretical point of view this may well be the result of chance or luck rather than method.

The characteristics to assess the usefulness of this method for software project estimation are listed in the next table:

| Characteristic | Yes/no | Remarks |
|---|---|---|
| Objective, repeatable, verifiable and defensible | No | SLOC can be measured only after project completion. For project estimation, only 'guessed' SLOC can be used. |
| Historical data available | Yes | ISBSG R13: 180 projects. However, it is not possible to verify the type of SLOC measured. |
| Supported by models/tools | Yes | QSM SLIM, SEER for Software, TruePlanning, COCOMO II, ISBSG. |

 

SNAP points

SNAP is the acronym for "Software Non-functional Assessment Process," a measurement method for non-functional software size. SNAP point sizing is meant to be a complement to function point sizing, which measures functional software size. SNAP is a product of the International Function Point Users Group (IFPUG) and is sized using the Software Non-functional Assessment Practices Manual, now in version 2.2.

SNAP is loosely connected to the ISO 9126 and ISO 25010 standards for software quality. It tries to size the non-functional requirements that are implemented in a software project. Although this seems like a good idea, the SNAP method seems to miss its target: it does not offer an integral measurement instrument for all non-functional requirements, and a number of highly relevant non-functional requirements are not measured at all. A number of additional observations:

  • Most documentation used in the industry does not explicitly state the information about the non-functional requirements that SNAP tries to measure. In addition, it is not clear what to do when this information is missing: count something anyway, or count nothing?
  • Not all ISO 9126 or ISO 25010 categories of non-functional requirements are measured. Even when the SNAP points can be measured using the method as published, the non-functional requirements that people believe to be important cost drivers (e.g. performance or security) are ignored;
  • It is not clear how the relationship between the different SNAP categories is determined, or why it would be valid. Why would UI complexity (SNAP points = 2, 3 or 4 times the number of unique UI elements, depending on the number of properties added or configured) represent an equal, larger or smaller non-functional size than that measured for batch processes (4, 6 or 10 times the number of data attributes)? There seems to be no relevant connection between the categories, and the way the SNAP points are measured seems arbitrary;
  • Professor Alain Abran pointed out that the statistical proof of the SNAP method does not pass the usual methodological validity tests, as outliers in the dataset used to demonstrate the correlation between SNAP points and effort appear not to have been removed;
  • Some of the non-functional requirements that are measured are in fact functional and are also measured by the Nesma and/or IFPUG methods. This is strange for a method that claims to measure only non-functional requirements.
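To make the category-comparison objection concrete, the sketch below encodes just the two sub-category formulas quoted above (UI: 2, 3 or 4 points per unique UI element; batch: 4, 6 or 10 points per data attribute, depending on an assessed complexity level). This is an illustration of the scaling question, not a full or official SNAP implementation.

```python
# Multipliers as quoted in the text above; the complexity levels are an
# assumed labeling of the three steps in each scale.
UI_MULTIPLIER = {"low": 2, "average": 3, "high": 4}
BATCH_MULTIPLIER = {"low": 4, "average": 6, "high": 10}

def snap_ui(unique_ui_elements: int, complexity: str) -> int:
    """SNAP points for the UI sub-category: multiplier x unique UI elements."""
    return UI_MULTIPLIER[complexity] * unique_ui_elements

def snap_batch(data_attributes: int, complexity: str) -> int:
    """SNAP points for batch processes: multiplier x data attributes."""
    return BATCH_MULTIPLIER[complexity] * data_attributes

# A screen with 10 unique UI elements versus a batch job touching 10 attributes:
print(snap_ui(10, "high"))     # 40 SNAP points
print(snap_batch(10, "high"))  # 100 SNAP points
```

Under these rules, ten batch data attributes count as two and a half times the non-functional size of ten UI elements, and nothing in the method explains why that ratio, rather than any other, should hold.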

All in all, although IFPUG and many practitioners advocate the SNAP method as a good and valid method to use in estimating, from both a practical and a theoretical point of view there are still many issues to address before this method can become useful for project estimation. At the moment, very limited historical data is available from projects measured in SNAP. In 2013, the International Software Benchmarking Standards Group published a SNAP data collection form, but so far no project submissions with SNAP points have been received.

For now, it can at most be used to try to understand the differences in performance or productivity between completed projects; it is not suitable for project estimation yet.

The characteristics to assess the usefulness of this method for software project estimation are listed in the next table:

| Characteristic | Yes/no | Remarks |
|---|---|---|
| Objective, repeatable, verifiable and defensible | No | The SNAP manual should ensure objective measurement, but as it is unclear what to do when necessary information is missing, measurers are likely to make different assumptions, resulting in different sizes. |
| Historical data available | No | ISBSG does capture projects in SNAP points, but no data has been submitted yet. Possibly data is available within the IFPUG SNAP group. |
| Supported by models/tools | No | |

 

Next blog

In the next blog on this topic, I will give my opinion on the usefulness of the main hybrid size measurement methods for software project estimation.

 

About the author

Harold van Heeringen is a senior benchmarking consultant at Metri. Apart from his work for Metri, he is involved in Nesma (board member), the International Software Benchmarking Standards Group (ISBSG, current president) and the Common Software Metrics International Consortium (COSMIC, International Advisory Council, representing the Netherlands).

A blog post represents the personal opinion of the author
and may not necessarily coincide with official Nesma policies.