I strongly believe they are, and I am often surprised and disappointed that more organisations do not implement Functional Size Measures.

Productivity is a concept that is widely discussed right from governments at a national level down to individual companies, in all industries producing goods or providing services. For the purposes of this discussion, I take productivity to mean the amount of resources consumed to produce a unit of output. The key here is that there is an output that can be identified and counted to compare to the resource used.

Corporate Executives and Government Department Heads regularly mandate that their organisations improve productivity each year, for example by producing more output with the same level of resources, or the same output with fewer resources. They are also interested in how their organisation performs compared to similar companies with whom they compete.

The IT Industry is no different. Examples include Services Vendors who need to demonstrate competitiveness to win business, Internal IT Departments who need to show they are delivering more to the business with the budgets they are allocated, and Chief Financial Officers who need to ensure that Outsourcing deals will deliver real benefits to the business, not just a cheaper labour rate.

In the realm of software development, lines-of-code has been used as a measure of output produced, but with the arrival of modern development techniques and tools it is no longer a useful measure in most environments.

A more useful technique is to measure the output of a software development project in terms of the amount of functionality it delivers to the customer, irrespective of technology/platform/development methodology used. To do this one needs a standardised, repeatable way of identifying and assessing the business functions delivered and thus to derive a numeric value for the size of the software delivered.
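
To make this concrete, here is a minimal sketch of how a standardised size can be derived from counted business functions. It uses the IFPUG unadjusted function point weights as one illustration of such a standard; the project inventory itself is invented example data, not from the post.

```python
# Standard IFPUG weights by function type and complexity (low/average/high).
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # External Inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # External Outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # External Interface Files
}

def unadjusted_fp(inventory):
    """Sum the weighted counts of the identified business functions."""
    return sum(WEIGHTS[ftype][cplx] * count
               for (ftype, cplx), count in inventory.items())

# Hypothetical project inventory: (function type, complexity) -> count
inventory = {
    ("EI", "average"): 5,
    ("EO", "low"): 3,
    ("EQ", "average"): 4,
    ("ILF", "high"): 2,
    ("EIF", "low"): 1,
}
print(unadjusted_fp(inventory))  # 5*4 + 3*4 + 4*4 + 2*15 + 1*5 = 83
```

The point is not the particular weights but the repeatability: two counters applying the same rules to the same requirements should arrive at the same size, whatever technology is later used to build the software.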

Some of the key lessons learnt in implementing Functional Sizing are:

Measure what you do and then improve

  • Unless you have a baseline of current performance it will not be possible to manage an improvement program. I have several times heard managers say “We want to increase productivity by 10% per annum” when they did not know what the current performance was. You also need to know whether the current performance is good, average or bad: depending on where you stand, you can decide whether you need to improve and, if so, how much improvement would be realistic. The ISBSG data can help you with this.
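
As a sketch of the point above: a “10% per annum” target only becomes actionable once you have a measured baseline. All project figures below are invented example data, and productivity is expressed as Hours/FP (so lower is better, and a 10% improvement means 10% fewer hours per function point).

```python
# Completed projects: (effort in hours, delivered size in function points)
projects = [(3200, 400), (1500, 250), (5400, 600)]

total_hours = sum(hours for hours, _ in projects)
total_fp = sum(fp for _, fp in projects)

baseline = total_hours / total_fp  # current Hours/FP (lower is better)
target = baseline * 0.9            # a 10% productivity improvement

print(f"Baseline: {baseline:.2f} h/FP, next-year target: {target:.2f} h/FP")
```

Whether that baseline is good, average or bad is exactly what an external repository such as the ISBSG data lets you judge.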

Understand your data and take action

  • Measuring is of little value unless it is used to indicate actions that can be taken to improve. Therefore a process of “Causal Analysis” is required to understand the attributes/factors/events that resulted in the performance of each individual project.
  • Be careful of the “three-legged-stool” of cost, quality and schedule. They are all interdependent and should not be managed in isolation.

Beware Misuse of Data

  • I once heard two senior Executives boasting about the performance of their IT Departments using Hours/Function Point. They were from companies in different industry sectors with substantially different businesses, so such a comparison is almost meaningless.
  • There is a danger of over simplification. Because projects, even in similar domains, will produce varying results, some basic statistical analysis beyond just averages will be required to properly understand results.
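
The over-simplification warning can be illustrated with a few lines of basic statistics. Project delivery rates (Hours/FP) are typically skewed, so one outlier drags the average well away from what a “typical” project achieves; the sample below is invented example data.

```python
import statistics

# Hours/FP for ten hypothetical projects; the last one is an outlier.
rates = [4.2, 5.1, 5.8, 6.0, 6.5, 7.1, 8.0, 9.4, 11.2, 28.0]

mean = statistics.mean(rates)
median = statistics.median(rates)
q1, q2, q3 = statistics.quantiles(rates, n=4)  # quartiles

print(f"mean={mean:.1f} h/FP, median={median:.1f} h/FP, "
      f"IQR={q1:.1f}-{q3:.1f} h/FP")
```

Quoting the median and interquartile range alongside the mean gives a far more honest picture of performance than an average alone.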

Correlation with effort

  • The use of functional size to measure productivity or to estimate presupposes that there is sufficient correlation between the size and the effort. Not all tasks require effort that is directly proportional to the functional size, so these may need to be removed from the productivity calculation and handled separately.

Hours/FP or $/FP

Some define productivity as Effort / Unit of output, while others prefer Cost / Unit of output. Each metric has value, as there is often a relationship between the labour rate and the number of hours taken to complete a task. I recommend using both.
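
Both metrics fall out of the same three inputs, as this small sketch shows; the hours, cost and size figures are invented example data.

```python
hours = 2400           # total project effort in hours
cost_dollars = 216000  # total labour cost, e.g. a blended rate of 90 $/h
size_fp = 300          # delivered size in function points

hours_per_fp = hours / size_fp            # effort-based productivity
dollars_per_fp = cost_dollars / size_fp   # cost-based productivity

print(f"{hours_per_fp} h/FP, {dollars_per_fp} $/FP")  # 8.0 h/FP, 720.0 $/FP
```

Tracking both exposes the case the post warns about: an outsourcing deal that lowers $/FP purely through a cheaper labour rate while Hours/FP stays flat or worsens.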


I would be interested to hear your experiences (good and bad), and especially your reasons if you have not implemented functional sizing. Please contact me if you are interested.


About the author

John Ogilvie is CEO of the International Software Benchmarking Standards Group.


A blog post represents the personal opinion of the author
and may not necessarily coincide with official Nesma policies.


Comments
  1. What you describe as the “three-legged-stool” of cost, quality and schedule is in my experience the reason why a lot of measurement programs fail. They fail to relate the three and manage them – or even worse – only cost in splendid isolation. When you get a focus on cost only you may end up with very efficient software development that is not producing what an organization needs from their IT portfolio. Any measurement program should be focussed on “doing the right things” first, before looking at “doing things right”. This means quality first, then schedule and then cost. I have encountered a lot of companies who do it the other way around and then fail.
