Two worlds for software project estimating

It appears at first sight that there are two worlds for software project estimating which, for simplicity, I will call the ‘Chaotic’ and the ‘Controlled’ worlds. The Chaotic world is characterised by the majority of organisations whose projects frequently over-run on time and budget, or that fail completely. The Controlled world has a very much smaller population of exemplar organisations whose projects are claimed to be delivered to time and budget routinely. The interesting question is why organisations in the Chaotic world do not or cannot simply learn and copy the behaviour of those in the Controlled world and save themselves a lot of money.
This article explores the two worlds and aims to explain the differences which are partly intrinsic and to a degree unavoidable, and partly due to a mixture of cultural, process and technical factors, several of which can be overcome with enough effort and perseverance.
First, the evidence for the two worlds.

The Chaotic world

There have been several surveys, covering the outcome of thousands of software projects, mainly in the US and the UK and mainly of projects from the domain of business application software, in the public or private sectors. Strictly speaking we should refer to ‘software-intensive system projects’, since for many such projects the delivery of the software is only part of a project that must deliver a hardware/software system and often organisational change as well. I use ‘software projects’ for simplicity. The results vary but indicate that between 10% and 30% of software projects fail completely, i.e. they are stopped before delivering anything useful. Another roughly 50% overrun on time and/or budget by at least 10%. This leaves only 20% to 40% of projects delivered on time and budget.

Delivering large-scale IT projects on time, on budget, and on value
Michael Bloch, Sven Blumberg, and Jürgen Laartz, October 2012
Why Your IT Project May Be Riskier Than You Think
Bent Flyvbjerg, Alexander Budzier, Harvard Business Review, September 2011
The Standish Group Report, 2014
www.projectsmart.co.uk/docs/chaos-report.pdf

However, these figures do not reflect the fact that many projects deliver less functionality or business value than was originally planned. Further, an unknown proportion of those projects that finished ‘on time and budget’ may well have been over-estimated in the first place so could have been delivered faster and at less cost. Abdel-Hamid observed that Parkinson’s Law applies to software projects just like any other activity, i.e. work expands to fill the time made available for its completion.

The elusive silver lining:
how we fail to learn from software project failures

Abdel-Hamid, T.K., Madnick, S.E., Sloan Management Review, Fall 1990

The cost of these over-runs and failures is enormous. A well-documented analysis of 105 contracted software projects completed over the ten years up to 2007 between UK public sector customers and external suppliers had a total value of £29.5 billion. Of these, 30% were terminated, 57% experienced cost overruns averaging 30.5% (totalling £9 billion of overruns), whilst 33% suffered major delays. An important point to note is that all these projects were undertaken by external suppliers that operate world-wide and would claim in their marketing to be ‘world-class’. Further, the suppliers’ profit margin on the contracts was almost always over 10%, ranging up to 25%.

Cost over-runs, delays and terminations:
105 outsourced public sector ICT projects

D. Whitfield, European Services Strategy Unit, Report No. 3, December 2007

The same reasons for these failures and over-runs are cited repeatedly, going back at least 30 years, as described in the book Crash. They fall into two main groups:

  1. Lack of senior management commitment and user involvement, resulting in unclear objectives, which leads on to stakeholder conflicts, and unclear and shifting requirements.
  2. Poor project management (e.g. in the management of progress and changes), staff inexperience, especially when new technology is involved, and staff turnover.
Crash. Ten easy ways to avoid a computer disaster
Tony Collins, Simon & Schuster 1997

Whilst the cost of the failures and over-runs may be heavily weighted by write-offs on hardware, the cost of employing extra staff, lost benefits, etc., the causes are almost invariably due to problems with specifying and developing the software.

In all the various analyses of why software projects fail or over-run, it is uncommon to see ‘poor estimating’ listed as one of the causes. This is not surprising for the projects that fail. A poor estimate seems an unlikely cause of abandoning a project. More likely it was stopped because priorities changed since it started and it will no longer deliver anything useful, or it has gone on for so long beyond the original budget that management decides to cut its losses. But if, say, 57% of all software projects over-run by on average over 30%, one must ask ‘is there something systematically wrong with the estimating process in these environments?’

The Controlled world

From time to time we get glimpses of this other world when an organization publishes results showing its successes in software project estimation. The exemplar I will use is Renault, the French vehicle manufacturer, which has published its progress in successful software project estimating, most recently in 2014.

Manage the automotive embedded software development cost & productivity with the automation of a Functional Size Measurement Method (COSMIC)
Alexandre Oriou, Eric Bronca, Boubker Bouzi, Olivier Guetta, Kevin Guillard, IWSM Mensura, Rotterdam 2014

A modern average family car has roughly 50 Electronic Control Units (ECUs), small processors that form a distributed network to monitor and/or control almost every function, e.g. engine, lights, air-conditioning, tyre pressures, navigation, driver information, etc. The ECUs and their embedded software are mostly bought from component suppliers with their associated sensors, subject to specifications issued by Renault.
Renault has been collecting data on the costs and performance of its suppliers of ECU software for a few years. The process by which it contracts to procure ECUs is briefly:

  • Renault software departments, specialized by vehicle functional area (e.g. powertrain), develop specifications for new or enhanced ECU software and store these in the Matlab Simulink tool;
  • A Renault-developed tool then automatically computes a functional size of each specification (or the increase in size if an enhancement) using the ISO standard COSMIC method;
  • Past measurements and statistically-established relationships are used to predict the effort that the supplier will need to develop the software (see Fig. 1, and the sketch after this list) and its memory size (Fig. 2);
  • This information is used by the Purchasing Department to negotiate the price for the ECU. Further, the information available to Renault is now sufficiently well-established that it can be used to negotiate annual price changes in the same way that car manufacturers periodically negotiate prices of other materials and components such as steel, paints, etc. (Fig. 3);
  • COSMIC functional sizes are also used to monitor the performance of the internal software department, since Renault has established a specification-size/staff-level relationship for their work.
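
To illustrate the principle behind these statistically-established relationships, the sketch below fits a simple linear size-to-effort correlation to past measurements and applies it to a new specification. It is only an illustration of the approach described above: the data points, the linear model and the variable names are invented, not Renault’s actual figures or tooling.

```python
# Sketch: deriving and applying a size-to-effort correlation of the kind
# shown in Fig. 1. The data points are invented for illustration only;
# a real model would be fitted to an organisation's own measured projects.
import numpy as np

# Historical data: COSMIC size (CFP) and actual supplier effort (person-hours)
sizes_cfp = np.array([120, 250, 400, 610, 820, 1100])
effort_hours = np.array([950, 1800, 3100, 4500, 6200, 8400])

# Fit a simple linear relationship: effort = a * size + b
a, b = np.polyfit(sizes_cfp, effort_hours, 1)

def predict_effort(size_cfp: float) -> float:
    """Predict development effort (person-hours) from a COSMIC size."""
    return a * size_cfp + b

new_spec_size = 500  # CFP, measured automatically from the specification
estimate = predict_effort(new_spec_size)
print(f"Estimated effort for {new_spec_size} CFP: {estimate:.0f} person-hours")
```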

Renault states that at the end of a new software development, the difference between the initially estimated effort from the established correlation and the actual value ‘has to be lower than 5%’ (see Fig. 4).

Fig. 1 Effort vs COSMIC size for ECU software

Fig. 2 Memory usage vs COSMIC size

Fig. 3 Purchase Department negotiation

Fig. 4 Control of precision of cost estimates

Differences between the Chaotic and Controlled worlds of software project estimating

In the following, since whole-life project estimates are required whatever the project management approach, I will use a waterfall model of project phases for convenience. Differences when using an iterative or agile model will be mentioned as they arise. We must also assume that in the comparisons of the Chaotic and Controlled worlds, the organizations in both worlds have reasonably repeatable processes and use technology with which they are reasonably familiar, i.e. we will ignore environments where process immaturity and the risks associated with using new technology leave little chance of developing any accurate estimating methods.

Different conditions for estimating. In one sense, it is unfair to draw any comparison between the two worlds as there are a few intrinsic differences between them.

The first and most obvious difference is that in the Chaotic world, a whole-life cost estimate is usually needed for a business application project early in its life, before the requirements are known in detail, in order to inform the cost/benefit analysis for the software.

In contrast, in the Controlled world of Renault, estimates to complete projects are not made until the software design is completely specified, i.e. they are not really whole-life estimates. By this stage, estimates can also be made at a low level of decomposition (Simulink blocks in Renault’s case) before aggregating to the cost of the whole ECU.

Clearly one would expect the Renault estimates to be much more accurate than those made in the early stages of a typical business application project. Having said that, it is then legitimate to ask why estimates made so early in a project’s life, when there is still so much uncertainty, become accepted as fixed such that overruns are frequently experienced. Further, on-going maintenance and support costs that contribute to the business case often turn out to be much higher than forecast at this early stage.

Cultural differences. A study of estimating practices by Jørgensen tells us much about the culture of the Chaotic world. His research found that ‘expert estimation’ is the dominant strategy for estimating whole-life development project effort. He defined expert estimation as ‘work conducted by a person recognized as an expert on the task, and that a significant part of the estimation process is based on a non-explicit and non-recoverable reasoning process’, i.e. ‘intuition’. Although this research was published in 2004, Jørgensen recently told me that he knew of no published data that altered this view that expert estimation still dominates project effort estimation.

A review of studies on expert estimation of software project effort
Magne Jørgensen, Journal of Systems and Software, 70, 2004

In contrast, my informal observation is that the organizations in the Controlled world that publish data indicating high accuracy for project estimates are mostly hi-tech manufacturing companies, often producing safety-critical or mission-critical software. These projects require great attention to quality, so they start with the benefit of a ‘real’ engineering mentality, relying on data rather than just judgement.

These cultural differences affect the accuracy of project estimating. Daniel Kahneman, a psychologist who won the 2002 Nobel Prize for economics, describes two ways of human thinking: intuitive and rational. Most of the time we think intuitively; it requires real discipline to think in the rational mode. His most important finding relevant to estimating is that intuitive thinking is almost always optimistic and tends to ignore statistics and past experience (e.g. believing ‘this time we’ll get it right’). He recommends that final predictive decisions should be left to formulae, preferably simple ones with few variables.

Thinking, Fast and Slow
Daniel Kahneman, Penguin Books, 2014

Applying this recommendation to a project cost estimate based on intuitive thinking, e.g. by analogy, suggests that if the environment has the track record cited above for UK public sector projects, then the business case should consider the 30% risk of total failure, and any intuitive cost estimate should automatically be increased by 15% – 20% (roughly the expected overrun if 57% of projects overrun by an average of 30%), with a corresponding increase in the uncertainty.
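
Expressed as the kind of simple, few-variable formula Kahneman recommends, the adjustment might look like the minimal sketch below. The overrun probability and average overrun are the UK public sector figures cited above; the uplift logic itself is this article’s illustration, not a standard model.

```python
# Sketch: uplift an intuitive estimate by the expected overrun of its
# reference class, in the spirit of Kahneman's "use a simple formula" advice.
# The default rates are the UK public sector figures cited in this article.
def risk_adjusted_estimate(intuitive_estimate: float,
                           overrun_probability: float = 0.57,
                           average_overrun: float = 0.305) -> float:
    """Return the estimate uplifted by the expected overrun of the reference class."""
    expected_overrun = overrun_probability * average_overrun  # ~0.17
    return intuitive_estimate * (1 + expected_overrun)

print(risk_adjusted_estimate(1_000_000))  # a £1M intuitive estimate becomes ~£1.17M
```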

Kahneman has other recommendations that are significant for estimating when hard data are lacking, e.g. the use of processes such as wideband Delphi (or ‘Planning Poker’ in the agile world), rather than relying on an individual’s expertise.

Understanding the roles of the various players involved in estimating. The responsibility of an estimator is to produce a project effort figure based on the best available data, with an appropriate statement of the range of uncertainty of the estimate. That’s all.

It is the manager’s job to understand the estimator’s assumptions, assess the risk and uncertainty, and ultimately to decide on the project budget. If the manager’s mentality is to rely on the estimator and to ignore risk (e.g. with the attitude of Dilbert’s manager of ‘just give me a number’) the project is doomed to miss its budget.

Further, when a customer issues an ITT (Invitation to Tender) to procure software from an external supplier, the customer must understand other factors that affect how a supplier arrives at his estimates and bid prices.

Suppliers of outsourced software systems depend on reliable estimating for their survival – and we noted above that they normally have a good track record on profitability. They therefore normally take very seriously the collection of software metrics and their use for estimating, far more so than does a typical in-house IT department or a customer’s retained IT function that manages its outsourced IT suppliers.

In a supplier, the cost estimate based on the requirements information contained in the customer’s ITT is converted by its sales team into a price-to-win. In this process, they will take into account many obvious factors such as the anticipated customer’s budget, the probable competition, future cash-flow, desired profitability, etc.

Two other less-appreciated but important factors are also considered. First, as the project progresses, the customer will inevitably think of new or changed requirements which can be charged extra beyond the bid price. Second, the winner of the initial development project is best placed to win the on-going maintenance and support work over the life of the system. Both these additional and on-going activities can be much more profitable than the initial development work. Consequently, a supplier may bid low for the initial development to ensure a win.

Unfortunately, when the first big wave of UK public sector IT outsourcing started over twenty years ago, most of the experience of software metrics and estimating was outsourced to the suppliers under long-term contracts. This has led to severe ‘information asymmetry’ between customers and their suppliers and is almost certainly a major cause of the high level of budget over-runs of UK public sector IT projects.

For a car manufacturer, purchasing is one of its most important functions. In the case of UK public sector IT procurement, effectively the gamekeeper handed over its metrics expertise to the poachers.

Another cause of project over-runs can arise in the way contingency reserves are managed. These should be held by a manager at the project portfolio level and released to project managers as needed, rather than being allocated to individual projects at their outset. Knowledge of the contingency included in an estimate gives comfort to the project manager, and Parkinson’s Law ensures it will be used. The same goes for an outsourced relationship, where Kahneman quotes ‘a budget reserve is to contractors like red meat to lions; they devour it’.

Estimating techniques. Much software project estimating in the Controlled world attempts to answer the dominant cost-driven question of ‘how big is it?’ by making experience-based estimates of counts of source lines of code (SLOC). The well-known COCOMO II estimating method and most of the commercially-available estimating tools have been calibrated using SLOC sizes as input. In spite of the many, oft-publicized disadvantages of SLOC sizes, estimates based on expert judgement from detailed designs are typically claimed to be accurate to within 10% at the component level.
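
For concreteness, the core of the COCOMO II effort equation is a simple power law, effort (person-months) = A × KSLOC^E, scaled by cost-driver multipliers. The sketch below uses the published COCOMO II.2000 constant A = 2.94 and an assumed exponent, and omits the effort multipliers, so it shows the shape of the formula rather than a calibrated estimate.

```python
# Sketch of the COCOMO II effort equation: effort_PM = A * KSLOC ** E.
# A = 2.94 is the published COCOMO II.2000 constant; E is assumed at 1.10 here
# (in the full model E = B + 0.01 * sum(scale factors), with B = 0.91).
# Cost-driver effort multipliers are omitted for brevity; real use needs
# local calibration against an organisation's own completed projects.
A = 2.94
E = 1.10

def cocomo_ii_effort(ksloc: float) -> float:
    """Nominal effort in person-months for a size in thousands of SLOC."""
    return A * ksloc ** E

print(f"{cocomo_ii_effort(50):.0f} person-months for an estimated 50 KSLOC")
```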

In the Chaotic world, if more than intuition or expert judgement is needed for estimates when only outline requirements exist, it is most common to first estimate the size of the requirements using Function Point Analysis (FPA). Size is then converted to effort using productivity benchmarks derived from previous similar projects. Albrecht’s original FPA idea in the late 1970s of proposing a measure of the size of a software system based on its functional requirements was a brilliant piece of lateral thinking. But this method, now developed and supported by the IFPUG organization, is showing its age.
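
The conversion step itself is simple arithmetic once a size and a benchmark delivery rate exist, as in the sketch below. The delivery rate is invented for illustration; real values come from an organisation’s own completed projects or from external benchmarking databases.

```python
# Sketch: converting an early functional size into an effort estimate using
# a productivity benchmark from previous similar projects. The delivery rate
# (hours per function point) below is purely illustrative.
HOURS_PER_FP = 12.0      # assumed benchmark delivery rate for this class of project

def effort_from_size(size_fp: float, hours_per_fp: float = HOURS_PER_FP) -> float:
    """Whole-life development effort (hours) from a functional size."""
    return size_fp * hours_per_fp

print(effort_from_size(450))  # 450 FP at 12 h/FP -> 5400 hours
```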

The COSMIC method used by Renault was designed by an international group of software metrics experts to be applicable to business, real-time and infrastructure software, based on fundamental software engineering principles. Variations of the method to produce approximate sizes are available to measure requirements before they are known in sufficient detail for a precise measurement, and the method has been, or is being, automated by various means. (Automated measurement is critical for Renault; manual counting would be too slow for their development process.) The method is ideally suited to measuring requirements at any level of aggregation in agile developments, e.g. user stories, iterations, releases, etc., and for the components of distributed systems.
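
For readers unfamiliar with COSMIC: the size of a piece of software is the count of the data movements (Entries, Exits, Reads and Writes) of its functional processes, each movement contributing 1 COSMIC Function Point (CFP). A minimal sketch of that counting follows, using an invented functional process for illustration.

```python
# Sketch of COSMIC sizing: each data movement (Entry, Exit, Read, Write)
# of a functional process contributes 1 CFP. The functional process below
# is invented purely to illustrate the counting.
from collections import Counter

functional_process = [
    ("receive tyre-pressure reading", "Entry"),
    ("read calibration thresholds",   "Read"),
    ("write pressure history",        "Write"),
    ("send low-pressure warning",     "Exit"),
]

movements = Counter(kind for _, kind in functional_process)
size_cfp = sum(movements.values())   # 1 CFP per data movement
print(movements, f"size = {size_cfp} CFP")
```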

An example of a problem that can be avoided by using the COSMIC method arose in a major European pension fund that had used the IFPUG FPA method for sizing as a basis for estimating. The FPA scale offers only a narrow range of sizes for transactions; the COSMIC method measures on a ratio scale with no upper limit. One project was investigated to find out how it had been seriously under-estimated. Some transactions that scored the IFPUG maximum of 6 or 7 FPs were re-measured using the COSMIC method and were found to be over 60 COSMIC FPs. The transactions with sizes over 40 COSMIC FPs accounted for almost 80% of the budget overrun.

What can be done to bridge the estimating gap from the Chaotic to the Controlled world?

Jørgensen’s advice on how to get the best out of expert judgement estimating is strongly recommended, and Kahneman’s observations on forecasting based on intuitive judgement must be taken into account. But if the Chaotic world is to bridge the gap, it must do more than rely on intuitive estimating. It must collect hard performance data on completed projects and develop simple estimating methods using modern methods of measuring requirements. If buying from an external supplier, customers must learn how suppliers determine their bid prices.

Even with these steps, there remains the intrinsic problem in the Chaotic world that estimates are often required and budgets must be established early in a software system’s life, before the requirements are known in detail. At this stage an estimate must inevitably have a very wide range of uncertainty. So what can be done to mitigate the effects of this challenge?

The answer is a process that was developed 15 years ago by the Government of the State of Victoria in Australia but has never been widely applied.
In simplified outline, when a customer issues an ITT with an initial statement of requirements, suppliers are asked to estimate the eventual total size and to bid a fixed price per unit functional size. The total bid price is then the product of these two factors. When a contract is awarded and as the requirements evolve, the unit price remains fixed, but the total price will vary in proportion to the size of the requirements. The actual size is monitored by an independent Scope Manager, a ‘quantity surveyor’ of software. The customer therefore bears the risk of varying the size of the requirements; the supplier bears the risk of bidding the right unit price based on his knowledge of the customer’s needs and of his own capabilities. With this process, the information and risk asymmetries between customer and supplier are vastly reduced.
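
The pricing mechanics are deliberately simple, as the sketch below shows; the unit price and sizes are invented for illustration. The supplier’s bid risk is concentrated in the unit price, while the customer carries the consequences of scope growth through the independently re-measured size.

```python
# Sketch of the unit-price mechanism: the supplier bids a fixed price per
# unit of functional size; the total price varies with the independently
# measured size as requirements evolve. All figures are invented.
UNIT_PRICE = 600.0        # assumed bid price per COSMIC Function Point (per CFP)

def total_price(measured_size_cfp: float, unit_price: float = UNIT_PRICE) -> float:
    """Total contract price for the currently measured functional size."""
    return measured_size_cfp * unit_price

print(total_price(2_000))   # price at the initially estimated size
print(total_price(2_400))   # re-priced after the Scope Manager measures scope growth
```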

The Australian process has been refined in Finland, where it is known as ‘NorthernSCOPE’. It is being applied or trialled in various countries. Proponents of the method claim that cost over-runs can be reduced to within 10%.

Scope Management: 12 steps for program recovery
Carol Dekkers, Pekka Forselius, CrossTalk: The Journal of Defense Software Engineering, January/February 2010

But the biggest benefits claimed are very substantial reductions in the unit costs of software and improvements in the speed of delivery of software projects. The ability to measure requirements plays a wider role here than might be imagined, namely as a quality control factor. If requirements are not precise enough to be measured, the software certainly cannot be reliably built and tested! It will be seen that both Renault’s process for managing the supply of embedded software for its ECUs and the NorthernSCOPE process rely on software unit pricing as a key feature.

Validation of the NorthernSCOPE concept for managing the sourcing and development of software
Pekka Forselius and Timo Käkölä, ICSE 2013

In conclusion, the tools are available for the Chaotic world to largely close the gap with the Controlled world on software project estimating. But there are no silver bullets, no quick and ready answers. An engineering mentality and a readiness to invest in gathering and analysing actual performance data are essential.


About the author

Charles Symons is Founder and Past President of the Common Software Measurement International Consortium. You can contact him at cr.symons@btinternet.com. This post previously appeared in the 2015 winter newsletter of the Society for Cost Analysis and Forecasting.


A blog post represents the personal opinion of the author
and may not necessarily coincide with official Nesma policies.
