Physical or Logical size measurement

The oldest metric for software projects is “Lines of Code” (LOC). The metric was introduced around 1960 and was used for productivity measurement, quality measurement, and economic calculations. Productivity was measured in “lines of code per time unit,” quality in “defects per kLOC,” and the economics of software applications in “currency units per LOC.” At the time, Lines of Code served all three purposes reasonably well.
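As a concrete illustration, the sketch below (Python, using invented project figures purely for demonstration) computes these three classic LOC-based ratios for a hypothetical project.

```python
# Illustrative calculation of the three classic LOC-based metrics.
# All project figures below are invented for demonstration only.

loc = 12_000          # delivered lines of code
staff_months = 10     # total effort in staff-months
defects = 84          # defects found
cost = 150_000        # total cost in currency units

productivity = loc / staff_months        # LOC per staff-month
quality = defects / (loc / 1000)         # defects per kLOC
unit_cost = cost / loc                   # currency units per LOC

print(f"Productivity: {productivity:.0f} LOC/staff-month")
print(f"Quality:      {quality:.1f} defects/kLOC")
print(f"Unit cost:    {unit_cost:.2f} per LOC")
```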

When Lines of Code were first introduced, there was only one widely used programming language: basic assembly language. Programs were small, and coding comprised about 90% of the total work. For basic assembly language, physical lines and logical statements were the same thing. In this early environment, LOC metrics were useful for productivity measurement, quality analysis, and economic calculations.

As the software industry changed, the Lines of Code metric did not change with it, and it became less and less useful without many people realizing it. The advent of COBOL, FORTRAN, PL/I, and other third-generation programming languages lowered the share of effort spent on coding, while larger applications expanded the effort spent on requirements and design. In these languages physical lines and logical statements also began to diverge, and LOC began to lose accuracy. The loss of accuracy continued with the advent of fourth-generation languages such as Ingres, PowerBuilder, and RPG.
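A minimal sketch of why physical and logical counts diverge in higher-level languages: the same single logical statement can be written on one physical line or spread over several, so a naive physical-line counter reports different sizes for identical functionality. The code fragments and the counter below are purely illustrative.

```python
# Two fragments with identical logic: one logical statement each,
# but different physical line counts. A naive physical-line counter
# therefore reports different "sizes" for the same functionality.

one_liner = "total = price * quantity * (1 + tax_rate)\n"

spread_out = (
    "total = (price\n"
    "         * quantity\n"
    "         * (1 + tax_rate))\n"
)

def physical_loc(source: str) -> int:
    """Count non-blank physical lines."""
    return sum(1 for line in source.splitlines() if line.strip())

print(physical_loc(one_liner))    # 1
print(physical_loc(spread_out))   # 3
```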

For sizing existing software, Lines of Code is still frequently used. More often than people realize, this metric is combined with functional size measurement in a technique called “Backfiring”: a LOC count is converted into an approximate functional size using language-specific conversion factors, and many current estimation tools still use Lines of Code under the hood of their calculation engines.
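A minimal sketch of how backfiring works, assuming illustrative “gearing” factors expressed as LOC per function point. The factor values below are placeholders for demonstration, not the calibrated and regularly updated tables that real estimation tools rely on.

```python
# Backfiring sketch: convert a LOC count into an approximate functional
# size using a language-specific gearing factor (LOC per function point).
# The factors below are illustrative assumptions, not official values.

GEARING_FACTORS = {
    "COBOL": 105,
    "Java": 55,
    "C": 130,
}

def backfire(loc: int, language: str) -> float:
    """Estimate function points from a LOC count for the given language."""
    return loc / GEARING_FACTORS[language]

# Example: a 210,000-line COBOL system backfires to roughly 2,000 function points.
print(f"{backfire(210_000, 'COBOL'):.0f} FP")
```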