Automated function point counting


Viewing 15 posts - 1 through 15 (of 19 total)
    Ian Alyss

    Hello all,

    I was not sure in which topic I should post my question, so I’m putting it here. I think function points are a great way to attach some sort of size measure to a piece of code, but it’s such a hassle to count them. I have to go through all of the I/O requirements and data structures to obtain a reasonably good count. I found some tools that can count automatically, but they are pretty expensive, and I was looking for some experience with those kinds of tools. Is there anyone in this community who has experience with automated FPA counting?


    Peter Bellen

    Dear Sir,

    To answer your question: I have experience in automated FPA counting.

    Over the last five years, I have spent a lot of time developing a tool for automated FPA counting. This tool is based on Natural Language Processing and can read text in English and Dutch. So if you have documents like a Functional Design, Requirements, Use Cases, etc., this tool can give an estimated FPA count. It can process several documents at the same time. It can also give an estimated count on SAP design documents like Blueprints. The count is based on the Nesma 2.2 version.

    If you want more information, send me an email.


    Peter Bellen

    Frank Vogelezang


    Based on (IFPUG) FPA, the Object Management Group published the Automated FPA specification in early 2014. I know that CAST Software has a working tool that counts function points based on the OMG specification. This tool is quite expensive.

    I know that within the COSMIC community there is a lot of work in progress on automating the counting process. Renault has a working tool in operation, but it is not available for commercial use. I know that Dutchsoft has a semi-automated tool for SAP function point counts, but it is only available as a service. Papers from both Renault and Dutchsoft were presented at the 2014 IWSM Mensura conference in Rotterdam. The papers are available from

    Ian Alyss


    Is there much difference between Nesma 2.2 (seems to be only in Dutch) and the 2.1 version? And between Nesma and IFPUG?


    Do you have some references to COSMIC that are publicly available? The IWSM link sends me to an IEEE site where I have to pay for the documents.


    Hi Ian,

    I’m from CAST. Since you mentioned it, I don’t deny that there is a certain price tag associated with our software ;-).
    One of the reasons CAST is able to automatically count AFPs is the 120M USD investment that went into R&D in order to come up with a product capable of figuring out the inner structure of complex multi-layer applications precisely enough to provide an accurate, repeatable and, over time, quite cost-effective automated count of AFPs and EFPs (Enhancement Function Points).

    To your question about experience sharing on AFPs: there has been significant adoption of CAST AFPs in a number of accounts and at several major System Integrators in the last year or so. If you email me, I can point you to relevant contacts. Just on the first page of the Nesma member account list, I could spot two major SIs that have CAST COEs with significant experience on the topic.


    Frank Vogelezang


    If you go to you’ll find some papers on the subject that can be downloaded free of charge. The investment in some of the papers is very small in relation to the investment from CAST ;-). You could also post your question in the COSMIC User Group on LinkedIn. I know that a number of people involved in automated COSMIC development are members of that group.

    I can also answer the questions you directed to Peter. Nesma 2.2 is the Dutch version of the English 2.1 standard. Since the English version is compatible with the ISO/IEC 24570 standard, it has an additional chapter 1 with the ISO material, but apart from that they’re identical. With regard to the differences between Nesma and IFPUG, there is a document in the Articles download section of this website that explains the differences.

    Currently, ISO/IEC 24570 is under systematic review and the Nesma Counting Practices Committee has issued a maintenance list with improvements. When these improvements are accepted, this will lead to a new version of the counting guidelines. We’ll make sure that the English and Dutch versions are identical then, to avoid confusion.

    Regards, Frank

    Andreas Schuderer

    A question directed at the suppliers of automated function point counting software: are you considering offering your software as a service such as a pay-per-use model?

    Also, I’ve yet to come across AFPC evaluations that answer all of the following questions about the sample on which the evaluation is based:
    – Mean distance (error ratio) of the automated count from the expert count (plus the median ratio if the distribution is skewed)
    – Standard deviation of that distance
    – Sample size (N)
    – Size class of the sample’s counts (for example 100-500 FP, 500-1500 FP, etc. — if the sample is mixed, provide the above measures per size class)
    – Type of count (such as System, Enhancement, … — if the sample is mixed, provide the above measures per type of count)
    – The same for type of system (embedded, business, interactive, batch, …)

    Up until now, I’ve always felt that one aspect or another was missing to be able to make a call. For example, if the average error is given as 5% but no indication of the count size class is given, I can’t judge whether the measure would be suitable for individual system counts. For all I know, it could be suitable only for portfolio counts with a size of more than 50,000 FP (law of large numbers). Do you know of any articles which evaluate function point software and are thorough enough to cover all these aspects?
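    The measures listed above take only a few lines to compute once an evaluation publishes its raw pairs of automated and expert counts. A minimal sketch; the sample data and size classes are invented for illustration:

    ```python
    from collections import defaultdict
    from statistics import mean, median, stdev

    # Hypothetical evaluation sample: (automated count, expert count, size class)
    sample = [
        (480, 450, "100-500 FP"),
        (130, 120, "100-500 FP"),
        (1400, 1600, "500-1500 FP"),
        (900, 850, "500-1500 FP"),
    ]

    def error_ratios(pairs):
        # Relative distance of the automated count from the expert count
        return [abs(auto - expert) / expert for auto, expert, _ in pairs]

    ratios = error_ratios(sample)
    print(f"overall: N={len(ratios)}  mean={mean(ratios):.1%}  "
          f"median={median(ratios):.1%}  stdev={stdev(ratios):.1%}")

    # The same measures per size class, for a mixed sample
    by_class = defaultdict(list)
    for row in sample:
        by_class[row[2]].append(row)
    for size_class, rows in sorted(by_class.items()):
        r = error_ratios(rows)
        print(f"{size_class}: N={len(r)}  mean={mean(r):.1%}")
    ```

    The point of reporting per size class is visible even in this toy data: the overall mean hides the fact that the two size classes behave differently.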

    Jean-Pierre Fayolle


    I wrote a couple of posts about the OMG standard, and what it takes for an SCA tool to count AFP, on the Qualilogy blog.

    Frank Vogelezang


    This is an interesting blog post. I advise everyone to read it. For the sake of this discussion I have copied some of your observations:

    As we can see, the document of the OMG standard specifies the requirements for the use of Automated Function Points, as well as its limitations. The points that I think are most important to remember are the following.

    The standard does not « address the sizing of enhancements to an application or maintained functionality (often called Enhancement Function Points) » and would therefore only apply to new developments. If this is the case, it greatly limits its use and interest.

    AFP and IFPUG
    The AFP standard does not claim strict compliance with a manual Function Point count: « Automated Function Points counts may differ from the manual counts produced by IFPUG Certified Function Point counters ». This seems to me a first important point: Automated Function Points are not IFPUG Function Points. It is another measure, which has the advantage of being computable automatically by a tool, and therefore with less effort than a manual count, but also with a different result.

    Counting AFP requires identifying all data structures and transactions, assembling them into functional components, and deciding which are internal or external, and which to take into account or not, when setting up the tool and configuring the analysis. This assumes you have people available with a good knowledge of:

    • The application, through an SME or an expert on the project.
    • The process of counting Automated Function Points, to determine the scope of analysis and the factors to be considered.
    • The tool and its parameters, to configure the analysis, verify false positives and validate the results, according to the previous two points.

    This configuration phase is obviously critical if we want to achieve a result that is as objective, and therefore as credible, as possible when it comes to measuring the productivity of a team.

    Analysis and validation
    Counting Automated Function Points with a tool assumes that this tool is able to:

    • Analyze any kind of component.
    • Identify any link between these components.
    • Assemble all these links into transactions, with as few false positives as possible.

    But it is rare that a code analysis tool has a parser able to recognize and analyze any type of file on a given technology, let alone on different technologies. A tool may be able to recognize operations like ‘read’ or ‘write’ on a flat file in a batch application, and to identify the different kinds of links between XML files of a Java framework, yet not be able to analyze an HTML or Excel report. An important feature of the Timesheet application in our example will be to produce activity sheets for validation before invoicing, usually in different formats: Excel, PDF, etc. I know of no code analysis tool that manages this type of file.
    Finding all the links between components can be difficult or impossible for some technologies. The use of frameworks (Spring, Hibernate, etc.) complicates the analysis, and this means significant work to validate the results in order to avoid false positives as much as possible, and then to check the identified transactions and the Function Point count for each of them.

    In conclusion, I think that Automated Function Points is a different measure, which produces different results than a manual count conducted by an IFPUG consultant. In an ideal situation, it would be great to have such a consultant participate in defining the scope of analysis, the settings of the tool, and the validation of the results. This assumes that the tool is able to identify all components, the links between them, data structures, transactions, etc.

    Even in such an ideal case, I believe that the difference between a manual count and Automated Function Points is at least 10% to 20%, and more often between 40% and 50%, as a minimum. It could even be 200% or 300%, for example for a Cobol batch application (many flat files), an integrated software package (ERP) with different modules, or in the case of a framework that makes it impossible to clearly identify transactions.

    You raise some practical difficulties. Now I can understand why CAST has invested USD 120M to make it work.

    Ian Alyss

    To all of you,

    Great discussion. We are going to rethink whether we’ll start with automated counting. I’ve seen some fast approaches in Nesma documents that might be useful. Maybe I’ll come back with some questions on that.

    To Jean-Pierre,

    It’s a pity that the discussion on your Qualilogy blog post is closed, so I’ll put my points here. One of the objections you raise in the blog that Frank did not copy is that AFP doesn’t work on enhancements. I think that’s an unfair comment, since changing software is an activity. You can describe it in a change document, you can envision it in your head, and then you change the code. AFP measures the code, which is either changed or not. The change itself is not in the code, so a parser will never be able to capture it. That’s not a flaw of AFP; it’s technically impossible. You might be able to capture some of the change by comparing a count before and after the change, but that will be very difficult when enhancements are made to existing functions.

    There were a couple of things in your blog and the reaction that I would like to comment on. Since I cannot do it on Qualilogy, I might do it here in a separate post.

    Jean-Pierre Fayolle


    Comments on my blog were closed 30 days after the post, to avoid me having to manage all those spammers trying to sell pills and all kinds of products that have nothing to do with my blog. I have just reopened it, so you can leave a comment and anybody can participate in the discussion, after moderation.

    I don’t quite understand your point about changes. Frank did mention above the part about enhancements (see “Perimeter”).
    And yes, SCA tools can analyze changes, based on the difference between successive analyses, in existing code or not. They can (or should) say how many components have been added, deleted, or modified, and if modified, the change in the usual metrics like LOC, CC, … and also in AFP.

    As mentioned in the post, this is mandatory for what people mainly want when it comes to AFP: productivity measurement. If you hand the maintenance of an application to an outsourcer, you must know how many AFP he added, deleted, or modified, based on these changes, in order to measure his productivity and benchmark it against other outsourcers or across different technologies.
    Is a new outsourcer responsible for the defects already present in the code he has to maintain? No. The same goes for AFP: you count what he changed, not what is in the code that you deliver to him.

    As you all know, 90% of programming activity is maintenance; there are not that many new developments. So, having the OMG standard say that AFP applies only to new developments really surprised me, because it seriously decreases the interest of this measure for 90% of projects.

    Ian Alyss


    Sorry if I wasn’t clear enough. I’ll try again. You write that you were surprised about the OMG statement.

    First of all: the OMG states that AFP does not apply to enhancements. That’s not exactly the same as saying that it only applies to new developments. I’ll try to explain why.

    AFP tools work on static code, so they measure the size of the code as it is: either before or after the maintenance. Changing the software is an activity you can describe in a change document, but not in the code. The code is either changed, or not. In order to measure the size of a maintenance project, you would need two static instances of the code, one before and one after, and then analyze the differences. I’m not sure whether that is technically possible, so I would not judge the OMG negatively for not making this miracle happen.
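    To make the two-instances idea concrete, here is a minimal sketch of such a comparison. The transaction names and FP values are invented; a real tool would have to derive them from a static analysis of each snapshot:

    ```python
    # Hypothetical per-transaction FP counts from two static analyses
    before = {"enter_timesheet": 6, "print_invoice": 5, "monthly_report": 7}
    after = {"enter_timesheet": 8, "monthly_report": 7, "export_pdf": 4}

    added = {t: fp for t, fp in after.items() if t not in before}
    deleted = {t: fp for t, fp in before.items() if t not in after}
    changed = {t: after[t] for t in before.keys() & after.keys()
               if before[t] != after[t]}

    # One naive enhancement size: every function touched, at its new size
    # (deleted functions at their old size)
    enhancement_fp = (sum(added.values()) + sum(deleted.values())
                      + sum(changed.values()))
    print(added, deleted, changed, enhancement_fp)
    ```

    It also shows the limitation with enhancements to existing functions: a function whose internals change without its FP contribution changing (monthly_report here) is invisible to this comparison.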

    Jean-Pierre Fayolle


    I posted answers to your comments on my blog.
    • Yes, you will need to analyze at least two versions and store the differences. This is what SCA tools usually do.
    • It is often possible with these tools to do an incremental analysis only on the changed (added, deleted, modified) components, but this will not work for AFP, as you have to analyze each transaction and data structure across all the layers of the application.
    • If SCA tools can do this, why this limitation in the OMG specification? This is not specified.

    I would be happy to know why, because I cannot understand it, and if this standard is limited to new projects only, I would like to know before companies begin to measure AFP on their whole portfolio of applications, 90% of which are applications under maintenance.

    Ian Alyss


    Thank you for your answers. If SCA tools can store the differences, then the counting tools should be able to do the same. Maybe Gerard can answer that question. I can’t imagine that they did not think of that as part of a $120M investment.

    Luigi Lavazza

    Dear all,
    here is my contribution to the (very stimulating) discussion.

    On May 17, at WETSOM 2015, I presented a critical evaluation of AFP. You can retrieve the paper from
    If a username and password are requested, send me an email and I will send them to you.

    Concerning the questions about enhancement, Ian is perfectly right: AFP measures code, which can be the result of maintenance or the result of new development. However, at WETSOM Bill Curtis mentioned that CISQ is working on an Automated Enhancement Function Point specification (see also

    Finally, there is some research activity addressing the automated measurement of function points based on code execution. The idea is that by looking at execution traces it is possible to see which processes were invoked by users, what data were used in each process, and what data were exchanged between users and the system. In practice, the execution trace contains the information needed to compute function points. These initiatives are promising, but the code to be measured needs to be instrumented, and currently there is no supporting tool, apart from academic prototypes and proofs of concept.
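    As a toy illustration of the trace-based idea, the sketch below groups trace events into candidate elementary processes and the data groups each one touches. The trace format, the names, and the input/output classification rule are all invented assumptions, not taken from any existing prototype:

    ```python
    from collections import defaultdict

    # Hypothetical execution trace: (user, invoked process, data group, direction)
    trace = [
        ("alice", "enter_timesheet", "timesheet", "write"),
        ("alice", "enter_timesheet", "employee", "read"),
        ("bob", "monthly_report", "timesheet", "read"),
        ("bob", "monthly_report", "project", "read"),
    ]

    # Group trace events per process: which data groups it reads and writes.
    # This is the raw material an FPA count would start from.
    processes = defaultdict(lambda: {"read": set(), "write": set()})
    for user, process, data_group, direction in trace:
        processes[process][direction].add(data_group)

    for name, io in sorted(processes.items()):
        kind = "input-like (EI)" if io["write"] else "output/inquiry-like (EO/EQ)"
        print(name, kind, sorted(io["read"] | io["write"]))
    ```

    The hard part, as noted above, is obtaining such a trace in the first place: the application has to be instrumented, and the trace only covers the functionality that was actually exercised.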
