What is Function Point Analysis?

The concept of Function Point Analysis was first presented to a joint SHARE/GUIDE/IBM conference in 1979 by the developer of the technique, Allan Albrecht (Albrecht, 1979). Albrecht needed a long-term productivity measure for the systems he was managing. The problem he faced was that his environment included different language types, and so he needed something that was technology-independent, or at least independent of the programming language used. That was the start.

Since then FPA has moved from being a locally used metric within one organization--in fact in just one part of an organization--to the situation we see today, which is that FPA is widely used in very many organizations in many parts of the world.

Supporting this use are a number of user groups, many consultancy organizations and many commercially available training courses. At times it seems as though there is a whole industry involved in promoting and supporting the use of FPA. Given that FPA has been so successful in terms of widespread adoption, at least within the data processing environment, it is sometimes surprising to realize that FPA provides a very basic item of data: Function Point Analysis is simply a technique that can be used to derive a measure of the size of a software system.

The wide use of FPA, and the fact that it has been in the public domain, have led to some problems. For example, there are many variants of FPA. Some of these have been developed to address perceived limitations in the original technique, while others have been developed to extend the application domain of FPA beyond the data processing environment to the so-called real-time environment. Charles Symons, for instance, has developed a major variant of FPA, often called Mark II Function Point Analysis, which has been adopted by the British Civil Service as a standard (Symons, 1991). Capers Jones (1988) has developed a variant of FPA known as Feature Points, which he sees as an experimental technique that could extend the use of FPA to real-time applications. Don Reifer has also explored the application of FPA to real-time environments (Reifer, 1989). Being in the public domain has also meant that there have been numerous critical reviews of the technique, many of which have led to improvements in FPA or in its application.

Despite the wealth of information available, or perhaps because of it, many misconceptions exist about FPA. I would like to establish three basic ideas that should be appreciated by anyone before they start to use FPA.

1. Function Point Analysis is a technique, hence the use of the word 'analysis' in the name. It is not a simple count of specific characteristics, nor is it a totally de-skilled activity. Organizations that are seen as the most effective users of FPA often bring together an FPA expert and a system expert when they wish to size an application. To use FPA effectively requires training, may require support and will involve an investment of effort.

2. Function Point Analysis produces a unitless answer or score. This means that there is no such thing as a 'function point'. Appreciating this fact can save much confusion later. If this appears odd, then think of an FPA score as an index, rather like the Dow Jones or FTSE stock market indices. These are also unitless measures and, once it is understood how an index behaves, use can be made of the information it provides.

3. The number produced by FPA is a measure of system size. Of itself this has little value, but it is a basic data element. It is important to stress that FPA is not an estimating technique, nor is it a device for productivity measurement. What FPA does is enable such activity by providing the basic data: size. Once you have a size value, it can feed into the estimation process for effort or cost. Size values are also used in basic productivity measures and in many other measures that provide meaningful information.
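To give a feel for how a size value feeds these other measures, the sketch below derives a simple productivity figure and a crude effort estimate from it. All of the numbers, and the assumption that past delivery rate is a reasonable predictor, are purely illustrative.

```python
# Purely illustrative figures; none of these numbers are benchmarks.
fp_size = 250                # FPA size of a delivered system
effort_person_months = 25    # recorded effort for that delivery

# A basic productivity measure: size delivered per unit of effort.
productivity = fp_size / effort_person_months     # 10 "size units" per person-month

# Using that historical rate to estimate effort for a new piece of
# work that has been sized, hypothetically, at 400.
new_work_size = 400
estimated_effort = new_work_size / productivity   # 40 person-months

print(f"Productivity: {productivity:.1f} per person-month")
print(f"Estimated effort: {estimated_effort:.1f} person-months")
```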

USING FUNCTION POINT ANALYSIS

At this point I would like to give a feel for what is involved in an FPA exercise. The variant used here is the Albrecht form of FPA sanctioned by the International Function Point Users Group (IFPUG). There is also a brief explanation of some of the main differences between the Albrecht variant and Mark II FPA.

Performing an FPA exercise

Function Point Analysis is a measure of system size. It is derived from an analysis of five basic, external, system parameters:

* Inputs;

* Outputs;

* Inquiries;

* Internal Logical Data Groups (LDGs); and

* External LDGs operating within the system boundary.

Defining the system boundary

Defining the system boundary is the first step in FPA. The boundary is used to limit the scope of the sizing exercise and to help identify the other external parameters.

There are three views of the system boundary depending upon the type of FPA exercise that is being carried out.

1. The application or product boundary. This encompasses a full application, and this type of count is often done at the end of a development project when handing over to the maintenance group or when an organization first starts to use FPA. This type of FPA count can also be derived from a live system.

2. The initial development project boundary. This is a very similar kind of count to the previous one, the difference being that the count derives from requirements for which no system exists.

3. The enhancement project boundary. This situation arises where a system exists and further releases of that system are made. The enhancement project FPA approach differs from the previous cases in that the added, changed and deleted functionality is considered rather than the totality of the system. Do not fall into the trap of counting the total system both before and after enhancement and subtracting one total from the other. This is not a valid approach: consider a project that adds a certain amount of functionality and deletes the same amount in another part of the system. Subtracting the size of the system before the project from the size after it gives a project size of zero!

There is a subjective element in determining the system boundary and, obviously, changing the system boundary will change the FP score. While this may seem an unscientific approach, in practice the guideline for the analyst is to look at what is managed as a discrete whole. This enables most system boundaries to be defined easily. From this point on, FPA is a more mechanistic process, but it still demands skill unless a fully electronic design support system is in operation.
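To make the enhancement-counting point concrete, the fragment below contrasts the valid approach (size the added, changed and deleted functionality of the release) with the invalid before-and-after subtraction. All figures are invented for illustration.

```python
# Invented unadjusted FP figures for one enhancement release.
added_fp = 40      # functionality added by the release
changed_fp = 25    # functionality changed by the release
deleted_fp = 40    # functionality removed by the release

# Valid enhancement count: everything the release touches contributes.
enhancement_size = added_fp + changed_fp + deleted_fp         # 105

# Invalid approach: size the whole system before and after, then subtract.
# (Assume the changed functions keep the same size.)
system_before = 500
system_after = system_before + added_fp - deleted_fp          # 500 again
invalid_size = system_after - system_before                   # 0 -- the work "disappears"

print(enhancement_size, invalid_size)
```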

Defining the parameters

To be able to determine the parameter occurrences that will contribute to the final FP score, those parameters must be defined. The definitions presented below, which are partial quotes from the IFPUG Counting Practices Manual (1990a), will give an indication of what is required. Note that they are not full definitions.

External inputs: An external input causes the processing of data or control information which crosses the system boundary and, through a unique logical process, maintains an internal LDG. An external input is considered unique if data is maintained within an internal LDG and the input format is unique or the processing logic is unique.

External outputs: An external output results from the processing of data or control information that crosses the system boundary. An external output is considered unique if the format is unique or the processing logic that produces the output is unique.

Internal LDGs: An internal LDG, often called an internal file, is a user-identifiable group of logically related data or control information maintained and utilized within the system boundary. It is data that an experienced user would identify as fulfilling a specific user requirement of the application, and which the system being sized can write to.

External LDGs: An external LDG, often called an interface file, is a user-identifiable group of logically related data or control information utilized by the application, but maintained by another application. Essentially, the application being sized has only read access to the data.

Inquiries: An external inquiry is a unique input/output combination where an input causes an immediate output and an internal LDG is not updated. An input/output combination is considered unique if the input format is unique or the output format is unique.

The word immediate in this definition can cause problems. The key point to bear in mind is that, within an inquiry, data held by the system is not changed.
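The definitions above amount to a decision rule based on whether information crosses the system boundary, whether an internal LDG is maintained and whether an immediate output is produced. The sketch below captures that rule in a deliberately simplified form; the Transaction class and its attribute names are hypothetical, and the function is not a substitute for the full counting rules.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """Hypothetical, simplified view of one logical transaction."""
    crosses_boundary: bool           # data or control information crosses the system boundary
    maintains_internal_ldg: bool     # creates, changes or deletes data in an internal LDG
    produces_immediate_output: bool  # an immediate output is returned to the requester

def classify(t: Transaction) -> str:
    """Rough classification in the spirit of the definitions quoted above."""
    if not t.crosses_boundary:
        return "not counted"          # nothing crosses the boundary
    if t.maintains_internal_ldg:
        return "external input"       # an internal LDG is maintained
    if t.produces_immediate_output:
        return "external inquiry"     # input/output pair, no internal LDG updated
    return "external output"          # output produced from processing, no update

print(classify(Transaction(True, False, True)))   # -> external inquiry
```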

Identifying unique occurrences of parameters

Having defined the parameters, we can now proceed to the next step: identifying all unique occurrences of each parameter. When doing this, there are certain points that should be remembered:

* The expression 'maintain' means to create, change or delete a record within an LDG.

* For all parameters, a logical rather than a physical view should be taken. For example, if you consider a screen output, data may be presented on two physical screens, but if the second is a continuation of the first, then it is classed as one logical screen. Similarly, when considering LDGs, it is the logical view of data as perceived by the user that should be taken, rather than any physical storage view imposed by the technology or the implementation.

* For outputs, duplicate reports on different media, or reports sent to different locations, are counted separately, provided the duplication exists to satisfy a user requirement.

* Internal LDGs are often identifiable as singularly named data stores on a data flow diagram. Note that the system being sized must have the ability to add, change and delete data for it to be classed as an internal LDG.

* For all LDGs, temporary work files are not counted.

* LDGs maintained by more than one application are credited to each.

* No internal LDG is ever maintained by an inquiry.

* Logon screens that provide security functionality are counted as inquiries.

Remember also that a situation may arise, specific to a particular organization, that is not covered by advice in any set of counting guidelines. This simply reflects the fact that, as Albrecht has said at numerous venues, FPA is an evolutionary technique. In such cases it is up to the practitioner to make a decision or formulate a rule based on personal experience as well as that of others. The most important thing is to document that rule and to apply it consistently. For the benefit of other practitioners, it would be of great assistance if the relevant governing bodies (IFPUG, EFPUG, etc.) were advised, so that they can ensure a consistent and standardized application of FPA across the industry.

Complexity assessment

The five basic parameters can be divided into two groups: data function types and transactional function types. This division is based on how the complexity assessment is carried out. Complexity is assessed by considering two of the three parameter characteristics, depending upon the function type.

These characteristics are:

* Record element types;

* Data element types;

* File types referenced.

If an LDG is considered to be a file, then a record element type can be thought of as a record. A data element type can be thought of as a field within a record.

Data element types should be viewed at the user recognizable level. If a date is stored as dd, mm, yy, i.e. as three separate fields for implementation reasons rather than to meet any customer requirement, then the three fields should be viewed as one for the purposes of FPA.

Fields that repeat, for example array elements, are counted once. A file type reference is counted for each LDG accessed in any way during the processing of a transactional function type. Note that it is the number of LDGs referenced, not the number of references made, that is counted.
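As a small illustration of that last rule, the fragment below counts the distinct LDGs touched by a transaction rather than the individual accesses; the LDG names are hypothetical.

```python
# Hypothetical record of every LDG access made while processing one transaction.
ldg_accesses = ["Customer", "Order", "Customer", "Product", "Order"]

# File types referenced: the number of distinct LDGs touched, not the number of accesses.
file_types_referenced = len(set(ldg_accesses))
print(file_types_referenced)   # 3, not 5
```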

The complexity assessment is done by using standard assessment matrices. Note that inquiries are assessed at the level of their greatest constituent part. In other words, if the input part of the inquiry has the highest complexity assessment as an 'average', then the total inquiry is assessed as 'average'.
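A complexity-matrix lookup is easy to mechanize. The sketch below shows the shape of such a lookup for a transactional function type, including the rule that an inquiry takes the rating of its most complex part. The band thresholds used here are illustrative placeholders only; the actual matrices are those published in the IFPUG Counting Practices Manual.

```python
# Sketch of a complexity-matrix lookup for a transactional function type.
# The thresholds below are ILLUSTRATIVE ONLY; use the matrices in the
# IFPUG Counting Practices Manual for real counts.
def rate_complexity(file_types_referenced: int, data_element_types: int) -> str:
    """Return 'low', 'average' or 'high' for a transactional function type."""
    ftr_band = 0 if file_types_referenced <= 1 else 1 if file_types_referenced == 2 else 2
    det_band = 0 if data_element_types <= 4 else 1 if data_element_types <= 15 else 2
    matrix = [
        ["low",     "low",     "average"],   # few file types referenced
        ["low",     "average", "high"],      # some file types referenced
        ["average", "high",    "high"],      # many file types referenced
    ]
    return matrix[ftr_band][det_band]

# An inquiry is assessed at the level of its most complex constituent part.
order = ["low", "average", "high"]
input_part = rate_complexity(file_types_referenced=1, data_element_types=3)     # 'low'
output_part = rate_complexity(file_types_referenced=2, data_element_types=12)   # 'average'
inquiry_rating = max(input_part, output_part, key=order.index)
print(inquiry_rating)   # 'average'
```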

General system characteristic analysis

Having derived the unadjusted function point score, we now come to one of the more contentious areas of FPA. The technique generally used is to take an overall view of the system being sized via 14 general system characteristics.

Each of the general system characteristics is assessed on a scale of 0-5. The scores for the characteristics are then summed to derive the total degree of influence figure, or TDI.

Guidance on assessing each of the 14 general system characteristics is provided within the IFPUG Counting Practices Manual. These characteristics are listed below:

* Data communication

* Distributed function

* Performance

* Heavily used configuration

* Transaction rates

* On-line data entry

* Design for end-user efficiency

* On-line update

* Complex processing

* Usable in other applications

* Installation ease

* Operational ease

* Multiple sites

* Facilitate change

Once the TDI value has been derived, it can be used in the following formula to arrive at the value adjustment factor or VAF.

VAF = (TDI X 0.01) + 0.65

The final FP score can then be derived through the following formula:

Final FP score = Unadjusted FP score X VAF
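As a quick worked example of the two formulae above, the fragment below sums a set of invented characteristic scores into the TDI, derives the VAF and applies it to an invented unadjusted score.

```python
# Invented scores (each 0-5) for the 14 general system characteristics.
gsc_scores = [3, 1, 4, 0, 3, 5, 4, 3, 2, 1, 0, 2, 1, 3]
assert len(gsc_scores) == 14 and all(0 <= s <= 5 for s in gsc_scores)

tdi = sum(gsc_scores)              # total degree of influence, here 32
vaf = (tdi * 0.01) + 0.65          # value adjustment factor, here 0.97

unadjusted_fp = 412                # invented unadjusted FP score
final_fp = unadjusted_fp * vaf     # final FP score, roughly 400

print(f"TDI={tdi}, VAF={vaf:.2f}, final FP score={final_fp:.0f}")
```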

There is considerable debate within the FPA user community regarding the value of the general system characteristics. For one thing, their assessment is somewhat subjective. It would also seem that there are dependencies between the various characteristics (Kitchenham, 1990) and, finally, in my experience the VAF for most systems is about 1, which, as a multiplier, has little effect. However, the characteristics do form part of the standard FPA technique, and the more experience that is gained in the use of FPA, the greater the probability that we will move towards a more scientific approach.

Differences between Albrecht's and Symons' variants of FPA

The Symons (1991) variant of FPA takes a slightly different view to that of Albrecht.

The main differences are:

1. Counts are based around logical business transactions. This difference may be less significant than it seems at first, as I believe that this view is implied in the IFPUG definitions of inputs, outputs and inquiries that speak of unique processing.

2. A much more significant difference is that counts of inputs and outputs are based upon counts of data elements rather than of logical occurrences of, say, screens. This means that complexity does not have to be assessed independently of the occurrence.

3. Internal and external LDGs are not counted at the system level; instead, data entity references are counted within each logical business transaction. This implies that an entity model may need to be derived for the system, but as the user view is maintained, this is generally not too difficult.

4. Symons defines 19 system characteristics and also allows for organizations to add more.

Overall, I believe that there is much to commend the Symons approach in that it has built on the landmark work done by Albrecht. The worst thing that could happen would be for the FPA community to split along political divisions based on method, and one can only hope that all FPA users will adopt the message of both Albrecht's and Symons' work for a harmonious, clearly defined approach to the sizing of software systems and projects. As both Albrecht and Symons have said more than once in public, we should expect to see Mark III, Mark IV and Mark V versions of FPA as the technique develops.

Finally, do not fall into the trap of looking upon FPA as the totality of a metrics initiative--it never is. Function Point Analysis is a very widely used technique, but all it provides is a measure of software system size. How that data is used to manage and improve a process is what really counts.

From: Practical Implementation of Software Metrics, McGraw-Hill Book Co.
FPA sample counting form

Parameter        Complexity rating   Count        Weight        Product
Inputs           Low                 ____    X     3       =    ____
                 Average             ____    X     4       =    ____
                 High                ____    X     6       =    ____
Outputs          Low                 ____    X     4       =    ____
                 Average             ____    X     5       =    ____
                 High                ____    X     7       =    ____
Internal LDGs    Low                 ____    X     7       =    ____
                 Average             ____    X    10       =    ____
                 High                ____    X    15       =    ____
External LDGs    Low                 ____    X     5       =    ____
                 Average             ____    X     7       =    ____
                 High                ____    X    10       =    ____
Inquiries        Low                 ____    X     3       =    ____
                 Average             ____    X     4       =    ____
                 High                ____    X     6       =    ____
                                              Unadjusted FP score  ____
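The counting form above is simple enough to mechanize. The sketch below applies the same weights to a set of invented counts to produce an unadjusted FP score; only the weights come from the form, everything else is illustrative.

```python
# Weights taken from the counting form above; all counts are invented.
weights = {
    "inputs":        {"low": 3, "average": 4,  "high": 6},
    "outputs":       {"low": 4, "average": 5,  "high": 7},
    "internal_ldgs": {"low": 7, "average": 10, "high": 15},
    "external_ldgs": {"low": 5, "average": 7,  "high": 10},
    "inquiries":     {"low": 3, "average": 4,  "high": 6},
}

counts = {
    "inputs":        {"low": 10, "average": 6, "high": 2},
    "outputs":       {"low": 7,  "average": 5, "high": 1},
    "internal_ldgs": {"low": 4,  "average": 2, "high": 0},
    "external_ldgs": {"low": 2,  "average": 1, "high": 0},
    "inquiries":     {"low": 8,  "average": 3, "high": 1},
}

# Unadjusted FP score: sum of (count x weight) over every parameter and rating.
unadjusted_fp = sum(
    counts[param][rating] * weight
    for param, ratings in weights.items()
    for rating, weight in ratings.items()
)
print(unadjusted_fp)   # 233 for these invented counts
```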