Function point
The function point is a "unit of measurement" to express the amount of business functionality an information system (as a product) provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost (in dollars or hours) of a single unit is calculated from past projects.[1]
Standards
There are several recognized standards and/or public specifications for sizing software based on function points.
1. ISO Standards
- FiSMA: ISO/IEC 29881:2010 Information technology – Systems and software engineering – FiSMA 1.1 functional size measurement method.
- IFPUG: ISO/IEC 20926:2009 Software and systems engineering – Software measurement – IFPUG functional size measurement method.
- Mark-II: ISO/IEC 20968:2002 Software engineering – Mk II Function Point Analysis – Counting Practices Manual
- Nesma: ISO/IEC 24570:2018 Software engineering – Nesma functional size measurement method version 2.3 – Definitions and counting guidelines for the application of Function Point Analysis
- COSMIC: ISO/IEC 19761:2011 Software engineering – COSMIC: a functional size measurement method.
- OMG: ISO/IEC 19515:2019 Information technology — Object Management Group Automated Function Points (AFP), 1.0
The first five standards are implementations of the over-arching standard for functional size measurement, ISO/IEC 14143.[2] The OMG Automated Function Point (AFP) specification, led by the Consortium for IT Software Quality, provides a standard for automating the function point count according to the guidelines of the International Function Point Users Group (IFPUG). However, current implementations of this standard cannot distinguish External Outputs (EO) from External Inquiries (EQ) out of the box, without some upfront configuration.[3]
Introduction
Function points were defined in 1979 in Measuring Application Development Productivity by Allan J. Albrecht at IBM.[4] The functional user requirements of the software are identified and each one is categorized into one of five types: outputs, inquiries, inputs, internal files, and external interfaces. Once the function is identified and categorized into a type, it is then assessed for complexity and assigned a number of function points. Each of these functional user requirements maps to an end-user business function, such as a data entry for an Input or a user query for an Inquiry. This distinction is important because it tends to make the functions measured in function points map easily into user-oriented requirements, but it also tends to hide internal functions (e.g. algorithms), which also require resources to implement.
There is currently no ISO recognized FSM Method that includes algorithmic complexity in the sizing result. Recently there have been different approaches proposed to deal with this perceived weakness, implemented in several commercial software products. The variations of the Albrecht-based IFPUG method designed to make up for this (and other weaknesses) include:
- Early and easy function points – Adjusts for problem and data complexity with two questions that yield a somewhat subjective complexity measurement; simplifies measurement by eliminating the need to count data elements.
- Engineering function points – Elements (variable names) and operators (e.g., arithmetic, equality/inequality, Boolean) are counted. This variation highlights computational function.[5] The intent is similar to that of the operator/operand-based Halstead complexity measures.
- Bang measure – Defines a function metric based on twelve primitive (simple) counts that affect or show Bang, defined as "the measure of true function to be delivered as perceived by the user." Bang measure may be helpful in evaluating a software unit's value in terms of how much useful function it provides, although there is little evidence in the literature of such application. The use of Bang measure could apply when re-engineering (either complete or piecewise) is being considered, as discussed in Maintenance of Operational Systems—An Overview.
- Feature points – Adds changes to improve applicability to systems with significant internal processing (e.g., operating systems, communications systems). This allows accounting for functions not readily perceivable by the user, but essential for proper operation.
- Weighted Micro Function Points – One of the newer models (2009) which adjusts function points using weights derived from program flow complexity, operand and operator vocabulary, object usage, and algorithm.
- Fuzzy function points – Proposes a fuzzy, gradual transition between the low and medium and the medium and high complexity classifications.[6]
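The engineering function points variant above counts elements (operands) and operators, in the spirit of the Halstead measures. As a rough illustration only – the tokenizer, the operator set, and the function name below are simplified assumptions, not the published counting rules – such a count might look like:

```python
import re

# Hypothetical, simplified operator set for illustration.
OPERATORS = {"+", "-", "*", "/", "==", "!=", "<", ">", "and", "or", "not", "="}

def count_tokens(expression: str):
    """Return (operator count, operand count) for a simple expression."""
    # Match identifiers, two-character comparison operators, then single-character operators.
    tokens = re.findall(r"[A-Za-z_]\w*|==|!=|[-+*/<>=]", expression)
    operators = [t for t in tokens if t in OPERATORS]
    operands = [t for t in tokens if t not in OPERATORS]
    return len(operators), len(operands)

print(count_tokens("total = price * qty + tax"))  # -> (3, 4)
```

A real engineering-function-point count would apply language-specific rules and weights; this sketch only shows the operator/operand distinction the bullet refers to.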
Contrast
The use of function points in favor of lines of code seeks to address several additional issues:
- The risk of "inflation" of the created lines of code – and thus a reduction in the value of the measurement system – if developers are incentivized to be more productive. FP advocates refer to this as measuring the size of the solution instead of the size of the problem.
- Lines of code (LOC) measures reward low-level languages, because delivering a similar amount of functionality requires more lines of code than it does in a higher-level language.[7] C. Jones offers a method of correcting this in his work.[8]
- LOC measures are not useful during early project phases where estimating the number of lines of code that will be delivered is challenging. However, Function Points can be derived from requirements and therefore are useful in methods such as estimation by proxy.
Criticism
Albrecht observed in his research that Function Points were highly correlated to lines of code,[9] which has resulted in a questioning of the value of such a measure if a more objective measure, namely counting lines of code, is available. In addition, there have been multiple attempts to address perceived shortcomings with the measure by augmenting the counting regimen.[10][11][12][13][14][15] Others have offered solutions to circumvent the challenges by developing alternative methods which create a proxy for the amount of functionality delivered.[16]
References
[edit]- ^ Thomas Cutting, Estimating Lessons Learned in Project Management – Traditional, Retrieved on May 28, 2010
- ^ ISO/IEC JTC 1/SC 7 Software and systems engineering (2007-02-01). "ISO/IEC 14143". International Standards Organization. Retrieved 2019-02-26.
- ^ OMG/CISQ Specification "Automated Function Points", February 2013, OMG Document Number ptc/2013-02-01 http://www.omg.org/spec/AFP/1.0
- ^ A. J. Albrecht, "Measuring Application Development Productivity," Proceedings of the Joint SHARE, GUIDE, and IBM Application Development Symposium, Monterey, California, October 14–17, IBM Corporation (1979), pp. 83–92.
- ^ Engineering Function Points and Tracking System, Software Technology Support Center Archived 2010-11-11 at the Wayback Machine, Retrieved on May 14, 2008
- ^ Lima, Osias de Souza; Farias, Pedro Porfírio Muniz; Belchior, Arnaldo Dias (2003-06-01). "Fuzzy Modeling for Function Points Analysis". Software Quality Journal. 11 (2): 149–166. doi:10.1023/A:1023716628585. ISSN 1573-1367. S2CID 19655881.
- ^ Jones, C. and Bonsignour O. The Economics of Software Quality, Addison-Wesley, 2012. pp. 105-109.
- ^ Jones, C. Applied Software Measurement: Assuring Productivity and Quality. McGraw-Hill. June 1996.
- ^ Albrecht, A. Software Function, Source Lines of Code, and Development Effort Estimation – A Software Science Validation. 1983.
- ^ Symons, C.R. "Function point analysis: difficulties and improvements." IEEE Transactions on Software Engineering. January 1988. pp. 2-11.
- ^ Heemstra, F. and Kusters, R. "Function point analysis: evaluation of a software cost estimation model." European Journal of Information Systems. 1991. Vol 1, No 4. pp 229-237.
- ^ Jeffery, R and Stathis, J. "Specification-based software sizing: An empirical investigation of function metrics." Proceedings of the Eighteenth Annual Software Engineering Workshop. 1993. p 97-115.
- ^ Symons, C. Software sizing and estimating: Mk II FPA (Function Point Analysis). John Wiley & Sons, Inc. New York, 1991
- ^ DeMarco, T. "An algorithm for sizing software products." ACM SIGMETRICS Performance Evaluation Review. 1984. Volume 12, Issue 2. pp 13-22.
- ^ Jeffery, D.R., Low, G.C. and Barnes, M. "A comparison of function point counting techniques." IEEE Transactions on Software Engineering. 1993. Volume 19, Issue 5. pp 529-532.
- ^ Schwartz, Adam. "Using Test Cases To Size Systems: A Case Study." 2012 Ninth International Conference on Information Technology- New Generations. April 2012. pp 242-246.
Function point

Introduction
Definition and Purpose
A function point (FP) is a standardized unit of measurement used to quantify the functional size of software applications from the perspective of the end user. It focuses on the functionality delivered to users, such as the processing of data inputs, outputs, and inquiries, rather than the technical details of implementation like lines of code or hardware specifics.[1][5]

The primary purpose of function points is to provide a technology-independent metric for estimating software development effort, costs, and productivity across the entire software lifecycle. By measuring the size based on user requirements, function points enable consistent comparisons between projects, regardless of the programming language, platform, or development methodology employed. This approach supports benchmarking, resource allocation, and performance analysis in software engineering.[1][6]

Key principles of function point analysis emphasize counting user-oriented functions, including external inputs, external outputs, external inquiries, internal logical files, and external interface files, to capture the business value provided by the software. Unlike traditional code-based metrics, which vary with implementation choices, function points prioritize the logical functionality derived from specifications, promoting a stable and repeatable measure. Developed in the 1970s to overcome the shortcomings of code volume metrics in managing large-scale projects, this method ensures assessments remain aligned with user needs and organizational goals.[1][7]

Historical Development
Function point analysis originated in the late 1970s at IBM, where Allan J. Albrecht developed it as a method to measure software productivity independent of programming languages or technologies. Albrecht introduced the concept in October 1979 during an internal presentation and subsequently detailed it in his 1979 paper "Measuring Application Development Productivity," presented at the Joint SHARE/GUIDE/IBM Application Development Symposium.[8] This approach addressed limitations in traditional metrics by focusing on five core function types: external inputs, outputs, inquiries, internal logical files, and external interface files.[2]

The metric gained broader adoption in the mid-1980s through the formation of the International Function Point Users Group (IFPUG) in 1987, a non-profit organization dedicated to standardizing and promoting function point practices.[9] IFPUG released its first Counting Practices Manual (CPM) in 1988 (version 1.0), providing guidelines for consistent application of Albrecht's method, with subsequent versions refining rules for accuracy and interoperability.[10]

During the 1980s and 1990s, refinements addressed ambiguities in counting complex systems, driven by user feedback and committee work, leading to more robust standardization; influential figures like Capers Jones further advanced its global promotion through research on software economics and productivity benchmarking using function points.[11]

Initially applied in mainframe environments for project estimation at IBM and early adopters, function points expanded in the 1990s to client-server architectures as organizations sought technology-agnostic sizing. By the 2000s, adaptations extended its use to web and distributed applications, culminating in international recognition with the adoption of IFPUG's method in ISO/IEC 20926:2009, which formalized function point analysis as a standard for software functional size measurement.[12]

Function Point Analysis Methodology
Core Components
Function point analysis relies on five primary base functional components to quantify the functional size of software from the user's perspective. These components – external inputs (EI), external outputs (EO), external inquiries (EQ), internal logical files (ILF), and external interface files (EIF) – capture the elementary processes and data entities that deliver functionality across the application's boundary. Each component is identified and weighted based on specific criteria to ensure consistent measurement.[13]

External inputs (EI) are elementary processes that process data or control information entering from outside the application boundary into the system, typically to create, update, or delete data in internal logical files or to alter system behavior without maintaining data. Examples include data entry screens that validate and store user information. EIs cross the boundary once and involve processing logic.[13][10]

External outputs (EO) are elementary processes that generate and send derived data or control information to an external destination, often involving calculations, derivations, or maintenance of internal logical files during processing. For instance, a report generated from multiple data sources with computed totals qualifies as an EO. EOs cross the boundary once and may include formatting or aggregation.[13][10]

External inquiries (EQ) represent the simplest transactional components, consisting of an elementary process that retrieves data from internal sources, applies no derivations or maintenance, and presents the information externally via input and output crossing the boundary. A search screen displaying matching records without updates exemplifies an EQ. EQs emphasize read-only access for information retrieval.[13][10]

Internal logical files (ILF) are user-identifiable groups of logically related data maintained entirely within the application's boundary through its elementary processes, such as adding, changing, or deleting records. An ILF might be a customer database table where the application handles all CRUD operations. ILFs do not include temporary data or system-generated files without user recognition.[13][10]

External interface files (EIF) are user-identifiable groups of logically related data referenced by the application but maintained by another application outside its boundary; the counted application only reads or derives data from them without maintenance rights. For example, an inventory system referencing a shared supplier catalog maintained elsewhere counts as an EIF. EIFs support integration but exclude any update capabilities within the scope.[13][10]

Complexity for these components is classified as low, average, or high using three metrics: data element types (DETs), which are unique, user-recognizable, non-recursive fields of data crossing boundaries or maintained; file types referenced (FTRs), which count each distinct ILF or EIF involved in processing (one per read or maintain action); and record element types (RETs), which are user-recognizable subgroups of data within an ILF or EIF (e.g., one for the primary record plus additional for subtypes or associations). Transactional functions (EI, EO, EQ) use DETs and FTRs, while data functions (ILF, EIF) use DETs and RETs. Weights, expressed as unadjusted function points, are assigned via standardized matrices.[13][10]

The complexity matrix for external inputs (EI), with unadjusted function point weights in parentheses, is as follows:

| DETs \ FTRs | 0-1 | 2 | 3+ |
|---|---|---|---|
| 1-4 | Low (3) | Low (3) | Avg (4) |
| 5-15 | Low (3) | Avg (4) | High (6) |
| 16+ | Avg (4) | High (6) | High (6) |

The matrix for external outputs (EO) and external inquiries (EQ) uses wider bands:

| DETs \ FTRs | 0-1 | 2-3 | 4+ |
|---|---|---|---|
| 1-5 | Low | Low | Avg |
| 6-19 | Low | Avg | High |
| 20+ | Avg | High | High |

The matrix for internal logical files (ILF) and external interface files (EIF) is based on DETs and RETs:

| DETs \ RETs | 1 | 2-5 | 6+ |
|---|---|---|---|
| 1-19 | Low | Low | Avg |
| 20-50 | Low | Avg | High |
| 51+ | Avg | High | High |
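As an illustrative sketch, the EI matrix can be encoded as a small lookup. The DET/FTR bands and the 3/4/6 weights come from the table above; the function name and structure are our own:

```python
def classify_ei(dets: int, ftrs: int) -> tuple[str, int]:
    """Return (complexity, unadjusted FP weight) for an External Input."""
    # Map the FTR count to a column index: 0-1 -> 0, 2 -> 1, 3+ -> 2
    col = 0 if ftrs <= 1 else (1 if ftrs == 2 else 2)
    # Map the DET count to a row index: 1-4 -> 0, 5-15 -> 1, 16+ -> 2
    row = 0 if dets <= 4 else (1 if dets <= 15 else 2)
    matrix = [
        ["low", "low", "average"],
        ["low", "average", "high"],
        ["average", "high", "high"],
    ]
    weights = {"low": 3, "average": 4, "high": 6}  # EI weights per the matrix
    complexity = matrix[row][col]
    return complexity, weights[complexity]

# A data entry screen with 7 fields that references 2 files:
print(classify_ei(7, 2))  # -> ('average', 4)
```

The EO/EQ and ILF/EIF matrices would use the same pattern with their own band boundaries and weights.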
Calculation Process
The calculation of function points begins with identifying the application boundary, which defines the scope of the software being measured by delineating what functionality is internal to the application versus external interfaces or other systems. This boundary is determined from the user's perspective, focusing on the logical design and functionalities perceivable by the end user, ensuring only relevant elements are counted within the project's scope.[1][12]

Step 1 involves identifying and counting the five core function types – External Inputs (EIs), External Outputs (EOs), External Inquiries (EQs), Internal Logical Files (ILFs), and External Interface Files (EIFs) – using predefined complexity criteria such as the number of data element types and file types referenced, as detailed in the component definitions. Each identified function type is classified as low, average, or high complexity and assigned a corresponding weight: for example, low-complexity EIs are weighted at 3, average at 4, and high at 6. The Unadjusted Function Points (UFP) are then calculated as the sum of the weighted values across all components:

UFP = Σ (number of components of each type and complexity × corresponding weight)

Step 2 rates 14 general system characteristics (GSCs), each assigned a degree of influence from 0 (no influence) to 5 (strong influence), which together determine the Value Adjustment Factor (VAF):

| GSC Number | Characteristic | Description |
|---|---|---|
| 1 | Data communications | Extent of communication facilities |
| 2 | Distributed data processing | Distribution of processing components |
| 3 | Performance | Response or throughput specifications |
| 4 | Operational environment | Operating system and network support |
| 5 | Transaction rate | Number of transactions per time period |
| 6 | Online data entry | Proportion of online versus batch |
| 7 | End-user efficiency | Efforts to make system convenient |
| 8 | Online update | Proportion of updates in online mode |
| 9 | Complex processing | Mathematical or statistical computations |
| 10 | Reusability | Modularity for reuse |
| 11 | Installation ease | Ease of converting and installing |
| 12 | Operational ease | Ease of daily operations |
| 13 | Multiple installations | Number of sites for one application |
| 14 | Facilitated changes | Ease of non-functional modifications |
[1][12][14]

The adjusted count is obtained by multiplying UFP by the Value Adjustment Factor, VAF = 0.65 + (TDI × 0.01), where the total degree of influence (TDI) is the sum of the 14 GSC ratings. A worked example of GSC ratings:
| GSC Number | Degree of Influence (0-5) |
|---|---|
| 1 | 3 |
| 2 | 2 |
| 3 | 4 |
| 4 | 1 |
| 5 | 0 |
| 6 | 3 |
| 7 | 5 |
| 8 | 4 |
| 9 | 2 |
| 10 | 3 |
| 11 | 2 |
| 12 | 3 |
| 13 | 1 |
| 14 | 2 |
| Total (TDI) | 35 |
With an unadjusted count of UFP = 100, this example yields VAF = 0.65 + (35 × 0.01) = 1.0 and AFP = 100 × 1.0 = 100.[1][10]
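The whole calculation can be sketched in a few lines of Python. The component counts below are hypothetical, chosen so that UFP = 100; the EI weights (3/4/6) and the VAF formula come from the text, while the weights for the other component types follow the standard IFPUG tables:

```python
# Unadjusted FP weights per component type and complexity class.
# EI weights are given in the text; EO/EQ/ILF/EIF follow the IFPUG tables.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_fp(counts):
    """counts maps (component type, complexity) -> number of functions."""
    return sum(n * WEIGHTS[comp][cx] for (comp, cx), n in counts.items())

def value_adjustment_factor(ratings):
    """ratings: the 14 GSC degrees of influence, each 0-5."""
    if len(ratings) != 14 or not all(0 <= r <= 5 for r in ratings):
        raise ValueError("expected 14 GSC ratings in the range 0-5")
    tdi = sum(ratings)        # Total Degree of Influence
    return 0.65 + 0.01 * tdi  # VAF ranges from 0.65 to 1.35

# Hypothetical component inventory: UFP = 40 + 20 + 15 + 20 + 5 = 100
counts = {
    ("EI", "average"): 10,
    ("EO", "low"): 5,
    ("EQ", "low"): 5,
    ("ILF", "average"): 2,
    ("EIF", "low"): 1,
}
gsc = [3, 2, 4, 1, 0, 3, 5, 4, 2, 3, 2, 3, 1, 2]  # TDI = 35, as in the example

ufp = unadjusted_fp(counts)
vaf = value_adjustment_factor(gsc)
afp = ufp * vaf
print(ufp, round(vaf, 2), round(afp))  # -> 100 1.0 100
```

This reproduces the worked example: TDI = 35 gives VAF = 1.0, so the adjusted count equals the unadjusted one.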
Standards and Variations
IFPUG Standards
The International Function Point Users Group (IFPUG) was formally established in 1986 to standardize and promote the function point analysis method originally developed by Allan Albrecht.[15] As a non-profit organization, IFPUG maintains the official guidelines for function point counting through its Counting Practices Manual (CPM), with the latest full release being version 4.3.1 in 2010, accompanied by subsequent minor updates and supplementary materials to ensure compliance with international standards.[1][16]

The CPM provides detailed rules for function point analysis, including boundary setting, which defines the scope based on user requirements and applies across the software development life cycle.[1] Component identification involves categorizing functional elements such as external inputs, external outputs, external inquiries, internal logical files, and external interface files.[1] Complexity assessment evaluates each component using standard tables based on data element types and record element types, assigning low, average, or high complexity weights.[1] The Value Adjustment Factor (VAF) is then applied to adjust the unadjusted function point count for general system characteristics, using a 14-factor model rated from 0 to 5.[1]

IFPUG offers certification programs to validate expertise in function point analysis, including the Certified Function Point Specialist (CFPS), which requires a minimum score of 90% overall and 80% in each exam section covering definition, implementation, and case studies, demonstrating mastery of best practices.[17] The Certified Function Point Practitioner (CFPP) is an entry-level certification requiring 80% overall and 70% per section, focusing on foundational skills for accurate and consistent counting.[18] Both certifications are valid for three years, with options for extension pending major CPM updates.[18]

The IFPUG method has been internationally standardized as ISO/IEC 20926:2009, which defines the rules, steps, and definitions for applying function point analysis as a functional size measurement technique, ensuring interoperability and consistency.[12] Post-2010 revisions and supplementary IFPUG publications, such as the 2012 Guide to IT and Software Measurement, address adaptations for modern technologies, including guidance on applying function points in agile development environments and cloud computing contexts to maintain relevance in iterative and distributed systems.[19][20]

Other Variants and Extensions
Beyond the International Function Point Users Group (IFPUG) standard, several alternative functional size measurement (FSM) methods have emerged to address specific limitations or extend applicability to diverse software domains. These variants maintain the core principle of quantifying functionality from the user's perspective but differ in components, weighting schemes, and target applications.[21][22]

COSMIC Function Points (CFP), developed by the Common Software Measurement International Consortium in 1998, provide a second-generation FSM approach suitable for all software types, including real-time and embedded systems. Unlike IFPUG's focus on data and transactional functions, CFP measures size based on four elementary data movements – entries (input to the software), exits (output from the software), reads (retrieval of data without change), and writes (storage or update of data) – each assigned a fixed size of 1 CFP. This granularity enables precise sizing in layers or processes, making it ideal for non-business applications where IFPUG may undercount control processes. The method was formalized as the ISO/IEC 19761:2011 standard, emphasizing universality across development paradigms like Agile.[23][24]

NESMA Function Points, originating in the Netherlands during the 1990s as a national standard under the Netherlands Software Metrics Association, closely resemble IFPUG but incorporate simplified estimation techniques for early project phases. The method classifies functions similarly (internal logical files, external interface files, external inputs, external outputs, external inquiries) but applies predefined weights to standard function types, reducing subjectivity in counting for common business applications. This approach facilitates rapid indicative and estimated sizing, particularly valuable in European outsourcing contracts where contractual benchmarks require consistent, low-effort measurements. NESMA's guidelines align with ISO/IEC 24570:2018 for software enhancement projects.[25][26]

Mark II Function Points, introduced in the late 1980s by Charles Symons and detailed in his 1991 publication, represent a UK-originated variant emphasizing transaction-oriented sizing for information systems. It counts logical transactions (external inputs, outputs, and inquiries) weighted by complexity, alongside an "information profile" that assesses data entities (logical data stores and access paths) to capture both processing and data aspects more holistically than early IFPUG versions. This method, standardized under ISO/IEC 20968:2002, supports broader applicability to transaction-heavy systems but has seen limited global adoption compared to newer standards.[27]

Extensions to traditional function points have also adapted the metric for modern contexts. Web Function Points (WFP) extend IFPUG by incorporating web-specific elements, such as dynamic pages, hyperlinks, and multimedia content, to better size user interfaces and navigation in web applications where standard counts overlook interactivity. This variant assigns points to web objects like forms and static/dynamic pages, improving estimation accuracy for e-commerce and portal developments. Similarly, Agile Function Points tailor FSM for iterative environments by aligning counts with user stories and sprints, allowing incremental sizing that integrates with story points for velocity-based planning without disrupting agile workflows. These adaptations maintain core FSM principles while enhancing relevance to web and agile paradigms.[28][29]

The variants compare as follows:

| Variant | Key Components | Weighting Scheme | Primary Applicability |
|---|---|---|---|
| COSMIC (CFP) | Entries, Exits, Reads, Writes | Fixed (1 CFP each) | Real-time, embedded, all software types |
| NESMA | ILF, EIF, EI, EO, EQ (similar to IFPUG) | Predefined for standard functions | Business apps, outsourcing in Europe |
| Mark II | Logical transactions, data entities | Complexity-based (low/avg/high) | Transactional info systems |
| IFPUG (baseline) | ILF, EIF, EI, EO, EQ | Complexity-based (low/avg/high) | Traditional business applications |
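The COSMIC row in the table reflects a simple counting rule: every data movement contributes a fixed 1 CFP. A minimal sketch of that rule in Python follows; the process names and movement lists are invented for illustration:

```python
# The four COSMIC data movement types (each worth 1 CFP).
VALID_MOVEMENTS = {"entry", "exit", "read", "write"}

def cosmic_size(processes):
    """processes maps a functional process name to its list of data movements.

    Returns the total size in CFP: one point per data movement."""
    total = 0
    for name, movements in processes.items():
        unknown = set(movements) - VALID_MOVEMENTS
        if unknown:
            raise ValueError(f"unknown movements in {name}: {unknown}")
        total += len(movements)  # 1 CFP per data movement
    return total

# Hypothetical processes in an embedded temperature controller:
processes = {
    "update_setpoint": ["entry", "read", "write", "exit"],  # 4 CFP
    "report_status":   ["entry", "read", "exit"],           # 3 CFP
}
print(cosmic_size(processes))  # -> 7
```

A full COSMIC measurement also involves defining the measurement scope and layers; this sketch shows only the fixed-size counting of movements.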