Common Criteria
The Common Criteria for Information Technology Security Evaluation (referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. It is currently in version 2022 revision 1.[1]
Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements (SFRs and SARs, respectively) in a Security Target (ST), which may be drawn from Protection Profiles (PPs). Vendors can then implement or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine whether they actually meet the claims. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous, standard and repeatable manner at a level that is commensurate with the target environment for use.[2] Common Criteria maintains a list of certified products, including operating systems, access control systems, databases, and key management systems.[3]
Key concepts
Common Criteria evaluations are performed on computer security products and systems.
- Target of Evaluation (TOE) – the product or system that is the subject of the evaluation. The evaluation serves to validate claims made about the target. To be of practical use, the evaluation must verify the target's security features. This is done through the following:
- Protection Profile (PP) – a document, typically created by a user or user community, which identifies security requirements for a class of security devices (for example, smart cards used to provide digital signatures, or network firewalls) relevant to that user for a particular purpose. Product vendors can choose to implement products that comply with one or more PPs, and have their products evaluated against those PPs. In such a case, a PP may serve as a template for the product's ST (Security Target, as defined below), or the authors of the ST will at least ensure that all requirements in relevant PPs also appear in the target's ST document. Customers looking for particular types of products can focus on those certified against the PP that meets their requirements.
- Security Target (ST) – the document that identifies the security properties of the target of evaluation. The ST may claim conformance with one or more PPs. The TOE is evaluated against the SFRs (Security Functional Requirements. Again, see below) established in its ST, no more and no less. This allows vendors to tailor the evaluation to accurately match the intended capabilities of their product. This means that a network firewall does not have to meet the same functional requirements as a database management system, and that different firewalls may in fact be evaluated against completely different lists of requirements. The ST is usually published so that potential customers may determine the specific security features that have been certified by the evaluation.
- Security Functional Requirements (SFRs) – specify individual security functions which may be provided by a product. The Common Criteria presents a standard catalogue of such functions. For example, an SFR may state how a user acting in a particular role might be authenticated. The list of SFRs can vary from one evaluation to the next, even if two targets are the same type of product. Although Common Criteria does not prescribe any SFRs to be included in an ST, it identifies dependencies where the correct operation of one function (such as the ability to limit access according to roles) is dependent on another (such as the ability to identify individual roles); a minimal sketch of such a dependency check appears after this list.
The evaluation process also tries to establish the level of confidence that may be placed in the product's security features through quality assurance processes:
- Security Assurance Requirements (SARs) – descriptions of the measures taken during development and evaluation of the product to assure compliance with the claimed security functionality. For example, an evaluation may require that all source code is kept in a change management system, or that full functional testing is performed. The Common Criteria provides a catalogue of these, and the requirements may vary from one evaluation to the next. The requirements for particular targets or types of products are documented in the ST and PP, respectively.
- Evaluation Assurance Level (EAL) – the numerical rating describing the depth and rigor of an evaluation. Each EAL corresponds to a package of security assurance requirements (SARs, see above) which covers the complete development of a product, with a given level of strictness. Common Criteria lists seven levels, with EAL 1 being the most basic (and therefore cheapest to implement and evaluate) and EAL 7 being the most stringent (and most expensive). Normally, an ST or PP author will not select assurance requirements individually but choose one of these packages, possibly 'augmenting' requirements in a few areas with requirements from a higher level. Higher EALs do not necessarily imply "better security"; they only mean that the claimed security assurance of the TOE has been more extensively verified.
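As a rough illustration of how SFR dependencies might be checked when assembling an ST, the following Python sketch models a few components and their dependencies. The component names and dependency relations are a simplified, hand-picked subset in the spirit of the CC Part 2 catalogue, not an authoritative extract.

```python
# Illustrative sketch only: the component names and dependencies below are a
# simplified subset in the style of the CC Part 2 catalogue, not the full list.

SFR_DEPENDENCIES = {
    "FIA_UAU.2": {"FIA_UID.1"},               # authentication depends on identification
    "FMT_SMR.1": {"FIA_UID.1"},               # security roles depend on identification
    "FDP_ACF.1": {"FDP_ACC.1", "FMT_MSA.3"},  # access control functions
    "FIA_UID.1": set(),
    "FDP_ACC.1": set(),
    "FMT_MSA.3": set(),
}

def unmet_dependencies(claimed_sfrs):
    """Return, per claimed SFR, any dependencies missing from the Security Target."""
    claimed = set(claimed_sfrs)
    missing = {}
    for sfr in claimed:
        needed = SFR_DEPENDENCIES.get(sfr, set()) - claimed
        if needed:
            missing[sfr] = needed
    return missing

if __name__ == "__main__":
    # An ST that claims role-based management but omits user identification:
    print(unmet_dependencies({"FMT_SMR.1", "FDP_ACC.1"}))
    # -> {'FMT_SMR.1': {'FIA_UID.1'}}
```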
So far, most PPs and most evaluated STs/certified products have been for IT components (e.g., firewalls, operating systems, smart cards).
Common Criteria certification is sometimes specified for IT procurement. Other standards covering, for example, interoperation, system management, and user training supplement CC and other product standards. Examples include ISO/IEC 27002 and the German IT baseline protection (IT-Grundschutz).
Details of cryptographic implementation within the TOE are outside the scope of the CC. Instead, national standards, like FIPS 140-2, give the specifications for cryptographic modules, and various standards specify the cryptographic algorithms in use.
More recently, PP authors are including cryptographic requirements for CC evaluations that would typically be covered by FIPS 140-2 evaluations, broadening the bounds of the CC through scheme-specific interpretations.
Some national evaluation schemes are phasing out EAL-based evaluations and only accept products for evaluation that claim strict conformance with an approved PP. The United States currently only allows PP-based evaluations.
History
CC originated out of three standards:
- ITSEC – The European standard, developed in the early 1990s by France, Germany, the Netherlands and the UK. It too was a unification of earlier work, such as the two UK approaches (the CESG UK Evaluation Scheme aimed at the defence/intelligence market and the DTI Green Book aimed at commercial use), and was adopted by some other countries, e.g. Australia.
- CTCPEC – The Canadian standard followed from the US DoD standard, but avoided several problems and was used jointly by evaluators from both the U.S. and Canada. The CTCPEC standard was first published in May 1993.
- TCSEC – The United States Department of Defense DoD 5200.28-STD, called the Orange Book and parts of the Rainbow Series. The Orange Book originated from computer security work, including the Anderson Report, done by the National Security Agency and the National Bureau of Standards (the NBS eventually became NIST) in the late 1970s and early 1980s. The central thesis of the Orange Book follows from the work done by Dave Bell and Len LaPadula for a set of protection mechanisms.
CC was produced by unifying these pre-existing standards, predominantly so that companies selling computer products for the government market (mainly for Defence or Intelligence use) would only need to have them evaluated against one set of standards. The CC was developed by the governments of Canada, France, Germany, the Netherlands, the UK, and the U.S.
Testing organizations
All testing laboratories must comply with ISO/IEC 17025, and certification bodies will normally be approved against ISO/IEC 17065.
Compliance with ISO/IEC 17025 is typically demonstrated to a national approval authority:
- In Canada, the Standards Council of Canada (SCC) under Program for the Accreditation of Laboratories (PALCAN) accredits Common Criteria Evaluation Facilities (CCEF)
- In France, the Comité français d'accréditation (COFRAC) accredits Common Criteria evaluation facilities, commonly called Centre d'évaluation de la sécurité des technologies de l'information (CESTI). Evaluations are done according to norms and standards specified by the Agence nationale de la sécurité des systèmes d'information (ANSSI).
- In Italy, the OCSI (Organismo di Certificazione della Sicurezza Informatica) accredits Common Criteria evaluation laboratories
- In India, the STQC Directorate of the Ministry of Electronics and Information Technology evaluates and certifies IT products at assurance levels EAL 1 through EAL 4.[4]
- In the UK, the United Kingdom Accreditation Service (UKAS) used to accredit Commercial Evaluation Facilities (CLEF); since 2019 the UK has been only a consuming member of the CC ecosystem
- In the US, the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits Common Criteria Testing Laboratories (CCTL)
- In Germany, the Bundesamt für Sicherheit in der Informationstechnik (BSI) licenses Common Criteria evaluation facilities
- In Spain, the National Cryptologic Center (CCN) accredits Common Criteria Testing Laboratories operating in the Spanish Scheme.
- In The Netherlands, the Netherlands scheme for Certification in the Area of IT Security (NSCIB) accredits IT Security Evaluation Facilities (ITSEF).
- In Sweden, the Swedish Certification Body for IT Security (CSEC) licenses IT Security Evaluation Facilities (ITSEF).
Characteristics of these organizations were examined and presented at ICCC 10.
Mutual recognition arrangement
As well as the Common Criteria standard, there is also a sub-treaty level Common Criteria MRA (Mutual Recognition Arrangement), whereby each party thereto recognizes evaluations against the Common Criteria standard done by other parties. Originally signed in 1998 by Canada, France, Germany, the United Kingdom and the United States, Australia and New Zealand joined in 1999, followed by Finland, Greece, Israel, Italy, the Netherlands, Norway and Spain in 2000. The Arrangement has since been renamed Common Criteria Recognition Arrangement (CCRA) and membership continues to expand.[5] Within the CCRA only evaluations up to EAL 2 are mutually recognized (including augmentation with flaw remediation). The European countries within the SOGIS-MRA typically recognize higher EALs as well. Evaluations at EAL5 and above tend to involve the security requirements of the host nation's government.
In September 2012, a majority of members of the CCRA produced a vision statement whereby mutual recognition of CC evaluated products will be lowered to EAL 2 (including augmentation with flaw remediation). Further, this vision indicates a move away from assurance levels altogether, with evaluations confined to conformance with Protection Profiles that have no stated assurance level. This is to be achieved through technical working groups developing worldwide PPs; as of the statement, a transition period had not been fully determined.
On July 2, 2014, a new CCRA was ratified[6] per the goals outlined within the 2012 vision statement.[7] Major changes to the Arrangement include:
- Recognition of evaluations against only a collaborative Protection Profile (cPP) or Evaluation Assurance Levels 1 through 2 and ALC_FLR.
- The emergence of international Technical Communities (iTC), groups of technical experts charged with the creation of cPPs.
- A transition plan from the previous CCRA, including recognition of certificates issued under the previous version of the Arrangement.
Issues
Requirements
Common Criteria is very generic; it does not directly provide a list of product security requirements or features for specific (classes of) products. This follows the approach taken by ITSEC, but has been a source of debate for those used to the more prescriptive approach of other earlier standards such as TCSEC and FIPS 140-2.
Value of certification
Common Criteria certification cannot guarantee security, but it can ensure that claims about the security attributes of the evaluated product were independently verified. In other words, products evaluated against a Common Criteria standard exhibit a clear chain of evidence that the process of specification, implementation, and evaluation has been conducted in a rigorous and standard manner.
Various Microsoft Windows versions, including Windows Server 2003 and Windows XP, have been certified,[8] but Microsoft still publishes security patches to address vulnerabilities in these Windows systems. This is possible because the process of obtaining a Common Criteria certification allows a vendor to restrict the analysis to certain security features and to make certain assumptions about the operating environment and the strength of threats faced by the product in that environment. Additionally, the CC recognizes a need to limit the scope of evaluation in order to provide cost-effective and useful security certifications, such that evaluated products are examined to a level of detail specified by the assurance level or PP. Evaluation activities are therefore performed only to a certain depth and within limited time and resources, and offer reasonable assurance for the intended environment.
In the Microsoft case, the assumptions include A.PEER:
"Any other systems with which the TOE communicates are assumed to be under the same management control and operate under the same security policy constraints. The TOE is applicable to networked or distributed environments only if the entire network operates under the same constraints and resides within a single management domain. There are no security requirements that address the need to trust external systems or the communications links to such systems."
This assumption is contained in the Controlled Access Protection Profile (CAPP) to which their products adhere. Based on this and other assumptions, which may not be realistic for the common use of general-purpose operating systems, the claimed security functions of the Windows products are evaluated. Thus they should only be considered secure in the assumed, specified circumstances, also known as the evaluated configuration.
Whether you run Microsoft Windows in the precise evaluated configuration or not, you should apply Microsoft's security patches for the vulnerabilities in Windows as they continue to appear. If any of these security vulnerabilities are exploitable in the product's evaluated configuration, the product's Common Criteria certification should be voluntarily withdrawn by the vendor. Alternatively, the vendor should re-evaluate the product to include the application of patches to fix the security vulnerabilities within the evaluated configuration. Failure by the vendor to take either of these steps would result in involuntary withdrawal of the product's certification by the certification body of the country in which the product was evaluated.
The certified Microsoft Windows versions remain at EAL4+ without including the application of any Microsoft security vulnerability patches in their evaluated configuration. This shows both the limitation and strength of an evaluated configuration.
Criticisms
In August 2007, Government Computer News (GCN) columnist William Jackson critically examined Common Criteria methodology and its US implementation by the Common Criteria Evaluation and Validation Scheme (CCEVS).[9] In the column, executives from the security industry, researchers, and representatives from the National Information Assurance Partnership (NIAP) were interviewed. Objections outlined in the article include:
- Evaluation is a costly process (often measured in hundreds of thousands of US dollars) – and the vendor's return on that investment is not necessarily a more secure product.
- Evaluation focuses primarily on assessing the evaluation documentation, not on the actual security, technical correctness or merits of the product itself. For U.S. evaluations, only at EAL5 and higher do experts from the National Security Agency participate in the analysis; and only at EAL7 is full source code analysis required.
- The effort and time necessary to prepare evaluation evidence and other evaluation-related documentation is so cumbersome that by the time the work is completed, the product in evaluation is generally obsolete.
- Industry input, including that from organizations such as the Common Criteria Vendor's Forum, generally has little impact on the process as a whole.
In a 2006 research paper, computer specialist David A. Wheeler suggested that the Common Criteria process discriminates against free and open-source software (FOSS)-centric organizations and development models.[10] Common Criteria assurance requirements tend to be inspired by the traditional waterfall software development methodology. In contrast, much FOSS software is produced using modern agile paradigms. Although some have argued that both paradigms do not align well,[11] others have attempted to reconcile both paradigms.[12] Political scientist Jan Kallberg raised concerns over the lack of control over the actual production of the products once they are certified, the absence of a permanently staffed organizational body that monitors compliance, and the idea that the trust in the Common Criteria IT-security certifications will be maintained across geopolitical boundaries.[13]
In 2017, the ROCA vulnerability was found in a number of Common Criteria-certified smart card products. The vulnerability highlighted several shortcomings of the Common Criteria certification scheme:[14]
- The vulnerability resided in a homegrown RSA key generation algorithm that had not been published and analyzed by the cryptanalysis community (a simplified sketch of the resulting public-key fingerprint check appears after this list). However, the testing laboratory TÜV Informationstechnik GmbH (TÜViT) in Germany approved its use and the certification body BSI in Germany issued Common Criteria certificates for the vulnerable products. The Security Target of the evaluated product claimed that RSA keys are generated according to the standard algorithm. In response to this vulnerability, BSI now plans to improve transparency by requiring that the certification report at least specify whether the implemented proprietary cryptography is not exactly conformant to a recommended standard. BSI does not plan to require the proprietary algorithm to be published in any way.
- Even though the certification bodies are now aware that the security claims specified in the Common Criteria certificates no longer hold, neither ANSSI nor BSI has revoked the corresponding certificates. According to BSI, a certificate can only be withdrawn when it was issued under misconception, e.g., when it turns out that wrong evidence was submitted. After a certificate is issued, it must be presumed that the validity of the certificate decreases over time as improved and new attacks are discovered. Certification bodies can issue maintenance reports and even perform a re-certification of the product. These activities, however, have to be initiated and sponsored by the vendor.
- While several Common Criteria certified products have been affected by the ROCA flaw, vendors' responses in the context of certification have been different. For some products a maintenance report was issued, which states that only RSA keys with a length of 3072 and 3584 bits have a security level of at least 100 bits, while for some products the maintenance report does not mention that the change to the TOE affects certified cryptographic security functionality, but concludes that the change is at the level of guidance documentation and has no effect on assurance.
- According to BSI, the users of the certified end products should have been informed of the ROCA vulnerability by the vendors. This information, however, did not reach the Estonian authorities in a timely manner; they had deployed the vulnerable product on more than 750,000 Estonian identity cards.
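Because the flawed generator produced primes confined to a small multiplicative subgroup, affected keys can be recognized from the public modulus alone. The Python sketch below illustrates that fingerprint idea only; it is not the published detection tool, and the set of small primes is an arbitrary illustrative choice rather than the parameters used by real detectors.

```python
# Simplified sketch of a ROCA-style public-key fingerprint, assuming the
# published structure of affected keys: both primes are congruent to a power
# of 65537 modulo a primorial, so the modulus N is too. Real detection tools
# use a larger, fixed set of small primes with precomputed residue masks;
# the primes below are an illustrative subset only.
import math

GENERATOR = 65537
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97]

def allowed_residues(prime):
    """Residues of the cyclic subgroup generated by 65537 modulo `prime`."""
    residues, x = set(), 1
    while True:
        residues.add(x)
        x = (x * GENERATOR) % prime
        if x == 1:
            return residues

def looks_fingerprinted(modulus):
    """True if N mod r stays inside the subgroup <65537> for every small prime r.

    Affected moduli always pass; a random modulus fails with high probability,
    so a positive result is a strong (but not absolute) indicator.
    """
    return all(modulus % r in allowed_residues(r) for r in SMALL_PRIMES)

if __name__ == "__main__":
    # Toy modulus with the vulnerable structure: product of two numbers that
    # are powers of 65537 modulo the primorial of the small primes.
    M = math.prod(SMALL_PRIMES)
    p_like = pow(GENERATOR, 12, M)
    q_like = pow(GENERATOR, 29, M)
    print(looks_fingerprinted(p_like * q_like))          # True
    print(looks_fingerprinted(GENERATOR * 12345678901 + 7))  # almost certainly False
```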
Alternative approaches
Throughout the lifetime of CC, it has not been universally adopted even by the creator nations; in particular, cryptographic approvals have been handled separately, such as by the Canadian/US implementation of FIPS 140 and the CESG Assisted Products Scheme (CAPS)[15] in the UK.
The UK has also produced a number of alternative schemes when the timescales, costs and overheads of mutual recognition have been found to be impeding the operation of the market:
- The CESG System Evaluation (SYSn) and Fast Track Approach (FTA) schemes for assurance of government systems rather than generic products and services, which have now been merged into the CESG Tailored Assurance Service (CTAS) [16]
- The CESG Claims Tested Mark (CCT Mark), which is aimed at handling less exhaustive assurance requirements for products and services in a cost and time efficient manner.
In early 2011, NSA/CSS published a paper by Chris Salter, which proposed a Protection Profile oriented approach towards evaluation. In this approach, communities of interest form around technology types which in turn develop protection profiles that define the evaluation methodology for the technology type.[17] The objective is a more robust evaluation. There is some concern that this may have a negative impact on mutual recognition.[18]
In September 2012, the Common Criteria community published a Vision Statement[19] implementing to a large extent Chris Salter's thoughts from the previous year. Key elements of the Vision included:
- Technical Communities will be focused on authoring Protection Profiles (PP) that support their goal of reasonable, comparable, reproducible and cost-effective evaluation results
- Evaluations should be done against these PPs if possible; if not, mutual recognition of Security Target evaluations would be limited to EAL2.
See also
References
- ^ "Publications: CC Portal". Retrieved 2025-07-10.
- ^ "Common Criteria - Communication Security Establishment". Archived from the original on 2021-02-01. Retrieved 2015-03-02.
- ^ "Common Criteria Certified Products". Retrieved 2023-12-30.
- ^ "Indian Common Criteria Certification Scheme (IC3S) Overview". Retrieved 2023-12-30.
- ^ "Members of the CCRA". The Common Criteria Portal. Archived from the original on 2008-08-22.
- ^ "Arrangement on the Recognition of Common Criteria Certificates in the field of Information Technology Security" (PDF). 2014-07-02. Retrieved 2023-12-30.
- ^ "Common Criteria Management Committee Vision Statement" (PDF). 2012-09-01. Retrieved 2023-12-30.
- ^ "Versions of Windows obtain Common Criteria EAL level 4+". Network Information Security & Technology News. 2005-12-14. Archived from the original on 2006-10-14.
- ^ "Under Attack: Common Criteria has loads of critics, but is it getting a bum rap". Government Computer News. Archived 2021-04-23 at the Wayback Machine. Retrieved 2007-12-14.
- ^ Wheeler, David (2006-12-11). "Free-Libre / Open Source Software (FLOSS) and Software Assurance / Software Security" (PDF). Retrieved 2023-12-30.
- ^ Wäyrynen, J.; Bodén, M.; Boström, G. (2004). "Security Engineering and eXtreme Programming: An Impossible Marriage?". Extreme Programming and Agile Methods - XP/Agile Universe 2004. Lecture Notes in Computer Science. Vol. 3134. pp. 117–128. doi:10.1007/978-3-540-27777-4_12. ISBN 978-3-540-22839-4.
- ^ Beznosov, Konstantin; Kruchten, Philippe (2005-10-16). "Towards Agile Security Assurance". Retrieved 2023-12-30.
- ^ Kallberg, Jan (2012-08-01). "Common Criteria meets Realpolitik - Trust, Alliances, and Potential Betrayal" (PDF). Retrieved 2023-12-30.
- ^ Parsovs, Arnis (2021-03-03). Estonian Electronic Identity Card and its Security Challenges (PhD) (in Estonian). University of Tartu. pp. 141–143. Retrieved 2023-12-30.
- ^ "CAPS: CESG Assisted Products Scheme". Archived from the original on 2008-08-01.
- ^ Infosec Assurance and Certification Services (IACS) Archived February 20, 2008, at the Wayback Machine
- ^ Salter, Chris (2011-01-10). "Common Criteria Reforms: Better Security Products Through Increased Cooperation with Industry" (PDF). Archived from the original (PDF) on April 17, 2012.
- ^ Brickman, Joshua (2011-03-11). "Common Criteria "Reforms"—Sink or Swim-- How should Industry Handle the Revolution Brewing with Common Criteria?". Archived from the original on 2012-05-29.
- ^ "CCRA Management Committee Vision statement for the future direction of the application of the CC and the CCRA" (DOCX). 2012-09-18. Retrieved 2023-12-30.
External links
- The official website of the Common Criteria Project
- The Common Criteria standard documents
- List of Common Criteria evaluated products
- List of Licensed Common Criteria Laboratories
- Towards Agile Security Assurance
- Important Common Criteria Acronyms
- Common Criteria Users Forum
- Additional Common Criteria Information on Google Knol
- OpenCC Project – free Apache license CC docs, templates and tools
- Common Criteria Quick Reference Card
- Common Criteria process cheatsheet
- Common Criteria process timeline
Common Criteria
Fundamentals
Definition and Purpose
Common Criteria, designated as ISO/IEC 15408, constitutes an international standard establishing a unified framework for evaluating the security of information technology (IT) products and systems. It delineates criteria for specifying security functional requirements (SFRs)—which outline the intended security behaviors such as access control, authentication, and data protection—and security assurance requirements (SARs)—which verify that these functions are implemented correctly and resiliently against threats. This standard, comprising multiple parts, outlines evaluation concepts, principles, and a general model applicable to diverse IT entities including hardware, software, firmware, and networked systems.[1][6]

The primary purpose of Common Criteria is to foster confidence in the security attributes of IT products by mandating a structured, evidence-based evaluation process conducted by accredited laboratories. This process mitigates risks associated with vulnerabilities by ensuring products undergo rigorous testing proportional to their operational context, thereby supporting procurement decisions in high-stakes environments like government agencies and critical infrastructure. Unlike vendor self-attestations, Common Criteria emphasizes independent validation to detect flaws in design, implementation, and documentation, reducing the likelihood of exploitable weaknesses in deployed systems.[7][8]

Furthermore, the standard enables international harmonization of security evaluations, minimizing redundant assessments across jurisdictions through mutual recognition agreements among participating nations. Originating from the need to reconcile disparate national schemes, it promotes interoperability and trust in certified products, particularly for cross-border trade and defense applications, while allowing stakeholders to tailor evaluations via protection profiles or security targets to specific threats.[1][6]

Key Concepts
The Common Criteria (CC) evaluation framework, standardized as ISO/IEC 15408, establishes a structured approach to specifying, implementing, and evaluating security functions within information technology (IT) products and systems.[1] It distinguishes between security functional requirements (SFRs), which define the specific security behaviors and capabilities a product must exhibit—such as access control, authentication, or data protection—and security assurance requirements (SARs), which outline the evidence and processes needed to demonstrate that the product correctly implements those functions without vulnerabilities.[7] This separation enables stakeholders to articulate precise security needs while ensuring evaluations focus on both intended functionality and reliability under scrutiny.[7]

At the heart of any CC evaluation is the Target of Evaluation (TOE), defined as the particular IT product or system—potentially including hardware, software, firmware, or a combination—subject to security assessment.[7] The TOE's boundaries and scope are explicitly delineated to isolate the evaluated components from extraneous elements, ensuring the assessment targets only the claimed secure elements.[7] SFRs for the TOE are drawn from a standardized catalog in CC Part 2, covering 11 security function classes like cryptographic support (FCS), identification and authentication (FIA), and security management (FMT), each with hierarchical components of increasing rigor.[9] SARs, conversely, are organized into assurance families such as development, guidance documents, life-cycle support, testing, and vulnerability assessment, providing a methodology to verify SFR conformance through evidence like design documentation or penetration testing.[9]

Protection Profiles (PPs) serve as reusable templates that standardize SFRs and SARs for a class of TOEs sharing similar security needs, such as operating systems or firewalls, thereby facilitating consistent evaluations across vendors.[7] A PP claims conformance to base PPs or modules, enabling modular extensions for evolving threats, and includes threats, objectives, and organizational security policies to contextualize requirements.[7] In contrast, a Security Target (ST) is TOE-specific, detailing the SFRs and SARs claimed by the developer, often referencing a PP for baseline assurance while allowing refinements or augmentations tailored to the product's implementation.[7] STs must demonstrate how the TOE addresses PP-derived objectives or directly mitigate identified threats, forming the basis for certification claims.[7]

Evaluation Assurance Levels (EALs) provide a graduated scale from 1 to 7, quantifying the rigor of SAR application, where EAL1 offers basic functional testing and EAL7 demands formally verified design and testing under strict configuration management.[9] Higher EALs incorporate more comprehensive evidence, such as semi-formal or formal models, but do not inherently imply superior security functionality—only deeper assurance of correct implementation.[9] Conformance to CC requires matching SFRs and SARs to these elements, with evaluations conducted by accredited labs to validate claims against the CC methodology in ISO/IEC 18045. This framework promotes repeatability and international recognition through the Common Criteria Recognition Arrangement (CCRA), effective for certificates up to EAL4 or equivalent PP-based assurances across 30+ member countries as of 2022.[10]

Evaluation Framework
Protection Profiles and Security Targets
A Protection Profile (PP) is a formal document that specifies implementation-independent security requirements for a class of IT products or systems addressing a defined set of security problems, threats, and objectives.[11] PPs are typically developed by user communities, governments, or industry groups to establish reusable templates of security functional requirements (SFRs) and assurance requirements (SARs) under ISO/IEC 15408, enabling consistent evaluations across similar products without tying to vendor-specific implementations.[8] For instance, PPs exist for categories like mobile devices, firewalls, or application software, outlining threats such as unauthorized data access and corresponding countermeasures.[11]

In contrast, a Security Target (ST) is an implementation-dependent document tailored to a specific product or system, known as the Target of Evaluation (TOE), that details its SFRs and SARs for certification purposes.[11] The ST often claims conformance to one or more relevant PPs, refining their requirements to match the TOE's design, such as specifying exact cryptographic algorithms or access controls implemented in the product.[12] It serves as the foundational reference for the evaluation process, where laboratories verify that the TOE meets the stated security claims through testing and analysis.[8]

PPs and STs interact hierarchically in evaluations: a TOE's ST must demonstrate PP conformance if claimed, ensuring the product meets baseline class-wide standards while allowing vendor-specific enhancements or limitations.[11] This structure promotes interoperability and trust in certified products, as STs are publicly available post-certification via repositories like the Common Criteria Portal, which as of 2023 lists thousands of validated STs. Non-conformance to a PP limits certification scope, potentially reducing market applicability in procurement scenarios favoring PP-aligned products.[13]

Evaluation Assurance Levels
The Evaluation Assurance Levels (EALs) comprise a hierarchy of seven predefined packages of security assurance requirements defined in the Common Criteria standard (ISO/IEC 15408), specifying the rigor of evaluation applied to a Target of Evaluation (TOE) to confirm it satisfies its claimed security functions.[9] These levels range from EAL1, which involves basic functional testing, to EAL7, which demands formal verification of design and implementation; each successive level builds on the previous by incorporating additional assurance activities such as deeper vulnerability analysis, configuration management, and evidence of development lifecycle controls.[9] EALs provide a metric of evaluation thoroughness rather than intrinsic product security strength, meaning a higher EAL offers greater confidence in the accuracy of the TOE's security claims but does not imply superiority over lower-EAL products with more robust functional protections.[14]

The assurance requirements for each EAL are detailed in Common Criteria Part 3, encompassing classes such as security target evaluation, development evidence, testing, vulnerability assessment, and guidance documentation.[9] Lower levels (EAL1–EAL4) emphasize methodical testing and review suitable for commercial IT products, while upper levels (EAL5–EAL7) require semi-formal or formal modeling, often using mathematical proofs, and are typically reserved for high-assurance systems like those in military or critical infrastructure applications.[8] Evaluations at EAL4 or below are mutually recognized under the Common Criteria Recognition Arrangement (CCRA), facilitating international acceptance up to that threshold as of the 2022 updates.[2]

The following table summarizes the seven EALs, including their core characterization and key incremental assurance elements; a schematic sketch of how such packages can be compared follows the table:

| EAL | Designation | Key Characteristics |
|---|---|---|
| EAL1 | Functionally tested | Basic testing of security functions against specified requirements; minimal developer evidence beyond functional specifications.[9] |
| EAL2 | Structurally tested | Adds structural code testing, independent vulnerability analysis, and configuration item control to EAL1.[9] |
| EAL3 | Methodically tested and checked | Incorporates methodical testing with developer risk analysis and evidence of secure development environment.[9] |
| EAL4 | Methodically designed, tested, and reviewed | Requires methodical design with detailed design reviews, semi-formal testing, and comprehensive vulnerability assessments.[9] |
| EAL5 | Semi-formally designed and tested | Builds on EAL4 with semi-formal design and interface specifications, plus enhanced covert channel analysis.[9] |
| EAL6 | Semi-formally verified design and tested | Adds semi-formal verification of design subsystems and structured source code analysis for flaws.[9] |
| EAL7 | Formally verified design and tested | Demands formal (mathematical) verification of design, implementation consistency, and exhaustive testing against a formal model.[9] |
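As a schematic illustration of how one assurance package can be compared against another (and of the "EAL n augmented" convention), the following Python sketch treats each SAR component as a family plus a hierarchical level. The listed package contents are illustrative subsets only, not the complete Part 3 definitions of EAL2 and EAL4.

```python
# Schematic comparison of assurance packages, assuming the CC convention that
# components within a family are hierarchically ordered (e.g. ADV_FSP.4
# exceeds ADV_FSP.2). Package contents below are illustrative subsets, not
# the complete Part 3 definitions.

def parse(component):
    """Split 'ADV_FSP.4' into its family and hierarchical level."""
    family, level = component.rsplit(".", 1)
    return family, int(level)

def meets_or_exceeds(claimed, baseline):
    """True if `claimed` covers every family in `baseline` at >= the baseline level."""
    have = {}
    for comp in claimed:
        family, level = parse(comp)
        have[family] = max(level, have.get(family, 0))
    return all(have.get(parse(c)[0], 0) >= parse(c)[1] for c in baseline)

EAL2_SUBSET = {"ADV_ARC.1", "ADV_FSP.2", "AGD_OPE.1", "ALC_CMC.2", "ATE_IND.2", "AVA_VAN.2"}
EAL4_SUBSET = {"ADV_ARC.1", "ADV_FSP.4", "AGD_OPE.1", "ALC_CMC.4", "ATE_IND.2", "AVA_VAN.3"}

if __name__ == "__main__":
    st_claims = EAL4_SUBSET | {"ALC_FLR.2"}            # an 'EAL4 augmented' style claim
    print(meets_or_exceeds(st_claims, EAL2_SUBSET))    # True: EAL4-level claims cover EAL2
    print(meets_or_exceeds(EAL2_SUBSET, EAL4_SUBSET))  # False: EAL2 does not cover EAL4
```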
Certification Process
The Common Criteria certification process entails a rigorous, third-party evaluation of an IT product's security claims, overseen by national schemes participating in the Common Criteria Recognition Arrangement (CCRA). Developers begin by selecting a national certification body—such as the National Information Assurance Partnership (NIAP) in the United States or the Bundesamt für Sicherheit in der Informationstechnik (BSI) in Germany—and engaging an accredited testing laboratory to conduct the assessment.[16][17] The developer defines the Target of Evaluation (TOE), which delineates the specific product components subject to scrutiny, selects an Evaluation Assurance Level (EAL) from EAL1 (functionally tested) to EAL7 (formally verified design and tested), and authors a Security Target (ST) document outlining the TOE's security functional requirements, assurance measures, and threat mitigations.[18][17] Conformance to a Protection Profile (PP), if applicable, standardizes requirements for particular product types, such as operating systems or cryptographic modules, by incorporating predefined security objectives.[8]

The developer and laboratory collaborate on an Evaluation Work Plan (EWP), approved by the certification body, which details testing scope, evidence requirements, and timelines.[17] Evaluation proceeds iteratively per the Common Methodology for Information Technology Security Evaluation (CEM version 3.1 Release 7, updated as of 2022), encompassing assurance classes like security target evaluation, development, guidance documents, life-cycle support, testing, vulnerability assessment, and site security.[18] The laboratory examines design documentation, performs independent testing, identifies vulnerabilities, and issues activity reports (ARs) and observation reports (e.g., for evaluator evidence or developer issues) throughout, ensuring all claims receive verdicts of pass, fail, or inconclusive.[17][16]

The laboratory compiles findings into an Evaluation Technical Report (ETR), submitted to the certification body alongside developer-supplied evidence.[8] The certification body independently validates the ETR, verifies compliance with Common Criteria parts 1-3, and addresses any discrepancies through developer responses or re-evaluation.[16] If the TOE satisfies the ST claims at the chosen EAL, the body issues a certification report and certificate, valid for the specific TOE version and typically two years, after which recertification may be required for updates.[17] Certificates are published on the international Common Criteria portal, enabling mutual recognition across CCRA members for evaluations up to EAL4 (or EAL2 for certain collaborative schemes as of 2023), facilitating global procurement without redundant testing.[2] Higher EALs beyond EAL4 lack automatic mutual recognition due to varying national sensitivities for formal methods and penetration testing depth.[10]

Historical Development
Origins in National Standards
The Common Criteria originated from efforts to harmonize disparate national frameworks for evaluating information technology security, which had proliferated in the 1980s and early 1990s, complicating cross-border recognition of certified products.[3] In the United States, the Trusted Computer System Evaluation Criteria (TCSEC), developed by the National Security Agency under the Department of Defense, provided the foundational model; its initial version was released on August 15, 1983, with the final edition published on December 26, 1985, as DoD 5200.28-STD.[19] The TCSEC categorized systems into four divisions (D through A) based on increasing levels of assurance against unauthorized disclosure, emphasizing design verification, testing, and documentation for higher classes like A1.[20]

European nations pursued a separate approach with the Information Technology Security Evaluation Criteria (ITSEC), jointly developed by France, Germany, the Netherlands, and the United Kingdom to address perceived limitations in the TCSEC's focus on confidentiality.[3] ITSEC version 1.0 emerged around 1990, evolving to version 1.2 by June 28, 1991, which decoupled functional security requirements (F levels from F1 to F10) from assurance levels (E0 to E6), allowing more flexible evaluations of integrity, availability, and non-repudiation alongside confidentiality.[21] Canada's Trusted Computer Product Evaluation Criteria (CTCPEC), published in version 3.0 on January 1, 1993, by the Communications Security Establishment, closely mirrored the TCSEC structure but incorporated adaptations for broader product types and levels A1 through C2.[22]

These standards, while advancing secure system evaluation domestically, lacked mutual acceptance internationally, prompting collaborative alignment in June 1993 among sponsoring bodies from Canada (CTCPEC), the United States (TCSEC), and Europe (ITSEC, including French criteria).[23] Governments of Canada, France, Germany, the Netherlands, the United Kingdom, and the United States formalized this effort, producing Common Criteria version 1.0 in 1994 as a unified, vendor-neutral framework that retained core elements like assurance levels while introducing protection profiles for reusable requirements.[3] This synthesis addressed redundancies and gaps, such as ITSEC's functional-assurance split influencing CC's structure, without endorsing any single national model's assumptions as universally optimal.[24]

Standardization and Initial Adoption
The Common Criteria emerged from efforts to harmonize disparate national security evaluation standards, including the U.S. Trusted Computer System Evaluation Criteria (TCSEC), the European Information Technology Security Evaluation Criteria (ITSEC), and the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC).[3] In 1991, representatives from Canada, France, Germany, the Netherlands, the United Kingdom, and the United States initiated collaborative work to develop a unified framework, culminating in the release of Common Criteria version 1.0 in June 1994.[25] This initial version provided a common language for specifying security requirements and assurance levels, facilitating cross-border evaluations without requiring redundant testing.[26]

To achieve broader international legitimacy, the Common Criteria document was submitted to the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for adoption as a formal standard.[3] This process aligned the criteria with ISO procedures, resulting in its publication as ISO/IEC 15408 (parts 1, 2, and 3) in 1999, concurrent with the release of Common Criteria version 2.1.[25] The standardization emphasized modular security functional requirements and graduated assurance levels, enabling vendors to target specific threats while allowing evaluators to assess implementation rigor consistently.[12]

Initial adoption was driven by the founding nations, who established the Common Criteria Recognition Arrangement (CCRA) in May 1999 to promote mutual acceptance of certifications up to Evaluation Assurance Level 4 (EAL4).[3] This arrangement, signed by Australia, Canada, Finland, France, Germany, Greece, Italy, Japan, the Netherlands, Norway, Spain, the United Kingdom, the United States, and others, reduced barriers for IT products in government procurement and commercial markets.[27] Early certifications, such as those for operating systems and firewalls in the late 1990s, demonstrated practical uptake, though adoption was initially limited to high-security sectors due to the resource-intensive evaluation process.[26] By 2000, over 50 certifications had been issued under the framework, signaling growing vendor participation despite the absence of mandatory requirements in most jurisdictions.[25]

Versions and Subsequent Updates
The Common Criteria was initially released as version 1.0 in 1994 by a collaboration of governments from Canada, France, Germany, the Netherlands, the United Kingdom, and the United States, aiming to harmonize disparate national evaluation criteria such as the European ITSEC, U.S. TCSEC (Orange Book), and Canadian CTCPEC.[3] Version 2.1 followed in 1999, marking the standard's adoption as the international ISO/IEC 15408 and expanding its scope for broader endorsement, with refinements to security functional requirements and evaluation assurance levels to support more consistent international certifications.[3] Subsequent minor updates, including versions 2.2 and 2.3, focused on clarifying evaluation methodologies and addressing practical issues in applying the framework to diverse IT products.[28]

Common Criteria version 3.1, introduced in the mid-2000s with progressive revisions up to Revision 5, served as the dominant iteration for over a decade, aligning with ISO/IEC 15408:2009 and incorporating enhancements such as extended assurance packages for higher evaluation levels, improved guidance on vulnerability assessments, and adaptations for emerging technologies like networked systems.[3][29] These revisions emphasized rigor in evidence collection and testing while maintaining backward compatibility for protection profiles developed under prior versions.

In 2022, the Common Criteria framework advanced to version 2022 (CC2022), published as ISO/IEC 15408:2022 across five parts covering concepts, functional requirements, assurance requirements, extended components, and methodology linkages.[1][18] This update, accompanied by revisions to the Common Evaluation Methodology (ISO/IEC 18045:2022), introduced new evaluation activities for contemporary threats like supply chain risks and cloud environments, streamlined assurance derivations for protection profiles, and deprecated certain legacy elements to enhance efficiency without compromising security claims.[7][30] Transition provisions allow evaluations under CC 3.1 Revision 5 to continue, with security targets based on CC 3.1-certified protection profiles accepted under CC2022 until December 31, 2027, facilitating gradual adoption.[31]

International Structure
Accredited Testing Laboratories
Accredited Testing Laboratories (ATLs), also known as Common Criteria Testing Laboratories (CCTLs) in certain jurisdictions like the United States, are independent third-party facilities authorized to conduct security evaluations of information technology products under the Common Criteria (CC) standard (ISO/IEC 15408).[32] These laboratories perform detailed assessments to verify that products meet the security functional and assurance requirements defined in Protection Profiles or Security Targets, including vulnerability analysis, testing of cryptographic modules, and review of design documentation.[33] Their evaluations form the technical basis for certification decisions by national schemes, ensuring impartiality as vendors select and contract with labs rather than schemes directly performing tests.[34]

Accreditation for ATLs requires compliance with ISO/IEC 17025 general requirements for testing laboratory competence, supplemented by CC-specific criteria such as proficiency in evaluation methodologies and ongoing proficiency testing.[35] In the United States, the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CCTLs, which must also satisfy additional requirements from the National Information Assurance Partnership (NIAP) for Common Criteria Evaluation and Validation Scheme (CCEVS) participation.[36] As of recent listings, NIAP has approved nine such labs, including facilities operated by entities like Leidos and Gossamer Security, capable of handling evaluations up to Evaluation Assurance Level (EAL) 4 or higher depending on scope.[32] Internationally, under the Common Criteria Recognition Arrangement (CCRA), labs are licensed by participating national certification bodies, with over 50 licensed facilities worldwide as tracked by the Common Criteria Portal, enabling mutual recognition of evaluations across 30+ member countries.[37]

The evaluation process at accredited labs follows a structured workflow: upon vendor submission of a Security Target and product evidence, labs conduct design reviews, functional testing, and penetration testing per the CC methodology, culminating in an Evaluation Technical Report (ETR) submitted to the certification body for validation.[35] Labs maintain independence through contractual separations from vendors and undergo regular audits, peer reviews, and proficiency assessments to uphold evaluation rigor; for instance, NVLAP requires annual commercial evaluations and site visits.[33] This accreditation framework aims to foster confidence in evaluation outcomes, though critics note variability in lab capacity and potential bottlenecks in high-demand areas like cryptographic validations.[38]

Mutual Recognition Arrangement
The Common Criteria Recognition Arrangement (CCRA) is an international agreement among participating governments to mutually recognize security evaluation certificates for information technology products and protection profiles issued under the Common Criteria framework. Established to promote consistent evaluation standards and reduce redundant testing, the CCRA enables signatory nations to accept certified products for government procurement and use without requiring additional national evaluations, provided the certificates meet specified criteria. This arrangement advances objectives such as enhancing confidence in IT security, increasing the availability of evaluated products, and improving the efficiency and cost-effectiveness of certification processes.[10]

The CCRA originated from efforts to harmonize national IT security evaluation schemes in the late 1990s, with the foundational Arrangement on the Recognition of Common Criteria Certificates signed on May 23, 2000, by initial participants including Canada, Finland, France, Germany, Greece, Italy, the Netherlands, Norway, Portugal, Spain, the United Kingdom, and the United States. Subsequent updates have expanded membership and refined procedures, including periodic reviews by a Management Committee to assess compliance, admit new certification bodies, and adapt to technological advancements. As of 2025, the CCRA comprises 40 members: 18 authorizing members capable of issuing mutually recognized certificates—Australia, Canada, France, Germany, India, Italy, Japan, Malaysia, Netherlands, Norway, Poland, Qatar, Republic of Korea, Singapore, Spain, Sweden, Turkey, and the United States—and 22 consuming members that recognize but do not issue such certificates, including Austria, Bangladesh, Belgium, Cyprus, Czech Republic, Denmark, Ethiopia, Finland, Greece, Hungary, Indonesia, Israel, Jordan, New Zealand, Pakistan, Slovakia, Ukraine, and the United Kingdom. Recent developments include Cyprus's acceptance as a consuming participant and discussions on coexistence with the European Union's Common Criteria scheme (EUCC).[39][40][41]

Under the CCRA, mutual recognition applies to certificates up to Evaluation Assurance Level 4 (EAL4) or equivalent assurance packages, allowing products evaluated by accredited laboratories and validated by an authorizing participant's certification body to be accepted across member nations for relevant applications. Certificates must adhere to CCRA operating procedures, including oversight by national schemes, periodic laboratory assessments, and provisions for assurance continuity in product updates. Higher assurance levels beyond EAL4 are not automatically recognized and may necessitate supplementary evaluations or bilateral agreements. The arrangement is overseen by bodies such as the United States' National Information Assurance Partnership (NIAP) and Germany's Federal Office for Information Security (BSI), ensuring evaluations meet rigorous, repeatable standards.[42][43][44]

Practical and Economic Aspects
Certification Requirements
Certification of an IT product under Common Criteria necessitates sponsorship by the developer, who must define a Security Target (ST) documenting the Target of Evaluation (TOE), its security objectives, functional requirements from Part 2 of ISO/IEC 15408, and assurance requirements from Part 3 up to a specified Evaluation Assurance Level (EAL).[45] The ST may claim conformance to one or more Protection Profiles (PPs), which establish reusable sets of security requirements for specific technology types, ensuring the TOE addresses predefined threats and policies.[8] Evaluations must follow the Common Methodology for IT Security Evaluation (CEM), applying testing and analysis scaled to the claimed EAL, with EAL1 requiring basic functional testing and documentation review, progressing to EAL7's formally verified design and testing.[46]

The developer selects a laboratory accredited by a national certification scheme participating in the Common Criteria Recognition Arrangement (CCRA), comprising 31 member countries as of 2021, to conduct the independent evaluation.[2] Laboratories verify TOE implementation through source code review, configuration testing, vulnerability analysis, and evidence of development processes, producing an Evaluation Technical Report (ETR) that details findings and any vulnerabilities.[45] For U.S. evaluations, conformance to NIAP-approved PPs is mandatory, extending to all requirements within the PP and any extended packages.[45]

Upon ETR completion, the laboratory submits it to the national certification body (e.g., BSI in Germany or ANSSI in France), which reviews for completeness, independence, and adherence to CCRA governance documents before issuing a certificate.[47] Certificates specify the TOE version, ST or PP claims, EAL achieved, and validity period—typically up to five years from issuance, contingent on no significant changes and developer maintenance reporting.[47] CCRA mutual recognition applies to certificates up to EAL4 (or equivalent PP-based assurances), facilitating cross-border acceptance without re-evaluation.[2]

Additional requirements include site security audits for higher EALs (EAL4+), where the developer's facilities undergo certification to protect sensitive TOE data, and assurance continuity processes for minor updates to avoid full re-evaluation.[44] All evaluations conform to Common Criteria version 3.1 Release 5, the current iteration since 2017, incorporating updates to PPs and methodology for emerging threats like mobile devices and cloud services.[31] Non-conformance in any SFR or SAR results in denial, emphasizing rigorous, evidence-based validation over self-attestation.[48]

Costs and Timeframes
The costs associated with Common Criteria certification encompass laboratory evaluation fees, consulting services, internal development and documentation efforts, and potential remediation of identified vulnerabilities, often totaling in the hundreds of thousands of U.S. dollars for mid-level assurances.[49] Higher Evaluation Assurance Levels (EALs) demand more rigorous testing, vulnerability analysis, and evidence provision, escalating expenses; for instance, EAL4 evaluations frequently range from $300,000 to $750,000, reflecting increased scrutiny of design and implementation.[50] These figures exclude indirect costs such as delayed product market entry or opportunity costs from resource allocation, which can amplify the financial burden for vendors pursuing certification.[51]

Timeframes for certification similarly scale with EAL and product scope, typically spanning 6 to 24 months from initial preparation to issuance of the certificate.[52] Preparation phases, including Security Target development and evidence assembly, can consume 3 to 6 months, followed by formal evaluation lasting 4 to 12 months depending on the level; EAL2 processes often conclude in 4 to 6 months, EAL3 in 6 to 9 months, and EAL4 in 7 to 12 months or longer due to methodical design reviews and penetration testing.[52] Delays frequently arise from iterative flaw remediation or coordination with certification bodies, with overall timelines averaging one year for many conformance claims.[53]

| EAL Level | Typical Timeframe | Typical Cost Range (USD) |
|---|---|---|
| EAL2 | 4-6 months | 150,000 |
| EAL3 | 6-9 months | 200,000 |
| EAL4 | 7-12 months | 750,000 |
Efficacy and Criticisms
Claimed Benefits of Certification
Certification under the Common Criteria framework is claimed to provide vendors and users with assurance that IT products have undergone a standardized, rigorous evaluation of their security functionality and assurance measures, fostering confidence in their ability to mitigate specified threats.[54] This process, aligned with ISO/IEC 15408, involves independent testing by accredited laboratories against predefined Protection Profiles or Security Targets, purportedly ensuring consistent and repeatable assessments across diverse products like operating systems, cryptographic modules, and network devices.[55] Proponents argue that such evaluations reveal vulnerabilities through structured analysis, design review, and testing, thereby enabling informed procurement decisions in high-stakes environments.[54]

A primary asserted advantage is the facilitation of international market access via the Common Criteria Recognition Arrangement (CCRA), established in 1999 and encompassing over 30 participating countries as of 2023, which mandates mutual acceptance of certificates up to Evaluation Assurance Level 4 (or higher for specific schemes).[56] This arrangement eliminates the need for duplicate certifications in signatory nations, reducing evaluation redundancies and trade barriers for secure ICT products, as evidenced by streamlined exports of certified hardware and software to government and regulated sectors worldwide.[57] For instance, a product certified in the United States by the National Information Assurance Partnership (NIAP) is recognized in Europe under schemes like those of the German Federal Office for Information Security (BSI), purportedly accelerating global deployment while maintaining baseline security equivalence.[58]

Higher EAL certifications, ranging from EAL1 (basic functional testing) to EAL7 (formally verified design and testing), are said to deliver escalating degrees of scrutiny, with EAL4+ often cited as sufficient for moderate-risk applications yet indicative of enhanced reliability through comprehensive vulnerability analysis and configuration management.[51] Advocates, including certification laboratories, claim this builds stakeholder trust by signaling superior development practices and flaw detection, distinguishing certified products in competitive bids for defense, finance, and critical infrastructure contracts.[59] Compliance with Common Criteria is frequently mandated by regulations such as the U.S. Department of Defense's Approved Products List or the European Union's cybersecurity requirements, allegedly lowering acquisition risks and supporting long-term operational security.[60]

The framework is also promoted as fostering transparency in security claims, as certification mandates detailed public documentation of threats addressed, countermeasures implemented, and residual risks, purportedly minimizing misunderstandings between developers, evaluators, and end-users.[61] This structured approach is asserted to balance security assurances with practical costs, aiding resource allocation in evaluations without overemphasizing unattainable perfection.[54]

Empirical Limitations and Failures
Despite rigorous evaluation processes, products certified under Common Criteria have repeatedly exhibited critical vulnerabilities after certification, undermining claims of enhanced security assurance. For instance, the ROCA vulnerability, disclosed in October 2017, affected RSA key generation in Infineon Technologies smart cards certified to EAL4+ and higher levels, enabling attackers to recover private keys from public keys because of flawed prime-number generation present since at least 2012.[62] Similarly, the Minerva attack, identified in 2019, exploited timing side channels in ECDSA implementations on programmable smart cards certified under Common Criteria, allowing nonce recovery and private key extraction through loop-bound leaks that evaded standard testing.[63] The TPM-Fail vulnerabilities, presented at USENIX Security 2020, demonstrated timing and lattice attacks on Trusted Platform Modules, including Infineon SLB9670 chips certified to EAL4+, which permitted extraction of hundreds of RSA bits and full key recovery in practical scenarios despite evaluations intended to detect such leakage.[64]

Large-scale empirical analyses of certified products reveal persistent security gaps, with no demonstrable correlation between higher Evaluation Assurance Levels (EALs) and reduced vulnerability incidence. A 2024 study mapping National Vulnerability Database entries to over 3,000 Common Criteria certificates found that high-assurance products (EAL4+) hosted critical flaws, including private key recoveries, and attributed this to certification's focus on fixed configurations that fail to capture real-world deployments or evolving threats. Certification processes permit known vulnerabilities if they are deemed non-exploitable in the evaluated setup, yet these often propagate via component reuse across products, transmitting flaws without re-evaluation.[65]

U.S. Government Accountability Office assessments have highlighted a broader lack of empirical evidence for Common Criteria's effectiveness in bolstering IT security, noting in 2006 the absence of performance metrics or data linking certifications to reduced risks in federal systems.[66] Industry surveys echo this, with vendors and experts reporting that the framework provides inadequate vulnerability detection, as evidenced by post-certification discoveries in heavily scrutinized modules, while its high costs, often exceeding hundreds of thousands of dollars over 12 to 24 months, yield marginal assurance gains.[4] These failures stem from methodological constraints, including evaluator incentives tied to vendors and insufficient coverage of dynamic attack vectors such as side channels, making certifications more indicative of compliance than of causal security improvement.
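The ROCA flaw discussed above is notable in that it can be detected from the public key alone: moduli produced by the affected key generator fall into a small multiplicative subgroup modulo many small primes. The following is a minimal sketch of that fingerprint test, assuming a shortened, purely illustrative prime list and hypothetical function names; the published detection tools use a longer prime list and precomputed tables, so this simplification can produce false positives.

```python
# Simplified sketch of the ROCA public-key fingerprint (illustrative only).
# Moduli produced by the flawed generator satisfy, for many small primes r,
# that N mod r lies in the multiplicative subgroup generated by 65537 mod r.

ILLUSTRATIVE_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109]

def subgroup_generated_by_65537(r: int) -> set:
    """Return {65537**k mod r : k >= 0} by iterating until the cycle closes."""
    elements, x = set(), 1
    while x not in elements:
        elements.add(x)
        x = (x * 65537) % r
    return elements

def matches_roca_fingerprint(n: int) -> bool:
    """True if n is consistent with the fingerprint for every prime tested.
    A single mismatch rules the key out; a full match with this short list
    only means the key warrants closer inspection."""
    return all(n % r in subgroup_generated_by_65537(r) for r in ILLUSTRATIVE_PRIMES)
```

A mismatch at any prime shows the modulus was not produced by the flawed generator; a full match only indicates that the key should be examined with the complete published test.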
Expert Critiques and Debates

Experts have long critiqued the Common Criteria (CC) framework for its bureaucratic inefficiencies and disproportionate costs relative to security gains. Jim Yuill, in a 2008 survey of CC literature, documented complaints from vendors and researchers that the process had become a "paperwork exercise," with evaluations generating hundreds of pages of documentation, such as more than 800 pages for one Linux project, diverting resources from actual security improvements.[4] Certification timelines typically span 10 to 24 months and incur costs from the mid six figures to millions of dollars, excluding smaller vendors and clashing with agile development cycles in which requirements evolve after evaluation.[4] A U.S. Government Accountability Office (GAO) report cited by Yuill found insufficient evidence that CC enhances government IT security in a cost-effective manner, highlighting systemic implementation flaws in procurement.[4]

Debates center on CC's assurance levels (EALs), which critics argue provide illusory confidence rather than rigorous vulnerability detection. A 2013 analysis using structured argumentation identified 121 issues in CC's application to software assurance, including vagueness in requirements and a failure to mandate evidence that meaningfully increases confidence in security, even after multiple reviews.[67] Yuill noted that CC evaluations focus narrowly on specific product configurations and ignore operational environments and non-IT factors such as physical security, limiting real-world applicability; vendors have also been accused of misleading claims implying blanket government endorsement.[4] Symantec officials, for instance, asserted that protection profiles offer "no confidence or assurance" against emerging threats, since the framework assumes static threats predefined at evaluation time.[4]

Empirical evidence underscores these limitations, with certified products repeatedly exhibiting critical vulnerabilities after evaluation. A 2024 study of CC certifications revealed persistent flaws in certified hardware, including the private key recovery attacks ROCA, Minerva, and TPM-Fail, indicating that the process does not preclude exploitable weaknesses because of its snapshot nature: it evaluates fixed versions and configurations while overlooking deviations in deployment.[68] Experts debate whether CC primarily serves compliance needs in government procurement rather than causing security improvements, as non-certified products sometimes exhibit fewer vulnerabilities in practice, according to observational data; proponents counter that it standardizes baseline scrutiny, while critics such as Yuill advocate reforms, including modular re-evaluations, to address lifecycle mismatches without overhauling the core methodology.[4][65] These tensions reflect a broader concern: CC's document-heavy approach may incentivize superficial adherence over first-principles threat modeling, yet its international mutual recognition sustains its role even as alternatives gain traction.[68]

Alternatives and Comparisons
National Evaluation Schemes
National evaluation schemes are government-administered programs that evaluate and certify the information security of IT products and systems within their jurisdictions, predominantly using the Common Criteria (CC) framework to ensure consistency and mutual recognition under the CCRA. As of 2023, the CCRA includes 38 participating nations, with 18 authorizing members responsible for conducting evaluations and issuing certificates valid up to Evaluation Assurance Level 4 (or higher in some cases), while 20 consuming members recognize these certificates without performing independent evaluations (a minimal sketch of this recognition rule follows the table below).[40] These schemes allow countries to adapt CC methodology to domestic priorities, such as specific protection profiles for national infrastructure, but variations in laboratory accreditation, evaluation depth, and processing times can influence vendor choices and international procurement preferences.[69] The following table summarizes the 18 authorizing schemes:

| Country | Scheme Name | Acronym/Operator |
|---|---|---|
| Australia | Australian Information Security Evaluation Program | AISEP/ACSC |
| Canada | Canadian Common Criteria Scheme | CCCS |
| France | Agence Nationale de la Sécurité des Systèmes d'Information | ANSSI |
| Germany | Bundesamt für Sicherheit in der Informationstechnik | BSI |
| India | Indian Common Criteria Certification Scheme | IC3S |
| Italy | Organismo di Certificazione della Sicurezza Informatica | OCSI |
| Japan | Japan IT Security Evaluation and Certification Scheme | JISEC |
| Malaysia | CyberSecurity Malaysia | MyCC |
| Netherlands | Netherlands Scheme for Certification in the Area of IT Security | NSCIB/TrustCB |
| Norway | SERTIT | SERTIT/NSM |
| Poland | NASK National Research Institute | NASK |
| Qatar | Qatar Common Criteria Scheme | QCCS/NCSA |
| Republic of Korea | IT Security Certification Center | ITSCC/NSRI |
| Singapore | Cyber Security Agency of Singapore | CSA |
| Spain | Organismo de Certificación de la Seguridad de las Tecnologías de la Información | CCN |
| Sweden | Swedish Certification Body for IT Security | CSEC/FMV |
| Turkey | TSE Common Criteria Certification Scheme | TSE |
| United States | National Information Assurance Partnership | NIAP/CCEVS |
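As a rough illustration of the recognition rule described above, the sketch below models mutual recognition as a simple predicate: a certificate issued under an authorizing member's scheme is accepted by any CCRA member if its assurance level does not exceed EAL4. The member set and the EAL cap are taken from the text and table above; the function name and data layout are hypothetical, and the actual CCRA rules contain further nuances (for example, conformance to collaborative Protection Profiles) that are not modeled here.

```python
# Toy model of CCRA mutual recognition as summarized above: authorizing
# members issue certificates; any CCRA member accepts a certificate whose
# assurance level is at or below the EAL4 cap. Names and structure are
# illustrative, not an official data model.

AUTHORIZING_MEMBERS = {
    "Australia", "Canada", "France", "Germany", "India", "Italy", "Japan",
    "Malaysia", "Netherlands", "Norway", "Poland", "Qatar",
    "Republic of Korea", "Singapore", "Spain", "Sweden", "Turkey",
    "United States",
}
MUTUAL_RECOGNITION_EAL_CAP = 4  # recognition cap described in the text

def certificate_recognized(issuer: str, consumer_is_ccra_member: bool, eal: int) -> bool:
    """Return True if a certificate issued by `issuer` would be recognized by a
    CCRA member under this simplified rule."""
    issued_by_authorizing_member = issuer in AUTHORIZING_MEMBERS
    return (issued_by_authorizing_member
            and consumer_is_ccra_member
            and eal <= MUTUAL_RECOGNITION_EAL_CAP)

# Example: an EAL4 certificate issued under the United States scheme (NIAP)
# would be recognized by another CCRA member, but an EAL5 one would not
# fall under the cap modeled here.
assert certificate_recognized("United States", True, 4)
assert not certificate_recognized("United States", True, 5)
```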
