Common Criteria
from Wikipedia

The Common Criteria for Information Technology Security Evaluation (referred to as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification. It is currently in version 2022 revision 1.[1]

Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements (SFRs and SARs, respectively) in a Security Target (ST), and may be taken from Protection Profiles (PPs). Vendors can then implement or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine if they actually meet the claims. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous, standard, and repeatable manner at a level that is commensurate with the target environment for use.[2] Common Criteria maintains a list of certified products, including operating systems, access control systems, databases, and key management systems.[3]

Key concepts


Common Criteria evaluations are performed on computer security products and systems.

  • Target of Evaluation (TOE) – the product or system that is the subject of the evaluation. The evaluation serves to validate claims made about the target. To be of practical use, the evaluation must verify the target's security features. This is done through the following:
    • Protection Profile (PP) – a document, typically created by a user or user community, which identifies security requirements for a class of security devices (for example, smart cards used to provide digital signatures, or network firewalls) relevant to that user for a particular purpose. Product vendors can choose to implement products that comply with one or more PPs, and have their products evaluated against those PPs. In such a case, a PP may serve as a template for the product's ST (Security Target, as defined below), or the authors of the ST will at least ensure that all requirements in relevant PPs also appear in the target's ST document. Customers looking for particular types of products can focus on those certified against the PP that meets their requirements.
    • Security Target (ST) – the document that identifies the security properties of the target of evaluation. The ST may claim conformance with one or more PPs. The TOE is evaluated against the SFRs (Security Functional Requirements. Again, see below) established in its ST, no more and no less. This allows vendors to tailor the evaluation to accurately match the intended capabilities of their product. This means that a network firewall does not have to meet the same functional requirements as a database management system, and that different firewalls may in fact be evaluated against completely different lists of requirements. The ST is usually published so that potential customers may determine the specific security features that have been certified by the evaluation.
    • Security Functional Requirements (SFRs) – specify individual security functions which may be provided by a product. The Common Criteria presents a standard catalogue of such functions. For example, a SFR may state how a user acting in a particular role might be authenticated. The list of SFRs can vary from one evaluation to the next, even if two targets are the same type of product. Although Common Criteria does not prescribe any SFRs to be included in an ST, it identifies dependencies where the correct operation of one function (such as the ability to limit access according to roles) is dependent on another (such as the ability to identify individual roles).
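The conformance relationship described above—every SFR required by a claimed PP must also appear in the product's ST—amounts to a subset check, sketched below. The class names, product names, and the exact SFR identifiers chosen are illustrative; real SFRs come from the CC catalogue.

```python
from dataclasses import dataclass

# Sketch of the PP/ST relationship. An ST that claims conformance with a
# PP must include at least every SFR the PP requires; extra SFRs are
# allowed. Identifiers mimic the CC "class_family.component" style.

@dataclass(frozen=True)
class ProtectionProfile:
    name: str
    sfrs: frozenset  # SFRs required for the whole product class

@dataclass
class SecurityTarget:
    toe: str            # name of the Target of Evaluation
    claimed_pps: tuple  # PPs this ST claims conformance with
    sfrs: set           # SFRs this specific product claims

    def missing_requirements(self):
        """Map each claimed PP to the SFRs it requires that the ST lacks."""
        return {
            pp.name: sorted(pp.sfrs - self.sfrs)
            for pp in self.claimed_pps
            if pp.sfrs - self.sfrs
        }

firewall_pp = ProtectionProfile(
    "Network Firewall PP",
    frozenset({"FIA_UAU.2", "FDP_IFC.1", "FAU_GEN.1"}),
)

st = SecurityTarget(
    toe="ExampleGate",
    claimed_pps=(firewall_pp,),
    sfrs={"FIA_UAU.2", "FDP_IFC.1", "FAU_GEN.1", "FMT_SMR.1"},
)

print(st.missing_requirements())  # {} -> conformance claims hold
```

An empty result means the ST covers everything its claimed PPs require; the extra SFR (FMT_SMR.1 here) is permitted, mirroring how an ST may refine or extend a PP.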

The evaluation process also tries to establish the level of confidence that may be placed in the product's security features through quality assurance processes:

  • Security Assurance Requirements (SARs) – descriptions of the measures taken during development and evaluation of the product to assure compliance with the claimed security functionality. For example, an evaluation may require that all source code is kept in a change management system, or that full functional testing is performed. The Common Criteria provides a catalogue of these, and the requirements may vary from one evaluation to the next. The requirements for particular targets or types of products are documented in the ST and PP, respectively.
  • Evaluation Assurance Level (EAL) – the numerical rating describing the depth and rigor of an evaluation. Each EAL corresponds to a package of security assurance requirements (SARs, see above) which covers the complete development of a product, with a given level of strictness. Common Criteria lists seven levels, with EAL 1 being the most basic (and therefore cheapest to implement and evaluate) and EAL 7 being the most stringent (and most expensive). Normally, an ST or PP author will not select assurance requirements individually but choose one of these packages, possibly 'augmenting' requirements in a few areas with requirements from a higher level. Higher EALs do not necessarily imply "better security", they only mean that the claimed security assurance of the TOE has been more extensively verified.
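The package-plus-augmentation pattern described above can be sketched as set arithmetic over SAR identifiers. The packages below list only a few representative components each, not the full CC Part 3 contents, and the function shape is an illustration rather than anything defined by the standard.

```python
# Sketch: an EAL is a predefined package of SARs, and an ST author may
# "augment" it with components from a higher level or from outside the
# packages entirely (e.g. ALC_FLR, flaw remediation, which belongs to no
# base EAL package). Package contents below are abbreviated.

EAL_PACKAGES = {
    1: {"ADV_FSP.1", "ATE_IND.1", "AVA_VAN.1"},
    2: {"ADV_FSP.2", "ADV_ARC.1", "ATE_IND.2", "AVA_VAN.2"},
    4: {"ADV_FSP.4", "ADV_ARC.1", "ADV_IMP.1", "ATE_IND.2", "AVA_VAN.3"},
}

def assurance_package(eal, augmentations=()):
    """SAR set for an EAL baseline plus any augmenting components."""
    return frozenset(EAL_PACKAGES[eal]) | frozenset(augmentations)

# "EAL2 augmented with flaw remediation" - often written EAL2+
eal2_plus = assurance_package(2, {"ALC_FLR.2"})
print(sorted(eal2_plus))
```

Augmenting does not change the level's name, which is why certification reports write forms like "EAL4+" or "EAL2 augmented with ALC_FLR.2".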

So far, most PPs and most evaluated STs/certified products have been for IT components (e.g., firewalls, operating systems, smart cards).

Common Criteria certification is sometimes specified for IT procurement. Other standards covering, for example, interoperation, system management, and user training supplement the CC and other product standards. Examples include ISO/IEC 27002 and the German IT baseline protection.

Details of cryptographic implementation within the TOE are outside the scope of the CC. Instead, national standards, like FIPS 140-2, give the specifications for cryptographic modules, and various standards specify the cryptographic algorithms in use.

More recently, PP authors are including cryptographic requirements for CC evaluations that would typically be covered by FIPS 140-2 evaluations, broadening the bounds of the CC through scheme-specific interpretations.

Some national evaluation schemes are phasing out EAL-based evaluations and only accept products for evaluation that claim strict conformance with an approved PP. The United States currently only allows PP-based evaluations.

History


CC originated out of three standards:

  • ITSEC – The European standard, developed in the early 1990s by France, Germany, the Netherlands and the UK. It too was a unification of earlier work, such as the two UK approaches (the CESG UK Evaluation Scheme aimed at the defence/intelligence market and the DTI Green Book aimed at commercial use), and was adopted by some other countries, e.g. Australia.
  • CTCPEC – The Canadian standard followed from the US DoD standard, but avoided several problems and was used jointly by evaluators from both the U.S. and Canada. The CTCPEC standard was first published in May 1993.
  • TCSEC – The United States Department of Defense DoD 5200.28 Std, called the Orange Book and parts of the Rainbow Series. The Orange Book originated from Computer Security work including the Anderson Report, done by the National Security Agency and the National Bureau of Standards (the NBS eventually became NIST) in the late 1970s and early 1980s. The central thesis of the Orange Book follows from the work done by Dave Bell and Len LaPadula for a set of protection mechanisms.

CC was produced by unifying these pre-existing standards, predominantly so that companies selling computer products for the government market (mainly for Defence or Intelligence use) would only need to have them evaluated against one set of standards. The CC was developed by the governments of Canada, France, Germany, the Netherlands, the UK, and the U.S.

Testing organizations


All testing laboratories must comply with ISO/IEC 17025, and certification bodies will normally be approved against ISO/IEC 17065.

Compliance with ISO/IEC 17025 is typically demonstrated to a national approval authority.

Characteristics of these organizations were examined and presented at ICCC 10.

Mutual recognition arrangement


As well as the Common Criteria standard, there is also a sub-treaty level Common Criteria MRA (Mutual Recognition Arrangement), whereby each party thereto recognizes evaluations against the Common Criteria standard done by other parties. Originally signed in 1998 by Canada, France, Germany, the United Kingdom and the United States, Australia and New Zealand joined in 1999, followed by Finland, Greece, Israel, Italy, the Netherlands, Norway and Spain in 2000. The Arrangement has since been renamed the Common Criteria Recognition Arrangement (CCRA) and membership continues to expand.[5] Within the CCRA only evaluations up to EAL 2 are mutually recognized (including augmentation with flaw remediation). The European countries within the SOGIS-MRA typically recognize higher EALs as well. Evaluations at EAL5 and above tend to involve the security requirements of the host nation's government.

In September 2012, a majority of members of the CCRA produced a vision statement whereby mutual recognition of CC evaluated products will be lowered to EAL 2 (including augmentation with flaw remediation). Further, this vision indicates a move away from assurance levels altogether, with evaluations confined to conformance with Protection Profiles that have no stated assurance level. This will be achieved through technical working groups developing worldwide PPs; a transition period has not yet been fully determined.

On July 2, 2014, a new CCRA was ratified[6] per the goals outlined within the 2012 vision statement.[7] Major changes to the Arrangement include:

  • Recognition of evaluations against only a collaborative Protection Profile (cPP) or Evaluation Assurance Levels 1 through 2 and ALC_FLR.
  • The emergence of international Technical Communities (iTC), groups of technical experts charged with the creation of cPPs.
  • A transition plan from the previous CCRA, including recognition of certificates issued under the previous version of the Arrangement.
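Taken together, the recognition terms above reduce to a simple rule, which can be sketched as follows (the function and its parameters are hypothetical illustrations, not part of the Arrangement text):

```python
# Sketch of the 2014 CCRA recognition rule: an evaluation is mutually
# recognized if it claims conformance with a collaborative Protection
# Profile (cPP), or if it is at EAL1-2 augmented at most with
# flaw-remediation components (ALC_FLR).

def ccra_recognized(claims_cpp, eal=None, augmentations=frozenset()):
    if claims_cpp:
        return True
    only_flr = all(a.startswith("ALC_FLR") for a in augmentations)
    return eal in (1, 2) and only_flr

print(ccra_recognized(claims_cpp=False, eal=2,
                      augmentations={"ALC_FLR.2"}))  # True
print(ccra_recognized(claims_cpp=False, eal=4))      # False: above EAL2
```

Anything outside this rule—an EAL4 evaluation with no cPP claim, say—may still be certified under a national scheme, but its recognition by other CCRA members is not guaranteed.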

Issues


Requirements


Common Criteria is very generic; it does not directly provide a list of product security requirements or features for specific (classes of) products. This follows the approach taken by ITSEC, but it has been a source of debate for those used to the more prescriptive approach of other earlier standards such as TCSEC and FIPS 140-2.

Value of certification


Common Criteria certification cannot guarantee security, but it can ensure that claims about the security attributes of the evaluated product were independently verified. In other words, products evaluated against a Common Criteria standard exhibit a clear chain of evidence that the process of specification, implementation, and evaluation has been conducted in a rigorous and standard manner.

Various Microsoft Windows versions, including Windows Server 2003 and Windows XP, have been certified,[8] but security patches to address security vulnerabilities are still being published by Microsoft for these Windows systems. This is possible because the process of obtaining a Common Criteria certification allows a vendor to restrict the analysis to certain security features and to make certain assumptions about the operating environment and the strength of threats faced by the product in that environment. Additionally, the CC recognizes a need to limit the scope of evaluation in order to provide cost-effective and useful security certifications, such that evaluated products are examined to a level of detail specified by the assurance level or PP. Evaluation activities are therefore only performed to a certain depth, use of time, and resources and offer reasonable assurance for the intended environment.

In the Microsoft case, the assumptions include A.PEER:

"Any other systems with which the TOE communicates are assumed to be under the same management control and operate under the same security policy constraints. The TOE is applicable to networked or distributed environments only if the entire network operates under the same constraints and resides within a single management domain. There are no security requirements that address the need to trust external systems or the communications links to such systems."

This assumption is contained in the Controlled Access Protection Profile (CAPP) to which their products adhere. Based on this and other assumptions, which may not be realistic for the common use of general-purpose operating systems, the claimed security functions of the Windows products are evaluated. Thus they should only be considered secure in the assumed, specified circumstances, also known as the evaluated configuration.

Whether you run Microsoft Windows in the precise evaluated configuration or not, you should apply Microsoft's security patches for the vulnerabilities in Windows as they continue to appear. If any of these security vulnerabilities are exploitable in the product's evaluated configuration, the product's Common Criteria certification should be voluntarily withdrawn by the vendor. Alternatively, the vendor should re-evaluate the product to include the application of patches to fix the security vulnerabilities within the evaluated configuration. Failure by the vendor to take either of these steps would result in involuntary withdrawal of the product's certification by the certification body of the country in which the product was evaluated.

The certified Microsoft Windows versions remain at EAL4+ without including the application of any Microsoft security vulnerability patches in their evaluated configuration. This shows both the limitation and strength of an evaluated configuration.

Criticisms


In August 2007, Government Computing News (GCN) columnist William Jackson critically examined Common Criteria methodology and its US implementation by the Common Criteria Evaluation and Validation Scheme (CCEVS).[9] In the column executives from the security industry, researchers, and representatives from the National Information Assurance Partnership (NIAP) were interviewed. Objections outlined in the article include:

  • Evaluation is a costly process (often measured in hundreds of thousands of US dollars) – and the vendor's return on that investment is not necessarily a more secure product.
  • Evaluation focuses primarily on assessing the evaluation documentation, not on the actual security, technical correctness or merits of the product itself. For U.S. evaluations, only at EAL5 and higher do experts from the National Security Agency participate in the analysis; and only at EAL7 is full source code analysis required.
  • The effort and time necessary to prepare evaluation evidence and other evaluation-related documentation is so cumbersome that by the time the work is completed, the product in evaluation is generally obsolete.
  • Industry input, including that from organizations such as the Common Criteria Vendor's Forum, generally has little impact on the process as a whole.

In a 2006 research paper, computer specialist David A. Wheeler suggested that the Common Criteria process discriminates against free and open-source software (FOSS)-centric organizations and development models.[10] Common Criteria assurance requirements tend to be inspired by the traditional waterfall software development methodology. In contrast, much FOSS software is produced using modern agile paradigms. Although some have argued that both paradigms do not align well,[11] others have attempted to reconcile both paradigms.[12] Political scientist Jan Kallberg raised concerns over the lack of control over the actual production of the products once they are certified, the absence of a permanently staffed organizational body that monitors compliance, and the idea that the trust in the Common Criteria IT-security certifications will be maintained across geopolitical boundaries.[13]

In 2017, the ROCA vulnerability was found in a number of Common Criteria certified smart card products. The vulnerability highlighted several shortcomings of the Common Criteria certification scheme:[14]

  • The vulnerability resided in a homegrown RSA key generation algorithm that has not been published and analyzed by the cryptanalysis community. However, the testing laboratory TÜV Informationstechnik GmbH (TÜViT) in Germany approved its use and the certification body BSI in Germany issued Common Criteria certificates for the vulnerable products. The Security Target of the evaluated product claimed that RSA keys are generated according to the standard algorithm. In response to this vulnerability, BSI now plans to improve transparency by requiring that the certification report at least specifies if the implemented proprietary cryptography is not exactly conformant to a recommended standard. BSI does not plan on requiring the proprietary algorithm to be published in any way.
  • Even though the certification bodies are now aware that the security claims specified in the Common Criteria certificates do not hold anymore, neither ANSSI nor BSI have revoked the corresponding certificates. According to BSI, a certificate can only be withdrawn when it was issued under misconception, e.g., when it turns out that wrong evidence was submitted. After a certificate is issued, it must be presumed that the validity of the certificate decreases over time by improved and new attacks being discovered. Certification bodies can issue maintenance reports and even perform a re-certification of the product. These activities, however, have to be initiated and sponsored by the vendor.
  • While several Common Criteria certified products have been affected by the ROCA flaw, vendors' responses in the context of certification have been different. For some products a maintenance report was issued, which states that only RSA keys with a length of 3072 and 3584 bits have a security level of at least 100 bits, while for some products the maintenance report does not mention that the change to the TOE affects certified cryptographic security functionality, but concludes that the change is at the level of guidance documentation and has no effect on assurance.
  • According to BSI, the users of the certified end products should have been informed of the ROCA vulnerability by the vendors. This information, however, did not reach the Estonian authorities in a timely manner; they had deployed the vulnerable product on more than 750,000 Estonian identity cards.

Alternative approaches


Throughout the lifetime of CC, it has not been universally adopted even by the creator nations; in particular, cryptographic approvals have been handled separately, such as by the Canadian / US implementation of FIPS-140 and the CESG Assisted Products Scheme (CAPS)[15] in the UK.

The UK has also produced a number of alternative schemes when the timescales, costs and overheads of mutual recognition have been found to be impeding the operation of the market:

  • The CESG System Evaluation (SYSn) and Fast Track Approach (FTA) schemes for assurance of government systems rather than generic products and services, which have now been merged into the CESG Tailored Assurance Service (CTAS)[16]
  • The CESG Claims Tested Mark (CCT Mark), which is aimed at handling less exhaustive assurance requirements for products and services in a cost and time efficient manner.

In early 2011, NSA/CSS published a paper by Chris Salter, which proposed a Protection Profile oriented approach towards evaluation. In this approach, communities of interest form around technology types which in turn develop protection profiles that define the evaluation methodology for the technology type.[17] The objective is a more robust evaluation. There is some concern that this may have a negative impact on mutual recognition.[18]

In September 2012, the Common Criteria published a Vision Statement[19] implementing to a large extent Chris Salter's thoughts from the previous year. Key elements of the Vision included:

  • Technical Communities will be focused on authoring Protection Profiles (PP) that support their goal of reasonable, comparable, reproducible and cost-effective evaluation results
  • Evaluations should be done against these PPs where possible; if not, mutual recognition of Security Target evaluations would be limited to EAL2.

from Grokipedia
Common Criteria (CC), formally standardized as ISO/IEC 15408, is an international framework that defines criteria for evaluating the security functionality and assurance of information technology products and systems, enabling standardized certification processes through accredited laboratories. Developed in the mid-1990s by Canada, France, Germany, the Netherlands, the United Kingdom, and the United States to harmonize disparate national standards such as the U.S. Trusted Computer System Evaluation Criteria (TCSEC) and the European Information Technology Security Evaluation Criteria (ITSEC), its first version was released in 1994, with adoption as an ISO standard in 1999 and ongoing updates, including the 2022 edition of Part 1. Central to CC are Protection Profiles (PPs), which outline reusable security requirements for product categories, and Security Targets (STs), which detail claims for specific products; evaluations assess conformance to security functional requirements (SFRs) and assurance requirements (SARs) at one of seven Evaluation Assurance Levels (EALs), ranging from EAL1 (basic functional testing) to EAL7 (formal verification and extensive testing), with higher levels demanding greater rigor but not necessarily implying superior real-world security. The Common Criteria Recognition Arrangement (CCRA), established in 2000 and currently comprising 26 participating nations as certificate producers or consumers, promotes mutual acceptance of certifications up to EAL2 or equivalent PP-based evaluations, reducing redundant evaluations for government and commercial procurement in regulated sectors like defense and finance.
While CC certifications provide a structured approach to IT security validation and are mandated in various national policies, they have drawn criticism for exorbitant costs, protracted timelines often exceeding a year, overemphasis on documentation rather than vulnerability resistance, and weak empirical links between assurance levels and practical security outcomes, positioning it more as a regulatory compliance tool than a definitive security enhancer.

Fundamentals

Definition and Purpose

Common Criteria, designated as ISO/IEC 15408, constitutes an international standard establishing a unified framework for evaluating the security of information technology (IT) products and systems. It delineates criteria for specifying security functional requirements (SFRs)—which outline the intended security behaviors such as authentication, access control, and data protection—and security assurance requirements (SARs)—which verify that these functions are implemented correctly and resiliently against threats. This standard, comprising multiple parts, outlines evaluation concepts, principles, and a general model applicable to diverse IT entities including hardware, software, firmware, and networked systems.

The primary purpose of Common Criteria is to foster confidence in the security attributes of IT products by mandating a structured, evidence-based evaluation process conducted by accredited laboratories. This mitigates risks associated with vulnerabilities by ensuring products undergo rigorous testing proportional to their operational context, thereby supporting procurement decisions in high-stakes environments like government agencies and financial institutions. Unlike vendor self-attestations, Common Criteria emphasizes independent validation to detect flaws in design, implementation, and documentation, reducing the likelihood of exploitable weaknesses in deployed systems. Furthermore, the standard enables international harmonization of evaluations, minimizing redundant assessments across jurisdictions through mutual recognition agreements among participating nations. Originating from the need to reconcile disparate national schemes, it promotes interoperability and trust in certified products, particularly for cross-border trade and defense applications, while allowing stakeholders to tailor evaluations via protection profiles or security targets to specific threats.

Key Concepts

The Common Criteria (CC) evaluation framework, standardized as ISO/IEC 15408, establishes a structured approach to specifying, implementing, and evaluating security functions within information technology (IT) products and systems. It distinguishes between security functional requirements (SFRs), which define the specific security behaviors and capabilities a product must exhibit—such as authentication, access control, or data protection—and security assurance requirements (SARs), which outline the evidence and processes needed to demonstrate that the product correctly implements those functions without vulnerabilities. This separation enables stakeholders to articulate precise security needs while ensuring evaluations focus on both intended functionality and reliability under scrutiny. At the heart of any CC evaluation is the Target of Evaluation (TOE), defined as the particular IT product or system—potentially including hardware, software, firmware, or a combination—subject to security assessment. The TOE's boundaries and scope are explicitly delineated to isolate the evaluated components from extraneous elements, ensuring the assessment targets only the claimed secure elements. SFRs for the TOE are drawn from a standardized catalog in CC Part 2, covering 11 security function classes such as cryptographic support (FCS), identification and authentication (FIA), and security management (FMT), each with hierarchical components of increasing rigor. SARs, conversely, are defined in CC Part 3 and organized into assurance families such as development, guidance documents, life-cycle support, testing, and vulnerability assessment, providing a methodology to verify SFR conformance through evidence like design documentation or penetration testing. Protection Profiles (PPs) serve as reusable templates that standardize SFRs and SARs for a class of TOEs sharing similar security needs, such as operating systems or firewalls, thereby facilitating consistent evaluations across vendors.
A PP may claim conformance to base PPs or PP-modules, enabling modular extensions for evolving threats, and includes threats, objectives, and organizational security policies to contextualize requirements. In contrast, a Security Target (ST) is TOE-specific, detailing the SFRs and SARs claimed by the developer, often referencing a PP for baseline assurance while allowing refinements or augmentations tailored to the product's implementation. STs must demonstrate how the TOE addresses PP-derived objectives or directly mitigates identified threats, forming the basis for certification claims. Evaluation Assurance Levels (EALs) provide a graduated scale from 1 to 7, quantifying the rigor of SAR application, where EAL1 offers basic functional testing and EAL7 demands formally verified design and exhaustive testing. Higher EALs incorporate more comprehensive evidence, such as semi-formal or formal models, but do not inherently imply superior security functionality—only deeper assurance of correct implementation. Conformance to CC requires matching SFRs and SARs to these elements, with evaluations conducted by accredited labs to validate claims against the CC evaluation methodology in ISO/IEC 18045. This framework promotes repeatability and international recognition through the Common Criteria Recognition Arrangement (CCRA), effective for certificates up to EAL2 or equivalent PP-based assurances across 30+ member countries as of 2022.

Evaluation Framework

Protection Profiles and Security Targets

A Protection Profile (PP) is a formal document that specifies implementation-independent security requirements for a class of IT products or systems addressing a defined set of security problems, threats, and objectives. PPs are typically developed by user communities, governments, or industry groups to establish reusable templates of security functional requirements (SFRs) and assurance requirements (SARs) under ISO/IEC 15408, enabling consistent evaluations across similar products without being tied to vendor-specific implementations. For instance, PPs exist for categories like mobile devices, firewalls, or operating systems, outlining threats such as unauthorized access and corresponding countermeasures. In contrast, a Security Target (ST) is an implementation-dependent document tailored to a specific product or system, known as the Target of Evaluation (TOE), that details its SFRs and SARs for certification purposes. The ST often claims conformance to one or more relevant PPs, refining their requirements to match the TOE's design, such as specifying exact cryptographic algorithms or access controls implemented in the product. It serves as the foundational reference for the evaluation process, where laboratories verify that the TOE meets the stated security claims through testing and analysis. PPs and STs interact hierarchically in evaluations: a TOE's ST must demonstrate PP conformance if claimed, ensuring the product meets baseline class-wide standards while allowing vendor-specific enhancements or limitations. This structure promotes interoperability and trust in certified products, as STs are publicly available post-certification via repositories like the Common Criteria Portal, which as of 2023 lists thousands of validated STs. Non-conformance to a PP limits certification scope, potentially reducing market applicability in procurement scenarios favoring PP-aligned products.

Evaluation Assurance Levels

The Evaluation Assurance Levels (EALs) comprise a scale of seven predefined packages of assurance requirements defined in the Common Criteria standard (ISO/IEC 15408), specifying the rigor of evaluation applied to a Target of Evaluation (TOE) to confirm it satisfies its claimed security functions. These levels range from EAL1, which involves basic functional testing, to EAL7, which demands formal verification of design and implementation; each successive level builds on the previous by incorporating additional assurance activities such as deeper vulnerability analysis, independent testing, and evidence of development lifecycle controls. EALs provide a metric of evaluation thoroughness rather than intrinsic product strength, meaning a higher EAL offers greater confidence in the accuracy of the TOE's claims but does not imply superiority over lower-EAL products with more robust functional protections. The assurance requirements for each EAL are detailed in Common Criteria Part 3, encompassing classes such as security target evaluation, development evidence, testing, vulnerability assessment, and guidance documentation. Lower levels (EAL1–EAL4) emphasize methodical testing and review suitable for commercial IT products, while upper levels (EAL5–EAL7) require semi-formal or formal modeling, often using mathematical proofs, and are typically reserved for high-assurance systems like those in defence or intelligence applications. Evaluations at EAL2 or below (augmented at most with flaw remediation) are mutually recognized under the Common Criteria Recognition Arrangement (CCRA), facilitating international acceptance up to that threshold as of the 2022 updates. The seven EALs are summarized below, with their designations and key incremental assurance elements:
  • EAL1 – Functionally tested: basic testing of security functions against specified requirements; minimal developer evidence beyond the functional specification.
  • EAL2 – Structurally tested: adds structural testing of the design, independent vulnerability analysis, and configuration management control to EAL1.
  • EAL3 – Methodically tested and checked: incorporates methodical testing with developer risk analysis and evidence of a secure development environment.
  • EAL4 – Methodically designed, tested, and reviewed: requires methodical design with detailed design reviews, semi-formal testing, and comprehensive vulnerability assessments.
  • EAL5 – Semi-formally designed and tested: builds on EAL4 with semi-formal design and interface specifications, plus enhanced vulnerability analysis.
  • EAL6 – Semi-formally verified design and tested: adds semi-formal verification of subsystems and structured analysis for flaws.
  • EAL7 – Formally verified design and tested: demands formal (mathematical) verification of the design, implementation consistency, and exhaustive testing against a formal model.
In practice, EAL1–EAL4 certifications predominate for vendor products due to cost and time constraints, with EAL4 serving as a benchmark for robust commercial assurance since the standard's 1999 international adoption; higher levels, achieved in fewer than 1% of certifications, incur significantly elevated expenses from formal modeling and independent validation.
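The fixed relationship between the EAL packages and CCRA mutual recognition described above can be encoded as a small lookup; the sketch below is illustrative, with the EAL4 recognition ceiling taken from the text:

```python
# Illustrative encoding of the seven EAL packages and the CCRA recognition
# ceiling (mutual recognition applies up to EAL4).
EALS = {
    1: "Functionally tested",
    2: "Structurally tested",
    3: "Methodically tested and checked",
    4: "Methodically designed, tested, and reviewed",
    5: "Semi-formally designed and tested",
    6: "Semi-formally verified design and tested",
    7: "Formally verified design and tested",
}

CCRA_RECOGNITION_CEILING = 4  # above EAL4, recognition needs extra agreements

def mutually_recognized(eal: int) -> bool:
    """True if a certificate at this EAL is automatically CCRA-recognized."""
    if eal not in EALS:
        raise ValueError(f"EAL must be 1-7, got {eal}")
    return eal <= CCRA_RECOGNITION_CEILING

assert mutually_recognized(4)
assert not mutually_recognized(7)
```

A procurement tool could use such a table to flag which foreign certificates are acceptable without national re-evaluation.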

Certification Process

The Common Criteria certification process entails a rigorous, third-party evaluation of an IT product's security claims, overseen by national schemes participating in the Common Criteria Recognition Arrangement (CCRA). Developers begin by selecting a national certification body, such as the National Information Assurance Partnership (NIAP) in the United States or the Bundesamt für Sicherheit in der Informationstechnik (BSI) in Germany, and engaging an accredited testing laboratory to conduct the assessment. The developer defines the Target of Evaluation (TOE), which delineates the specific product components subject to scrutiny, selects an Evaluation Assurance Level (EAL) from EAL1 (functionally tested) to EAL7 (formally verified design and tested), and authors a Security Target (ST) document outlining the TOE's security functional requirements, assurance measures, and threat mitigations. Conformance to a Protection Profile (PP), if applicable, standardizes requirements for particular product types, such as operating systems or cryptographic modules, by incorporating predefined security objectives. The developer and laboratory collaborate on an Evaluation Work Plan (EWP), approved by the certification body, which details testing scope, evidence requirements, and timelines. Evaluation proceeds iteratively per the Common Methodology for Information Technology Security Evaluation (CEM version 3.1 Release 5, succeeded by the 2022 edition), encompassing assurance classes like security target evaluation, development, guidance documents, life-cycle support, testing, vulnerability assessment, and site security. The laboratory examines design documentation, performs independent testing, identifies vulnerabilities, and issues activity reports (ARs) and observation reports (e.g., requesting additional evaluator evidence or flagging developer issues) throughout, ensuring all claims receive verdicts of pass, fail, or inconclusive.
The laboratory compiles findings into an Evaluation Technical Report (ETR), submitted to the certification body alongside developer-supplied evidence. The body independently validates the ETR, verifies compliance with Common Criteria parts 1-3, and addresses any discrepancies through developer responses or re-evaluation. If the TOE satisfies the ST claims at the chosen EAL, the body issues a certification report and certificate, valid for the specific TOE version and typically two years, after which recertification may be required for updates. Certificates are published on the international Common Criteria portal, enabling mutual recognition across CCRA members for evaluations up to EAL4 (or EAL2 for certain collaborative schemes as of 2023), facilitating global procurement without redundant testing. Higher EALs beyond EAL4 lack automatic mutual recognition due to varying national requirements for vulnerability analysis and penetration-testing depth.
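The verdict model described above, in which every evaluated claim receives a verdict of pass, fail, or inconclusive, can be sketched as a simple aggregation rule. The assurance-class names below are a hypothetical subset and the aggregation logic is an illustration, not the CEM's normative wording:

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"

def overall_verdict(unit_verdicts: dict[str, Verdict]) -> Verdict:
    """A TOE succeeds only if every evaluated unit passes: any fail
    dominates, and any inconclusive verdict blocks certification."""
    values = unit_verdicts.values()
    if Verdict.FAIL in values:
        return Verdict.FAIL
    if Verdict.INCONCLUSIVE in values:
        return Verdict.INCONCLUSIVE
    return Verdict.PASS

# Hypothetical subset of assurance classes with per-class verdicts
verdicts = {"ASE": Verdict.PASS, "ATE": Verdict.PASS, "AVA": Verdict.INCONCLUSIVE}
assert overall_verdict(verdicts) is Verdict.INCONCLUSIVE
```

An unresolved observation report would typically correspond to an inconclusive unit that the developer must address before the ETR can recommend certification.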

Historical Development

Origins in National Standards

The Common Criteria originated from efforts to harmonize disparate national frameworks for evaluating computer security, which had proliferated in the 1980s and early 1990s, complicating cross-border recognition of certified products. In the United States, the Trusted Computer System Evaluation Criteria (TCSEC), developed by the National Computer Security Center under the Department of Defense, provided the foundational model; its initial version was released on August 15, 1983, with the final edition published on December 26, 1985, as DoD 5200.28-STD. The TCSEC categorized systems into four divisions (D through A) based on increasing levels of assurance against unauthorized disclosure, emphasizing design verification, testing, and covert channel analysis for higher classes like A1. European nations pursued a separate approach with the Information Technology Security Evaluation Criteria (ITSEC), jointly developed by France, Germany, the Netherlands, and the United Kingdom to address perceived limitations in the TCSEC's focus on confidentiality. ITSEC version 1.0 emerged around 1990, evolving to version 1.2 by June 28, 1991, which decoupled functional security requirements (F levels from F1 to F10) from assurance levels (E0 to E6), allowing more flexible evaluations of integrity and availability alongside confidentiality. Canada's Trusted Computer Product Evaluation Criteria (CTCPEC), published in version 3.0 on January 1, 1993, by the Communications Security Establishment, closely mirrored the TCSEC structure but incorporated adaptations for broader product types and levels A1 through C2. These standards, while advancing secure system evaluation domestically, lacked mutual acceptance internationally, prompting collaborative alignment in June 1993 among sponsoring bodies from Canada (CTCPEC), the United States (TCSEC), and Europe (ITSEC, including French national criteria). Governments of Canada, France, Germany, the Netherlands, the United Kingdom, and the United States formalized this effort, producing Common Criteria version 1.0 in 1994 as a unified, vendor-neutral framework that retained core elements like assurance levels while introducing protection profiles for reusable requirements.
This synthesis addressed redundancies and gaps, such as ITSEC's functional-assurance split influencing CC's structure, without endorsing any single national model's assumptions as universally optimal.

Standardization and Initial Adoption

The Common Criteria emerged from efforts to harmonize disparate national security evaluation standards, including the U.S. Trusted Computer System Evaluation Criteria (TCSEC), the European Information Technology Security Evaluation Criteria (ITSEC), and the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC). In 1991, representatives from Canada, France, Germany, the Netherlands, the United Kingdom, and the United States initiated collaborative work to develop a unified framework, culminating in the release of Common Criteria version 1.0 in June 1994. This initial version provided a common language for specifying security requirements and assurance levels, facilitating cross-border evaluations without requiring redundant testing. To achieve broader international legitimacy, the Common Criteria document was submitted to the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for adoption as a formal standard. This process aligned the criteria with ISO procedures, resulting in its publication as ISO/IEC 15408 (parts 1, 2, and 3) in 1999, concurrent with the release of Common Criteria version 2.1. The standardization emphasized modular security functional requirements and graduated assurance levels, enabling vendors to target specific threats while allowing evaluators to assess implementation rigor consistently. Initial adoption was driven by the founding nations, who established the Common Criteria Recognition Arrangement (CCRA) in May 1999 to promote mutual acceptance of certifications up to Evaluation Assurance Level 4 (EAL4). This arrangement, signed by nations including Australia, Canada, France, Germany, the Netherlands, New Zealand, the United Kingdom, and the United States, among others, reduced barriers for IT products in government and commercial markets. Early certifications, such as those for operating systems and firewalls in the late 1990s, demonstrated practical uptake, though adoption was initially limited to high-security sectors due to the resource-intensive evaluation process. By 2000, over 50 certifications had been issued under the framework, signaling growing vendor participation despite the absence of mandatory requirements in most jurisdictions.

Versions and Subsequent Updates

The Common Criteria was initially released as version 1.0 in 1994 by a collaboration of governments from Canada, France, Germany, the Netherlands, the United Kingdom, and the United States, aiming to harmonize disparate national criteria such as the European ITSEC, U.S. TCSEC (Orange Book), and Canadian CTCPEC. Version 2.1 followed in 1999, marking the standard's adoption as the international ISO/IEC 15408 and expanding its scope for broader endorsement, with refinements to functional requirements and assurance levels to support more consistent international certifications. Subsequent minor updates, including versions 2.2 and 2.3, focused on clarifying methodologies and addressing practical issues in applying the framework to diverse IT products. Common Criteria version 3.1, introduced in the mid-2000s with progressive revisions up to Revision 5, served as the dominant iteration for over a decade, aligning with ISO/IEC 15408:2009 and incorporating enhancements such as extended assurance packages for higher evaluation levels, improved guidance on vulnerability assessments, and adaptations for emerging technologies like networked systems. These revisions emphasized rigor in evidence collection and testing while maintaining backward compatibility for protection profiles developed under prior versions. In 2022, the Common Criteria framework advanced to version 2022 (CC2022), published as ISO/IEC 15408:2022 across five parts covering concepts, functional requirements, assurance requirements, extended components, and methodology linkages. This update, accompanied by revisions to the Common Evaluation Methodology (ISO/IEC 18045:2022), introduced new evaluation activities for contemporary threats, streamlined assurance derivations for protection profiles, and deprecated certain legacy elements to enhance efficiency without compromising security claims. Transition provisions allow evaluations under CC 3.1 Revision 5 to continue, with security targets based on CC 3.1-certified profiles accepted under CC2022 until December 31, 2027, facilitating gradual adoption.

International Structure

Accredited Testing Laboratories

Accredited Testing Laboratories (ATLs), also known as Common Criteria Testing Laboratories (CCTLs) in certain jurisdictions like the United States, are independent third-party facilities authorized to conduct security evaluations of products under the Common Criteria (CC) standard (ISO/IEC 15408). These laboratories perform detailed assessments to verify that products meet the security functional and assurance requirements defined in Protection Profiles or Security Targets, including vulnerability analysis, testing of cryptographic modules, and review of design documentation. Their evaluations form the technical basis for certification decisions by national schemes, ensuring impartiality as vendors select and contract with labs rather than schemes directly performing tests. Accreditation for ATLs requires compliance with ISO/IEC 17025 general requirements for testing competence, supplemented by CC-specific criteria such as competence in CC evaluation methodologies and ongoing proficiency testing. In the United States, the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CCTLs, which must also satisfy additional requirements from the National Information Assurance Partnership (NIAP) for participation in the Common Criteria Evaluation and Validation Scheme (CCEVS). As of recent listings, NIAP has approved nine such labs, including facilities operated by entities such as Gossamer Security, capable of handling up to Evaluation Assurance Level (EAL) 4 or higher depending on scope. Internationally, under the Common Criteria Recognition Arrangement (CCRA), labs are licensed by participating national certification bodies, with over 50 licensed facilities worldwide as tracked by the Common Criteria Portal, enabling mutual recognition of certificates across 30+ member countries.
The evaluation process at accredited labs follows a structured sequence: upon vendor submission of a Security Target and product evidence, labs conduct design reviews, functional and independent testing, and penetration testing per the CC methodology, culminating in an Evaluation Technical Report (ETR) submitted to the certification body for validation. Labs maintain independence through contractual separations from vendors and undergo regular audits, peer reviews, and proficiency assessments to uphold evaluation rigor; for instance, NVLAP requires annual commercial evaluations and site visits. This accreditation framework aims to foster confidence in evaluation outcomes, though critics note variability in lab capacity and potential bottlenecks in high-demand areas like cryptographic validations.

Mutual Recognition Arrangement

The Common Criteria Recognition Arrangement (CCRA) is an international agreement among participating governments to mutually recognize evaluation certificates for IT products and protection profiles issued under the Common Criteria framework. Established to promote consistent evaluation standards and reduce redundant testing, the CCRA enables signatory nations to accept certified products for procurement and use without requiring additional national evaluations, provided the certificates meet specified criteria. This arrangement advances objectives such as enhancing confidence in IT security products, increasing the availability of evaluated products, and improving the efficiency and cost-effectiveness of evaluation and certification processes. The CCRA originated from efforts to harmonize national IT security evaluation schemes in the late 1990s, with the foundational Arrangement on the Recognition of Common Criteria Certificates signed on May 23, 2000, by initial participants including Australia, Canada, France, Germany, the Netherlands, New Zealand, the United Kingdom, and the United States, among others. Subsequent updates have expanded membership and refined procedures, including periodic reviews by a Management Committee to assess compliance, admit new certification bodies, and adapt to technological advancements. As of 2025, the CCRA comprises 40 members: 18 authorizing members capable of issuing mutually recognized certificates (Australia, Canada, France, Germany, India, Italy, Japan, Malaysia, the Netherlands, Norway, Poland, Qatar, the Republic of Korea, Singapore, Spain, Sweden, Turkey, and the United States) and 22 consuming members that recognize but do not issue such certificates. Recent developments include the admission of new consuming participants and discussions on coexistence with the European Union's Common Criteria scheme (EUCC). Under the CCRA, mutual recognition applies to certificates up to Evaluation Assurance Level 4 (EAL4) or equivalent assurance packages, allowing products evaluated by accredited laboratories and validated by an authorizing participant's certification body to be accepted across member nations for relevant applications.
Certificates must adhere to CCRA operating procedures, including oversight by national schemes, periodic laboratory assessments, and provisions for assurance continuity in product updates. Higher assurance levels beyond EAL4 are not automatically recognized and may necessitate supplementary evaluations or bilateral agreements. National bodies such as the United States' National Information Assurance Partnership (NIAP) and Germany's Federal Office for Information Security (BSI) administer the arrangement's constituent schemes, ensuring evaluations meet rigorous, repeatable standards.

Practical and Economic Aspects

Certification Requirements

Certification of an IT product under Common Criteria necessitates sponsorship by the developer, who must define a Security Target (ST) documenting the Target of Evaluation (TOE), its security objectives, functional requirements from Part 2 of ISO/IEC 15408, and assurance requirements from Part 3 up to a specified Evaluation Assurance Level (EAL). The ST may claim conformance to one or more Protection Profiles (PPs), which establish reusable sets of security requirements for specific technology types, ensuring the TOE addresses predefined threats and policies. Evaluations must follow the Common Methodology for IT Security Evaluation (CEM), applying testing and analysis scaled to the claimed EAL, with EAL1 requiring basic functional testing and documentation review, progressing to EAL7's formally verified design and testing. The developer selects a laboratory accredited by a national certification scheme participating in the Common Criteria Recognition Arrangement (CCRA), comprising 31 member countries as of 2021, to conduct the independent evaluation. Laboratories verify the TOE implementation through source code review, configuration testing, vulnerability analysis, and evidence of development processes, producing an Evaluation Technical Report (ETR) that details findings and any vulnerabilities. For U.S. evaluations, conformance to NIAP-approved PPs is mandatory, extending to all requirements within the PP and any extended packages. Upon ETR completion, the laboratory submits it to the national certification body (e.g., NIAP in the United States or BSI in Germany), which reviews it for completeness, independence, and adherence to the CC and CEM before issuing a certificate. Certificates specify the TOE version, ST or PP claims, EAL achieved, and validity period, typically up to five years from issuance, contingent on no significant changes and continued developer reporting. CCRA mutual recognition applies to certificates up to EAL4 (or equivalent PP-based assurances), facilitating cross-border acceptance without re-evaluation.
Additional requirements include site security audits for higher EALs (EAL4+), where the developer's facilities undergo inspection to protect sensitive TOE data, and assurance continuity processes for minor updates to avoid full re-evaluation. Evaluations conform to Common Criteria version 3.1 Release 5, in place since 2017, or the 2022 edition during the transition period, incorporating updates to PPs and methodology for emerging technologies such as mobile devices and cloud services. Non-conformance in any SFR or SAR results in denial, emphasizing rigorous, evidence-based validation over self-attestation.

Costs and Timeframes

The costs associated with Common Criteria certification encompass laboratory evaluation fees, consulting services, internal development and documentation efforts, and potential remediation of identified vulnerabilities, often totaling in the hundreds of thousands of U.S. dollars for mid-level assurances. Higher Evaluation Assurance Levels (EALs) demand more rigorous testing, documentation, and evidence provision, escalating expenses; for instance, EAL4 evaluations frequently range from $300,000 to $750,000, reflecting increased scrutiny of design and implementation. These figures exclude indirect costs such as delayed market entry or opportunity costs from diverted engineering effort, which can amplify the financial burden for vendors pursuing certification. Timeframes for certification similarly scale with EAL and product scope, typically spanning 6 to 24 months from initial preparation to issuance of the certificate. Preparation phases, including Security Target development and evidence assembly, can consume 3 to 6 months, followed by formal evaluation lasting 4 to 12 months depending on the level; EAL2 processes often conclude in 4 to 6 months, EAL3 in 6 to 9 months, and EAL4 in 7 to 12 months or longer due to methodical design reviews and penetration testing. Delays frequently arise from iterative flaw remediation or coordination with certification bodies, with overall timelines averaging one year for many conformance claims.
EAL Level | Typical Timeframe | Typical Cost Range (USD)
EAL2 | 4-6 months | $80,000-$150,000
EAL3 | 6-9 months | $120,000-$200,000
EAL4 | 7-12 months | $175,000-$750,000
These estimates derive from industry evaluations and may vary by product complexity, laboratory efficiency, and national scheme, with U.S. National Information Assurance Partnership (NIAP) validations often aligning to similar durations but emphasizing operational environment specifics. Vendors report that upfront investment in reusable protection profiles can mitigate repeat costs for product variants, though certification remains a resource-intensive barrier primarily feasible for enterprises targeting government markets.
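The table's figures can be encoded for rough planning comparisons; the numbers below simply restate the article's industry estimates and carry the same caveats:

```python
# Cost/time ranges from the table above, encoded for quick comparison.
# Figures are the article's industry estimates, not authoritative quotes.
ESTIMATES = {
    "EAL2": {"months": (4, 6),  "cost_usd": (80_000, 150_000)},
    "EAL3": {"months": (6, 9),  "cost_usd": (120_000, 200_000)},
    "EAL4": {"months": (7, 12), "cost_usd": (175_000, 750_000)},
}

def midpoint(lo_hi: tuple[int, int]) -> float:
    """Midpoint of an estimate range, for a single planning figure."""
    lo, hi = lo_hi
    return (lo + hi) / 2

eal4 = ESTIMATES["EAL4"]
assert midpoint(eal4["months"]) == 9.5
assert midpoint(eal4["cost_usd"]) == 462_500.0
```

The widening cost range at EAL4 reflects the table's note that methodical design review and penetration testing dominate the upper bound.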

Efficacy and Criticisms

Claimed Benefits of Certification

Certification under the Common Criteria framework is claimed to provide vendors and users with assurance that IT products have undergone a standardized, rigorous evaluation of their security functionality and assurance measures, fostering confidence in their ability to mitigate specified threats. This process, aligned with ISO/IEC 15408, involves independent testing by accredited laboratories against predefined Protection Profiles or Security Targets, purportedly ensuring consistent and repeatable assessments across diverse products like operating systems, cryptographic modules, and network devices. Proponents argue that such evaluations reveal vulnerabilities through design review, vulnerability analysis, and penetration testing, thereby enabling informed decisions in high-stakes environments. A primary asserted advantage is the facilitation of international recognition via the Common Criteria Recognition Arrangement (CCRA), established in 1999 and encompassing over 30 participating countries as of 2023, which mandates mutual acceptance of certificates up to Evaluation Assurance Level 4 (or higher for specific schemes). This arrangement eliminates the need for duplicate certifications in signatory nations, reducing evaluation redundancies and trade barriers for secure ICT products, as evidenced by streamlined exports of certified hardware and software to government and regulated sectors worldwide. For instance, a product certified in the United States by the National Information Assurance Partnership (NIAP) is recognized in Europe under schemes like those of the German Federal Office for Information Security (BSI), purportedly accelerating global deployment while maintaining baseline security equivalence. Higher EAL certifications, ranging from EAL1 (functionally tested) to EAL7 (formally verified design and tested), are said to deliver escalating degrees of scrutiny, with EAL4+ often cited as sufficient for moderate-risk applications yet indicative of enhanced reliability through comprehensive vulnerability analysis and penetration testing.
Advocates, including evaluation laboratories, claim this builds stakeholder trust by signaling superior development practices and flaw detection, distinguishing certified products in competitive bids for defense, government, and critical-infrastructure contracts. Compliance with Common Criteria is frequently mandated by regulations such as the U.S. Department of Defense's Approved Products List or the European Union's cybersecurity requirements, allegedly lowering acquisition risks and supporting long-term operational security. The framework is further promoted for transparency in security claims, as the Security Target mandates detailed public documentation of threats addressed, countermeasures implemented, and residual risks, purportedly minimizing misunderstandings between developers, evaluators, and end-users. This structured approach is asserted to balance assurance with practical costs, aiding procurement evaluations without overemphasizing unattainable perfection.

Empirical Limitations and Failures

Despite rigorous evaluation processes, products certified under Common Criteria have repeatedly exhibited critical vulnerabilities post-certification, undermining claims of enhanced security assurance. For instance, the ROCA vulnerability, disclosed in October 2017, affected RSA key generation in Infineon Technologies smart cards certified to EAL4+ and higher levels, enabling attackers to recover private keys from public keys due to flawed prime number generation algorithms present since at least 2012. Similarly, the Minerva attack, identified in 2019, exploited timing side-channels in ECDSA implementations on programmable smart cards certified under Common Criteria, allowing nonce recovery and private key extraction through loop-bound leaks that evaded standard testing. The TPM-Fail vulnerabilities, presented at USENIX Security 2020, demonstrated timing and lattice attacks on Trusted Platform Modules, including Infineon SLB9670 chips certified to EAL4+, which permitted extraction of hundreds of RSA bits and full key recovery in practical scenarios despite evaluations intended to detect such leakages. Large-scale empirical analyses of certified products reveal persistent gaps, with no demonstrable correlation between higher Evaluation Assurance Levels (EALs) and reduced vulnerability incidence. A 2024 study mapping vulnerability-database entries to over 3,000 Common Criteria certificates found that even high-assurance products (EAL4+) hosted critical flaws like private key recoveries, attributing this to certification's focus on fixed configurations that fail to capture real-world deployments or evolving threats. Evaluation processes permit known vulnerabilities if deemed non-exploitable in the evaluated setup, yet these often propagate via component reuse across products, transmitting flaws without reevaluation. U.S.
Government Accountability Office assessments have highlighted a broader lack of empirical evidence for Common Criteria's effectiveness in bolstering IT security, noting in 2006 the absence of performance metrics or data linking certifications to reduced risks in federal systems. Industry surveys echo this, with vendors and experts reporting that the framework provides inadequate vulnerability detection, as evidenced by post-certification discoveries in heavily scrutinized modules, while its high costs, often exceeding hundreds of thousands of dollars and spanning 12-24 months, yield marginal assurance gains. These failures stem from methodological constraints, including evaluator incentives tied to vendors and insufficient coverage of dynamic attack vectors like side-channels, rendering certifications more indicative of compliance than of causal security improvements.
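The ROCA case above illustrates why post-certification detection matters: vulnerable Infineon keys are identifiable from the public modulus alone, because their moduli have residues lying in the multiplicative subgroups generated by 65537 modulo many small primes. A simplified sketch of that published fingerprint test, using only a handful of primes rather than the full set, so it is illustrative and not a reliable detector:

```python
def subgroup_65537(p: int) -> set[int]:
    """Multiplicative subgroup generated by 65537 modulo prime p."""
    elems, x = set(), 1
    while x not in elems:
        elems.add(x)
        x = (x * 65537) % p
    return elems

# Small subset of the small primes used by the published ROCA detector;
# the real test uses many more, making accidental matches vanishingly rare.
PRIMES = [11, 13, 17, 19, 37, 53, 61, 71]
SUBGROUPS = {p: subgroup_65537(p) for p in PRIMES}

def has_roca_fingerprint(n: int) -> bool:
    """True if n's residue mod every test prime lies in the generator's
    subgroup, the structural signature of the flawed prime generation."""
    return all(n % p in SUBGROUPS[p] for p in PRIMES)

# 65537 is -1 mod 11, so its subgroup mod 11 is just {1, 10}
assert subgroup_65537(11) == {1, 10}
```

A random modulus almost always fails at least one subgroup test, while every affected Infineon modulus passes all of them; note this check evaded EAL4+ evaluation because the flaw lay in the key-generation structure, not in any evaluated interface.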

Expert Critiques and Debates

Experts have long critiqued the Common Criteria (CC) framework for its bureaucratic inefficiencies and disproportionate costs relative to security gains. Jim Yuill, in a 2008 survey of CC literature, documented complaints from vendors and researchers about the process becoming a "paperwork exercise," with evaluations generating hundreds of pages of documentation, such as over 800 pages for one project, diverting resources from actual security improvements. Certification timelines typically span 10 to 24 months and incur costs in the mid-six figures to millions of dollars, excluding smaller vendors and clashing with agile product development cycles where requirements evolve post-evaluation. A U.S. Government Accountability Office (GAO) report cited by Yuill found insufficient evidence that CC enhances government IT security in a cost-effective manner, highlighting systemic flaws in its evidence base. Debates center on CC's assurance levels (EALs), which critics argue provide illusory confidence rather than rigorous flaw detection. One analysis using structured argumentation identified 121 issues in CC's application to software assurance, including ambiguity in requirements and a failure to mandate evidence that meaningfully boosts confidence, even after multiple reviews. Yuill noted that CC evaluations focus narrowly on specific product configurations and ignore operational environments or non-IT factors like physical and personnel security, limiting real-world applicability; vendors have been accused of misleading claims implying blanket government endorsement. Symantec officials, for instance, asserted that protection profiles offer "no confidence or assurance" against emerging threats, as the framework assumes static threats predefined in evaluations. Empirical evidence underscores these limitations, with certified products repeatedly exhibiting critical vulnerabilities post-evaluation.
A 2024 study of CC certifications revealed persistent flaws in certified hardware, including private key recovery attacks like ROCA, Minerva, and TPM-Fail, indicating that the process does not preclude exploitable weaknesses due to its snapshot nature: it evaluates fixed versions and configurations while overlooking deviations in deployment. Experts debate whether CC primarily serves compliance for government procurement rather than causal security enhancement, as non-certified products sometimes demonstrate fewer vulnerabilities in practice, per observational data; proponents counter that it standardizes baseline scrutiny, but critics like Yuill advocate reforms such as modular re-evaluations to address lifecycle mismatches without overhauling the core methodology. These tensions reflect broader causal-realism concerns: CC's document-heavy approach may incentivize superficial adherence over first-principles security engineering, yet its international mutual recognition sustains its role despite alternatives gaining traction.

Alternatives and Comparisons

National Evaluation Schemes

National evaluation schemes are government-administered programs that evaluate and certify the security of IT products and systems within their jurisdictions, predominantly utilizing the Common Criteria (CC) framework to ensure consistency and mutual recognition under the CCRA. As of 2023, the CCRA includes 38 participating nations, with 18 authorizing members responsible for conducting evaluations and issuing certificates valid up to Evaluation Assurance Level 4 (or higher in some cases), while 20 consuming members recognize these certificates without performing independent evaluations. These schemes allow countries to adapt CC methodologies to domestic priorities, such as specific protection profiles for national infrastructure, but variations in laboratory accreditation, evaluation depth, and processing times can influence vendor choices and procurement preferences. The following table summarizes the 18 authorizing schemes:
Country | Scheme Name | Acronym/Operator
Australia | Australian Information Security Evaluation Program | AISEP/ACSC
Canada | Canadian Common Criteria Scheme | CCCS
France | Agence Nationale de la Sécurité des Systèmes d'Information | ANSSI
Germany | Bundesamt für Sicherheit in der Informationstechnik | BSI
India | Indian Common Criteria Certification Scheme | IC3S
Italy | Organismo di Certificazione della Sicurezza Informatica | OCSI
Japan | Japan IT Security Evaluation and Certification Scheme | JISEC
Malaysia | CyberSecurity Malaysia | MyCC
Netherlands | Netherlands Scheme for Certification in the Area of IT Security | NSCIB/TrustCB
Norway | SERTIT | SERTIT/NSM
Poland | NASK National Research Institute | NASK
Qatar | Qatar Common Criteria Scheme | QCCS/NCSA
Republic of Korea | IT Security Certification Center | ITSCC/NSRI
Singapore | Cyber Security Agency of Singapore | CSA
Spain | Organismo de Certificación de la Seguridad de las Tecnologías de la Información | CCN
Sweden | Swedish Certification Body for IT Security | CSEC/FMV
Turkey | TSE Common Criteria Certification Scheme | TSE
United States | National Information Assurance Partnership | NIAP/CCEVS
Prominent examples illustrate scheme operations. The United States' NIAP, established in 1999 under the Department of Defense, validates products through accredited Common Criteria Testing Laboratories (CCTLs) and mandates CC certification for IT products used in national security systems, with over 500 validations listed as of 2023. Germany's BSI scheme, operational since 1999, emphasizes high-assurance evaluations (often EAL4+ or above) and has certified thousands of products, including against custom national requirements for smart meters and automotive systems. France's ANSSI, via its certification center, focuses on operational resilience and has issued over 200 certificates since 2000, prioritizing evaluations for critical sectors like defense. These schemes provide alternatives to purely international processes by incorporating local oversight, though mutual recognition limits fragmentation; non-CCRA nations may rely on bilateral agreements or independent national standards, such as the UK's post-Brexit NCSC evaluations, which are aligned with but not fully under the CCRA. Empirical data from scheme reports indicate evaluation durations averaging 6-18 months, with costs varying by assurance level, underscoring their role in balancing global interoperability with sovereign control.

Commercial and Emerging Methods

Commercial methods for IT security assurance frequently leverage industry-specific standards evaluated by private-sector laboratories and certification bodies, bypassing the government-mandated rigor of Common Criteria. For instance, the PCI Data Security Standard (PCI DSS) employs qualified security assessors, commercial auditors approved by the PCI Security Standards Council, to validate controls for cardholder data environments, emphasizing encryption, access controls, and monitoring through on-site or remote assessments rather than formal assurance levels. This approach prioritizes operational compliance and annual revalidation, typically completing in months versus CC's multi-year timelines for equivalent scrutiny. Similarly, ISO/IEC 27001 certifications for information security management systems are issued by accredited commercial auditors worldwide, focusing on organizational processes like risk treatment and continual improvement, which provide broader but less product-specific assurance than CC's technical evaluations. In industrial sectors, ISA/IEC 62443 serves as a prominent commercial alternative, defining security requirements for automation and control systems across 14 substandards that address components, systems, and operations. Evaluations under this framework, such as the ISASecure conformance program, are conducted by independent commercial labs like UL Solutions, incorporating risk assessments, secure development lifecycles, and capability maturity modeling tailored to operational technology. Distinct from CC's product-centric, high-assurance focus on functional and assurance requirements per ISO/IEC 15408, ISA/IEC 62443 enables holistic, lifecycle-based certifications with mutual recognition via IECEE schemes, making it more adaptable for commercial and industrial operators where downtime costs outweigh exhaustive lab testing. Emerging methods address gaps in CC's applicability to fast-evolving domains like IoT, exemplified by the Security Evaluation Standard for IoT Platforms (SESIP), a GlobalPlatform specification released in version 1.0 around 2021.
SESIP adapts CC-derived security functional requirements into reusable profiles for platforms and components, defining three assurance levels (Method 1 for basic, up to Method 3 for high) that support compositionality, allowing certified elements to be integrated without full re-evaluation. This reduces evaluation costs by up to 50-70% for complex IoT ecosystems, per industry analyses, by emphasizing developer-accessible documentation and alignment with sector regulations like EN 303 645 or ISO/SAE 21434, while labs such as SGS offer SESIP certifications as a complement or faster path to market entry. Adoption has grown through partnerships with schemes like PSA Certified, enabling scalable assurance for supply chains where CC's granularity proves inefficient. These commercial and emerging approaches, while varying in depth, often demonstrate empirical advantages in speed and cost, e.g., ISA/IEC 62443 certifications averaging 6-12 months versus CC EAL4's 18-24 months, but critics note they may underemphasize the adversarial robustness testing central to CC, relying instead on self-reported evidence or lighter audits.
