Authenticator
An authenticator is a means used to confirm a user's identity,[1][2] that is, to perform digital authentication. A person authenticates to a computer system or application by demonstrating that he or she has possession and control of an authenticator.[3][4] In the simplest case, the authenticator is a common password.
Using the terminology of the NIST Digital Identity Guidelines,[3] the party to be authenticated is called the claimant while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates possession and control of one or more authenticators to the verifier through an established authentication protocol, the verifier is able to infer the claimant's identity.
Classification
Authenticators may be characterized in terms of secrets, factors, and physical forms.
Authenticator secrets
Every authenticator is associated with at least one secret that the claimant uses to demonstrate possession and control of the authenticator. Since an attacker could use this secret to impersonate the user, an authenticator secret must be protected from theft or loss.
The type of secret is an important characteristic of the authenticator. There are three basic types of authenticator secret: a memorized secret and two types of cryptographic keys, either a symmetric key or a private key.
Memorized secret
A memorized secret is intended to be memorized by the user. A well-known example of a memorized secret is the common password, also called a passcode, a passphrase, or a personal identification number (PIN).
An authenticator secret known to both the claimant and the verifier is called a shared secret. A memorized secret may or may not be shared. A symmetric key is shared by definition. A private key is not shared.
An important type of secret that is both memorized and shared is the password. In the special case of a password, the authenticator is the secret.
Cryptographic key
A cryptographic authenticator is one that uses a cryptographic key. Depending on the key material, a cryptographic authenticator may use symmetric-key cryptography or public-key cryptography. Both avoid memorized secrets, and in the case of public-key cryptography, there are no shared secrets either, which is an important distinction.
Examples of cryptographic authenticators include OATH authenticators and FIDO authenticators. The name OATH is an acronym from the words "Open AuTHentication" while FIDO stands for Fast IDentity Online. Both are the results of an industry-wide collaboration to develop an open reference architecture using open standards to promote the adoption of strong authentication.
By way of counterexample, a password authenticator is not a cryptographic authenticator. See the Examples section below for details.
Symmetric key
A symmetric key is a shared secret used to perform symmetric-key cryptography. The claimant stores their copy of the shared key in a dedicated hardware-based authenticator or a software-based authenticator implemented on a smartphone. The verifier holds a copy of the symmetric key.
Public-private key pair
A public-private key pair is used to perform public-key cryptography. The public key is known to (and trusted by) the verifier while the corresponding private key is bound securely to the authenticator. In the case of a dedicated hardware-based authenticator, the private key never leaves the confines of the authenticator.
Authenticator factors and forms
An authenticator is something unique or distinctive to a user (something that one has) that may be activated by either a PIN (something that one knows) or a biometric (something that is unique to oneself). An authenticator that provides only one of these factors is called a single-factor authenticator whereas a multi-factor authenticator incorporates two or more factors. A multi-factor authenticator is one way to achieve multi-factor authentication. A combination of two or more single-factor authenticators is not a multi-factor authenticator, yet may achieve multi-factor authentication under certain conditions.
Authenticators may take a variety of physical forms (except for a memorized secret, which is intangible). One can, for example, hold an authenticator in one's hand or wear one on the face, wrist, or finger.[5][6][7]
It is convenient to describe an authenticator in terms of its hardware and software components. An authenticator is hardware-based or software-based depending on whether the secret is stored in hardware or software, respectively.
An important type of hardware-based authenticator is called a security key,[8] also called a security token (not to be confused with access tokens, session tokens, or other types of security tokens). A security key stores its secret in hardware, which prevents the secret from being exported. A security key is also resistant to malware since the secret is at no time accessible to software running on the host machine.
A software-based authenticator (sometimes called a software token) may be implemented on a general-purpose electronic device such as a laptop, a tablet computer, or a smartphone. For example, a software-based authenticator implemented as a mobile app on the claimant's smartphone is a type of phone-based authenticator. To prevent access to the secret, a software-based authenticator may use a processor's trusted execution environment or a Trusted Platform Module (TPM) on the client device.
A platform authenticator is built into a particular client device platform, that is, it is implemented on device. In contrast, a roaming authenticator is a cross-platform authenticator that is implemented off device. A roaming authenticator connects to a device platform via a transport protocol such as USB.
Examples
The following sections describe narrow classes of authenticators. For a more comprehensive classification, see the NIST Digital Identity Guidelines.[9]
Single-factor authenticators
To use an authenticator, the claimant must explicitly indicate their intent to authenticate. For example, each of the following gestures is sufficient to establish intent:
- The claimant types a password into a password field, or
- The claimant places their finger on a fingerprint reader, or
- The claimant presses a button to indicate approval
The latter is called a test of user presence (TUP). To activate a single-factor authenticator (something that one has), the claimant may be required to perform a TUP, which avoids unintended operation of the authenticator.
A password is a secret that is intended to be memorized by the claimant and shared with the verifier. Password authentication is the process whereby the claimant demonstrates knowledge of the password by transmitting it over the network to the verifier. If the transmitted password agrees with the previously shared secret, user authentication is successful.
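The verifier's side of this comparison can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation: the function names and the PBKDF2 iteration count are illustrative choices. The point is that the verifier stores only a salted, iterated hash of the shared secret and compares in constant time.

```python
import hashlib
import hmac
import os

# Illustrative work factor; real deployments tune this to their hardware.
ITERATIONS = 100_000

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a salted, iterated hash of the shared secret, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = enroll("correct horse battery staple")
assert verify("correct horse battery staple", salt, stored)
assert not verify("wrong guess", salt, stored)
```

Even with this hygiene on the verifier's side, the password itself still transits the network, which is the fundamental weakness the cryptographic authenticators below avoid.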
OATH OTP
One-time passwords (OTPs) have been used since the 1980s.[citation needed] In 2004, an Open Authentication Reference Architecture for the secure generation of OTPs was announced at the annual RSA Conference.[10][11] The Initiative for Open Authentication (OATH) launched a year later.[citation needed] Two IETF standards grew out of this work, the HMAC-based One-time Password (HOTP) algorithm and the Time-based One-time Password (TOTP) algorithm specified by RFC 4226 and RFC 6238, respectively. By OATH OTP, we mean either HOTP or TOTP. OATH certifies conformance with the HOTP and TOTP standards.[12]
A traditional password (something that one knows) is often combined with a one-time password (something that one has) to provide two-factor authentication.[13] Both the password and the OTP are transmitted over the network to the verifier. If the password agrees with the previously shared secret, and the verifier can confirm the value of the OTP, user authentication is successful.
One-time passwords are generated on demand by a dedicated OATH OTP authenticator that encapsulates a secret that was previously shared with the verifier. Using the authenticator, the claimant generates an OTP using a cryptographic method. The verifier also generates an OTP using the same cryptographic method. If the two OTP values match, the verifier can conclude that the claimant possesses the shared secret.
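The cryptographic method both parties use is specified by the cited RFCs. The sketch below follows RFC 4226's HOTP construction (HMAC-SHA-1 over an 8-byte counter, then dynamic truncation) with TOTP substituting a time-step counter per RFC 6238; the final assertion checks the first test vector from RFC 4226 Appendix D.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA-1 over an 8-byte big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte selects the 4-byte window
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP where the counter is the current 30-second time window."""
    return hotp(key, int(time.time()) // step, digits)

# First test vector from RFC 4226 Appendix D (key "12345678901234567890"):
assert hotp(b"12345678901234567890", 0) == "755224"
```

Because both sides compute the code from the same shared key, the verifier must protect its copy of the secret, which is the structural difference from the public-key schemes discussed later.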
A well-known example of an OATH authenticator is Google Authenticator,[14] a phone-based authenticator that implements both HOTP and TOTP.
Mobile Push
A mobile push authenticator is essentially a native app running on the claimant's mobile phone. The app uses public-key cryptography to respond to push notifications. In other words, a mobile push authenticator is a single-factor cryptographic software authenticator. A mobile push authenticator (something that one has) is usually combined with a password (something that one knows) to provide two-factor authentication. Unlike one-time passwords, mobile push does not require a shared secret beyond the password.
After the claimant authenticates with a password, the verifier makes an out-of-band authentication request to a trusted third party that manages a public-key infrastructure on behalf of the verifier. The trusted third party sends a push notification to the claimant's mobile phone. The claimant demonstrates possession and control of the authenticator by pressing a button in the user interface, after which the authenticator responds with a digitally signed assertion. The trusted third party verifies the signature on the assertion and returns an authentication response to the verifier.
The proprietary mobile push authentication protocol runs on an out-of-band secondary channel, which provides flexible deployment options. Since the protocol requires an open network path to the claimant's mobile phone, if no such path is available (due to network issues, for example), the authentication process cannot proceed.[13]
FIDO U2F
A FIDO Universal 2nd Factor (U2F) authenticator (something that one has) is a single-factor cryptographic authenticator that is intended to be used in conjunction with an ordinary web password. Since the authenticator relies on public-key cryptography, U2F does not require an additional shared secret beyond the password.
To access a U2F authenticator, the claimant is required to perform a test of user presence (TUP), which helps prevent unauthorized access to the authenticator's functionality. In practice, a TUP consists of a simple button push.
A U2F authenticator interoperates with a conforming web user agent that implements the U2F JavaScript API.[15] A U2F authenticator necessarily implements the CTAP1/U2F protocol, one of the two protocols specified in the FIDO Client to Authenticator Protocol.[16]
Unlike mobile push authentication, the U2F authentication protocol runs entirely on the front channel. Two round trips are required. The first round trip is ordinary password authentication. After the claimant authenticates with a password, the verifier sends a challenge to a conforming browser, which communicates with the U2F authenticator via a custom JavaScript API. After the claimant performs the TUP, the authenticator signs the challenge and returns the signed assertion to the verifier via the browser.
Multi-factor authenticators
To use a multi-factor authenticator, the claimant performs full user verification. The multi-factor authenticator (something that one has) is activated by a PIN (something that one knows), a biometric (something that is unique to oneself; e.g., fingerprint, face, or voice recognition), or some other verification technique.[3]
ATM card
To withdraw cash from an automated teller machine (ATM), a bank customer inserts an ATM card into a cash machine and types a Personal Identification Number (PIN). The input PIN is compared to the PIN stored on the card's chip. If the two match, the ATM withdrawal can proceed.
Note that an ATM withdrawal involves a memorized secret (i.e., a PIN) but the true value of the secret is not known to the ATM in advance. The machine blindly passes the input PIN to the card, which compares the customer's input to the secret PIN stored on the card's chip. If the two match, the card reports success to the ATM and the transaction continues.
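The division of labor can be modeled with a deliberately simplified sketch. This is a toy, not the EMV protocol, and the class and method names are illustrative; it only shows that the machine forwards the input and learns a match/no-match verdict, never the secret itself.

```python
import hmac

class ChipCard:
    """Toy model: the chip holds the reference PIN and reports only match/no-match."""
    def __init__(self, pin: str):
        self._pin = pin  # never leaves the card

    def check_pin(self, candidate: str) -> bool:
        # Constant-time comparison performed on the card, not the ATM.
        return hmac.compare_digest(self._pin, candidate)

class ATM:
    def withdraw(self, card: ChipCard, entered_pin: str) -> str:
        # The ATM blindly forwards the input; it never sees the true PIN.
        return "dispense cash" if card.check_pin(entered_pin) else "decline"

assert ATM().withdraw(ChipCard("4912"), "4912") == "dispense cash"
assert ATM().withdraw(ChipCard("4912"), "0000") == "decline"
```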
An ATM card is an example of a multi-factor authenticator. The card itself is something that one has while the PIN stored on the card's chip is presumably something that one knows. Presenting the card to the ATM and demonstrating knowledge of the PIN is a kind of multi-factor authentication.
Secure Shell
Secure Shell (SSH) is a client-server protocol that uses public-key cryptography to create a secure channel over the network. In contrast to a traditional password, an SSH key is a cryptographic authenticator. The primary authenticator secret is the SSH private key, which is used by the client to digitally sign a message. The corresponding public key is used by the server to verify the message signature, which confirms that the claimant has possession and control of the private key.
To avoid theft, the SSH private key (something that one has) may be encrypted using a passphrase (something that one knows). To initiate a two-factor authentication process, the claimant supplies the passphrase to the client system.
Like a password, the SSH passphrase is a memorized secret but that is where the similarity ends. Whereas a password is a shared secret that is transmitted over the network, the SSH passphrase is not shared, and moreover, use of the passphrase is strictly confined to the client system. Authentication via SSH is an example of passwordless authentication since it avoids the transmission of a shared secret over the network. In fact, SSH authentication does not require a shared secret at all.
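The possession proof at the heart of this exchange can be illustrated with a deliberately simplified sketch. The code below uses textbook RSA with tiny primes and no padding, purely to show the sign/verify round trip over a server challenge; real SSH keys use vetted algorithms (e.g., Ed25519, or RSA with proper padding and key sizes).

```python
import hashlib

# Toy textbook RSA (tiny primes, no padding) for illustration only.
p, q, e = 61, 53, 17
n = p * q                           # public modulus, known to the server
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, stays on the client

def sign(message: bytes) -> int:
    """Only the holder of the private exponent can produce this value."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = b"server nonce 1234"
sig = sign(challenge)
assert verify(challenge, sig)
assert not verify(challenge, sig + 1)  # a forged signature fails
```

Note that nothing secret crosses the wire: the server sees only the challenge and the signature, which is what makes the scheme passwordless in the sense described above.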
FIDO2
The FIDO U2F protocol standard became the starting point for the FIDO2 Project, a joint effort between the World Wide Web Consortium (W3C) and the FIDO Alliance. Project deliverables include the W3C Web Authentication (WebAuthn) standard and the FIDO Client to Authenticator Protocol (CTAP).[17] Together WebAuthn and CTAP provide a strong authentication solution for the web.
A FIDO2 authenticator, also called a WebAuthn authenticator, uses public-key cryptography to interoperate with a WebAuthn client, that is, a conforming web user agent that implements the WebAuthn JavaScript API.[18] The authenticator may be a platform authenticator, a roaming authenticator, or some combination of the two. For example, a FIDO2 authenticator that implements the CTAP2 protocol[16] is a roaming authenticator that communicates with a WebAuthn client via one or more of the following transport options: USB, near-field communication (NFC), or Bluetooth Low Energy (BLE). Concrete examples of FIDO2 platform authenticators include Windows Hello[19] and the Android operating system.[20]
A FIDO2 authenticator may be used in either single-factor mode or multi-factor mode. In single-factor mode, the authenticator is activated by a simple test of user presence (e.g., a button push). In multi-factor mode, the authenticator (something that one has) is activated by either a PIN (something that one knows) or a biometric ("something that is unique to oneself").
Security code
First and foremost, strong authentication begins with multi-factor authentication. The best thing one can do to protect a personal online account is to enable multi-factor authentication.[13][21] There are two ways to achieve multi-factor authentication:
- Use a multi-factor authenticator
- Use a combination of two or more single-factor authenticators
In practice, a common approach is to combine a password authenticator (something that one knows) with some other authenticator (something that one has) such as a cryptographic authenticator.
Generally speaking, a cryptographic authenticator is preferred over an authenticator that does not use cryptographic methods. All else being equal, a cryptographic authenticator that uses public-key cryptography is better than one that uses symmetric-key cryptography since the latter requires shared keys (which may be stolen or misused).
Again all else being equal, a hardware-based authenticator is better than a software-based authenticator since the authenticator secret is presumably better protected in hardware. This preference is reflected in the NIST requirements outlined in the next section.
NIST authenticator assurance levels
NIST defines three levels of assurance with respect to authenticators. The highest authenticator assurance level (AAL3) requires multi-factor authentication using either a multi-factor authenticator or an appropriate combination of single-factor authenticators. At AAL3, at least one of the authenticators must be a cryptographic hardware-based authenticator. Given these basic requirements, possible authenticator combinations used at AAL3 include:
- A multi-factor cryptographic hardware-based authenticator
- A single-factor cryptographic hardware-based authenticator used in conjunction with some other authenticator (such as a password authenticator)
See the NIST Digital Identity Guidelines for further discussion of authenticator assurance levels.[9]
Restricted authenticators
Like authenticator assurance levels, the notion of a restricted authenticator is a NIST concept.[3] The term refers to an authenticator with a demonstrated inability to resist attacks, which puts the reliability of the authenticator in doubt. Federal agencies mitigate the use of a restricted authenticator by offering subscribers an alternative authenticator that is not restricted and by developing a migration plan in the event that a restricted authenticator is prohibited from use at some point in the future.
Currently, the use of the public switched telephone network is restricted by NIST. In particular, the out-of-band transmission of one-time passwords (OTPs) via recorded voice messages or SMS messages is restricted. Moreover, if an agency chooses to use voice- or SMS-based OTPs, that agency must verify that the OTP is being transmitted to a phone and not an IP address since Voice over IP (VoIP) accounts are not routinely protected with multi-factor authentication.[9]
Comparison
It is convenient to use passwords as a basis for comparison since it is widely understood how to use a password.[22] On computer systems, passwords have been used since at least the early 1960s.[23][24] More generally, passwords have been used since ancient times.[25]
In 2012, Bonneau et al. evaluated two decades of proposals to replace passwords by systematically comparing web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security.[26] (The cited technical report is an extended version of the peer-reviewed paper by the same name.[27]) They found that most schemes do better than passwords on security while every scheme does worse than passwords on deployability. In terms of usability, some schemes do better and some schemes do worse than passwords.
Google used the evaluation framework of Bonneau et al. to compare security keys to passwords and one-time passwords.[28] They concluded that security keys are more usable and deployable than one-time passwords, and more secure than both passwords and one-time passwords.
References
- ^ "National Information Assurance (IA) Glossary" (PDF). Committee on National Security Systems. 26 April 2010. Archived (PDF) from the original on 2022-10-09. Retrieved 31 March 2019.
- ^ "Glossary of Telecommunication Terms". Institute for Telecommunication Sciences. 7 August 1996. Retrieved 31 March 2019.
- ^ a b c d Grassi, Paul A.; Garcia, Michael E.; Fenton, James L. (June 2017). "NIST Special Publication 800-63-3: Digital Identity Guidelines". National Institute of Standards and Technology (NIST). doi:10.6028/NIST.SP.800-63-3. Retrieved 5 February 2019.
- ^ Lindemann, Rolf, ed. (11 April 2017). "FIDO Technical Glossary". FIDO Alliance. Retrieved 26 March 2019.
- ^ Bianchi, Andrea; Oakley, Ian (2016). "Wearable authentication: Trends and opportunities" (PDF). It - Information Technology. 58 (5): 255–262. doi:10.1515/itit-2016-0010. S2CID 12772550. Archived (PDF) from the original on 2022-10-09.
- ^ Stein, Scott (26 July 2018). "Why can't Wear OS smartwatches be security keys too?". CNET. Retrieved 31 March 2019.
- ^ Williams, Brett (27 June 2017). "This smart ring gives you instant mobile payments with beefed up security". Mashable. Retrieved 31 March 2019.
- ^ Shikiar, Andrew (7 December 2016). "Case Study: Google Security Keys Work". FIDO Alliance. FIDO Alliance. Retrieved 26 March 2019.
- ^ a b c Grassi, Paul A.; Fenton, James L.; Newton, Elaine M.; Perlner, Ray A.; Regenscheid, Andrew R.; Burr, William E.; Richer, Justin P. (2017). "NIST Special Publication 800-63B: Digital Identity Guidelines: Authentication and Lifecycle Management". National Institute of Standards and Technology (NIST). doi:10.6028/NIST.SP.800-63b. Retrieved 5 February 2019.
- ^ Kucan, Berislav (24 February 2004). "Open Authentication Reference Architecture Announced". Help Net Security. Retrieved 26 March 2019.
- ^ "OATH Specifications and Technical Resources". Initiative for Open Authentication. Retrieved 26 March 2019.
- ^ "OATH Certification". The Initiative for Open Authentication (OATH). Retrieved 3 February 2019.
- ^ a b c Hoffman-Andrews, Jacob; Gebhart, Gennie (22 September 2017). "A Guide to Common Types of Two-Factor Authentication on the Web". Electronic Frontier Foundation. Retrieved 26 March 2019.
- ^ "Google Authenticator". GitHub. Retrieved 3 February 2019.
- ^ Balfanz, Dirk; Birgisson, Arnar; Lang, Juan, eds. (11 April 2017). "FIDO U2F JavaScript API". FIDO Alliance. Retrieved 22 March 2019.
- ^ a b Brand, Christiaan; Czeskis, Alexei; Ehrensvärd, Jakob; Jones, Michael B.; Kumar, Akshay; Lindemann, Rolf; Powers, Adam; Verrept, Johan, eds. (30 January 2019). "Client to Authenticator Protocol (CTAP)". FIDO Alliance. Retrieved 22 March 2019.
- ^ "FIDO2: Moving the World Beyond Passwords". FIDO Alliance. Retrieved 30 January 2019.
- ^ Balfanz, Dirk; Czeskis, Alexei; Hodges, Jeff; Jones, J.C.; Jones, Michael B.; Kumar, Akshay; Liao, Angelo; Lindemann, Rolf; Lundberg, Emil (eds.). "Web Authentication: An API for accessing Public Key Credentials Level 1". World Wide Web Consortium (W3C). Retrieved 30 January 2019.
- ^ Simons, Alex (November 20, 2018). "Secure password-less sign-in for your Microsoft account using a security key or Windows Hello". Microsoft. Retrieved 6 March 2019.
- ^ "Android Now FIDO2 Certified, Accelerating Global Migration Beyond Passwords". BARCELONA: FIDO Alliance. February 25, 2019. Retrieved 6 March 2019.
- ^ "Two-factor authentication (2FA); new guidance from the NCSC". National Cyber Security Centre (NCSC). 8 Aug 2018.
- ^ Hunt, Troy (5 November 2018). "Here's Why [Insert Thing Here] Is Not a Password Killer". Retrieved 24 March 2019.
- ^ McMillan, Robert (27 January 2012). "The World's First Computer Password? It Was Useless Too". Wired magazine. Retrieved 22 March 2019.
- ^ Hunt, Troy (26 July 2017). "Passwords Evolved: Authentication Guidance for the Modern Era". Retrieved 22 March 2019.
- ^ Malempati, Sreelatha; Mogalla, Shashi (2011-07-31). "An Ancient Indian Board Game as a Tool for Authentication" (PDF). International Journal of Network Security & Its Applications. 3 (4): 154–163. doi:10.5121/ijnsa.2011.3414.
- ^ Bonneau, Joseph; Herley, Cormac; Oorschot, Paul C. van; Stajano, Frank (2012). "The Quest to Replace Passwords: A Framework for Comparative Evaluation of Web Authentication Schemes". Technical Report - University of Cambridge. Computer Laboratory. Cambridge, UK: University of Cambridge Computer Laboratory. doi:10.48456/tr-817. ISSN 1476-2986. Retrieved 22 March 2019.
- ^ Bonneau, Joseph; Herley, Cormac; Oorschot, Paul C. van; Stajano, Frank (2012). The Quest to Replace Passwords: A Framework for Comparative Evaluation of Web Authentication Schemes. 2012 IEEE Symposium on Security and Privacy. San Francisco, CA. pp. 553–567. CiteSeerX 10.1.1.473.2241. doi:10.1109/SP.2012.44.
- ^ Lang, Juan; Czeskis, Alexei; Balfanz, Dirk; Schilder, Marius; Srinivas, Sampath (2016). "Security Keys: Practical Cryptographic Second Factors for the Modern Web" (PDF). Financial Cryptography and Data Security 2016. Archived (PDF) from the original on 2022-10-09. Retrieved 26 March 2019.
Fundamentals
Definition and Purpose
An authenticator is a digital or physical object, secret, or biometric trait that serves as a mechanism to prove possession and control of one or more authentication factors, thereby confirming a user's identity in digital systems.[3] According to NIST guidelines, authenticators enable the verification of a subscriber's identity by demonstrating control over these factors during authentication protocols.[4] As of July 2025, NIST's SP 800-63-4 provides the current guidelines, incorporating advancements such as syncable authenticators for multi-device use.[1] The primary purpose of an authenticator is to provide reliable evidence that binds a digital identity to a specific individual, mitigating risks such as impersonation and unauthorized access in applications like online banking, email services, and network systems.[4]
The concept of authenticators has evolved significantly since the introduction of simple passwords in the 1960s, when MIT researcher Fernando Corbató implemented the first password-based system for a time-sharing computer to manage user access among multiple individuals.[5] This marked the shift from physical to digital identity verification, addressing the need for controlled resource sharing in early computing environments. By the late 1980s, authentication systems advanced toward more robust network protocols; a key milestone was the development of Kerberos at MIT's Project Athena during the 1980s (first described in a 1988 paper), which introduced ticket-based authentication using symmetric cryptography to secure client-server interactions without transmitting passwords over the network.[6] Over subsequent decades, the limitations of single passwords, such as vulnerability to guessing and reuse, drove the transition to multi-layered systems incorporating diverse authenticators for enhanced security.
The basic authentication process involving an authenticator typically unfolds in three core steps: first, the user (claimant) submits the authenticator, such as entering a secret or presenting a token, through a secure channel to the verifying system.[3] The verifier then authenticates the submission by comparing it against stored or generated references, such as a hashed secret or time-based code, to confirm validity.[4] Upon successful verification, the system establishes a session, granting the user access while potentially enforcing ongoing protections like session timeouts.[4]
Authentication Factors
Authentication factors represent the foundational elements employed to confirm a user's identity during the authentication process, serving as the building blocks for both single-factor and multi-factor systems. These factors are classified based on the distinct attributes they leverage—either information known to the user, physical objects in their possession, or inherent personal characteristics—ensuring that authentication mechanisms can be tailored to varying security requirements. By combining or selecting from these categories, systems achieve appropriate levels of assurance, with single-factor authentication relying on one type and multi-factor authentication requiring at least two distinct types to mitigate risks like credential compromise.[7][8]
The first category, known as the knowledge factor or "something you know," involves information that only the legitimate user should possess, such as passwords, personal identification numbers (PINs), or security questions. This factor relies on the user's memory and secrecy maintenance, making it susceptible to phishing or guessing attacks if not managed securely. It forms the basis for many traditional login systems but is rarely used in isolation for high-security contexts due to its vulnerabilities.[8][1]
The possession factor, or "something you have," requires the user to present a physical or digital item under their control, such as hardware tokens, smart cards, or one-time password generators. These authenticators verify ownership through unique identifiers or cryptographic proofs, providing resistance against remote impersonation but a potential weakness if the item is lost or stolen.
Possession-based factors are integral to elevating security in scenarios like remote access.[7][1]
The inherence factor, referred to as "something you are," utilizes the user's intrinsic biological or behavioral traits for verification, including physiological biometrics like fingerprints, facial recognition, or iris scans, as well as behavioral biometrics such as gait analysis or keystroke dynamics. These methods offer convenience and difficulty in replication but raise privacy concerns and can be affected by environmental changes or spoofing attempts. Inherence factors are probabilistic in nature, contrasting with the deterministic outcomes of other categories.[8][1]
Emerging hybrid factors blend elements from multiple traditional categories to enhance adaptability and continuous verification, with behavioral biometrics serving as a prominent example by analyzing dynamic patterns like typing rhythm or mouse movements, which can incorporate contextual possession data for more robust authentication. These combinations, while often aligned with inherence, allow for seamless integration in multi-factor setups without requiring explicit user actions.[1][9]
This tripartite classification of factors underpins the design of authentication systems, enabling the prerequisite evaluation of security needs where single-factor approaches suffice for low-risk environments, while multi-factor configurations—mandating distinct factor types—provide layered defenses essential for protecting sensitive digital identities.[7][4]
Classification
Knowledge-Based Authenticators
Knowledge-based authenticators, often categorized as "something you know," are security mechanisms that verify a user's identity through information only the legitimate user is expected to recall and keep secret. These authenticators emphasize the memorization of unique data, making them one of the oldest and most ubiquitous forms of authentication in digital systems.[1]
The primary types of knowledge-based authenticators include memorized secrets, such as static passwords and personal identification numbers (PINs). Passwords are fixed strings chosen by the user, while passphrases consist of longer sequences of words or characters intended for easier memorization yet higher security. Security questions are not permitted as memorized secrets.[1] While symmetric keys may be derived from user-memorized passphrases using password-based key derivation functions (PBKDFs) like PBKDF2, which incorporate a salt and iteration count to enhance security against brute-force attacks, the resulting cryptographic authenticators are classified as possession-based.[10]
Knowledge-based authenticators offer notable strengths, including low cost and ease of deployment, as they require no specialized hardware or infrastructure beyond standard input interfaces. However, their weaknesses are significant: they are highly susceptible to phishing attacks, where users disclose secrets to fraudulent sites; shoulder surfing, in which an observer visually captures the input; and password cracking methods like dictionary attacks, which systematically test common words or patterns from predefined lists.[1][11]
Best practices for implementing knowledge-based authenticators focus on enhancing secrecy and resistance to guessing.
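Guessing resistance is commonly quantified as entropy: a secret of length L drawn uniformly at random from a character set of size C carries about L × log₂(C) bits. A small sketch (the function name is an illustrative choice):

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Entropy of a secret drawn uniformly at random: L * log2(C) bits."""
    return length * math.log2(charset_size)

# An 8-character secret over the 94 printable ASCII characters:
assert round(entropy_bits(94, 8), 1) == 52.4
# A 4-digit PIN offers far less resistance to exhaustive search:
assert round(entropy_bits(10, 4), 1) == 13.3
```

The comparison makes concrete why length requirements matter more than composition rules: each added character multiplies the search space by the charset size.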
Verifiers typically enforce password length requirements, such as a minimum of 15 characters for single-factor authentication and 8 characters for multi-factor authentication, and should accept passwords of at least 64 characters, favoring length over rigid composition rules. Entropy provides a quantitative measure of a secret's strength, calculated as the base-2 logarithm of the number of possible combinations (e.g., for a charset of size C and length L, entropy ≈ L × log₂(C) bits), guiding the design of secrets that offer sufficient unpredictability against exhaustive search.[1]

Possession-Based Authenticators
Possession-based authenticators verify a user's identity by requiring physical control of a tangible object or device that holds a secret key or generates unique authentication data, embodying the "something you have" factor in authentication frameworks.[1] These authenticators rely on proof of possession through cryptographic protocols, ensuring that only the holder of the device can produce valid responses to authentication challenges.[1] Common implementations include hardware tokens, smart cards, mobile devices, and syncable authenticators, each designed to resist unauthorized replication or remote exploitation. Syncable authenticators, such as passkeys, allow cryptographic keys to be exported and synchronized across multiple devices while maintaining security.[1] Hardware tokens, such as USB security keys, function as portable cryptographic devices that plug into a computer or connect via NFC to complete authentication. These keys, compliant with standards like FIDO2, generate public-key cryptography responses to server challenges without exposing private keys, providing phishing-resistant authentication. For instance, YubiKey models support FIDO U2F and FIDO2 protocols, allowing seamless integration with services like Google or Microsoft accounts. Smart cards, exemplified by EMV chip cards used in payment systems, embed microprocessors that store encrypted data and perform on-chip computations for transaction authentication. During use, the card generates dynamic cryptograms verified by the issuer, preventing counterfeit fraud through chip-and-PIN or chip-and-signature methods. Mobile devices serve as possession factors when enrolled in authentication systems, leveraging built-in hardware like secure enclaves to bind secrets to the physical phone.[1] One-time passwords (OTPs) are a key mechanism in possession-based authenticators, generated by devices or apps to provide short-lived codes for verification. 
Event-based OTPs follow the HOTP algorithm, which uses a shared symmetric key and an incrementing counter to produce a 6- to 8-digit code via HMAC-SHA1 hashing, ensuring synchronization between the token and server despite potential event skips.[12] Time-based OTPs, or TOTP, extend this by incorporating the current Unix time divided into 30-second intervals as the dynamic input, also employing HMAC-SHA1 for code generation to enable clock-tolerant validation on the server side.[13] Both HOTP and TOTP rely on the HMAC construction, which applies a cryptographic hash function like SHA-1 to a message authenticated with a secret key, producing a message authentication code resistant to forgery without the key.[14] These OTPs are typically displayed on hardware tokens or generated in apps like Google Authenticator for entry during login. Mobile push notifications enhance possession-based authentication by sending real-time approval requests to a user's enrolled smartphone, requiring physical interaction to confirm. In systems like Duo Security, the app receives an encrypted push via a secure channel, prompting the user to tap "Approve" on their device, which responds with a cryptographic assertion tied to the device's possession.[15] This method combines possession with out-of-band verification, often using protocols like WebAuthn for added security. 
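The HOTP and TOTP constructions described above can be implemented in a few lines. This sketch follows RFC 4226 and RFC 6238 with the common defaults of HMAC-SHA-1, a 30-second step, and six digits:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA-1 over an 8-byte counter, then dynamic truncation."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte selects the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, now=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP with the counter set to floor(unix_time / step)."""
    now = time.time() if now is None else now
    return hotp(key, int(now // step), digits)
```

With the RFC test key `b"12345678901234567890"`, `hotp(key, 0)` yields `"755224"`, matching the published RFC 4226 test vectors.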
Possession-based authenticators excel in resisting remote attacks, such as phishing or credential stuffing, because authentication requires physical access to the device, which cannot be mimicked over the network.[1] Hardware implementations at NIST's Authenticator Assurance Level 3 (AAL3) further bolster this by mandating tamper-resistant designs that protect against key extraction.[1] However, vulnerabilities arise from loss or theft of the authenticator, potentially allowing unauthorized access if not paired with additional factors like knowledge-based elements (e.g., passwords in multi-factor setups).[1] Secure recovery processes, such as re-enrollment with identity proofing, are essential to mitigate these risks, though they introduce user friction and dependency on backup mechanisms.[1]

Inherence-Based Authenticators
Inherence-based authenticators, also known as biometrics, leverage unique physiological or behavioral traits inherent to an individual to verify identity, distinguishing them from knowledge or possession factors by relying on immutable or habitual personal characteristics.[16] These systems measure and compare traits against stored templates during authentication, providing a seamless user experience without the need for passwords or tokens. Common applications include access control in smartphones, border security, and financial services, where biometrics enhance security by binding authentication to the user's body or actions. NIST SP 800-63-4 includes controls to mitigate injection attacks and forged media, such as deepfakes, through requirements for liveness detection in biometric systems.[1][17] Physiological biometrics focus on static physical attributes, such as fingerprints, iris patterns, facial features, and voice patterns. Fingerprint recognition analyzes ridge patterns on the fingertips, often using minutiae-based algorithms that extract endpoint and bifurcation points for matching.[18] Iris scans capture the unique trabecular meshwork in the eye's iris using infrared imaging, while facial recognition employs neural networks to compare key landmarks like distances between eyes and nose.[17] Voice patterns, treated as physiological when focusing on timbre and spectral features, authenticate via waveform analysis. 
Performance is evaluated using metrics like false acceptance rate (FAR), the probability of incorrectly accepting an imposter, and false rejection rate (FRR), the probability of denying a legitimate user; for instance, modern facial systems like Apple Face ID achieve FARs around 1 in 1,000,000, though FRR can vary from 0.2% to 0.5% depending on demographics.[17] Behavioral biometrics, in contrast, monitor dynamic user habits for continuous authentication, analyzing patterns like keystroke dynamics (timing and pressure of key presses), gait analysis (walking stride via accelerometers), and mouse movements (speed, trajectory, and click patterns).[19] These serve as ongoing verifiers rather than one-time checks, detecting anomalies in real-time during sessions, such as deviations in typing rhythm that could indicate unauthorized access.[20] The enrollment process for inherence-based systems involves capturing multiple samples of the trait to create a mathematical template, which represents derived features rather than raw data to protect privacy.[17] For fingerprints, this includes scanning several fingers to generate a minutiae set, while facial enrollment requires high-resolution images (e.g., at least 640x480 pixels) for robust feature extraction. 
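The FAR and FRR metrics described above can be estimated empirically from matcher similarity scores. This is a minimal sketch that assumes higher scores mean a closer match; the score lists are invented for illustration:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor scores accepted (>= threshold).
    FRR: fraction of genuine scores rejected (< threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Raising the threshold trades a lower FAR for a higher FRR
genuine = [0.91, 0.84, 0.77, 0.95, 0.62]   # same-person comparisons (illustrative)
impostor = [0.12, 0.35, 0.51, 0.08, 0.22]  # different-person comparisons (illustrative)
```

At a threshold of 0.5, one of the five impostor scores is accepted (FAR = 0.2) and no genuine score is rejected (FRR = 0.0); raising the threshold to 0.7 drops FAR to 0.0 while FRR rises to 0.2, illustrating the trade-off tuned in deployed systems.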
Matching occurs by comparing a live sample's template against the stored one using algorithms like correlation for iris or deep learning for faces, yielding a similarity score that must exceed a threshold for acceptance.[18] Templates are stored as hashed or encrypted representations, not raw biometric data, to prevent reconstruction; for example, symmetric hash functions convert minutiae into irreversible values for secure database storage.[21] Strengths of inherence-based authenticators include high convenience, as users need only present themselves, and resistance to forgery compared to shared secrets, with multimodal combinations (e.g., face and iris) reducing error rates by up to 31% in identification tasks.[17] However, weaknesses encompass privacy risks from sensitive data collection, vulnerability to spoofing attacks like fake fingerprints or photos (mitigated by liveness detection), and variability due to factors such as aging, which can increase mismatch rates over a decade, or environmental changes affecting traits like voice.[22][17] Demographic biases in algorithms also lead to higher FRRs for certain groups, such as women or older individuals, underscoring the need for equitable testing.[17]

Multi-Factor Authentication
Principles of Multi-Factor Systems
Multi-factor authentication (MFA) systems fundamentally rely on combining two or more distinct authentication factors to verify user identity, providing a higher level of assurance than single-factor methods by mitigating the risk of compromise through any one vector. This core principle embodies the defense-in-depth strategy, layering multiple security controls to create overlapping protections that adversaries must breach sequentially. As defined by the National Institute of Standards and Technology (NIST), MFA achieves this through either a single device or process supplying multiple factors or a combination of separate authenticators from different categories, such as knowledge, possession, and inherence.[8][23] Central to effective MFA is the independence of these factors, where the security of one does not depend on the security of another, ensuring that breaching a single element—like obtaining a password—does not grant access without additional verification. For instance, pairing a memorized secret with a physical token requires an attacker to overcome unrelated barriers, exponentially raising the difficulty of unauthorized entry. This separation of factors is emphasized in security guidelines as essential for maintaining robust protection against targeted attacks.[24][25] Adaptive MFA extends these principles by incorporating risk-based evaluation, dynamically scaling authentication demands according to contextual signals like user location, device familiarity, or transaction sensitivity. In routine, low-risk interactions, basic factors may suffice, but elevated risks prompt additional steps to "step up" verification, balancing security with usability. 
NIST Special Publication 800-53 specifies adaptive authentication mechanisms that adjust strength based on the sensitivity of accessed resources, enabling tailored assurance without uniform rigidity.[26][27] The historical rise of MFA principles traces to the 1990s, with AT&T patenting early two-factor methods in 1995 (granted 1998), though practical adoption surged in the mid-2000s amid escalating data breaches that underscored single-factor vulnerabilities. High-profile incidents in the early 2000s prompted broader implementation as part of evolving security frameworks, including protocols that integrated MFA to counter credential theft.[28][29]

Integration and Implementation
Multi-factor authentication (MFA) systems can be deployed using in-band or out-of-band models, each with distinct advantages and trade-offs in security and usability. In-band deployment involves using the same communication channel or device for both primary and secondary factors, such as generating a time-based one-time password (TOTP) via an authenticator app on the user's mobile device during login. This approach offers low latency since no additional channel is required, enabling near-instant verification after setup, but it introduces risks if the device is compromised, as both factors could be accessed simultaneously.[30][31] In contrast, out-of-band deployment separates the secondary factor into a distinct channel, such as sending a one-time code via SMS or a push notification to a registered mobile device. This model enhances security by preventing a single channel compromise from exposing all factors, making it more resistant to certain phishing or man-in-the-middle attacks, though it may incur higher latency due to network dependencies—SMS delivery can take seconds to minutes, while push notifications are typically near-instant but require an internet connection. Out-of-band methods like push approvals also improve context awareness, allowing users to verify login details like location before approving.[1][31] User experience in MFA deployment often involves balancing security with convenience to minimize friction, which can lead to user resistance or abandonment. Step-up prompts, or risk-based authentication, address this by triggering additional factors only for high-risk activities, such as logins from new devices or locations, rather than every session; this reduces overall prompts while maintaining protection, with reauthentication intervals varying by assurance level (e.g., every 12 hours for moderate-risk access). 
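The step-up logic can be sketched as a small policy function. The signal names and weights here are hypothetical, chosen only to illustrate risk-based escalation, not taken from any published standard:

```python
def required_factors(signals: dict) -> int:
    """Return how many authentication factors to demand for a login attempt.
    Signals and weights are illustrative, not from any published standard."""
    score = (2 * bool(signals.get("new_device"))
             + 1 * bool(signals.get("new_location"))
             + 2 * bool(signals.get("sensitive_action")))
    if score == 0:
        return 1  # routine, recognized context: primary factor only
    if score <= 2:
        return 2  # elevated risk: step up to a second factor
    return 3      # high risk: require an additional verification step
```

A familiar device in a known location yields a single prompt, while a new device performing a sensitive action escalates to three factors.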
Recovery mechanisms further mitigate lockout risks, such as providing users with a set of single-use backup codes during initial setup, which can be printed or stored securely for use when primary factors are unavailable; these codes should be time-limited and revocable to prevent reuse.[1][30] A key pitfall in MFA implementation is the single point of failure when factors are shared across the same device or channel, such as relying solely on a mobile phone for both possession and knowledge elements, which can result in total lockout if the device is lost or compromised. This vulnerability is exacerbated in shared environments where multiple users access the same authenticator. Solutions include adopting hardware-bound keys, which cryptographically tie the authentication factor to a specific physical device, ensuring it cannot be easily duplicated or transferred; these provide higher assurance levels by resisting remote attacks and requiring physical presence for verification.[32][1] Enterprise adoption of MFA illustrates these integration principles effectively, as seen in Google's 2-Step Verification (2SV), launched in 2011 to add a secondary factor to password-based logins. By 2022, Google had auto-enrolled over 150 million personal accounts and mandated 2SV for more than 2 million YouTube creators, resulting in a 50% reduction in successful account compromises among enabled users; as of November 2024, approximately 70% of active Google accounts use 2SV or equivalent MFA, with plans to mandate MFA for all Google Cloud accounts by the end of 2025. This deployment combined out-of-band options like SMS and push with in-band TOTP apps, while incorporating step-up prompts and backup codes to manage user friction and recovery.[33][34][35]

Standards and Protocols
NIST Digital Identity Guidelines
The NIST Special Publication (SP) 800-63 series, titled Digital Identity Guidelines, establishes a comprehensive framework for secure digital identity management, encompassing identity proofing, authentication, federation, and related processes for interactions with government information systems.[36] The series, revised to version 4 and finalized on July 31, 2025, supersedes the 2017 revision (SP 800-63-3, updated in 2020) and addresses evolving threats by introducing risk-based evaluations, enhanced support for remote processes, and stricter controls on vulnerable methods.[37] Specifically, SP 800-63B focuses on authentication and authenticator requirements, while SP 800-63A covers identity proofing, including updates for remote biometrics with anti-spoofing measures to counter deepfakes and injection attacks.[38][1] Central to the guidelines are the Authenticator Assurance Levels (AALs), which define escalating levels of confidence in an authentication event based on the strength and security of the authenticators used.
AAL1 offers basic assurance through single-factor methods, such as memorized secrets (e.g., passwords) or single-factor one-time passwords, suitable for low-risk scenarios where compromise would have limited impact.[39] AAL2 requires multi-factor authentication incorporating a possession-based factor, such as a software or hardware token generating a time-based one-time password or a cryptographic challenge-response, to provide high confidence against unauthorized access.[39] AAL3 demands the highest assurance via multi-factor methods using hardware-based cryptographic authenticators that are resistant to phishing and tampering, ensuring very high confidence in the claimant's control of the authenticator bound to their account.[39] Key requirements emphasize security and usability across levels, with AAL3 mandating tamper-resistant hardware modules (e.g., secure elements) and protocols that prevent phishing, such as public key cryptography where the authenticator proves possession without revealing secrets.[39] For memorized secrets at AAL1, the guidelines impose limits like prohibiting shared secrets across accounts, enforcing a minimum length of 15 characters without mandatory composition rules, and requiring resistance to common attacks like dictionary or brute-force attempts (e.g., via blacklists of compromised passwords), while advising against reuse or predictable patterns.[1] Credential service providers must also verify authenticator integrity, manage lifecycle events like revocation, and ensure no single point of failure in the authentication process.[1] The 2025 revision (SP 800-63-4) introduces significant updates to promote phishing-resistant authenticators, such as those leveraging public key mechanisms, as a preferred option for AAL2 and mandatory for AAL3, reflecting advancements in standards like FIDO for complementary protocol implementation.[39] It further restricts out-of-band authenticators like SMS-based one-time passwords (OTPs) at AAL2 due to 
vulnerabilities such as SIM swapping attacks, requiring providers to offer alternatives, inform users of risks, and implement mitigations like rate limiting if used.[1] These changes aim to align with modern threat landscapes while facilitating compliance for federal agencies and private sector entities handling sensitive digital identities.[37]

FIDO Alliance Specifications
The FIDO Alliance develops open standards for secure, phishing-resistant authentication using public key cryptography, enabling authenticators that generate unique key pairs bound to specific relying parties, thereby preventing credential reuse across sites.[40] These specifications emphasize passwordless and multi-factor approaches, reducing reliance on shared secrets like passwords.[40] Earlier FIDO specifications include the Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F). UAF supports passwordless authentication by allowing users to register public-private key pairs on their devices, using local authenticators such as biometrics or PINs for sign-ins, with each key pair uniquely tied to a service to resist phishing attacks.[41] U2F, now integrated as Client to Authenticator Protocol 1 (CTAP1), provides a second-factor enhancement to password-based logins via hardware tokens or embedded authenticators connected over USB, NFC, or Bluetooth Low Energy (BLE), employing public key cryptography where the private key remains securely on the device and never leaves it, ensuring resistance to man-in-the-middle and phishing exploits.[42] Both UAF and U2F leverage elliptic curve cryptography for key generation, promoting strong second- or single-factor authentication without transmitting sensitive data over the network.[40] FIDO2 builds on these foundations as the core modern standard, comprising the Client to Authenticator Protocol (CTAP) from the FIDO Alliance and the Web Authentication (WebAuthn) API standardized by the W3C. 
CTAP defines the communication protocol between a client platform (such as a browser or OS) and external or embedded authenticators, supporting transports like USB, NFC, and BLE for operations including key generation, signing, and credential management.[43] WebAuthn provides a browser-based JavaScript API that integrates with CTAP to enable web applications to request authentication, allowing users to authenticate via public key operations without passwords, while ensuring origin-bound keys prevent cross-site phishing.[40] The latest CTAP version, 2.2, released in July 2025, enhances support for multi-device interactions and credential migration, maintaining backward compatibility with U2F.[43] Together, FIDO2 enables both passwordless single-factor and multi-factor scenarios, with authenticators handling cryptographic challenges directly.[40] Passkeys represent a key evolution within FIDO2, introduced as synced or device-bound cryptographic credentials that fully replace passwords for authentication. 
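The origin binding that underlies WebAuthn's phishing resistance can be illustrated with a minimal relying-party check of the clientDataJSON an authenticator returns. This sketch validates only the type, challenge, and origin fields; a real verifier must also verify the signature over the authenticator data, which is omitted here:

```python
import base64
import json

def check_client_data(client_data_b64: str, expected_challenge: str,
                      expected_origin: str) -> bool:
    """Confirm the assertion was made for our challenge, from our origin."""
    padding = "=" * (-len(client_data_b64) % 4)  # restore base64url padding
    data = json.loads(base64.urlsafe_b64decode(client_data_b64 + padding))
    return (data.get("type") == "webauthn.get"
            and data.get("challenge") == expected_challenge
            and data.get("origin") == expected_origin)
```

A credential replayed to or from a look-alike site such as `https://evil.example` fails the origin comparison even when the challenge is correct, which is the property that defeats cross-site phishing.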
Defined in FIDO2 specifications, passkeys use public key pairs where the private key is secured on the user's device or synced securely across devices via cloud services, allowing sign-ins with biometrics, PINs, or patterns while binding credentials to specific domains for phishing resistance.[44] In May 2022, Apple, Google, and Microsoft committed to broad support for passkeys through iCloud Keychain, Google Password Manager, and Microsoft accounts, respectively, enabling cross-platform syncing and device-bound options.[45] By 2024, 53% of surveyed users had enabled passkeys on at least one account, with synced implementations allowing seamless use across ecosystems.[44] As of 2025, major platforms have integrated passkeys as the default for passwordless flows, with Apple, Google, and Microsoft driving global rollout, including support from payment networks like Visa, resulting in doubled usage on high-traffic sites.[46] To address quantum computing threats, the FIDO Alliance is developing extensions for post-quantum cryptography in its specifications, focusing on quantum-safe algorithms to protect key pairs in authenticators. A 2024 white paper outlines initiatives for transitioning FIDO protocols to post-quantum resistant schemes, emphasizing the need for hybrid or fully quantum-safe public key systems without disrupting existing deployments.[47] As of 2025, emerging research demonstrates implementations of lattice-based signatures, such as the Module-Lattice-Based Digital Signature Algorithm (ML-DSA) based on CRYSTALS-Dilithium, integrated into FIDO2 authenticator protocols, with drafts exploring these for credential generation and verification to counter harvest-now-decrypt-later attacks.[48] These extensions aim to maintain FIDO's phishing resistance while ensuring long-term security against quantum adversaries.[47]

Examples
Hardware-Based Examples
Hardware-based authenticators encompass physical devices that provide possession-based verification through unique cryptographic capabilities, often integrated into multi-factor authentication schemes. Security keys, such as the YubiKey from Yubico, support FIDO U2F and FIDO2 protocols for phishing-resistant authentication, featuring USB-A interfaces for desktop connections and NFC for mobile compatibility.[49] Similarly, Nitrokey's 3 series hardware keys enable FIDO2 functionality via USB-A or USB-C ports, with NFC support for contactless operations on compatible devices.[50] These keys generate public-key credentials stored securely on the device, preventing extraction of private keys and enhancing protection against remote attacks.[51] Smart cards represent another category of hardware authenticators, embedding microprocessors for secure data processing in possession-based systems. EMV-compliant chip-and-PIN cards, used in ATM and payment terminals, incorporate dynamic one-time codes generated by the card's chip during transactions, requiring physical insertion and PIN entry to authorize access. In government contexts, Common Access Cards (CAC) for U.S. Department of Defense personnel and Personal Identity Verification (PIV) cards for federal employees serve as smart cards that facilitate secure access to facilities and information systems through certificate-based authentication.[52][53] These cards store digital certificates on an integrated chip, enabling multi-factor verification when combined with PINs, and comply with federal standards for identity management.[54] Dedicated hardware tokens, like the RSA SecurID series, provide time-synchronous one-time password generation for possession-based authentication, where the device displays a code valid for approximately 60 seconds. 
Models such as the SecurID 700 feature an integral battery with a typical lifespan of three years, after which the token expires and ceases authentication.[55] Synchronization between the token and the authentication server, such as RSA Authentication Manager, occurs automatically during successful logins or manually via administrative resynchronization to align internal clocks if drift occurs.[56] In real-world applications, hardware keys enable secure remote access protocols like SSH, where devices such as YubiKeys store FIDO2-resident keys for passwordless authentication, requiring physical presence via USB or NFC to sign challenges and verify user identity without exposing secrets.[57] This integration supports multi-factor setups by combining the key's cryptographic proof with additional factors, reducing risks in distributed environments.

Software and App-Based Examples
Software-based authenticators implement one-time password (OTP) generation and multi-factor authentication (MFA) mechanisms through mobile and desktop applications, leveraging standards like OATH for secure token production. These tools typically enroll via QR code scanning to share secrets between the service and app, enabling time-synchronized or event-based codes without requiring physical hardware.[58][59] Google Authenticator is a prominent TOTP (Time-based One-Time Password) app developed by Google, supporting enrollment by scanning a QR code that encodes the shared secret key during setup for services like Google Workspace or personal accounts. It generates 6-digit codes every 30 seconds based on the current time and secret, adhering to RFC 6238 specifications. Since 2023, Google Authenticator has included cloud backup via Google Account sign-in, allowing synchronized accounts across devices while maintaining local storage for security.[58][13] Authy, provided by Twilio, similarly supports TOTP for 2FA across platforms like Amazon and Dropbox, with QR code scanning for straightforward enrollment and automatic token capture. Its key feature is encrypted cloud backups protected by a user-defined password, enabling seamless recovery on new devices without re-scanning all QR codes, thus reducing user friction in multi-device scenarios.[59][60] For mobile push authentication, Duo Security's Duo Mobile app delivers approval-based MFA through push notifications, where users tap "Approve" on their smartphone to confirm login requests, enhancing security over SMS by verifying device possession in real-time. This method integrates with enterprise systems and supports biometric confirmation for added assurance.[15][61] Microsoft Authenticator extends push-based MFA with number-matching prompts in notifications, requiring users to enter a displayed number to approve sign-ins, mitigating man-in-the-middle attacks. 
It supports both personal and work accounts, generating TOTP codes alongside push approvals for versatile MFA deployment. To enhance security, organizations can block legacy authentication protocols, which do not support MFA, and enforce modern authentication protocols that integrate with Microsoft Authenticator, helping to prevent attacks such as credential stuffing and password spray.[62][63][64][65] Passwordless authentication via FIDO2 leverages WebAuthn APIs in browsers, enabling passkeys—public-key credentials stored on devices for phishing-resistant logins without passwords. On iOS 16 and later, passkeys sync via iCloud Keychain and use biometrics like Face ID for authentication; Android 9 and above supports passkeys through Credential Manager, allowing cross-platform use with platform authenticators.[44][66][67] OATH standards underpin many software authenticators with HOTP (HMAC-based One-Time Password) for event-driven codes and TOTP for time-based ones, both using HMAC-SHA-1 on a shared secret and counter or timestamp. Implementations follow RFC 4226 for HOTP and RFC 6238 for TOTP; a basic pseudocode example for HOTP generation is:

K // Shared secret (byte array)
C // Counter (8-byte big-endian integer)
T = DynamicTruncate(HMAC-SHA-1(K, C)) // select 4 bytes at the offset given by the low nibble of the last MAC byte
OTP = (T & 0x7fffffff) mod 10^D // D = number of digits (e.g., 6)
return OTP as a decimal string, zero-padded to D digits
