Authorization
from Wikipedia

Authorization or authorisation (see spelling differences), in information security, computer security and IAM (Identity and Access Management),[1] is the function of specifying rights/privileges for accessing resources, in most cases through an access policy, and then deciding whether a particular subject has privilege to access a particular resource. Examples of subjects include human users, computer software and other hardware on the computer. Examples of resources include individual files or an item's data, computer programs, computer devices and functionality provided by computer applications. For example, user accounts for human resources staff are typically configured with authorization for accessing employee records.

Authorization is closely related to access control, which is what enforces the authorization policy by deciding whether access requests to resources from (authenticated) consumers shall be approved (granted) or disapproved (rejected).[2]

Authorization should not be confused with authentication, which is the process of verifying someone's identity.

Overview


IAM consists of two phases: the configuration phase, where a user account is created and its corresponding access authorization policy is defined, and the usage phase, where user authentication takes place, followed by access control to ensure that the user/consumer only gets access to resources for which they are authorized. Hence, access control in computer systems and networks relies on access authorization specified during configuration.

Authorization is the responsibility of an authority, such as a department manager, within the application domain, but is often delegated to a custodian such as a system administrator. Authorizations are expressed as access policies in some type of policy definition application, e.g. in the form of an access control list or a capability, or via a policy administration point, as in XACML.

Broken authorization is often listed as the number one risk in web applications.[3] On the basis of the "principle of least privilege", consumers should only be authorized to access whatever they need to do their jobs, and nothing more.[4]

"Anonymous consumers" or "guests" are consumers that have not been required to authenticate. They often have limited authorization. In a distributed system, it is often desirable to grant access without requiring a unique identity. Familiar examples of access tokens include keys, certificates and tickets: they grant access without proving identity.


Implementation


A widely used framework for authorizing applications is OAuth 2. It provides a standardized way for third-party applications to obtain limited access to a user's resources without exposing their credentials.[5]

In modern systems, a widely used model for authorization is role-based access control (RBAC), where authorization is defined by granting subjects one or more roles and then checking that the resource being accessed has been assigned at least one of those roles.[5] However, with the rise of social media, relationship-based access control is gaining more prominence.[6]

Even when access is controlled through a combination of authentication and access control lists, the problem of maintaining authorization data is not trivial, and it often represents as much administrative burden as managing authentication credentials. It is often necessary to change or remove a user's authorization: this is done by changing or deleting the corresponding access rules on the system. Using atomic authorization is an alternative to per-system authorization management, where a trusted third party securely distributes authorization information.


Public policy


In public policy, authorization is a feature of trusted systems used for security or social control.

Banking


In banking, an authorization is a hold placed on a customer's account when a purchase is made using a debit card or credit card.

Publishing


In publishing, public lectures and other freely available texts are sometimes published without the approval of the author. These are called unauthorized texts. An example is The Theory of Everything: The Origin and Fate of the Universe (2002), which was compiled from Stephen Hawking's lectures and published without his permission.[citation needed]

from Grokipedia
Authorization is the process by which a system determines whether a user, program, or process has the right or permission to access a specific resource or perform a particular action, typically following authentication to verify identity. In information security, it forms a core component of access control mechanisms, ensuring that entities can only interact with systems and data in authorized ways to prevent unauthorized access, breaches, and misuse. This determination is based on policies that define permissions, often implemented through models that balance security, flexibility, and administrative efficiency.

Authorization is distinct from authentication, which confirms an entity's identity (e.g., via passwords, biometrics, or tokens), whereas authorization specifies what that entity is allowed to do once identified. For instance, a user might authenticate successfully but be denied access to sensitive files due to insufficient permissions.

Common authorization models include Discretionary Access Control (DAC), where resource owners set permissions; Mandatory Access Control (MAC), enforced by a central authority based on classifications (e.g., in military systems); and Role-Based Access Control (RBAC), which assigns permissions to roles rather than individuals, simplifying management in large organizations. RBAC, standardized by NIST, has become widely adopted for its scalability, allowing permissions to be predefined and users assigned to roles dynamically. These models underpin authorization in various domains, from operating systems and databases to web applications and cloud services, where frameworks like OAuth enable delegated access without sharing credentials.

Effective authorization applies the principle of least privilege, limiting access to only what is necessary for a given task and thereby enhancing overall system integrity and confidentiality. As cyber threats evolve, authorization continues to incorporate attribute-based access control (ABAC) and dynamic policies to address complex, context-aware scenarios.

Fundamental Concepts

Definition and Scope

Authorization is the process by which a system determines whether a subject—such as a user, process, or device—is permitted to perform a specific action on a resource, based on established policies or rules. This determination grants or denies access privileges, ensuring that only authorized interactions occur within a computing environment. The scope of authorization encompasses both affirmative outcomes, where access is granted, and negative outcomes, where it is denied, to maintain system security and compliance. Central to this process are foundational elements like permissions, which specify allowable actions; roles, which group permissions for assignment to subjects; and privileges, which represent broader rights derived from those permissions. Authorization forms a core component of access control, the broader framework governing interactions between subjects and resources.

The concept of authorization traces its origins to early computing systems in the 1970s, particularly through access control lists (ACLs) implemented in the Multics operating system, which enabled fine-grained control over file and resource sharing among multiple users. Over time, it has evolved into a key pillar of modern information security, integrated into the CIA triad—confidentiality, integrity, and availability—which underpins protections against unauthorized disclosure, alteration, or disruption of data.

Guiding principles of authorization include the principle of least privilege, which mandates granting subjects only the minimum access necessary to accomplish their tasks, thereby minimizing potential damage from errors or compromises. Complementing this is the separation of duties, which distributes conflicting responsibilities across multiple subjects to prevent any single entity from completing a high-risk action independently, reducing risks of fraud or abuse.

Distinction from Authentication

Authentication is the process of verifying the identity of a user, device, or process, confirming that they are who or what they claim to be, typically through methods such as passwords, biometrics, multi-factor tokens, or digital certificates. This verification step occurs prior to authorization, establishing the subject's legitimacy before assessing their permissions. The primary distinction between authentication and authorization lies in their core questions and sequencing: authentication addresses "who are you?" by validating identity, while authorization answers "what are you allowed to do?" by enforcing access rights and permissions on resources or actions. In standard security workflows, authentication must succeed first; only then does authorization proceed to evaluate and grant or deny specific privileges based on policies, roles, or attributes.

These concepts form part of the authentication, authorization, and accounting (AAA) framework, a foundational triad used to manage identity and access in networks and systems. Authentication handles identity verification, authorization manages policy enforcement and privilege assignment post-verification, and accounting (or auditing) tracks resource usage for compliance and monitoring. This integrated approach ensures comprehensive control, with authorization specifically addressing the risks of over-permissive access after identity is confirmed.

Common misconceptions arise when conflating the two processes, such as assuming OAuth fully encompasses both; in reality, OAuth 2.0 is an authorization framework for delegating access to resources, often requiring separate mechanisms like OpenID Connect for authentication. Similarly, single sign-on (SSO) systems primarily streamline authentication across multiple applications using shared credentials or tokens, but they do not inherently perform authorization checks, which must be implemented per resource to prevent unauthorized actions.

A typical workflow illustrates this sequence: a subject requests access to a protected resource; the system first authenticates the subject by challenging and validating their credentials—if this fails, access is immediately denied. Successful authentication triggers authorization, where the system queries policies or attributes to check permissions; if authorized, the action proceeds, but failure results in denial, ensuring no access without both validations.
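The two-step sequence can be sketched in a few lines of Python. This is a minimal illustration, not a real security mechanism: the user table, policy table, and function names are invented for the example, and a real system would use salted password hashing and externalized policies.

```python
import hashlib

# Illustrative stores: username -> password hash, username -> allowed (action, resource) pairs
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
POLICY = {"alice": {("read", "report.pdf")}}

def authenticate(username: str, password: str) -> bool:
    """Step 1: verify identity ('who are you?')."""
    expected = USERS.get(username)
    return expected is not None and hashlib.sha256(password.encode()).hexdigest() == expected

def authorize(username: str, action: str, resource: str) -> bool:
    """Step 2: check permissions ('what are you allowed to do?')."""
    return (action, resource) in POLICY.get(username, set())

def request_access(username: str, password: str, action: str, resource: str) -> str:
    # Authentication failure short-circuits: authorization is never consulted.
    if not authenticate(username, password):
        return "denied: authentication failed"
    if not authorize(username, action, resource):
        return "denied: insufficient permissions"
    return "granted"
```

Note that the two failure modes are distinguishable: a bad password never reaches the policy check, mirroring the ordering described above.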

Technical Frameworks

Access Control Models

Access control models provide the foundational frameworks for implementing authorization in systems, defining how decisions are made about granting or denying access to resources. These models evolved to address varying needs for security, flexibility, and manageability, ranging from rigid system-enforced policies to dynamic, attribute-driven evaluations. Key models include Discretionary Access Control (DAC), Mandatory Access Control (MAC), Role-Based Access Control (RBAC), and Attribute-Based Access Control (ABAC), each suited to different environments such as operating systems, enterprises, and distributed networks.

Discretionary Access Control (DAC) allows resource owners to define and manage access permissions for other users, typically through access control lists (ACLs) that specify allowed operations like read, write, or execute. In DAC, the owner retains full discretion over privilege assignments and propagation, enabling flexible, user-centric control without central oversight. This model has been historically prominent in Unix file systems, where permission bits stored in i-nodes are checked against the effective user and group IDs for the owner, group, and others. While DAC offers simplicity for small-scale systems, it is vulnerable to issues like privilege accumulation and trojan horse attacks, as users can inadvertently grant excessive rights.

Mandatory Access Control (MAC) enforces access decisions centrally through system-assigned labels on subjects (users or processes) and objects (resources), preventing users from overriding policies regardless of ownership. Policies are based on labels such as clearance levels for subjects and sensitivity levels for objects, ensuring strict control in high-security contexts like classified-information protection. The Bell-LaPadula model exemplifies MAC for confidentiality, introducing rules like the simple security property (no read up) and the *-property (no write down) to prevent unauthorized information flows in multilevel systems. SELinux implements MAC in Linux kernels via the Flask architecture, applying type enforcement and security labels to processes and files, thereby containing potential breaches from compromised applications. MAC provides robust assurance in sensitive environments but lacks flexibility, making it challenging for dynamic or user-driven scenarios.

Role-Based Access Control (RBAC) regulates access by assigning users to roles based on their job functions within an organization, with permissions then linked to those roles rather than individual users. Core components include users (individuals), roles (job positions), and permissions (approved operations on resources), allowing indirect access through role assignments and hierarchies. The ANSI/INCITS 359-2004 standard formalizes RBAC, defining elements like core RBAC (basic user-role-permission relations) and extensions for hierarchies and constraints, and was revised as INCITS 359-2012. RBAC excels in enterprise identity management, efficiently handling access for organizations with over 500 users by reducing administrative overhead and minimizing errors in large-scale permission administration, with industry savings estimated at $1.1 billion from reduced downtime. However, it can suffer from role explosion in complex hierarchies, limiting adaptability to non-role-based contexts.

Attribute-Based Access Control (ABAC) makes authorization decisions dynamically by evaluating policies against attributes of the subject (e.g., user role or clearance), resource (e.g., classification or owner), action, and environment (e.g., time or location). Policies are expressed as rules that combine these attributes to grant or deny access, enabling context-aware and fine-grained control without predefined user mappings. The eXtensible Access Control Markup Language (XACML), an OASIS standard, facilitates ABAC policy expression through components like policy decision points (PDPs) for evaluation and policy enforcement points (PEPs) for application, supporting interoperability across systems. ABAC supports external users and complex policies effectively but requires significant resources for attribute management and policy maintenance.

The evolution of these models traces back to the 1970s, when DAC and MAC emerged in systems like Multics and early Unix to address basic protection needs, with MAC focusing on military-grade confidentiality via models like Bell-LaPadula (1973). By the 1990s, RBAC was proposed as a scalable alternative to the limitations of DAC and MAC, formalized in 1992 by Ferraiolo and Kuhn and standardized in 2004 to meet enterprise demands. ABAC further advanced this progression in the 2000s, incorporating dynamic attributes for modern, distributed environments.
| Model | Pros | Cons |
| --- | --- | --- |
| DAC | Flexible owner control; simple for small systems | Vulnerable to privilege abuse; high administrative burden in large setups |
| MAC | Strong central enforcement; high assurance for classified data | Inflexible for dynamic access; complex updates |
| RBAC | Scalable for enterprises; simplifies role-based administration | Role explosion in hierarchies; static for contextual needs |
| ABAC | Fine-grained, dynamic policies; supports external users | Complex implementation; resource-intensive maintenance |
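The ABAC idea—policy as a predicate over subject, resource, action, and environment attributes—can be sketched directly in Python. The rule below (members of a department may read that department's documents during business hours) and all attribute names are invented for illustration; a real deployment would express such rules in XACML or a policy engine rather than inline lambdas.

```python
# Hedged sketch of ABAC-style evaluation: each rule is a predicate over the
# four attribute sets; access is granted if any rule matches.
def abac_decide(subject: dict, resource: dict, action: str, env: dict) -> bool:
    rules = [
        # Illustrative rule: same-department reads, 09:00-17:00 only.
        lambda s, r, a, e: (s.get("department") == r.get("department")
                            and a == "read"
                            and 9 <= e.get("hour", -1) < 17),
    ]
    return any(rule(subject, resource, action, env) for rule in rules)

decision = abac_decide(
    {"department": "engineering", "clearance": "standard"},  # subject attributes
    {"department": "engineering", "owner": "bob"},           # resource attributes
    "read",                                                  # action
    {"hour": 10},                                            # environment attributes
)
```

Because the decision depends on the environment (here, the hour), the same subject and resource can yield different outcomes at different times—the context-awareness that distinguishes ABAC from static role checks.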

Implementation in Software Systems

In software systems, authorization is typically implemented through distinct architectural components that separate decision-making from enforcement. The Policy Decision Point (PDP) evaluates access requests against defined policies to render authorization decisions, while the Policy Enforcement Point (PEP) intercepts requests and enforces those decisions by allowing or denying access. These components often integrate into middleware layers, such as API gateways, where the PEP resides at the entry point to services, querying the PDP for real-time evaluations before permitting resource access. This separation enhances modularity, allowing policies to be updated centrally without altering application code.

Programming approaches to authorization emphasize explicit checks embedded in code, often using simple conditional logic for permission validation. For instance, a basic permission guard in pseudocode might appear as follows:

if (user.hasPermission("read", resource)) {
    return resource.data;
} else {
    throw new AccessDeniedException("Insufficient permissions");
}

This pattern ensures fine-grained control at critical points like method invocations or API endpoints. Libraries streamline these implementations; Spring Security in Java provides annotation-based authorization, such as @PreAuthorize("hasRole('ADMIN')"), to declaratively enforce rules during request processing. Similarly, Casbin offers a multi-language library supporting models like RBAC, where policies are defined in a model and policy file and enforced via an enforcer.Enforce(user, obj, act) call to check access dynamically.

Authorization data is commonly stored in databases to manage permissions scalably. In relational databases, a typical schema for RBAC involves tables linking users to roles and roles to permissions, such as:
| Table | Columns | Purpose |
| --- | --- | --- |
| users | id (PK), username | Stores user identities |
| roles | id (PK), name | Defines roles like "admin" |
| permissions | id (PK), action, resource | Specifies actions on resources |
| user_roles | user_id (FK), role_id (FK) | Assigns roles to users |
| role_permissions | role_id (FK), permission_id (FK) | Links permissions to roles |
This structure enables efficient querying for access decisions via joins. For dynamic policies, document databases such as MongoDB store permissions as flexible documents, accommodating attribute-based rules without rigid schemas.

Implementing authorization introduces challenges, particularly performance overhead in real-time systems, where frequent checks can add milliseconds of latency per request and degrade throughput in high-volume applications. In distributed environments like microservices, coordinating authorization across services exacerbates this; service meshes such as Istio address it by embedding Envoy proxies as sidecars that enforce policies at the network level, using attributes from requests to query centralized PDPs without per-service overhead.

Best practices recommend evaluating centralized versus decentralized authorization based on system scale and security needs; centralized models consolidate policy management in a single PDP for consistency but risk single points of failure, while decentralized approaches distribute enforcement via local caches or proxies to reduce latency, often hybridizing both for large environments. Auditing is essential for compliance, with logs capturing authorization events—including user, action, resource, and outcome—to meet requirements such as GDPR's accountability obligations, ensuring immutable storage and restricted access to these records.

A practical comparison in web applications contrasts session-based and token-based authorization. Session-based checks store user permissions server-side in a session store (e.g., Redis), validating them on each request via a session identifier, which suits monolithic apps but scales poorly due to stateful storage. Token-based approaches embed permissions in self-contained JWTs sent with requests, enabling stateless verification client-side or at edges, ideal for SPAs or APIs but requiring careful token expiration to mitigate replay risks.
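The token-based pattern can be sketched with a toy signed token. This hand-rolled HMAC token stands in for a real JWT library (e.g., PyJWT) and deliberately omits expiration and header handling; the secret, claim names, and permission strings are invented for the example.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-signing-key"  # illustrative signing key, never shipped to clients

def issue_token(claims: dict) -> str:
    """Server side: embed permissions in a payload and sign it."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def check_token(token: str, required_permission: str) -> bool:
    """Stateless verification: no session store lookup is needed."""
    payload_b64, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: token forged or tampered with
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return required_permission in claims.get("permissions", [])

token = issue_token({"sub": "alice", "permissions": ["read:reports"]})
```

The trade-off described above is visible here: any server holding the key can verify the token without shared state, but nothing in this sketch revokes a token early—hence the emphasis on short expirations in real deployments.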

Domain-Specific Applications

In public policy and legal contexts, authorization refers to government-granted permissions that enable individuals, organizations, or entities to perform actions otherwise prohibited by law, such as issuing licenses and permits under regulatory frameworks. These authorizations serve as official recognitions of compliance with regulatory requirements, forming a core component of administrative law by delegating authority to agencies to approve, monitor, and enforce such permissions while balancing public welfare and individual rights.

The evolution of authorizations in administrative law traces back to the 19th century, when statutes began providing structured independence and accountability to administrative processes amid growing industrialization and government intervention in economic activities. By the early 20th century, the expansion of the administrative state formalized these mechanisms, culminating in the Administrative Procedure Act of 1946, which standardized procedures for granting and revoking licenses and approvals. Post-9/11 developments further intensified this framework, with authorizations shifting toward enhanced security measures, including expanded military and intelligence powers through acts like the Authorization for Use of Military Force.

Key examples illustrate this application in policy-making. In the United States, the National Defense Authorization Act, enacted annually since 1961, authorizes appropriations and sets policies for Department of Defense activities, nuclear programs, and military construction, exemplifying legislative oversight of executive authorizations. In the European Union, authorizations under GDPR Article 9 permit the processing of sensitive personal data—such as racial origins or health information—only under strict conditions like explicit consent or necessity for legal claims, ensuring regulatory compliance in data protection.

Legal principles governing authorizations emphasize due process, requiring fair procedures in granting or revoking permissions to protect against arbitrary government action, as enshrined in the Fifth and Fourteenth Amendments. Judicial review of agency decisions provides a check on these processes; however, the 2024 Supreme Court ruling in Loper Bright Enterprises v. Raimondo overturned the Chevron doctrine, ending automatic deference to agencies' interpretations of ambiguous statutes and empowering courts to independently assess authorization validity. Internationally, UN frameworks mandate state authorizations for resource extraction, particularly on indigenous lands, requiring free, prior and informed consent before approving projects to prevent harm and uphold indigenous rights. Challenges persist in cross-border data flows, where divergent national authorization regimes create barriers, increasing compliance costs and hindering global trade by up to 1.7% of GDP in affected sectors.

Current issues highlight concerns over authorization overreach, notably in surveillance under the Foreign Intelligence Surveillance Act (FISA). Amendments to FISA Section 702, reauthorized in 2024, allow warrantless collection of non-U.S. persons' communications, raising privacy risks for Americans through incidental data capture and lacking robust oversight, as evidenced by documented compliance failures and warrantless queries of U.S. persons' data in Section 702 databases exceeding 200,000 in 2022 (though numbers have since decreased). In e-government systems, these policy authorizations are increasingly implemented via digital platforms to streamline administrative approvals.

Financial and Banking Systems

In financial and banking systems, authorization serves as a critical mechanism for approving transactions while mitigating risks such as fraud, insufficient funds, and illicit activities. Real-time transaction authorization typically occurs through payment networks like VisaNet, where an issuer evaluates factors including account balance availability and fraud indicators before responding to an authorization request from a merchant's acquirer. This process ensures that only legitimate transactions proceed, often within seconds, to facilitate seamless commerce.

Regulatory frameworks underpin these authorizations to enforce security and compliance. The Payment Card Industry Data Security Standard (PCI DSS) mandates secure handling of cardholder data during authorization, including encryption for transmission and restrictions on storing sensitive authentication data post-authorization to prevent breaches. Similarly, anti-money laundering (AML) regulations, as outlined by the Financial Action Task Force (FATF), require financial institutions to apply enhanced due diligence for high-risk transactions, such as verifying customer identities and transaction purposes before approval. These standards promote standardized, auditable authorization practices across global banking operations.

Technological implementations enhance the robustness of authorization in banking. EMV chip protocols, developed by EMVCo, enable secure point-of-sale (POS) authorizations by generating dynamic cryptograms for each transaction, verifying card authenticity and reducing counterfeit risks compared to static data methods. In mobile banking, two-factor authorization—often combining something the user knows (e.g., a PIN) with something they have (e.g., a device token)—is standard for high-value actions like transfers, as recommended by federal banking regulators to layer defenses against unauthorized access.

Risk management models integrate with proactive controls to deny suspicious activities. Velocity checks monitor transaction frequency, such as limiting approvals to a set number per hour from the same account or card, flagging potential fraud rings for review or denial. Machine learning algorithms further support anomaly-based denials by analyzing patterns in transaction data—such as unusual locations or amounts—to score risks in real time, as demonstrated in fraud detection frameworks for high-value payment systems, where supervised and unsupervised models achieve high detection rates for outliers.

Historically, authorization in banking evolved from manual processes to automated systems. The introduction of magnetic stripe technology in the 1960s, pioneered by IBM, allowed for encoded card data readable by swipe devices, enabling the first widespread electronic authorizations for credit and debit transactions. By the 2010s, a shift to contactless authorization accelerated, driven by NFC-enabled cards and mobile wallets like Apple Pay, which streamlined approvals without physical contact while maintaining security through tokenized data.

Global variations reflect diverse infrastructures and challenges. The SWIFT network facilitates international wire transfer authorizations by standardizing secure messaging between over 11,000 institutions, ensuring validated instructions for cross-border payments with built-in compliance checks. In cryptocurrency systems, transaction authorization relies on wallet permissions, such as multi-signature approvals, but faces hurdles like permissionless blockchains' lack of centralized controls, complicating regulatory oversight and fraud prevention in decentralized environments.
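A velocity check of the kind described above can be sketched as a sliding-window counter. The window size, threshold, and in-memory store are invented for illustration; production systems keep this state in shared infrastructure (e.g., Redis) and combine it with many other risk signals.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # illustrative: one-hour sliding window
MAX_PER_WINDOW = 5      # illustrative: at most five approvals per window

# account id -> timestamps of recently approved transactions
_history = defaultdict(deque)

def authorize_transaction(account: str, now: float) -> bool:
    """Approve unless the account exceeds the velocity limit."""
    timestamps = _history[account]
    # Evict events that have aged out of the window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_PER_WINDOW:
        return False  # velocity limit exceeded: deny or flag for review
    timestamps.append(now)
    return True
```

Because the window slides, an account that was throttled becomes eligible again once its oldest approvals age out, rather than waiting for a fixed reset boundary.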

Publishing and Media Rights

In the realm of publishing and media rights, authorization fundamentally revolves around the permissions granted by copyright holders to control the reproduction and adaptation of creative works. Under international copyright law, authors and creators possess the exclusive right to authorize the reproduction of their literary and artistic works in any manner, including direct or indirect reproduction in digital or physical forms. This extends to adaptations, such as translations, arrangements, or transformations of the original work, ensuring that unauthorized uses do not infringe upon the creator's economic interests. Fair use exceptions, however, serve as implicit non-authorizations, permitting limited uses without permission for purposes like criticism, commentary, or parody, provided they meet criteria of purpose, nature, amount, and market effect.

Licensing models provide structured frameworks for granting these authorizations, balancing creator control with broader access. Creative Commons licenses, for instance, offer standardized tools where creators can specify permissions; the CC BY-SA license allows distribution, remixing, and adaptation in any medium, but mandates attribution to the original author and requires derivative works to adopt the same license terms. In the music industry, mechanical licenses authorize the reproduction of musical compositions in recordings, such as for sampling, requiring permission from the composition's copyright holders to manufacture and distribute copies, often facilitated through compulsory licensing rates set by law.

Digital rights management (DRM) systems enforce these authorizations technologically, particularly for digital media. Adobe Content Server, a widely used DRM tool, enables publishers to control e-book access by encrypting content and tying it to authorized devices or users via Adobe IDs, preventing unauthorized sharing or copying. Despite such measures, challenges persist with piracy circumvention, as tools and methods to bypass DRM protections undermine enforcement, leading to widespread unauthorized distribution and economic losses for rights holders; the DMCA's anti-circumvention provisions criminalize such acts but struggle against evolving technologies.

Historically, the Berne Convention of 1886 established foundational international norms for these authorizations, mandating automatic protection without formalities and exclusive rights for authors across member states, influencing global standards for reproduction and adaptation. In the United States, the Digital Millennium Copyright Act of 1998 introduced safe harbors for online platforms, shielding them from liability for user-generated infringements if they promptly remove unauthorized content upon notification, thereby facilitating digital distribution while protecting intermediaries.

Industry practices further operationalize these authorizations through dedicated platforms and organizations. Self-publishing services like Amazon Kindle Direct Publishing (KDP) require authors to grant territorial distribution rights, authorizing Amazon to reproduce and sell their works globally in exchange for royalties, with authors retaining ownership but ceding specific usage permissions. Royalty collection societies, such as the American Society of Composers, Authors and Publishers (ASCAP), manage public performance authorizations on behalf of creators, collecting and distributing fees from licensees like broadcasters and venues, ensuring creators receive compensation without direct negotiation.

Emerging trends leverage blockchain technology for authorization, particularly with non-fungible tokens (NFTs) enabling verifiable ownership transfers of digital works. NFTs represent unique digital assets on blockchains like Ethereum, allowing artists to authorize and track sales or licenses of their works, though they do not inherently convey full copyright transfer unless explicitly stated in accompanying smart contracts, addressing provenance issues in digital art while raising questions about perpetual control.

Standards and Protocols

OAuth and Delegated Authorization

OAuth emerged as a framework for delegated authorization, allowing third-party applications to access user resources without sharing credentials. The initial version, , introduced in 2007, relied on signature-based authentication to verify requests, requiring clients to sign each call with cryptographic methods. This approach, formalized in RFC 5849 in 2010, aimed to secure access to services like Twitter's but proved complex for implementation due to its reliance on shared secrets and nonces. In contrast, , published as RFC 6749 by the IETF in October 2012, simplified the process by using bearer tokens—opaque strings that grant access upon presentation—eliminating per-request signing while supporting various grant types for different client scenarios. Key grant types in include the authorization code grant for server-side applications, the implicit grant for client-side scripts (now deprecated), and the client credentials grant for machine-to-machine communication without user involvement. The authorization code grant, the most secure and recommended flow for confidential clients, operates in several steps to mitigate risks like credential exposure. First, the client redirects the user to the authorization server's endpoint with parameters including the client ID, requested scopes, redirect URI, and a state parameter for CSRF protection. The user authenticates and consents at the authorization server, which then redirects back to the client's specified URI with a short-lived authorization . The client subsequently exchanges this —via a backend POST to the token endpoint—for an and optional refresh token, authenticating itself with client credentials if confidential. Security considerations are critical; without proper validation of the state parameter, attackers could exploit (CSRF) by tricking users into authorizing malicious requests. 
To counter code interception attacks, particularly in public clients like mobile apps, the Proof Key for Code Exchange (PKCE) extension, defined in RFC 7636 and published in September 2015, introduces a code challenge derived from a secret verifier, ensuring only the originating client can exchange the code.

Common use cases for OAuth highlight its role in delegated access. In social login scenarios, such as "Sign in with Google," users authorize third-party apps to access profile data or email without revealing passwords, enabling seamless integration across services. For API delegation, platforms like the Twitter API (now X API) allow developers to request user-specific permissions, such as posting tweets, through scoped tokens that limit exposure. These applications underscore OAuth's focus on fine-grained, revocable access rather than full credential sharing.

OAuth 2.0's standardization by the IETF in RFC 6749 established it as the core protocol for web-based authorization, with extensions addressing evolving needs. The ongoing OAuth 2.1 draft, first circulated in 2020 and updated through 2025, consolidates best practices by mandating PKCE for all clients, removing the implicit grant due to security flaws, and enforcing stricter redirect URI validation to close gaps in OAuth 2.0 implementations. Despite its strengths, OAuth is strictly an authorization framework and is not intended for authentication; for identity verification, it pairs with protocols like OpenID Connect. Common vulnerabilities persist, including CSRF in redirect flows if the state parameter is omitted or mismatched, potentially allowing attackers to hijack authorizations. By 2023, OAuth 2.0 had seen widespread adoption, powering delegated access in over 10% of the top 1,000 websites for user login features, with even broader use in API ecosystems across major platforms.
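The PKCE verifier/challenge derivation from RFC 7636 (the S256 method) is compact enough to show in full. This is a sketch of the client-side computation only; transmitting the challenge and verifier in the respective requests is omitted.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).

    The verifier stays on the client; only the challenge (a base64url-encoded
    SHA-256 digest of the verifier) is sent with the initial authorization
    request. When exchanging the authorization code, the client sends the
    verifier, and the server recomputes and compares the challenge, so an
    intercepted code alone is useless to an attacker.
    """
    # 32 random bytes -> 43-character base64url string (padding stripped),
    # which satisfies RFC 7636's 43-128 character length requirement.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(verifier, challenge)
```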

Enterprise and Federated Protocols

In enterprise environments, authorization protocols enable secure access across organizational boundaries by leveraging federated identity management, where trust relationships allow users authenticated by one domain to access resources in another without redundant credentials. Key standards include the Security Assertion Markup Language (SAML) 2.0, ratified as an OASIS standard in 2005, which uses XML-based assertions to exchange authentication, attribute, and authorization decisions for single sign-on (SSO) in enterprise settings. SAML assertions typically include subject confirmations, attribute statements, and authorization decision statements, facilitating cross-domain access decisions in federated systems.

Building on web authorization foundations like OAuth 2.0, OpenID Connect (OIDC) 1.0, finalized in 2014 by the OpenID Foundation, extends it as an identity layer that incorporates ID tokens in JSON Web Token (JWT) format to augment authorization with user identity verification. These ID tokens, signed and optionally encrypted, convey claims such as user identifiers and authentication context, enabling relying parties to make informed authorization decisions in cross-domain scenarios.

Federated identity management operates through identity providers (IdPs), which authenticate users and issue assertions, and service providers (SPs), which rely on those assertions to authorize access to protected resources. This model establishes mutual trust via metadata exchange, allowing seamless authorization across domains; for instance, providers such as Microsoft Entra ID (formerly Azure AD) serve as IdPs that integrate SAML or OIDC for enterprise-wide, cross-organizational authorization. The evolution of these protocols traces from early efforts like WS-Federation, a Microsoft-led specification from the early 2000s that defined mechanisms for identity and attribute federation across security realms using SOAP-based exchanges.
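The ID-token claims mentioned above are carried in a JWT, whose payload segment is simply base64url-encoded JSON. The sketch below decodes the claims from a locally constructed toy token so it is self-contained; as the comments note, a real relying party must verify the signature and the `iss`, `aud`, and `exp` claims before trusting anything in the payload.

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims segment of a JWT *without* verifying the signature.

    For illustration only: a relying party must first verify the signature
    (e.g. RSA or ECDSA against the IdP's published keys) and then check
    `iss`, `aud`, and `exp` before trusting any claim.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token (header.payload.signature) so the example runs
# without a real IdP; the issuer and audience values are made up.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode()
claims = {"iss": "https://idp.example.com", "sub": "user-123", "aud": "my-client"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}."

print(decode_jwt_claims(token))
```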
Modern advancements integrate these protocols into zero-trust architectures, as outlined in NIST SP 800-207, which emphasizes continuous authorization verification regardless of network location, incorporating federated protocols to enforce least-privilege access at access points. Security in these protocols relies on robust mechanisms such as token signing with asymmetric algorithms like RSA or ECDSA to ensure the integrity and authenticity of assertions and tokens, preventing tampering in transit. Token validation is handled through methods like OAuth 2.0 Token Introspection (RFC 7662), where resource servers query authorization servers to validate token status and active attributes in real time.

In higher education, the InCommon Federation employs SAML to enable secure, attribute-driven authorization for over 750 U.S. higher education institutions and more than 1,000 total participants, allowing students and faculty to access shared research resources across campuses without separate logins. In cloud environments, AWS Identity and Access Management (IAM) roles support federated authorization by assuming roles based on SAML or OIDC assertions from external IdPs, granting temporary permissions to resources like S3 buckets or EC2 instances.
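A resource server's handling of an RFC 7662 introspection response can be sketched as follows. The response values here are fabricated for illustration; the network call to the introspection endpoint (which requires client authentication) is omitted.

```python
import json

# Hypothetical introspection response, shaped per RFC 7662 section 2.2.
response_body = json.dumps({
    "active": True,
    "scope": "read write",
    "client_id": "my-client",
    "exp": 1767225600,
})

def token_is_usable(introspection_json: str, required_scope: str) -> bool:
    """Decide whether a token may be used, given an introspection response.

    A resource server must treat `active: false` as a dead token (expired
    or revoked) and should also check that the granted scopes cover the
    operation being attempted.
    """
    data = json.loads(introspection_json)
    if not data.get("active", False):
        return False
    return required_scope in data.get("scope", "").split()

print(token_is_usable(response_body, "write"))
```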

References
