OpenShift

from Wikipedia
Developer: Red Hat
Initial release: May 4, 2011
Stable release: 4.19 / July 17, 2025[1]
Written in: Go, React
Operating system: Red Hat Enterprise Linux or Red Hat Enterprise Linux CoreOS
Type: Cloud computing, platform as a service
License: Commercial
Website: www.redhat.com/en/technologies/cloud-computing/openshift

OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform, a hybrid cloud platform as a service built around Linux containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products deliver this platform in different forms: OKD serves as the community-driven upstream (akin to the way that Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is offered as software as a service, and OpenShift Dedicated as a managed service. Several deployment methods are available, including self-managed installations and the cloud-native managed offerings ROSA (Red Hat OpenShift Service on AWS), ARO (Azure Red Hat OpenShift), and RHOIC (Red Hat OpenShift on IBM Cloud), which run on AWS, Azure, and IBM Cloud respectively.

The OpenShift Console has developer and administrator oriented views. Administrator views allow one to monitor container resources and container health, manage users, work with operators, etc. Developer views are oriented around working with application resources within a namespace. OpenShift also provides a CLI that supports a superset of the actions that the Kubernetes CLI provides.

History


OpenShift originally came from Red Hat's November 2010 acquisition of Makara, a company marketing a platform as a service (PaaS) based on Linux containers.[2][3][4] OpenShift was announced in May 2011 as proprietary technology and did not become open source until May 2012.[5] Until v3, released in June 2015, OpenShift relied on custom-developed container and orchestration technologies; v3 adopted Docker as the container technology and Kubernetes as the container orchestration technology.[6] The v4 product made many further architectural changes, a prominent one being a shift to CRI-O as the container runtime (and Podman for interacting with pods and containers) and Buildah as the container build tool, breaking the exclusive dependency on Docker.[7]

Architecture


The main difference between OpenShift and vanilla Kubernetes is the concept of build-related artifacts. In OpenShift, such artifacts are considered first-class Kubernetes resources upon which standard Kubernetes operations can apply. OpenShift's client program, "oc", offers a superset of the standard capabilities bundled in the mainline "kubectl" client program of Kubernetes.[8] Using this client, one can interact directly with the build-related resources using sub-commands (such as "new-build" or "start-build"). In addition, an OpenShift-native build technology called Source-to-Image (S2I) is available out of the box, though it is slowly being phased out in favor of Tekton, a cloud-native way of building and deploying to Kubernetes. For the OpenShift platform, these tools provide capabilities comparable to what Jenkins offers.
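As an illustrative sketch of how such a build-related resource looks (the resource name, Git URL, and builder image below are hypothetical), an S2I build is expressed as a BuildConfig object that "oc start-build" can then trigger:

```yaml
# Hypothetical S2I BuildConfig: compiles source from a Git repository
# inside a builder image and pushes the result to an ImageStreamTag.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git   # placeholder repository
  strategy:
    type: Source                                  # the S2I build strategy
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                       # assumed builder image
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest                          # where the built image lands
```

Running "oc new-build" against a Git repository generates a similar object automatically; "oc start-build myapp" then launches a build from it.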

Some other differences when OpenShift is compared to Kubernetes:

  1. The out-of-the-box install of OpenShift comes with an image repository.
  2. ImageStreams (a sequence of pointers to images which can be associated with deployments) and Templates (a packaging mechanism for application components) are unique to OpenShift and simplify application deployment and management.
  3. The "new-app" command which can be used to initiate an application deployment automatically applies the app label (with the value of the label taken from the --name argument) to all resources created as a result of the deployment. This can simplify the management of application resources.
  4. In terms of platforms, OpenShift was once limited to Red Hat's own offerings, but with OpenShift 4 it supports (as of 2020) others such as AWS, IBM Cloud, vSphere, and bare-metal deployments.[9]
  5. OpenShift’s implementation of Deployment, called DeploymentConfig is logic-based in comparison to Kubernetes' controller-based Deployment objects.[9] As of v4.5, OpenShift is steering more towards Deployments by changing the default behavior of its CLI.
  6. An embedded OperatorHub. This is a web GUI where users can browse and install a library of Kubernetes Operators that have been packaged for easy lifecycle management. These include Red Hat authored Operators, Red Hat Certified Operators and Community Operators.[10]
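To illustrate point 2 above, a minimal ImageStream might track an external image so that deployments can follow a stable in-cluster tag (the names and registry below are placeholders):

```yaml
# Hypothetical ImageStream: "myapp:latest" becomes a stable in-cluster
# pointer that re-imports the external image on a schedule.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
  labels:
    app: myapp             # the label "new-app" would apply automatically
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: quay.io/example/myapp:latest   # placeholder external image
    importPolicy:
      scheduled: true       # periodically check for a new image digest
```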

OpenShift v4 tightly controls the operating systems used. The "control plane" components have to run Red Hat Enterprise Linux CoreOS. This level of control enables the cluster to support upgrades and patches of the control plane nodes with minimal effort. The compute nodes can run any Linux OS, or even Windows.

OpenShift introduced the concept of routes - points of traffic ingress into the Kubernetes cluster. The Kubernetes ingress concept was modeled after this.[11]
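A Route ties a public hostname to a Service inside the cluster; a minimal sketch (the hostname and service name are illustrative) looks like:

```yaml
# Hypothetical Route: exposes the "myapp" Service at an external
# hostname with TLS terminated at the router.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com   # placeholder hostname
  to:
    kind: Service
    name: myapp                  # in-cluster Service receiving the traffic
  port:
    targetPort: 8080
  tls:
    termination: edge            # TLS ends at the router, plain HTTP to the pod
```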

OpenShift includes other software such as application runtimes as well as infrastructure components from the Kubernetes ecosystem. For example, for observability needs, Prometheus, Fluentd, Vector, Loki, and Istio (and their dependencies) are included. The Red Hat branding of Istio is called Red Hat Service Mesh; it is based on an open-source project called Maistra, which aligns base Istio with the needs of OpenShift.

Products


OpenShift Container Platform


OpenShift Container Platform (formerly known as OpenShift Enterprise[12]) is Red Hat's on-premises private platform as a service product, built around application containers powered by CRI-O, with orchestration and management provided by Kubernetes, on Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS.[13]

OKD


OKD, known until August 2018 as OpenShift Origin[14] (Origin Community Distribution) is the upstream community project used in OpenShift Online, OpenShift Dedicated, and OpenShift Container Platform. Built around a core of Docker container packaging and Kubernetes container cluster management, OKD is augmented by application lifecycle management functionality and DevOps tooling. OKD provides an open source application container platform. All source code for the OKD project is available under the Apache License (Version 2.0) on GitHub.[15][16]

Red Hat OpenShift Online


Red Hat OpenShift Online (RHOO) is Red Hat's public cloud application development and hosting service which runs on AWS and IBM Cloud.[17]

Online offered version 2[when?] of the OKD project source code, which is also available under the Apache License Version 2.0.[18] This version supported a variety of languages, frameworks, and databases via pre-built "cartridges" running under resource-quota "gears". Developers could add other languages, databases, or components via the OpenShift Cartridge application programming interface.[19] This was deprecated in favour of OpenShift 3,[20] and was withdrawn on 30 September 2017 for non-paying customers and 31 December 2017 for paying customers.[21]

OpenShift 3 is built around Kubernetes. It can run any Docker-based container, but OpenShift Online is limited to running containers that do not require root.[20]

Red Hat OpenShift 4 for IBM Z and IBM LinuxONE supports on-premise, cloud, and hybrid environments.[22][23]

OpenShift Dedicated


OpenShift Dedicated (OSD) is Red Hat's managed private cluster offering, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux. It has been available on the Amazon Web Services (AWS), IBM Cloud, and Google Cloud Platform (GCP) marketplaces since December 2016.[24][25] A managed private cluster offering is also available on Microsoft Azure under the name Azure Red Hat OpenShift (ARO).[26]

OpenShift Data Foundation


OpenShift Data Foundation (ODF) provides cloud-native storage, data management, and data protection for applications running with OpenShift Container Platform in the cloud,[27] on-premises, and in hybrid/multi-cloud environments.

OpenShift Database Access


Red Hat OpenShift Database Access (RHODA) is a capability in managed OpenShift Kubernetes environments enabling administrators to set up connections to database-as-a-service offerings from different providers. RHODA is an add-on service to OSD and Red Hat OpenShift Service on AWS (ROSA). RHODA's initial alpha release included support for MongoDB Atlas for MongoDB and Crunchy Bridge for PostgreSQL.[28]

from Grokipedia
Red Hat OpenShift, developed by Red Hat (an IBM company), is an enterprise-grade, Kubernetes-based container application platform designed to enable developers and organizations to build, modernize, deploy, and manage cloud-native applications at scale across hybrid cloud environments.[1] It provides a unified foundation that integrates container orchestration, CI/CD pipelines, service mesh capabilities, and observability tools, while ensuring security, compliance, and consistency from development to production.[2] Built on open-source technologies, with OKD serving as its community-driven upstream project, OpenShift extends Kubernetes with enterprise features such as automated installations, built-in image registries, and operator-based lifecycle management for applications and infrastructure.[3]

The platform's development began in late 2010, following Red Hat's acquisition of Makara, a cloud application deployment company, which accelerated its focus on platform-as-a-service (PaaS) solutions.[4] OpenShift Enterprise 1.0 was publicly released in November 2012 as an open-source PaaS offering built on Red Hat Enterprise Linux, initially supporting application deployment via "gears" and "cartridges" on cloud infrastructures like Amazon Web Services.[4] By 2013, Red Hat began integrating container technologies, joining the Docker community to enhance portability and efficiency.[4] A pivotal shift occurred in 2015 with the launch of OpenShift 3, which adopted Kubernetes as its core orchestration engine, moving away from traditional PaaS models toward container-native architectures.[4] Subsequent milestones have solidified OpenShift's position as a leader in enterprise Kubernetes.
In 2018, Red Hat acquired CoreOS, incorporating its Tectonic and etcd technologies to introduce Kubernetes operators for simplified application management.[4] OpenShift 4 arrived in 2019, featuring full-stack automation, Red Hat Enterprise Linux CoreOS for node management, and support for hybrid and multicloud deployments, including managed services like Red Hat OpenShift Service on AWS (ROSA).[4] As of November 2025, with the release of version 4.20 enhancing AI, virtualization, and security features, OpenShift powers workloads in areas such as AI/ML, virtualization, and edge computing, with built-in tools like OpenShift GitOps for declarative deployments and OpenShift Pipelines for automated workflows.[1][5] It has been recognized as a Leader in the 2025 Gartner Magic Quadrant for Container Management for the third consecutive year, underscoring its reliability and adoption by major enterprises.[1]

Overview

Definition and Purpose

OpenShift is a family of containerization software products developed by Red Hat, designed to provide enterprise-grade management of containerized applications built on the Kubernetes orchestration engine.[1][6] Its primary purpose is to enable developers and operators to build, deploy, scale, and manage containerized applications across hybrid cloud environments, thereby unifying DevOps workflows and supporting the full application lifecycle from development to production.[7][3] OpenShift has evolved from an initial Platform as a Service (PaaS) offering focused on application hosting to a comprehensive Kubernetes-based platform that integrates container orchestration with advanced enterprise tools.[4][8] In comparison to base Kubernetes, OpenShift extends the open-source orchestrator by incorporating built-in continuous integration and continuous delivery (CI/CD) pipelines, enhanced security features, and integrated monitoring capabilities to meet enterprise requirements for reliability and compliance.[9][6]

Key Features

OpenShift distinguishes itself from standard Kubernetes through a suite of integrated tools and capabilities designed to enhance developer productivity and operational efficiency. At its core, it builds upon Kubernetes pods and services as fundamental units for application deployment. Key among these are built-in developer tools that streamline the application lifecycle. Source-to-Image (S2I) automates the creation of container images from source code by injecting application code into pre-built builder images, enabling rapid builds without manual Dockerfile management. Integrated continuous integration and continuous delivery (CI/CD) is provided via OpenShift Pipelines, based on the open-source Tekton project, which allows developers to define reusable pipeline tasks for automated workflows, including building, testing, and deploying applications from Git repositories. The Operator framework represents a pivotal feature for managing stateful and complex applications. Operators are software extensions that use Kubernetes custom resources to automate the deployment, configuration, scaling, and maintenance of applications such as databases, acting as domain-specific controllers that reconcile desired states with actual cluster conditions.[10] The Operator Lifecycle Manager (OLM) facilitates the discovery, installation, and upgrading of certified Operators through an integrated catalog, ensuring secure and consistent management across environments. Multitenancy in OpenShift is achieved through enhanced project isolation, leveraging Kubernetes namespaces augmented with OpenShift-specific security context constraints (SCCs) and role-based access control (RBAC) to enforce resource quotas, network policies, and user permissions, thereby allowing multiple teams to share a cluster securely without interference. This setup provides logical separation for workloads while maintaining cluster efficiency. 
OpenShift supports hybrid cloud deployments by offering a consistent platform experience across on-premises infrastructure, major public clouds like AWS, Azure, and Google Cloud, and edge locations, with unified management tools that abstract underlying differences in infrastructure.[11] For observability, it integrates a comprehensive monitoring and logging stack featuring Prometheus for metrics collection and alerting, Loki for log storage, and visualization through the OpenShift web console, enabling real-time insights into cluster health and application performance without additional setup.[12] Self-service provisioning empowers developers to independently create and manage resources through the intuitive web console, which provides a graphical interface for deploying applications, configuring routes, and scaling services, or via the OpenShift CLI (oc) command-line tool, which offers powerful scripting capabilities for automation and integration with CI/CD pipelines.
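As a sketch of how OpenShift Pipelines expresses such CI/CD workflows, a Tekton Pipeline chains reusable Tasks; the task and workspace names below are illustrative and assume the catalog git-clone and buildah tasks are installed in the cluster:

```yaml
# Hypothetical Tekton Pipeline: clone a repository, then build an image.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
  - name: git-url
    type: string
  workspaces:
  - name: shared-data            # carries the checked-out source between tasks
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone            # assumed catalog task
    workspaces:
    - name: output
      workspace: shared-data
    params:
    - name: url
      value: $(params.git-url)
  - name: build-image
    runAfter: [fetch-source]
    taskRef:
      name: buildah              # assumed catalog image-build task
    workspaces:
    - name: source
      workspace: shared-data
```

A PipelineRun object referencing this Pipeline, with a concrete git-url parameter and a workspace volume, would execute the chain.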

History

Origins and Early Development

OpenShift was initially launched by Red Hat on May 5, 2011, as a developer preview of a Platform-as-a-Service (PaaS) solution during the Red Hat Summit in Boston.[13] This early version utilized Linux containers to deploy and manage applications, providing a cloud-based environment that supported multiple programming languages and frameworks, including Ruby on Rails, Java, PHP, and Python.[13] The platform aimed to simplify application development and deployment for developers by offering integrated tools such as Git for source code management and Jenkins for continuous integration (CI), enabling seamless workflows from code commit to hosting without managing underlying infrastructure.[13] At its core, OpenShift targeted cloud environments to reduce the complexity of scaling and maintaining applications, allowing developers to focus on coding rather than server provisioning.[14] The general availability of OpenShift 1.0 arrived in November 2012 with the release of Red Hat OpenShift Enterprise 1.0, marking the platform's transition to a production-ready, on-premise PaaS offering.[15] This version emphasized multi-tenant application hosting, leveraging a gear-based architecture for resource allocation where individual "gears" functioned as isolated, scalable units akin to early container instances, built on Red Hat Enterprise Linux technologies like SELinux for security.[15][4] Gears enabled efficient subdivision of nodes into secure, multi-tenant spaces, supporting shared infrastructure while isolating user applications, and integrated persistent storage options alongside the Git and Jenkins tools for streamlined CI processes.[15] This architecture catered to enterprise needs by providing a hybrid cloud foundation that extended the initial public beta service launched in 2011.[15] To foster community involvement, Red Hat open-sourced the platform's codebase in April 2012 through the OpenShift Origin project, which served as the upstream community edition and 
encouraged contributions from developers worldwide.[16] This initiative built on the gear model and developer-centric features, allowing external enhancements to the PaaS while maintaining compatibility with Red Hat's commercial offerings, and laid the groundwork for broader adoption in cloud-native development practices.[16]

Transition to Kubernetes

In 2015, Red Hat significantly pivoted OpenShift by integrating Kubernetes as its core orchestration engine in version 3.0, marking a departure from the platform's earlier custom cartridge-based system. This transition replaced the proprietary "gears" and "cartridges"—which handled application deployment and scaling in versions 1 and 2—with Kubernetes primitives such as pods, services, and deployments, enabling more standardized and portable container management.[14][17] Launched at the Red Hat Summit in June 2015 on Kubernetes 0.9 (ahead of its 1.0 release), OpenShift 3.0 introduced Docker as the container runtime, allowing developers to build and deploy applications as container images rather than bundled cartridges.[18] A key aspect of this shift was the role of OpenShift Origin, the open-source upstream project for the commercial OpenShift platform, which facilitated contributions back to Kubernetes development. Red Hat engineers, including early external committers like Clayton Coleman, helped shape Kubernetes features such as namespaces for multi-tenancy, custom resource definitions (CRDs), role-based access control (RBAC), and API aggregation, ensuring OpenShift's enterprise requirements influenced the broader ecosystem.[18] This upstream-downstream model positioned OpenShift Origin (later rebranded as OKD) as a community-driven foundation that extended Kubernetes with PaaS capabilities while advancing the upstream project.[19] The transition introduced several OpenShift-specific enhancements built atop Kubernetes to simplify developer workflows. Routes provided secure external access to services via HTTP/HTTPS with automatic TLS termination, while build configurations automated the creation of container images from source code using strategies like Source-to-Image (S2I). Templates enabled repeatable, parameterized deployments, allowing teams to standardize application setups across environments. 
These features addressed the limitations of the prior system by supporting atomic updates—where applications could be updated without downtime—and rolling deployments for gradual rollouts with health checks.[20][21] The rationale for adopting Kubernetes stemmed from its alignment with emerging industry standards for container orchestration, fostering greater interoperability and developer adoption. By standardizing on Kubernetes (chosen after evaluating alternatives like Apache Mesos), Red Hat aimed to support microservices architectures through its robust primitives for stateless and stateful workloads, while enabling hybrid cloud portability across on-premises and public clouds. This move capitalized on Kubernetes' strong community momentum, with Red Hat becoming the second-largest contributor after Google, and its proven scalability from Google's internal Borg system handling billions of deployments weekly. The OpenShift 3.x series, spanning releases from 3.0 to 3.11, emphasized these capabilities, rapidly gaining hundreds of enterprise customers across sectors like finance and retail by providing a "web-scale" platform for distributed applications.[17][20][22]

Major Milestones and Recent Developments

In 2018, the OpenShift community project underwent a significant rebranding from OpenShift Origin to OKD with the release of version 3.10, aiming to better distinguish the upstream community distribution from Red Hat's commercial offerings while maintaining its open-source foundation.[23] A pivotal milestone occurred in July 2019 when IBM completed its $34 billion acquisition of Red Hat, positioning OpenShift as a cornerstone of IBM's hybrid cloud strategy and enabling broader integration across multicloud environments.[24] This move facilitated the transformation of IBM's software portfolio to be cloud-native and optimized for OpenShift, enhancing enterprise adoption for hybrid deployments.[25] The shift to OpenShift 4.x began in 2019, introducing operator-based lifecycle management for automated cluster operations and improved multicluster support to simplify administration across distributed environments. The first general availability release in the OpenShift 4.x series, version 4.1, in July 2019, marked the adoption of CRI-O as the default container runtime, replacing Docker and aligning more closely with Kubernetes standards for better performance and security.[26] Subsequent versions built on this foundation; for instance, OpenShift 4.10, released in March 2022, enhanced edge computing capabilities with support for bare-metal installations, ARM architecture, and simplified deployments at remote sites.[27] In February 2024, OpenShift 4.15 advanced AI integrations by providing general availability for ARM clusters and expanded observability options, while bolstering support for AI/ML workloads through integrations like OpenShift Data Foundation.[28][29] OpenShift 4.19, released in June 2025, introduced two-node cluster configurations with a local arbiter for high availability in resource-constrained environments and extended BGP networking support in OVN-Kubernetes for efficient route advertisement in pod and VM traffic.[30][31] OpenShift 4.20, released in 
October 2025, further accelerates AI and virtualization innovation, enhances platform security, and improves hybrid cloud capabilities.[32] As of November 2025, recent developments in OpenShift emphasize AI/ML workloads through enhanced pipeline management in OpenShift AI, enabling end-to-end data processing to model serving. Serverless capabilities have advanced with Knative integrations for event-driven architectures and long-running requests tailored to AI use cases. Sustainability features, such as energy-efficient scheduling, have gained prominence to optimize resource utilization and reduce power consumption in hybrid cloud setups.[33][34] In early 2026, Red Hat released OpenShift 4.21, built on Kubernetes 1.34 and RHEL CoreOS 9.6. Key enhancements include support for continuous operation of critical applications through cross-cluster live VM migration (no downtime during maintenance), attribute-based GPU allocation (generally available for profile or capability-based requests), improved observability features in OpenShift and RHACM, and a unified ecosystem menu merging developer and admin views with centralized software catalog and Helm support. Red Hat Advanced Cluster Management for Kubernetes (RHACM) 2.16, included in OpenShift Platform Plus, introduced generally available right-sizing recommendations at cluster, namespace, and VM levels to optimize CPU/memory allocation, reduce waste, and improve stability/efficiency in multicluster environments. RHACM continues to expand support for diverse Kubernetes distributions beyond OpenShift, including those in the Certified Kubernetes Conformance Program. Red Hat was named a Leader in the 2025 Gartner Magic Quadrant for Container Management for the third consecutive year, praised for strong enterprise use cases especially in hybrid environments, though it trails hyperscalers (Google, Microsoft, AWS) in ability to execute and completeness of vision, with some client concerns over costs. 
OpenShift receives praise for its enterprise-grade security features, consistent hybrid cloud experience across environments, and robust Red Hat support ecosystem. However, some users criticize the subscription-based pricing model as costly compared to open-source alternatives or hyperscaler offerings, and the platform's opinionated design can introduce additional complexity or perceived vendor lock-in for teams preferring more lightweight Kubernetes distributions.

Architecture

Core Components

OpenShift's core components form the foundation of its Kubernetes-based architecture, extending standard Kubernetes elements to provide enterprise-grade container orchestration. At the heart of the platform is the control plane, which manages cluster state and operations, while nodes execute workloads. Key Kubernetes primitives such as pods, services, deployments, and replica sets are augmented with OpenShift-specific features for enhanced management and scalability. Additionally, Operators serve as custom controllers to automate complex application lifecycles, and user interfaces like the web console and oc CLI facilitate interaction with the cluster.[35] The control plane consists of several critical elements that ensure the cluster's reliability and coordination. The API server acts as the front-end for the Kubernetes API, validating and configuring data for resources like pods, services, and replication controllers; it is managed by the OpenShift API Server Operator to handle platform-specific extensions. Etcd provides distributed, consistent key-value storage for all cluster data, including object states and configuration details, and is overseen by the etcd Operator for high availability and backups. The scheduler evaluates resource requirements and assigns pods to suitable nodes based on availability and constraints, while the controller manager runs background processes to reconcile the current state with the desired state, incorporating both Kubernetes and OpenShift controllers for tasks like node management.[36] Nodes in an OpenShift cluster are divided into control plane (formerly master) nodes and worker nodes, each optimized for their roles. Control plane nodes host the control plane components and require Red Hat Enterprise Linux CoreOS (RHCOS) as the host operating system to ensure consistency and security updates.
Worker nodes run application workloads and can use either RHCOS or Red Hat Enterprise Linux (RHEL) for flexibility in diverse environments. The CRI-O container runtime, a lightweight Kubernetes-native interface, executes containers on nodes, replacing Docker and integrating seamlessly with Kubernetes pods for efficient resource isolation.[37][38] OpenShift builds on Kubernetes primitives with annotations and extensions to support developer workflows. Pods represent the smallest deployable units, encapsulating one or more containers that share storage and network resources, often including init containers for setup tasks. Services provide stable IP addresses and load balancing to expose pods as network endpoints, enabling reliable access to applications. Deployments manage the rollout and scaling of stateless applications by creating replica sets, with OpenShift adding features like DeploymentConfigs for finer-grained control over updates and rollbacks. Replica sets ensure a specified number of pod replicas are running at all times, automatically replacing failed instances to maintain availability.[39][40] Operators extend Kubernetes controllers to manage stateful and complex applications through declarative configurations, encoding operational knowledge into software. Custom Operators, often sourced from the OperatorHub, automate tasks like database provisioning or application upgrades, using custom resources to define behaviors. The Cluster Operator framework oversees platform health, with built-in operators such as the Cluster Version Operator (CVO) for updates and the Machine Config Operator (MCO) for node configurations, ensuring the cluster remains in a consistent, operable state.[41] For user interaction, OpenShift provides the web console, a browser-based graphical interface for visualizing and managing cluster resources, projects, and deployments, offering an intuitive alternative to command-line operations. 
The oc command-line interface (CLI), a client tool for OpenShift, allows administrators and developers to create, inspect, and update resources via commands like oc apply and oc get, supporting scripting and automation in development pipelines.[42][43]
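These primitives are managed declaratively; as a minimal sketch (the names and image path are placeholders), a Deployment could be created with "oc apply -f deployment.yaml" and inspected with "oc get deployment myapp":

```yaml
# Hypothetical Deployment: three replicas of a single-container app,
# pulled from the cluster's internal image registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                 # the replica set keeps three pods running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: image-registry.openshift-image-registry.svc:5000/myproject/myapp:latest
        ports:
        - containerPort: 8080
```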

Networking and Storage

OpenShift's networking architecture leverages the OVN-Kubernetes Container Network Interface (CNI) plugin as the default network provider starting from version 4.9, enabling efficient pod-to-pod communication through a virtualized overlay network based on Open Virtual Network (OVN).[44] This plugin implements Kubernetes network policy support, including both ingress and egress rules, to enforce fine-grained traffic control between pods and services, while also providing built-in load balancing for service endpoints via distributed virtual routers.[44] For scenarios requiring multiple network interfaces on pods, OpenShift integrates the Multus CNI meta-plugin with OVN-Kubernetes, allowing secondary networks such as host-device or SR-IOV to be attached alongside the primary overlay; as of version 4.20, SR-IOV management is namespaced for improved isolation.[45] External exposure of services in OpenShift is primarily handled through Routes, which abstract the underlying service discovery and direct traffic to pods via the cluster's ingress infrastructure.[46] The Ingress Operator deploys HAProxy-based Ingress Controllers to manage HTTP and HTTPS routing, supporting features like TLS termination, path-based routing, and automatic certificate management for secure external access.[47] Egress policies in OVN-Kubernetes further enhance outbound traffic management by allowing administrators to restrict or redirect pod-initiated connections to external destinations, such as through dedicated IP addresses or firewalls.[44] For advanced traffic management, OpenShift integrates with the OpenShift Service Mesh, built on Istio, which introduces sidecar proxies for observability, fault injection, and secure mTLS communication across microservices without altering application code.[48] In 2025, enhancements to OVN-Kubernetes introduced native Border Gateway Protocol (BGP) support for bare-metal deployments, enabling direct advertisement of pod and service routes to upstream 
routers for optimized underlay integration and reduced latency in large-scale environments.[49] OpenShift's storage subsystem relies on the Container Storage Interface (CSI) standard to integrate diverse storage backends, facilitating dynamic provisioning of persistent volumes (PVs) through storage classes that abstract underlying hardware or cloud resources.[50] Operators like OpenShift Data Foundation (ODF) extend this capability by automating the deployment of CSI drivers for software-defined storage, supporting on-demand volume creation for stateful applications across hybrid environments. As of version 4.20, volume populators are generally available, allowing dynamic population of PVs with data from various sources via dataSourceRef.[51][52] Through CSI and ODF, OpenShift accommodates block storage for high-performance databases, file storage for shared workloads like content management, and object storage for scalable data lakes, with each type provisioned via dedicated drivers that ensure data durability and snapshot capabilities.[53] This modular approach allows seamless integration with external providers, such as AWS EBS or Ceph, while maintaining Kubernetes-native volume lifecycle management.[50]
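Dynamic provisioning is driven from the claim side: a PersistentVolumeClaim names a storage class, and the matching CSI driver creates the volume on demand. A minimal sketch (the class name is an example of an ODF-provided class and varies per cluster):

```yaml
# Hypothetical PVC: requests a 10 GiB volume from a CSI-backed storage
# class; the driver provisions the PersistentVolume on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
  - ReadWriteOnce               # single-node read-write attach
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed ODF class name
```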

Security and Management

OpenShift employs Role-Based Access Control (RBAC) to manage permissions, utilizing roles and role bindings to grant access within specific namespaces or cluster-wide, supporting multitenancy by isolating workloads across projects.[54] Predefined roles such as cluster-admin, admin, and edit provide granular control, ensuring users and service accounts adhere to least-privilege principles without allowing direct API access to sensitive resources.[54] Security is further enhanced by SELinux enforcement, which applies mandatory access controls at the kernel level to prevent container escapes and isolate processes from the host operating system on Red Hat Enterprise Linux CoreOS (RHCOS) nodes.[54] Pod Security Standards are implemented through Security Context Constraints (SCCs), which enforce policies on pod creation, restricting capabilities like privileged execution, volume mounts, and SELinux contexts to mitigate common vulnerabilities; a new hostmount-anyuid-v2 SCC was introduced in version 4.20.[54] As of OpenShift 4.20, support for deploying pods and containers into Linux user namespaces is generally available and enabled by default, providing enhanced isolation by mapping container UIDs to a user namespace on the host. 
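As a concrete example of the RBAC model described above, granting a user the predefined edit role within a single project is expressed as a RoleBinding (the user and project names here are illustrative):

```yaml
# RoleBinding granting the built-in "edit" cluster role to one user,
# scoped to a single namespace (project).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit-my-project
  namespace: my-project        # hypothetical project name
subjects:
  - kind: User
    name: alice                # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # one of the predefined roles mentioned above
  apiGroup: rbac.authorization.k8s.io
```

The equivalent binding can also be created with the CLI shortcut `oc adm policy add-role-to-user edit alice -n my-project`.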
Additionally, image scanning integrates Clair via the Red Hat Quay Container Security Operator, automatically detecting known vulnerabilities in container images from sources like RHEL and CentOS during builds and deployments.[52][54] Authentication in OpenShift relies on integration with external providers, including LDAP for directory services, OAuth 2.0 via its built-in server, and identity providers such as GitHub and Google through the OpenID Connect protocol.[54] This setup supports Red Hat Single Sign-On (RH-SSO) with SAML 2.0 for federated identity, enabling secure token-based access while centralizing user management across enterprise environments.[54]

OpenShift provides support for secrets management through dedicated operators integrated into its ecosystem. The External Secrets Operator enables the synchronization of secrets from external providers, including Bitwarden via the bitwardenSecretManagerProvider plugin, utilizing certified containers for the Bitwarden SDK Server.[55][56] Additionally, the Vault Secrets Operator, certified on OpenShift, integrates HashiCorp Vault to manage application secrets, allowing secure storage, access control, and automatic rotation of secrets across Kubernetes namespaces.[57]

Cluster management leverages operators for automation, with the Cluster Version Operator (CVO) handling rolling updates to maintain security patches and version compliance without downtime.[54] The Machine Config Operator customizes node configurations declaratively, applying changes such as kernel parameters or disk encryption via MachineConfig objects to ensure consistent security postures across the fleet.[54] For multicluster environments, Red Hat Advanced Cluster Management (ACM) enables federation, allowing centralized policy enforcement, observability, and lifecycle management of distributed OpenShift clusters from a single hub.[58]

Monitoring capabilities are provided by the Cluster Monitoring Operator, which deploys and manages Prometheus instances to scrape metrics from cluster components, applications, and nodes, supporting custom alerting rules based on thresholds for issues like high CPU usage or certificate expirations.[59] This integration offers real-time dashboards and automated notifications, facilitating proactive security incident response.[59] Compliance features include support for Federal Information Processing Standards (FIPS) 140-2 and 140-3, with validated cryptographic modules on architectures such as x86_64, ppc64le, and s390x when enabled on RHCOS nodes.[60] The Compliance Operator automates assessments against standards such as the Center for Internet Security (CIS) OpenShift Container Platform benchmarks, generating reports on compliance gaps and remediation steps to align with regulatory requirements like PCI DSS.[61]
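Custom alerting rules of the kind described above are defined as PrometheusRule objects consumed by the monitoring stack; a minimal sketch (the metric expression, threshold, and labels are illustrative):

```yaml
# PrometheusRule defining a simple CPU-usage alert evaluated by the
# cluster monitoring stack (expression and threshold are illustrative).
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-alerts
  namespace: openshift-monitoring
spec:
  groups:
    - name: node-cpu
      rules:
        - alert: HighNodeCPU
          expr: instance:node_cpu_utilisation:rate1m > 0.9
          for: 15m                # must hold for 15 minutes before firing
          labels:
            severity: warning
          annotations:
            summary: "Node CPU usage above 90% for 15 minutes"
```

Alerts that fire are routed by Alertmanager, which the same operator deploys, to notification channels configured by the administrator.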

Products and Services

OpenShift Container Platform

OpenShift Container Platform (OCP) is Red Hat's flagship enterprise Kubernetes distribution, provided as a subscription-based offering that delivers a comprehensive platform for developing, deploying, and managing containerized applications across hybrid cloud environments. It includes full enterprise support, covering security updates, technical assistance from Red Hat experts, and integration with Kubernetes Operators for automating the lifecycle of applications and platform components. Core features include built-in CI/CD pipelines, source-to-image capabilities for rapid application builds, monitoring with Prometheus and Grafana, and a consistent experience across on-premises and cloud deployments.[3]

OCP supports flexible installation methods tailored to various infrastructures. For cloud environments such as AWS, installer-provisioned infrastructure (IPI) automates cluster provisioning by leveraging cloud APIs to create and configure resources, including virtual machines, networks, and load balancers, using default or customized configurations via the OpenShift installer tool. In disconnected or air-gapped environments common to regulated industries, the agent-based installer enables offline deployments by generating a bootable ISO image from configuration files, allowing bootstrapping without internet access and supporting architectures such as x86_64, ARM, and ppc64le.[62][63]

The platform follows a structured lifecycle policy, with at least four minor versions in active support at any time as of November 2025, including full support for versions 4.19 (generally available June 2025) and 4.20 (generally available October 2025), alongside maintenance support for 4.18. Each minor release receives up to four years of total support, comprising six months of full support, 18 months of maintenance support, and optional extended update support for even-numbered releases extending an additional 18 months.[64]

OCP is designed for enterprise use cases in hybrid cloud setups, enabling application portability and operational efficiency across on-premises data centers and public clouds while reducing management overhead. It particularly suits regulated industries such as finance and government, where air-gapped installations ensure compliance with security standards by isolating clusters from external networks during deployment and operation.[7][65] Pricing for OCP follows a core-based subscription model through Red Hat, with entitlements calculated at one subscription per physical core or per two vCPUs on worker nodes, and options for standard or premium support levels covering self-managed clusters. Subscriptions provide access to updates, certifications, and indemnity, with reserved instances available for cost optimization in multi-year commitments.[66]
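The installer-provisioned (IPI) method mentioned in this section is driven by an install-config.yaml file generated and consumed by the openshift-install tool; a trimmed sketch for an AWS cluster (the domain, cluster name, region, and elided secrets are all placeholders):

```yaml
# Minimal install-config.yaml sketch for an installer-provisioned
# (IPI) AWS cluster; all values are placeholders.
apiVersion: v1
baseDomain: example.com          # DNS zone the cluster is created under
metadata:
  name: demo-cluster             # cluster name, prefixed to baseDomain
platform:
  aws:
    region: us-east-1
controlPlane:
  name: master
  replicas: 3                    # three control-plane nodes
compute:
  - name: worker
    replicas: 3
pullSecret: '...'                # obtained from the Red Hat console
sshKey: 'ssh-ed25519 ...'        # optional key for node access
```

Running `openshift-install create cluster` against a directory containing this file drives the automated provisioning of the cloud resources and the cluster itself.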

OKD

OKD is the community-driven, open-source distribution of OpenShift, designed for non-commercial use and serving as the upstream codebase for Red Hat's commercial offerings. Originally known as Origin, it was rebranded to OKD in 2018 to better reflect its role as a Kubernetes-based platform optimized for continuous application development and multi-tenant deployments. OKD provides the same core features as the OpenShift Container Platform, including automated builds, deployments, and scaling, but without enterprise-level support or certifications.[23][67][68]

Installation of OKD is facilitated through the official website at okd.io and its associated GitHub repositories, where users can download the OpenShift installer tool. It supports the same deployment methods as its commercial counterpart, such as assisted installation on cloud providers like AWS or bare-metal setups, though all maintenance and updates are handled by the community. The process typically involves generating an install configuration file, providing the necessary credentials, and running the installer, which can complete a cluster setup in approximately 30 minutes on supported platforms.[67][69]

Community governance for OKD is coordinated through GitHub, where contributions, bug reports, and feature proposals are submitted via pull requests and issues in the primary repository. The project adheres to the Apache 2.0 license and aligns its release cycles with upstream Kubernetes versions to maintain compatibility, with major releases like OKD 4.17 corresponding to recent Kubernetes updates. Oversight is provided by the OKD Working Group, which holds bi-weekly meetings to discuss agendas, review proposals, and ensure community-driven development, with notes shared via a dedicated mailing list.[67][70][71]

OKD is particularly suited to use cases such as testing application deployments, learning Kubernetes orchestration in a production-like environment, and running small-scale clusters for development teams. It also acts as a foundational layer for organizations building custom Kubernetes distributions tailored to specific needs, enabling experimentation without licensing costs.[67][68] Key limitations of OKD include the absence of formal service level agreements (SLAs), meaning no guaranteed response times or uptime commitments from a vendor. Security errata and patches are issued and maintained solely by the community, requiring users to monitor GitHub and official channels for updates rather than relying on automated commercial notifications.[23][67]

Managed Cloud Offerings

Red Hat OpenShift Dedicated is a fully managed, single-tenant service provided by Red Hat, offering dedicated OpenShift clusters hosted in virtual private clouds on Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).[72] The service handles cluster installation, upgrades, patching, and maintenance, including a 99.95% uptime service level agreement (SLA) and automated updates to ensure operational reliability.[73] Operators within OpenShift Dedicated manage the platform foundation, eliminating manual interventions for operating systems and control plane applications.[73]

Red Hat OpenShift Service on AWS (ROSA) provides a co-managed experience, jointly operated by Red Hat and AWS, with a fully managed control plane and worker nodes for deploying containerized applications.[74] It integrates AWS security features, such as the Security Token Service (STS) for credential management, and supports OpenShift version 4.20 as of November 2025, enabling seamless scaling across multiple availability zones.[75] ROSA includes a 99.95% uptime SLA and pay-as-you-go billing tied to AWS infrastructure usage.[76]

Azure Red Hat OpenShift (ARO) is a fully managed OpenShift service jointly engineered, operated, and supported by Microsoft and Red Hat, deeply integrated with Azure services for enterprise workloads.[77] It emphasizes hybrid cloud scenarios, allowing secure connections between on-premises environments and Azure resources while maintaining compliance and resilience.[78]

The Developer Sandbox serves as a free trial offering, providing 30-day renewable access to a private project on a shared, multi-tenant OpenShift cluster, pre-configured with developer tools and a browser-based IDE such as VS Code.[79] It replaces OpenShift Online, which was deprecated after 2020, and offers resource quotas of 14 GB RAM and 40 GB storage without commitment.[80][81]

These managed cloud offerings deliver key benefits, including elimination of infrastructure management responsibilities, automated patching and upgrades by Red Hat, and flexible pricing models such as per-cluster subscriptions for Dedicated or usage-based fees for ROSA and ARO.[66][82] This allows organizations to focus on application development and scaling while leveraging the underlying OpenShift platform's Kubernetes foundation.[2]

Storage and Database Services

OpenShift Data Foundation (ODF) is a Ceph-based software-defined storage solution that provides persistent block, file, and object storage for containerized applications on OpenShift.[83] It supports data replication for high availability and snapshots for point-in-time recovery, enabling resilient data management across hybrid environments.[83] ODF achieved general availability in 2020, with ongoing releases aligning to OpenShift versions, including enhancements in 2025 such as improved multicloud capabilities via the NooBaa-based Multicloud Object Gateway (MCG) for object storage federation.[83]

Red Hat OpenShift Database Access (RHODA), powered by the RHODA operator, offers managed access to relational databases like PostgreSQL and MySQL from cloud providers, simplifying Database-as-a-Service (DBaaS) deployment.[84] The RHODA operator enables self-service provisioning of database instances, automatic scaling based on workload demands, and integrated backup management to ensure data durability and recovery.[84]

ODF integrates with OpenShift through Container Storage Interface (CSI) drivers for dynamic provisioning of persistent volumes and the Rook operator for Ceph cluster orchestration, allowing seamless storage allocation for stateful workloads without manual intervention.[85] These services support key use cases such as running stateful applications like databases and enabling AI data lakes with scalable object storage.[86] ODF's AES-256 encryption for data at rest and in transit aids compliance with regulations like GDPR by protecting sensitive information.[87] In 2025 updates, ODF enhances support for AI vector search through integration with OpenShift AI, facilitating storage for vector databases like pgvector for retrieval-augmented generation workloads.[88]
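Object storage through the Multicloud Object Gateway is typically consumed with an ObjectBucketClaim, which provisions an S3-compatible bucket on demand and exposes its credentials to the application; a minimal sketch (the claim name, namespace, and bucket prefix are illustrative):

```yaml
# ObjectBucketClaim requesting an S3-compatible bucket from the
# NooBaa-based Multicloud Object Gateway (names are illustrative).
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: data-lake-bucket
  namespace: my-app                 # hypothetical namespace
spec:
  generateBucketName: data-lake     # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io
```

On fulfillment, the operator creates a ConfigMap and a Secret of the same name containing the bucket endpoint and access keys for the workload to mount.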

AI and Virtualization Services

Red Hat OpenShift AI is an enterprise platform designed to manage the full lifecycle of predictive and generative AI models across hybrid cloud environments, enabling data scientists and developers to build, deploy, and scale AI applications. It integrates tools such as the Workbench for Jupyter notebooks to facilitate data preparation and model training, and KServe for efficient model inference and serving. The platform supports optimized runtimes like vLLM for large language models (LLMs), reducing inference costs through hardware acceleration and scalable deployment options. The 3.0 release in November 2025 includes enhancements for LLM optimization, building on the AI Inference Server introduced in May 2025 to streamline serving and inference of LLMs in production workloads.[89][90][91][92][93]

OpenShift AI 3.0 is, however, a fast release and is generally not intended for production upgrade paths: fast releases are short-lived and supported only until the next fast release, an accelerated cycle impractical for most customers, whereas production upgrades occur between stable releases. Upgrades to OpenShift AI 3.x versions, including 3.0 and 3.2, are not supported; Red Hat instead prioritizes a migration path from stable 2.x releases (such as 2.25) to the first stable 3.x release, and its documentation does not describe switching operator channels from "fast" to "stable".[94][95]

OpenShift Virtualization extends the platform's capabilities by leveraging KubeVirt to run virtual machines (VMs) alongside containers within the same Kubernetes cluster, supporting hybrid workloads that combine traditional and cloud-native applications. It enables features such as live migration of VMs for high availability, GPU passthrough for compute-intensive tasks, and seamless integration with OpenShift's orchestration layer. Starting with OpenShift Container Platform 4.15, virtualization is fully integrated as an operator-managed service, allowing administrators to deploy and manage VMs using native Kubernetes custom resources without disrupting containerized workloads. This setup unifies the VM and container ecosystems, improving resource efficiency in diverse environments.[96][97]

Complementary services bolster AI and virtualization operations within OpenShift. OpenShift Pipelines, powered by Tekton, automates CI/CD workflows for AI model development, enabling reproducible pipelines from training to deployment. OpenShift Service Mesh, based on Istio, provides traffic management for AI inference workloads, including routing, load balancing, and observability to handle dynamic model serving demands. Additionally, OpenShift GitOps, built on Argo CD, supports declarative management of AI resources, ensuring consistent deployments across clusters through Git-based synchronization. These integrations promote secure, scalable operations for AI traffic and hybrid VM-container scenarios.[98][99][100][101]

OpenShift AI and Virtualization address key use cases in hybrid and edge computing, such as deploying AI models for real-time inference at the edge to minimize latency in IoT or manufacturing applications while maintaining centralized management. They support secure model serving through integrated security features, including API protection and access controls to safeguard sensitive AI workloads against unauthorized access. For compact environments, 2025 developments introduced enhanced two-node support for OpenShift Virtualization via the Two Node OpenShift with Arbiter (TNA) technology preview, enabling resilient VM operations in resource-constrained setups such as remote sites without a third witness node.[102][103][104]
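A KubeVirt-managed virtual machine, as described above, is declared with the same Kubernetes resource model used for containers; a minimal sketch booting from a container-disk image (the image reference and resource sizes are illustrative):

```yaml
# KubeVirt VirtualMachine booting from a container disk image
# (image reference and resource sizes are illustrative).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true                    # start the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio        # paravirtualized disk bus
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example image
```

Once applied, the virtualization operator schedules the VM as a pod-wrapped QEMU/KVM process, so standard tooling such as `oc get vm` and live migration work through the same Kubernetes API surface as containers.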

Reception

OpenShift receives high praise for its enterprise readiness, with strong user ratings (e.g., 4.6/5 on Gartner Peer Insights) highlighting ease of management, built-in security, and hybrid cloud support, and it excels in regulated industries that need governance and compliance. However, some users criticize its cost (subscriptions are priced per core or node), its heavier footprint compared with plain Kubernetes, and its opinionated design, which enforces Red Hat's conventions and can cause vendor lock-in or break community container images that conflict with its strict default security policies. In comparisons, OpenShift offers more out-of-the-box integration than upstream Kubernetes but trades away some flexibility; it competes with lighter options such as Rancher for multi-cluster management, or with hyperscaler services for cloud-centric use.

References
