Open Compute Project
from Wikipedia

The Open Compute Project (OCP) is an organization that facilitates the sharing of data center product designs and industry best practices among companies.[1][2] Founded in 2011, OCP has significantly influenced the design and operation of large-scale computing facilities worldwide.[1]

Key Information

As of February 2025, over 400 companies across the world are members of OCP, including Arm, Meta, IBM, Wiwynn, Intel, Nokia, Google, Microsoft, Seagate Technology, Dell, Rackspace, Hewlett Packard Enterprise, NVIDIA, Cisco, Goldman Sachs, Fidelity, Lenovo, Accton Technology Corporation and Alibaba Group.[1][3][2]

Structure

[Images: Open Compute V2 server; Open Compute V2 drive tray with the second lower tray extended]

The Open Compute Project Foundation is a 501(c)(6) non-profit incorporated in the state of Delaware, United States. OCP has multiple committees, including the board of directors, advisory board and steering committee to govern its operations.

As of July 2020, the board of directors has seven members: one individual member and six organizational members. Mark Roenigk (Facebook) is the Foundation's president and chairman. Andy Bechtolsheim is the individual member. In addition to Mark Roenigk, who represents Facebook, organizations on the Open Compute board of directors include Intel (Rebecca Weekly), Microsoft (Kushagra Vaid), Google (Partha Ranganathan), and Rackspace (Jim Hawkins).[4]

A current list of members can be found on the opencompute.org website.

History


The Open Compute Project began at Facebook as an internal project in 2009 called "Project Freedom". The hardware designs and engineering team were led by Amir Michael (Manager, Hardware Design)[5][6][7] and sponsored by Jonathan Heiliger (VP, Technical Operations) and Frank Frankovsky (Director, Hardware Design and Infrastructure). The three would later open-source the designs of Project Freedom and co-found the Open Compute Project.[8][9] The project was announced at a press event at Facebook's headquarters in Palo Alto on April 7, 2011.[10]

OCP projects


The Open Compute Project Foundation maintains a number of OCP projects, such as:

Server designs


Two years after the Open Compute Project started, it was acknowledged that the new, more modular server design was "still a long way from live data centers".[11] However, some aspects that had been published were used in Facebook's Prineville data center to improve energy efficiency, as measured by the power usage effectiveness index defined by The Green Grid.[12]

Efforts to advance server compute node designs included one for Intel processors and one for AMD processors. In 2013, Calxeda contributed a design with ARM architecture processors.[13] Since then, several generations of OCP server designs have been deployed: Wildcat (Intel), Spitfire (AMD), Windmill (Intel E5-2600), Watermark (AMD), Winterfell (Intel E5-2600 v2) and Leopard (Intel E5-2600 v3).[14][15]

OCP Accelerator Module


OCP Accelerator Module (OAM) is a design specification for hardware architectures that implement artificial intelligence systems that require high module-to-module bandwidth.[16]

OAM is used in some of AMD's Instinct accelerator modules.

Rack and Power designs


The designs for a mechanical mounting system have been published, so that open racks have the same outside width (600 mm) and depth as standard 19-inch racks, but are designed to mount wider chassis with a 537 mm (21-inch) width. This allows more equipment to fit in the same volume and improves air flow. Compute chassis sizes are defined in multiples of an OpenU or OU, which is 48 mm, slightly taller than the standard 44.45 mm rack unit. The most current base mechanical specifications were defined and published by Meta as the Open Rack V3 Base Specification in 2022, with significant contributions from Google and Rittal.[17]
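The OU arithmetic can be illustrated with a quick sketch; the 2100 mm usable height below is a hypothetical value for illustration, not from any OCP specification:

```python
# Rough comparison of Open Rack "OpenU" (OU) vs. the standard EIA rack unit.
# Dimensions from the text: 1 OU = 48 mm; a standard rack unit is 44.45 mm (1.75 in).
OU_MM = 48.0
U_MM = 44.45

def units_that_fit(usable_height_mm: float, unit_mm: float) -> int:
    """Whole equipment units that fit in a given usable rack height."""
    return int(usable_height_mm // unit_mm)

usable = 2100.0  # hypothetical usable height in mm
print(units_that_fit(usable, OU_MM))  # 43 OU slots
print(units_that_fit(usable, U_MM))   # 47 standard U slots
```

The taller OU trades a few slots per rack for more vertical space per chassis, which the wider 537 mm opening compensates for in total equipment volume.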

At the time the base specification was released, Meta also defined in greater depth the specifications for the rectifiers and power shelf.[18][19] Specifications for the power monitoring interface (PMI), a communications interface enabling upstream communications between the rectifiers and the battery backup unit (BBU), were published by Meta that same year, with Delta Electronics as the main technical contributor to the BBU spec.[20]

Since 2022, however, the power demands of AI in the data center have outgrown these specifications, as newer data center processors draw substantially more power. Meta is currently updating its Open Rack V3 rectifier, power shelf, battery backup and power management interface specifications to accommodate these more power-hungry AI architectures.

In May 2024, at an Open Compute regional summit, Meta and Rittal outlined their plans for development of their High Power Rack (HPR) ecosystem in conjunction with rack, power and cable partners, increasing the power capacity in the rack to 92 kilowatts or more, meeting the higher power needs of the latest generation of processors.[21] At the same meeting, Delta Electronics and Advanced Energy presented their progress in developing new Open Compute standards specifying power shelf and rectifier designs for these HPR applications.[22] Rittal also outlined its collaboration with Meta in adapting airflow containment, busbar designs and grounding schemes to the new HPR requirements.[23]

Data storage


Open Vault storage building blocks offer high disk densities, with 30 drives in a 2U Open Rack chassis designed for easy disk drive replacement. The 3.5-inch disks are stored in two drawers, five across and three deep in each drawer, with connections via serial attached SCSI.[24] This storage is also called Knox; there is also a cold storage variant in which idle disks power down to reduce energy consumption.[25] Another design concept was contributed by Hyve Solutions, a division of Synnex, in 2012.[26][27] At the 2016 OCP Summit, Facebook, together with Wiwynn, a spin-off of the Taiwanese ODM Wistron, introduced Lightning, a flexible NVMe JBOF (just a bunch of flash) based on the existing Open Vault (Knox) design.[28][29]
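The drive count follows directly from the drawer layout described above; the 40 OU rack figure below is a hypothetical illustration, not an OCP specification:

```python
# Open Vault (Knox) drive-density arithmetic from the text:
# two drawers per 2U chassis, each holding drives five across and three deep.
DRAWERS = 2
ACROSS = 5
DEEP = 3
drives_per_chassis = DRAWERS * ACROSS * DEEP
print(drives_per_chassis)  # 30 drives per 2U chassis

# Hypothetical example: drives in a rack with 40 OU of usable space,
# assuming one Open Vault chassis per 2 OU.
usable_ou = 40
chassis = usable_ou // 2
print(chassis * drives_per_chassis)  # 600 drives
```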

Energy efficient data centers


The OCP has published data center designs for energy efficiency. These include power distribution at three-phase 277/480 VAC, which eliminates one transformer stage found in typical North American data centers; a single-voltage (12.5 VDC) power supply designed to work with 277/480 VAC input; and 48 VDC battery backup.[12] For data centers in Europe and other 230 V countries, there is a specification for 230/400 VAC power distribution and its conversion to 12.5 VDC.[30]
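The paired line-to-line/line-to-neutral voltages above (480/277 and 400/230) are related by the three-phase factor of √3, which a short sketch can verify:

```python
import math

# In a three-phase wye system, line-to-neutral voltage = line-to-line / sqrt(3).
# This is where the paired 277/480 VAC and 230/400 VAC figures come from.
def line_to_neutral(line_to_line: float) -> float:
    return line_to_line / math.sqrt(3)

print(round(line_to_neutral(480), 1))  # ~277.1 V (North American distribution)
print(round(line_to_neutral(400), 1))  # ~230.9 V (European distribution)
```

Distributing at the higher line-to-line voltage and serving loads phase-to-neutral is what lets these designs skip a transformer stage.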

Open networking switches


On May 8, 2013, an effort to define an open network switch was announced.[31] The plan was to allow Facebook to load its own operating system software onto the switch. Press reports predicted that more expensive and higher-performance switches would continue to be popular, while less expensive products treated more like a commodity (using the buzzword "top-of-rack") might adopt the proposal.[32]

Facebook's first open networking switch, called Wedge, was designed together with the Taiwanese ODM Accton using a Broadcom Trident II chip; the Linux-based operating system it runs is called FBOSS.[33][34][35] Later switch contributions include "6-pack" and Wedge-100, based on Broadcom Tomahawk chips.[36] Similar switch hardware designs have been contributed by Accton Technology Corporation (and its Edgecore Networks subsidiary), Mellanox Technologies, Interface Masters Technologies, and Agema Systems.[37] These switches are capable of running Open Network Install Environment (ONIE)-compatible network operating systems such as Cumulus Linux, Switch Light OS by Big Switch Networks, or PICOS by Pica8.[38] A similar project for a custom switch for the Google platform had been rumored, and evolved to use the OpenFlow protocol.[39][40]

Servers


Within the sub-project for mezzanine NICs, the OCP NIC 3.0 specification 1v00 was released in late 2019, establishing three form factors: SFF, TSFF, and LFF.[41][42]

Litigation


In March 2015,[43] BladeRoom Group Limited and Bripco (UK) Limited sued Facebook, Emerson Electric Co. and others, alleging that Facebook had disclosed BladeRoom and Bripco's trade secrets for prefabricated data centers in the Open Compute Project.[44] Facebook petitioned for the lawsuit to be dismissed,[45] but the petition was rejected in 2017.[46] A confidential mid-trial settlement was agreed in April 2018.[47]

from Grokipedia
The Open Compute Project (OCP) is a collaborative, open-source initiative founded by Facebook (now Meta) in 2011 to redesign hardware for data centers, enabling the sharing of efficient, scalable designs among a global community of technology companies, engineers, and researchers. The project originated from Facebook's efforts, begun in 2009, to create a more energy-efficient data center in Prineville, Oregon; the facility, completed in 2011, was 38% more energy efficient and 24% less expensive to operate than prior facilities. This success prompted the open-sourcing of the designs to accelerate industry-wide innovation, lower costs, and promote sustainability in computing infrastructure. As a non-profit organization, OCP operates on core tenets of efficiency, impact, openness, scalability, and sustainability, requiring contributions to align with at least three of these principles. OCP's structure fosters participation through working groups and subprojects focused on key areas such as servers (including general-purpose and GPU-accelerated systems), storage, networking equipment, rack and power systems, facilities, hardware management, and emerging domains such as open accelerator infrastructure for AI. The community shares designs and specifications freely, encouraging hardware manufacturers to adapt and produce OCP-compliant products. Since its inception, OCP has grown into an influential force in the data center ecosystem, with contributions from major industry players driving the adoption of open standards that have reduced hardware complexity and environmental impact across global data centers. By 2025, the project continues to evolve, supporting advanced applications in AI and sustainable IT while hosting annual global summits to advance collaborative innovation.

History

Founding and Early Development

The Open Compute Project (OCP) was founded in 2011 by engineers at Facebook (now Meta) to address the challenges of scaling hardware for rapidly growing infrastructure, particularly in hyperscale data centers. The initiative stemmed from Facebook's internal efforts, starting in 2009, to design an energy-efficient facility in Prineville, Oregon, which became the company's first dedicated data center. This project, completed by 2011 after two years of work by a small team of engineers, achieved significant improvements: the facility used 38% less energy and was 24% less expensive to operate than previous facilities, driven by custom optimizations in servers, racks, and cooling systems. The core motivation was to break free from proprietary hardware constraints that limited innovation and increased costs for large-scale deployments. On April 7, 2011, Facebook publicly announced OCP at its headquarters in Palo Alto, releasing open-source designs for key components to foster industry-wide collaboration. These initial contributions included server motherboards optimized for both Intel and AMD CPUs, power supplies, rack architectures, and cooling solutions, all engineered for maximum efficiency without unnecessary branding or aesthetics, a philosophy known as "vanity-free" design. By sharing these specifications under an open license, Facebook aimed to lower costs, reduce duplicated effort, and accelerate hardware improvements through collective input, enabling any organization to build or adapt the designs. The effort was led by Jonathan Heiliger, Facebook's Vice President of Technical Operations at the time, who envisioned applying open-source software principles to hardware to drive broader ecosystem efficiency. Early development emphasized partnerships to validate and manufacture the designs, with founding supporters including Rackspace and individual member Andy Bechtolsheim, who helped establish OCP as a non-profit foundation shortly after launch. Quanta Computer, a key original design manufacturer, collaborated on production, while processor vendors supplied chips for the initial variants, ensuring compatibility and real-world testing.
These alliances underscored OCP's community-driven approach, prioritizing shared progress over competition. The project codified core principles of openness, efficiency, scalability, impact, and sustainability to guide contributions, requiring all designs to align with at least three of these tenets for acceptance. This foundation laid the groundwork for ongoing collaboration, and OCP later collaborated with other open-source organizations on hardware-software co-design.

Key Milestones and Evolution

Following its founding by Facebook in 2011 to open-source efficient hardware designs, the Open Compute Project (OCP) underwent significant organizational evolution in the ensuing years. In 2013, OCP formalized its status as an independent 501(c)(6) non-profit organization, enabling broader industry participation beyond the initial hyperscale contributors and fostering collaboration through a board comprising founding members such as Rackspace. A pivotal expansion occurred with the launch of new project categories to address diverse infrastructure needs. The Open Vault storage project was introduced in 2012, focusing on modular, high-density storage systems for large-scale environments. This was followed by the Networking Project in 2013, which aimed to disaggregate network hardware and promote open-source switch designs, marking OCP's entry into connectivity solutions and attracting contributions from telecom operators. Entering the 2020s, OCP shifted emphasis toward sustainability and artificial intelligence (AI), aligning with global demands for energy-efficient and scalable computing. In 2022, OCP established sustainability as a core tenet, including efforts to repurpose hyperscale hardware and reduce e-waste, while the 2021 OCP Future Technologies Initiative formalized efforts to integrate AI-specific hardware innovations, such as advanced accelerators and rack-scale designs. Complementing this, the OCP Experience Center was inaugurated in November 2021 as the first North American testing facility, providing a collaborative space for validating OCP-compliant prototypes and accelerating adoption of sustainable technologies. In 2025, OCP hosted a Global Summit emphasizing AI advancements and added new board members, including NVIDIA. Membership growth reflected OCP's expanding influence, evolving from a core group of hyperscalers to a diverse ecosystem that includes edge infrastructure firms.
By late 2024, OCP had grown to over 400 member organizations, with more than 5,000 engineers contributing to its projects; as of October 2024, IT industry spending on OCP designs was on track to reach $41 billion for the year.

Organization and Governance

Structure and Leadership

The Open Compute Project is governed by the Open Compute Project Foundation, a nonprofit entity that oversees community activities, intellectual property management, and strategic direction. The Foundation maintains an independent structure while collaborating with other open-source organizations to promote hardware-software integration in open infrastructure. A board of directors, drawn from contributing member companies, holds ultimate authority for approving top-level projects and appointing key committee leaders. As of February 2025, David Ramku serves as Board Chair, and in October 2025 the board was expanded to include representatives from NVIDIA and other companies for enhanced leadership in emerging technologies. At the Foundation level, leadership includes the chief executive officer, George Tchaparian as of 2025, who directs operations, initiatives, and community engagement. Supporting roles encompass the Chief Innovation Officer (Cliff Grossner), Zane Ball, and others focused on technical and administrative functions. Within the technical domains, such as compute, storage, networking, and power, domain-specific committees are led by Project Leads elected by their respective communities, who coordinate sub-projects and ensure alignment with OCP principles. The overarching steering committee, consisting of one representative per project plus three co-chairs appointed by the Foundation for two-year terms, evaluates strategic proposals, reviews contributions, and facilitates decision-making across all areas. OCP's projects follow a tiered structure to standardize development and adoption of specifications. New ideas enter the Incubation phase, where communities build support, establish governance, create repositories (e.g., on GitHub), and demonstrate initial contributions from at least one corporate member while adhering to four of five core tenets: efficiency, impact, openness, scalability, and sustainability.
Successful incubation leads to the Accepted phase (also termed Impact), reserved for self-sustaining efforts with defined charters, regular community meetings, and production-ready outputs backed by contributions from at least two corporate members. Throughout, the Community Review process, conducted by Project Leads and the Steering Committee, assesses specifications for technical merit and compliance before formal acceptance, typically within 12 months of incubation. Contributions occur through an accessible, collaborative process designed to encourage broad participation. Proposers submit designs, specifications, or prototypes via the OCP Contribution Portal, first signing an Open Web Foundation Contributor License Agreement (OWF CLA) to grant necessary rights while retaining IP ownership. Accepted materials are licensed openly: hardware designs typically under Creative Commons Attribution 4.0 (CC-BY 4.0), and software under OSI-approved licenses such as the BSD license, ensuring free reuse, modification, and distribution. Community voting, facilitated by technical committees and the Steering Committee, determines acceptance based on alignment with OCP tenets and peer review feedback, promoting merit-based evolution of standards.

Membership and Community Engagement

The Open Compute Project (OCP) offers multiple membership tiers to facilitate participation from individuals and organizations, with benefits scaling based on commitment level. Individual membership is free and open to anyone by signing a simple agreement, allowing participation in community discussions and access to resources without financial obligation. Corporate tiers include the Startup program (priced on demand for early-stage companies), Community (replacing the Bronze tier as of January 2026, at $5,000 annually), Silver ($25,000), Gold ($40,000), and Platinum ($50,000). These tiers provide escalating perks, such as logo usage rights for branding, voting privileges in project decisions (0 votes for Community and Startup, 1 for Silver, 2 for Gold, and 3 for Platinum), eligibility for volunteer leadership roles (up to 25% representation at Platinum), and discounts on summit sponsorships (up to 15% at Platinum). Community engagement extends beyond membership through diverse activities that foster collaboration and knowledge sharing. Regional chapters operate around the world, including in Korea, hosting local events to adapt OCP principles to regional needs and promote adoption. Hackathons, often held at summits, encourage innovative problem-solving; for example, the 2023 Global Summit Hackathon and Edge-Native AI Hackathon brought together developers from organizations such as ETSI to prototype solutions. The OCP Academy, launched in October 2025 on the Docebo platform, offers free online courses, webinars, and modules on topics including sustainability, aiming to educate thousands of engineers globally. Key tools enable ongoing contributions and interaction within the OCP ecosystem. The project's GitHub organization hosts over 150 repositories where members submit specifications, designs, and code under open licenses, supporting collaborative development across workstreams.
Forums and mailing lists, such as the OCP-All groups.io list, facilitate discussions on technical and strategic topics, with thousands of engineers actively participating. Annual Global Summits serve as flagship events for networking and announcements; the 2025 summit (October 13-16) drew over 10,000 attendees and emphasized AI infrastructure innovations. OCP's participant diversity reflects its broad appeal, encompassing hyperscalers such as Meta, which drive large-scale deployments; original equipment manufacturers (OEMs) such as HPE, which contribute hardware designs; and startups via the dedicated program, which provides mentorship and event access to accelerate innovation. This mix ensures a balanced ecosystem where over 400 member organizations and 5,000 engineers collaborate on efficient, sustainable technologies.

Technical Specifications and Projects

Compute and Server Designs

The Open Compute Project (OCP) emphasizes modular and efficient hardware designs for compute and server systems, enabling hyperscale data centers to scale cost-effectively while minimizing environmental impact. These designs prioritize open specifications that allow for interchangeable components, reducing dependency on proprietary hardware and facilitating rapid innovation across the ecosystem. Core to this approach is the development of standardized form factors and interfaces that support diverse workloads, with a focus on CPU-based servers and accelerator integrations. Server motherboard (SMB) specifications within OCP are defined under the Datacenter Modular Hardware System (DC-MHS), which provides a flexible framework for multi-node servers in modern data centers. These motherboards support 2U and 4U form factors, accommodating single- or dual-socket configurations for Intel and AMD CPUs, such as Intel Xeon or AMD EPYC processors. For instance, the MBH00-1T1SP DC-MHS server motherboard follows OCP DC-MHS guidelines, enabling modular integration with 48V power systems and peripheral expansions like risers for enhanced I/O connectivity. This standardization allows for easier upgrades and maintenance in dense rack environments. OCP promotes modular compute racks through initiatives like Open Vault and Open Rack versions, which facilitate hardware upgrades without full system overhauls. Open Vault, originally contributed by Facebook, is a 2U design optimized for high-density configurations, supporting up to 30 drive bays while integrating with compute modules for scalable deployments. Complementary to this, Open Rack V2 and V3 specifications enable modular sleds and trays that house SMBs and other components, allowing operators to swap processors or memory independently to adapt to evolving workloads. These racks emphasize tool-less assembly and hot-swappable elements to minimize downtime in hyperscale operations.
The OCP Accelerator Module (OAM) addresses the growing demands of AI and high-performance computing by providing a standardized form factor for GPUs, TPUs, and other accelerators. Introduced as OAM 1.0 in 2018, it defines mechanical, electrical, and thermal interfaces for integration with universal baseboards, supporting PCIe and OAM-specific interconnects for high module-to-module bandwidth. Subsequent updates, such as OAM 1.5, enhance support for AI workloads with improved cooling options and higher bandwidth, enabling up to 400Gbps fabric connectivity for distributed training. This modularity allows seamless attachment to SMBs, accelerating AI training and inference tasks in OCP-compliant servers. Overall, OCP compute and server designs aim to achieve significant efficiency gains, with early implementations delivering up to 38% better energy efficiency through optimized power delivery and component sharing. This standardization supports hyperscale deployments by lowering costs and enabling interoperability across vendors. Integration with OCP power systems, such as 48V distribution, further enhances these benefits without altering core compute architectures.

Storage and Data Management

The Open Compute Project (OCP) Storage Project develops open specifications for storage hardware and systems tailored to hyperscale data centers, emphasizing modularity, high density, and serviceability to reduce costs and improve efficiency. Key contributions include chassis designs that support mixed-drive environments and high-performance interfaces, enabling efficient scaling without proprietary lock-in. These specifications address the growing demands for petabyte-scale storage by prioritizing serviceability and power efficiency in rack-scale deployments. A cornerstone of OCP storage designs is the Open Vault, a 2U just-a-bunch-of-disks (JBOD) chassis that provides high-density storage with support for mixing hard disk drives (HDDs) and solid-state drives (SSDs). The Open Vault accommodates up to 30 drives in a modular configuration, using a flexible I/O module that connects to any compatible host server via standard interfaces such as SAS or PCIe. This design facilitates easy expansion and compatibility across OCP ecosystems, with dual trays allowing independent access for maintenance. For high-performance applications, OCP specifies NVMe-based storage modules, such as the Lightning platform, which extends the Open Vault concept to all-flash environments using PCIe Gen3 links. Lightning supports up to 60 NVMe SSDs per tray, delivering low-latency access with P99 read latencies as low as 1,500 µs in cloud-optimized configurations, while maintaining power efficiency under 10W average per drive. These modules adhere to the OCP NVMe Cloud SSD Specification, which mandates features like hot-swappable operation, queue depths of at least 1,024, and endurance ratings exceeding 7 years under continuous power, ensuring reliability for demanding workloads. Data management in OCP storage nodes relies on open-source firmware such as OpenBMC, which provides unified baseboard management for monitoring and control.
In deployments, OpenBMC enables real-time telemetry via NVMe-MI interfaces, capturing SSD metrics such as temperature, error counts, and event logs, while supporting firmware updates over I2C and PCIe without system downtime. This integration allows for automated fan control and health monitoring across storage arrays, reducing operational overhead in large-scale environments. Efficiency in OCP storage designs is enhanced through features like hot-swappable drives at any mounting position and minimized cabling, which streamline servicing and lower latency by enabling direct host-to-drive connections. For instance, the Open Vault's reduced internal wiring supports reliable signaling over longer distances, contributing to overall system densities of 30 drives per 2U. These optimizations, validated in hyperscale testing, balance performance with simplified infrastructure.
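The P99 figure quoted above is a tail-latency percentile. A minimal sketch of how such a percentile can be computed from latency samples, using the nearest-rank method (the sample values here are invented for illustration):

```python
# Sketch: nearest-rank percentile, the kind of tail metric behind the
# NVMe Cloud SSD spec's 1,500 us P99 read-latency figure.
def percentile(samples, pct):
    """Value below which roughly pct% of the samples fall (nearest rank)."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# Hypothetical per-read latencies in microseconds.
latencies_us = [90, 110, 120, 150, 200, 340, 520, 900, 1400, 1600]
print(percentile(latencies_us, 99))  # the slowest ~1% tail
```

Tail percentiles matter more than averages for cloud SSDs because a single slow drive read can stall an entire distributed request.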

Networking and Optics

The Open Compute Project (OCP) Networking subproject develops open specifications for disaggregated data center networking hardware, emphasizing openness, interoperability, and efficiency to support hyperscale environments. This includes hardware designs for switches and optical interconnects that enable high-bandwidth, low-latency fabrics, particularly for Ethernet-based topologies. Optics efforts focus on standardized pluggable transceivers to reduce costs and improve scalability in dense wavelength-division multiplexing (DWDM) systems. A foundational component of OCP networking is the Open Network Install Environment (ONIE), a lightweight operating system pre-installed as firmware on bare-metal network switches. Developed initially by Cumulus Networks in 2012 and adopted by OCP in 2013, ONIE enables automated provisioning of any compatible network operating system (NOS), such as SONiC or Open Network Linux, without vendor lock-in. It supports bare-metal hardware ecosystems by standardizing the installation process across diverse switch architectures, including x86, ARM, and PowerPC CPUs, thereby reducing SKU complexity for manufacturers and facilitating rapid deployment in large-scale data centers. ONIE operates in a minimal mode for OS discovery and installation via protocols like DHCP and TFTP, ensuring secure boot options and compatibility with software-defined networking (SDN) stacks. OCP switch designs, such as the Wedge and Minipack series contributed by Meta (formerly Facebook), provide open hardware platforms for high-radix Ethernet switching optimized for AI and data center workloads. The Wedge family, starting with the original Wedge in 2014, evolved to support 100G Ethernet with models like the Wedge 100C (32x100G ports using a Broadcom ASIC) and Wedge 100S (32x100G ports). Later iterations, including the Wedge 400 introduced in 2021, feature a 2RU form factor with 16x400G QSFP-DD uplinks and 32x200G QSFP56 downlinks, delivering 12.8 Tbps switching capacity via Broadcom Tomahawk 3 or Cisco Silicon One ASICs.
These designs emphasize modular daughter cards for flexibility, allowing interoperability with 100G optics while enabling upgrades to 400G for AI fabrics that require low-latency, non-blocking connectivity in top-of-rack (ToR) deployments. The Minipack series extends this modularity for spine-level switching in dense fabrics, with the Minipack2 specification (shared in 2021) supporting 128x200G QSFP56 ports for 25.6 Tbps throughput using the Broadcom Tomahawk 4 ASIC. It offers backward compatibility to 128x100G QSFP28 and forward compatibility to 64x400G QSFP-DD, making it suitable for high-scale AI environments like Meta's F16 data center fabric. These switches integrate with OCP's broader ecosystem, including ONIE for OS installation, to support Ethernet-based AI interconnects that handle massive parallel processing without proprietary constraints. OCP's optics specifications standardize pluggable transceivers for efficient short-reach links, with contributions like the CWDM4-OCP (2017) defining 100G modules for duplex single-mode fiber up to 2 km. More recent specs include the 200G QSFP56 (2020) for single-mode fiber at 2 km and the 400G QSFP-DD (2021) supporting 500 m reaches with four 100G channels. These align with MSA standards but incorporate OCP tenets for power efficiency and interoperability. In collaboration with the Telecom Infra Project (TIP), OCP supports the Open Optical Packet Transport (OOPT) framework, which defines open interfaces for pluggable transceivers in disaggregated optical networks, enabling multi-vendor DWDM deployments for packet transport. OOPT emphasizes modular optics such as coherent pluggables to lower costs in edge and core transport scenarios. As of October 2025, OCP has launched the Ethernet for Scale-Up Networking (ESUN) initiative to address AI-specific challenges in single-rack or multi-rack scale-up topologies.
Announced on October 13, 2025, at the OCP Global Summit, ESUN, led by contributors including Meta and others, focuses on developing lossless L2/L3 Ethernet standards for high-bandwidth, low-jitter interconnects aligned with UEC guidelines. It targets endpoint functionality for AI clusters, building on existing 100G/400G+ infrastructures to enable robust, interoperable fabrics for GPU-direct communications in scale-up AI workloads.
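The quoted switching capacities follow directly from the port counts; a quick check of the arithmetic:

```python
# Checking the switching-capacity figures quoted above from port counts.
def capacity_tbps(ports_and_speeds):
    """Sum of (port_count, gbps_per_port) pairs, expressed in Tbps."""
    return sum(n * g for n, g in ports_and_speeds) / 1000

# Wedge 400: 16x400G QSFP-DD uplinks + 32x200G QSFP56 downlinks
print(capacity_tbps([(16, 400), (32, 200)]))  # 12.8 Tbps
# Minipack2: 128x200G QSFP56 ports
print(capacity_tbps([(128, 200)]))            # 25.6 Tbps
```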

Power, Cooling, and Rack Infrastructure

The Open Rack Version 3 (ORv3) represents a significant evolution in OCP's rack infrastructure, designed to accommodate higher densities and enhanced resilience in data centers. It features a wider frame, typically 600 mm (approximately 23.6 inches) externally, to support 21-inch IT equipment mounting alongside traditional 19-inch options, enabling denser server packing compared to standard EIA-310 racks by allowing more components per unit without compromising airflow or cabling. This design facilitates up to 30 kW per rack while integrating provisions for liquid cooling manifolds and busbars. Additionally, ORv3 incorporates seismic resilience through robust leveling feet capable of supporting a fully loaded rack (up to 1,500 kg) under seismic loads, including a required 10-degree tilt test for 1 minute to ensure stability in earthquake-prone regions. These specifications are outlined in the official OCP Open Rack Base Specification Version 3, promoting interoperability across vendors like and Eaton. Power distribution within OCP infrastructure emphasizes disaggregated and efficient architectures, exemplified by the Mt. Diablo project, a 2024 collaboration between Meta, , and to standardize high-density power delivery for AI workloads. Mt. Diablo introduces a modular power shelf in a dedicated "sidecar" rack adjacent to the IT rack, separating power conversion from compute to support densities exceeding 100 kW per rack while optimizing space and efficiency. The design delivers power via standardized 48V DC busbars to IT equipment, with onboard conversion to 12V or lower for components, reducing conversion losses and enabling scalability to 400V DC for future megawatt-scale racks. This disaggregated approach enhances maintainability by isolating power failures from compute nodes, as detailed in 's technical overview and OCP contributions. 
OCP's cooling innovations prioritize liquid-based solutions to manage escalating thermal loads. The Immersion Project standardizes two-phase immersion cooling, in which servers are submerged in dielectric fluids that boil at low temperatures to absorb heat efficiently, enabling heat reuse across systems. Complementing this, direct-to-chip liquid cooling employs cold plates attached to high-heat components such as CPUs and GPUs, circulating single- or two-phase coolants to transfer heat directly, often achieving a power usage effectiveness (PUE) below 1.1 by minimizing cooling overhead and fan power. These methods, developed through OCP's Cooling Environments initiative, support rack-level integration with ORv3 manifolds for coolant distribution, as specified in project guidelines and whitepapers.

Efficiency in OCP power systems is bolstered by redundant architectures, such as N+1 or 2N configurations in power shelves and battery backup units (BBUs), which incorporate dual feeds, hot-swappable modules, and uninterruptible power supplies to maintain operations during failures. These designs target five-nines (99.999%) availability, equating to less than 5.26 minutes of annual downtime, by isolating faults and enabling seamless failover, as demonstrated in Google's +/-400V DC implementations and OCP BBU specifications. Such redundancy integrates with compute hardware to provide system-level reliability without introducing single points of failure.

Emerging Technologies for AI and Sustainability

The Open Compute Project (OCP) has advanced its Open Chiplet Economy in 2025 through key contributions aimed at enabling modular, scalable silicon designs for AI and high-performance computing (HPC). This expansion promotes interoperability among chiplet vendors by standardizing interfaces and architectures, fostering a diverse ecosystem for AI accelerators. A pivotal development is the Foundation Chiplet System Architecture (FCSA), contributed by Arm, which provides a specification for system partitioning and chiplet connectivity to reduce fragmentation in heterogeneous integration. Complementing FCSA, the Bunch of Wires 2.0 (BoW 2.0) specification enhances die-to-die interfaces for memory-intensive AI and HPC workloads, supporting high-bandwidth, low-latency connections with defined operating modes, signal ordering, and electrical requirements. These efforts build on OCP's earlier work in chiplet selection and integration, accelerating innovation in disaggregated compute systems.

For AI-specific hardware, OCP has developed modules that support composable silicon architectures, allowing flexible assembly of accelerators for diverse workloads. The Open Accelerator Module (OAM), part of the Open Accelerator Infrastructure (OAI) subproject, defines a standardized form factor and interconnect for compute accelerators, enabling up to 700W TDP in 48V configurations and compatibility across multiple accelerator vendors. OAM facilitates composable designs by integrating with universal baseboards and expansion modules, optimizing scalability for AI and HPC workloads. In parallel, OCP initiatives address AI fabrics through open Ethernet-based architectures, including polymorphic designs that scale out GPU connectivity for large clusters. These fabrics incorporate non-scheduled and scheduled Ethernet protocols to manage workload diversity, enhancing efficiency in collective operations for AI/ML tasks.
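A 700W module TDP interacts directly with rack-level power budgets. The toy calculation below is purely illustrative (the 30 kW envelope and 20% overhead share are hypothetical assumptions, not OAM or ORv3 requirements) and shows how designers size accelerator counts against a rack budget:

```python
RACK_BUDGET_W = 30_000   # illustrative ORv3-class rack power envelope
OAM_TDP_W = 700          # maximum OAM module TDP in 48 V configurations
OVERHEAD_FRACTION = 0.2  # hypothetical share for CPUs, NICs, fans, losses

# Power left for accelerators after host and infrastructure overhead.
available = RACK_BUDGET_W * (1 - OVERHEAD_FRACTION)
modules = int(available // OAM_TDP_W)
print(f"{modules} x {OAM_TDP_W} W OAM modules fit a {RACK_BUDGET_W // 1000} kW rack")
```

Under these assumptions roughly 34 modules fit; real deployments also bound module counts by baseboard topology and cooling capacity, not power alone.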
OCP's sustainability projects emphasize guidelines for integrating renewable energy and achieving carbon-neutral data centers, aligning with broader industry goals for net-zero emissions. The OCP Sustainability Project focuses on minimizing environmental impacts through metrics for energy use, water consumption, and material circularity, while the Data Center Facility (DCF) subproject targets non-IT decarbonization via power distribution and facility designs. In 2025, OCP released a framework for carbon disclosure in collaboration with the Infrastructure Masons Climate Accord, establishing a structure for reporting equipment impacts to support renewable sourcing and offset strategies. These efforts integrate with OCP's testing and validation programs, including OCP Ready certifications for facilities, to verify sustainable practices in real-world deployments.

In 2025, OCP collaborations have also targeted advanced cooling solutions for AI clusters to address escalating power densities. Partnerships showcased at the October 2025 OCP Global Summit advance liquid cooling standards such as Advanced Cooling Facilities (ACF) and direct-to-chip coolant distribution, enabling support for high-density racks up to 1 MW. These initiatives, involving contributors like Microsoft and Google, aim to reduce energy consumption in AI inference workloads through optimized thermal management and efficient coolant distribution units, contributing to overall data center efficiency gains.

Impact and Collaborations

Industry Adoption and Ecosystem

The Open Compute Project (OCP) has seen widespread adoption among hyperscale operators, driven by designs that enhance efficiency and scalability. Meta, as a founding member, has integrated OCP specifications into the majority of its data centers, with early implementations demonstrating facilities that were 38% more efficient to build and 24% less expensive to operate compared to prior proprietary designs. Microsoft, which joined OCP in 2014, has incorporated its Project Olympus modular rack designs into Azure to accelerate hardware deployment and reduce customization costs. Google, a board member since 2016, leverages OCP standards for high-density deployments, including liquid cooling solutions first applied to its Tensor Processing Unit (TPU) v3 systems in 2018, enabling more compact and efficient AI training environments.

The OCP ecosystem has expanded significantly through the OCP Marketplace, a platform showcasing certified and inspired products from a growing network of vendors, supporting diverse infrastructure needs such as power distribution and rack systems. Notable contributors supply OCP-compliant power shelves for efficient energy delivery in data centers, as well as rack solutions and networking switches such as the DS6000 series for AI workloads. By 2025, the market reflects robust growth, with OCP-recognized equipment sales projected to exceed $56 billion globally in 2026 and reach $73.5 billion by 2028, fueled by contributions in AI, storage, and networking. Economically, OCP adoption yields substantial benefits through open supply chains that foster competition among vendors and minimize redundant research and development across organizations. Adopters report operational cost reductions, exemplified by Meta's 24% savings in data center running expenses due to optimized power usage and modular components.
These efficiencies also drive broader market impacts, with OCP server sales growing at a 35.7% compound annual growth rate (CAGR) and peripherals at a 51% CAGR over the forecast period, enabling smaller operators to access hyperscale innovations without proprietary lock-in. In telecom and edge computing, OCP designs facilitate deployments requiring compact, low-power hardware for distributed environments. The OCP Telecom & Edge project specifies solutions such as gateways and open radio units, adopted by telecom operators to support edge processing and reduce latency in remote locations. For instance, these standards enable efficient integration of compute resources at network edges, aligning with industry needs for scalable, energy-efficient hardware in telecom networks.
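CAGR relates two values n years apart via end = start x (1 + CAGR)^n. As an illustrative check (not a figure from the article's sources), the growth implied by the $56 billion (2026) and $73.5 billion (2028) market projections above can be computed directly:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values n years apart."""
    return (end_value / start_value) ** (1 / years) - 1

# Implied overall growth between the 2026 and 2028 market projections.
implied = cagr(56.0, 73.5, 2)
print(f"implied overall CAGR: {implied:.1%}")

def project(start_value, rate, years):
    """Forward projection at a constant CAGR."""
    return start_value * (1 + rate) ** years

print(f"sanity check: {project(56.0, implied, 2):.1f}")
```

The implied whole-market rate (roughly 14.6% per year) is lower than the 35.7% and 51% segment CAGRs quoted above, which apply to the faster-growing server and peripheral segments rather than the market total.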

Recent Initiatives and Partnerships

In 2025, the Open Compute Project (OCP) Global Summit highlighted significant advancements in AI infrastructure, including the launch of the Ethernet for Scale-Up Networking (ESUN) project. ESUN aims to develop Ethernet-based technologies optimized for large-scale AI clusters, enabling efficient scale-up networking to support growing AI workload demands. The summit also featured expansions to the Open Chiplet Economy, building on the chiplet work launched in 2024 by introducing new standards and tools for modular chip design, fostering silicon diversity in AI systems.

OCP forged key partnerships in 2025 to address AI-driven challenges across storage, cooling, and interoperability. In October, OCP collaborated with the Storage Networking Industry Association (SNIA) to standardize solutions for AI storage, memory, and networking, promoting open ecosystems to optimize hyperscale data center performance. Similarly, a new alliance with the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) focused on advancing data center cooling technologies, aligning OCP designs with ASHRAE's environmental guidelines to improve energy efficiency. Additionally, OCP partnered with the Open-IX Association (OIX) to create a unified open standard harmonizing OIX-2 data center protocols with OCP's interoperability frameworks, enhancing network interconnection in multi-vendor environments. Post-2020 initiatives have included educational efforts to build community expertise, such as the OCP Future Technologies Initiative launched in 2021, which integrates academic and research contributions into projects. In parallel, the 2024 Mt. Diablo project, co-developed by Meta and other partners, introduced disaggregated power architectures for AI racks, supporting up to 1 MW per rack through 400V systems and solid-state transformers to enable scalable, efficient power delivery.
Looking ahead, OCP is developing AI cluster guidelines through projects like Designs for AI, which provide procurement-ready specifications for scale-up and scale-out configurations to accelerate multi-vendor deployments. On sustainability, the OCP Sustainability Project advances transparency via a 2025 framework for carbon disclosure, co-developed with iMasons, to standardize reporting and reduce audit redundancies for environmental impact assessments.

Litigation and Intellectual Property Disputes

One notable early legal challenge to the Open Compute Project (OCP) arose in 2012, when Yahoo accused Facebook of infringing 16 patents related to data center and server technologies, specifically claiming that designs shared through OCP violated Yahoo's rights. This assertion was part of a broader patent dispute initiated by Yahoo in March 2012, which centered on advertising and social-networking technologies but extended to OCP's open-sourced hardware specifications. The parties settled the suits in July 2012 without monetary exchange, instead forming a cross-licensing agreement and advertising partnership to resolve all claims.

In 2015, British firm BladeRoom Group filed a lawsuit against Facebook in the U.S. District Court for the Northern District of California, alleging misappropriation of trade secrets in modular data center designs developed during failed partnership talks. BladeRoom claimed Facebook improperly disclosed its proprietary cooling and construction methodologies through OCP publications, including a blog post and specifications, thereby undermining BladeRoom's commercial exclusivity and causing damages estimated at over $365 million. The case advanced past Facebook's motion to dismiss in 2017, but settled confidentially in April 2018, with BladeRoom dropping claims against Facebook while proceeding against co-defendant Emerson Network Power.

To address such patent and trade-secret risks inherent in open hardware collaboration, OCP established its rights-management policy, which requires contributors to grant royalty-free, perpetual licenses under the Open Web Foundation Contributor License Agreement (CLA) for essential patent claims covering contributed specifications. This policy includes a 30-day period for excluding specific patent claims via detailed notices, ensuring mutual protection while promoting adoption; final specifications are licensed under the Open Web Foundation Agreement for non-exclusive, royalty-free use. These mechanisms function defensively by networking IP commitments among participants, deterring litigation through reciprocal licensing and clarifying rights for implementers.
These disputes underscored vulnerabilities in OCP's open-source model, where sharing innovations invites infringement claims, yet they reinforced the value of robust licensing frameworks in mitigating legal risks and fostering sustained hardware innovation within the community.

References
