Intelligent Platform Management Interface
The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware (BIOS or UEFI) and operating system. IPMI defines a set of interfaces used by system administrators for out-of-band management of computer systems and monitoring of their operation. For example, IPMI provides a way to manage a computer that may be powered off or otherwise unresponsive by using a network connection to the hardware rather than to an operating system or login shell. Another use case may be installing a custom operating system remotely. Without IPMI, installing a custom operating system may require an administrator to be physically present near the computer, insert a DVD or a USB flash drive containing the OS installer and complete the installation process using a monitor and a keyboard. Using IPMI, an administrator can mount an ISO image, simulate an installer DVD, and perform the installation remotely.[1]
The specification is led by Intel and was first published on September 16, 1998. It is supported by more than 200 computer system vendors, such as Cisco, Dell,[2] Hewlett Packard Enterprise, and Intel.[3][4]
Functionality
Using a standardized interface and protocol allows systems-management software based on IPMI to manage multiple, disparate servers. As a message-based, hardware-level interface specification, IPMI operates independently of the operating system (OS) to allow administrators to manage a system remotely in the absence of an operating system or of the system management software. Thus, IPMI functions can work in any of three scenarios:
- before an OS has booted (allowing, for example, the remote monitoring or changing of BIOS settings)
- when the system is powered down
- after OS or system failure – the key distinction from in-band system management, which depends on a remote login to the operating system (for example over SSH) and is therefore unavailable in this scenario
System administrators can use IPMI messaging to monitor platform status (such as system temperatures, voltages, fans, power supplies and chassis intrusion); to query inventory information; to review hardware logs of out-of-range conditions; or to perform recovery procedures issued from a remote console through the same connections, e.g. system power-down and rebooting, or configuring watchdog timers. The standard also defines an alerting mechanism whereby the system can send a Simple Network Management Protocol (SNMP) platform event trap (PET).
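For illustration, these monitoring and recovery operations are commonly scripted against the BMC; the following minimal Python sketch shells out to the widely used ipmitool utility (the host name and credentials are placeholders):

import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "bmc.example.org",
       "-U", "admin", "-P", "secret"]   # placeholder host and credentials

def ipmi(*args):
    # Run one ipmitool subcommand against the BMC and return its output.
    return subprocess.run(BMC + list(args), capture_output=True,
                          text=True, check=True).stdout

print(ipmi("sensor", "list"))       # temperatures, voltages, fan speeds, ...
print(ipmi("sel", "list"))          # hardware log of out-of-range conditions
ipmi("chassis", "power", "cycle")   # recovery procedure: remote power-cycle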
The monitored system may be powered off, but must be connected to a power source and to the monitoring medium, typically a local area network (LAN) connection. IPMI can also function after the operating system has started, and exposes management data and structures to the system management software. IPMI prescribes only the structure and format of the interfaces as a standard, while detailed implementations may vary. An implementation of IPMI version 1.5 can communicate with a remote client via a direct out-of-band LAN or serial connection, or via a side-band LAN connection. The side-band LAN connection shares the system's on-board network interface controller (NIC). This solution is less expensive than a dedicated LAN connection, but offers limited bandwidth and raises security concerns.
Systems compliant with IPMI version 2.0 can also communicate via serial over LAN, whereby serial console output can be remotely viewed over the LAN. Systems implementing IPMI 2.0 typically also include KVM over IP, remote virtual media and out-of-band embedded web-server interface functionality, although strictly speaking, these lie outside of the scope of the IPMI interface standard.
DCMI (Data Center Manageability Interface) is a similar standard based on IPMI but designed to be more suitable for Data Center management: it uses the interfaces defined in IPMI, but minimizes the number of optional interfaces and includes power capping control, among other differences.
IPMI components
An IPMI sub-system consists of a main controller, called the baseboard management controller (BMC), and other management controllers distributed among different system modules that are referred to as satellite controllers. The satellite controllers within the same chassis connect to the BMC via the system interface called the Intelligent Platform Management Bus/Bridge (IPMB) – an enhanced implementation of I²C (Inter-Integrated Circuit). The BMC connects to satellite controllers or to another BMC in another chassis via the Intelligent Chassis Management Bus (ICMB). The sub-system may be managed with the Remote Management Control Protocol (RMCP), a specialized wire protocol defined by this specification. RMCP+ (a UDP-based protocol with stronger authentication than RMCP) is used for IPMI over LAN.
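Because RMCP and RMCP+ are carried over UDP port 623, a BMC can be discovered even when the host is powered off. A minimal Python sketch sending the ASF Presence Ping defined for the RMCP ASF message class (the target address is a placeholder):

import socket

# RMCP header: version 0x06, reserved, sequence 0xFF (no ACK), class 0x06 (ASF)
# ASF body: IANA number 4542, Presence Ping (0x80), tag, reserved, zero data length
ping = bytes([0x06, 0x00, 0xFF, 0x06,
              0x00, 0x00, 0x11, 0xBE, 0x80, 0x00, 0x00, 0x00])

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(2.0)
s.sendto(ping, ("bmc.example.org", 623))    # placeholder BMC address
try:
    data, addr = s.recvfrom(512)
    if len(data) > 8 and data[8] == 0x40:
        # A Presence Pong (message type 0x40) marks an RMCP-capable endpoint.
        print("Presence Pong from", addr)
    else:
        print("unexpected reply:", data.hex())
except socket.timeout:
    print("no RMCP response")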
Several vendors develop and market BMC chips. A BMC used for embedded applications may have limited memory and require optimized firmware code to implement the full IPMI functionality. Highly integrated BMCs can execute complex instructions and provide the complete out-of-band functionality of a service processor. The firmware implementing the IPMI interfaces is provided by various vendors. A field-replaceable unit (FRU) repository holds the inventory, such as vendor ID and manufacturer, of potentially replaceable devices. A sensor data record (SDR) repository stores the properties of the individual sensors present on the board. For example, the board may contain sensors for temperature, fan speed, and voltage.
Baseboard management controller
The baseboard management controller (BMC) provides the intelligence in the IPMI architecture. It is a specialized microcontroller embedded on the motherboard of a computer – generally a server. The BMC manages the interface between system-management software and platform hardware, and has its own dedicated firmware and RAM.
Different types of sensors built into the computer system report to the BMC on parameters such as temperature, cooling fan speeds, power status, operating system (OS) status, etc. The BMC monitors the sensors and can send alerts to a system administrator via the network if any of the parameters do not stay within pre-set limits, indicating a potential failure of the system. The administrator can also remotely communicate with the BMC to take some corrective actions – such as resetting or power cycling the system to get a hung OS running again. These abilities reduce the total cost of ownership of a system.
Physical interfaces to the BMC include SMBuses, an RS-232 serial console, address and data lines, and an IPMB, which enables the BMC to accept IPMI request messages from other management controllers in the system.
A direct serial connection to the BMC is typically not encrypted, on the assumption that the physical connection itself is secure. Connections to the BMC over the LAN may or may not use encryption, depending on the security requirements of the deployment.
Concerns have been raised about the general security of BMCs as a closed infrastructure.[5][6][7][8] OpenBMC is a Linux Foundation collaborative open-source BMC project.[9]
Security
Historical issues
On 2 July 2013, Rapid7 published a guide to penetration testing of the IPMI 2.0 protocol and its implementations by various vendors.[10]
In 2013, some sources advised against using older versions of IPMI, due to security concerns related to the design and vulnerabilities of baseboard management controllers (BMCs).[11][12]
However, as with any other management interface, best security practice dictates placing the IPMI management port on a dedicated management LAN or VLAN restricted to trusted administrators.[13]
Latest IPMI specification security improvements
The IPMI specification has been updated with RAKP authentication and stronger ciphers that are computationally impractical to break.[14] Vendors have as a result provided patches that remediate known vulnerabilities.[citation needed]
The DMTF organization has developed a secure and scalable interface specification called Redfish to work in modern datacenter environments.[15]
Potential solutions
Some potential solutions exist outside of the IPMI standard, depending on proprietary implementations. Attacks that exploit default or short passwords, or "cipher 0", can be overcome by using a RADIUS server for Authentication, Authorization, and Accounting (AAA) over SSL, as is typical in a datacenter or any medium-to-large deployment. The RADIUS server can be configured to store credentials securely in an LDAP database, using either FreeRADIUS/OpenLDAP or Microsoft Active Directory and related services.
Role-based access provides a way to respond to current and future security issues by placing increasing restrictions on the use of higher-privilege roles. Three roles are available: Administrator, Operator and User.
The User role has read-only access to the BMC and no remote-control abilities such as power cycling, nor the ability to view or log into the main CPU on the motherboard. An attacker who obtains User-role credentials therefore gains no access to confidential information and no control over the system. The User role is typically used to monitor sensor readings after an SNMP alert has been received by SNMP network-monitoring software.
The Operator role is used in the rare event that a system hangs, to generate an NMI crash/core dump file and to reboot or power-cycle the system. In such a case, the Operator will also have access to the system software to collect the crash/core dump file.
The Administrator role is used to configure the BMC during commissioning, when the system is first installed.
The prudent best practice is therefore to disable the Operator and Administrator roles in LDAP/RADIUS, and to enable them only when needed by the LDAP/RADIUS administrator. For example, in RADIUS a role can have its Auth-Type setting changed to:
Auth-Type := Reject
Doing so will prevent RAKP hash attacks from succeeding since the username will be rejected by the RADIUS server.
Version history
The IPMI standard specification has evolved through a number of iterations:[16][17]
- v1.0 was announced on September 16, 1998: base specification
- v1.5, published on February 21, 2001: added features including IPMI over LAN, IPMI over Serial/Modem, and LAN Alerting
- v2.0, published on February 12, 2004: added features including Serial over LAN, Group Managed Systems, Enhanced Authentication, Firmware Firewall, and VLAN Support
- v2.0 revision 1.1, published on October 1, 2013: amended for errata, clarifications, and addenda, plus addition of support for IPv6 Addressing
- v2.0 revision 1.1 Errata 7, published on April 21, 2015: amended for errata, clarifications, addenda[18]
Implementations
- HPE Integrated Lights-Out (iLO), HPE's implementation of IPMI
- Dell DRAC, Dell's implementation of IPMI
- Supermicro Intelligent Management, SMCI's implementation of IPMI
- IBM Remote Supervisor Adapter, IBM's out-of-band management products, including IPMI implementations
- MegaRAC, AMI's out-of-band management product and OEM IPMI firmware
- Avocent MergePoint Embedded Management Software, an OEM IPMI firmware
- Cisco Integrated Management Controller (IMC), Cisco's implementation of IPMI
- Lenovo xClarity, Lenovo's implementation of IPMI
See also
- Alert Standard Format (ASF), another platform management standard
- Desktop and mobile Architecture for System Hardware (DASH), another platform management standard
- Intel Active Management Technology (AMT), Intel's out-of-band management product, as an alternative to IPMI
- Management Component Transport Protocol (MCTP), a low-level protocol used for controlling hardware components
- Open Platform Management Architecture (OPMA), AMD's out-of-band management standard
- System Service Processor, on some SPARC machines
- Wired for Management (WfM)
References
- ^ "Supermicro IPMI - What is it and what can it do for you?". Archived from the original on 27 February 2019. Retrieved 27 February 2018.
- ^ An Introduction to the Intelligent Platform Management Interface
- ^ "Intelligent Platform Management Interface; Adopters list". Intel. Retrieved 9 August 2014.
- ^ Chernis, P. J. (1985). "Petrographic analyses of URL-2 and URL-6 special thermal conductivity samples". doi:10.4095/315247.
- ^ a b "The Eavesdropping System in Your Computer - Schneier on Security". Schneier.com. 2013-01-31. Retrieved 2013-12-05.
- ^ "InfoSec Handlers Diary Blog - IPMI: Hacking servers that are turned "off"". Isc.sans.edu. 2012-06-07. Retrieved 2015-05-29.
- ^ Goodin, Dan (2013-08-16). ""Bloodsucking leech" puts 100,000 servers at risk of potent attacks". Arstechnica.com. Retrieved 2015-05-29.
- ^ Anthony J. Bonkoski; Russ Bielawski; J. Alex Halderman (2013). "Illuminating the Security Issues Surrounding Lights-Out Server Management.Usenix Workshop on Offensive Technologies" (PDF). Usenix.org. Retrieved 2015-05-29.
- ^ "OpenBMC Project Community Comes Together at The Linux Foundation to Define Open Source Implementation of BMC Firmware Stack - The Linux Foundation". The Linux Foundation. 2018-03-19. Retrieved 2018-03-27.
- ^ "Metasploit: A Penetration Tester's Guide to IPMI and BMCs". Rapid7.com. 2013-07-02. Retrieved 2013-12-05.
- ^ "Authentication Bypass Vulnerability in IPMI 2.0 RAKP through the use of cipher zero". websecuritywatch.com. 2013-08-23. Retrieved 2013-12-05.
- ^ Dan Farmer (2013-08-22). "IPMI: Freight train to hell" (PDF). fish2.com. Retrieved 2013-12-05.
- ^ Kumar, Rohit (2018-10-19). "Basic BMC and IPMI Management Security Practices". ServeTheHome. Retrieved 2019-12-23.
- ^ "IPMI Specification, V2.0, Rev. 1.1: Document". Intel. Retrieved 2022-06-11.
- ^ "Redfish: A New API for Managing Servers". InfoQ. Retrieved 2022-06-11.
- ^ "Intelligent Platform Management Interface: What is IPMI?". Intel. Retrieved 9 August 2014.
- ^ "Intelligent Platform Management Interface; Specifications". Intel. Retrieved 9 August 2014.
- ^ IPMI - Ver2.0 Rev1.1 Errata 7
External links
- Intel IPMI Technical Resources Website
- A Comparison of common IPMI Software open-source projects
- GNU FreeIPMI
- ipmitool
- ipmiutil
- OpenIPMI
- coreIPM Project - open source firmware for IPMI baseboard management
- IPMeye - Centralized out-of-band access for enterprises / Part of VendorN's OneDDI platform
Overview
Definition and Purpose
The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independent of the host system's operating system, central processing unit, or firmware.[4] Developed by the IPMI Forum—a consortium led by Intel, Hewlett-Packard, NEC, and Dell—IPMI standardizes hardware-level interfaces to ensure interoperability across platforms for enterprise and data center environments.[4]
The primary purposes of IPMI include remote monitoring of physical variables such as temperature, voltage levels, and fan speeds through integrated sensors, as well as event logging to record system conditions, out-of-range thresholds, and anomalies in a dedicated log for analysis.[4] It also supports control actions like power cycling, system resets, and firmware updates, all executed without dependence on the main processor or operating system, thereby enabling proactive maintenance and diagnostics.[4] Key benefits of IPMI encompass pre-boot management for configuring system states prior to operating system loading, failure recovery through automated resets and diagnostics, and data center automation to enhance scalability and efficiency in high-availability setups.[4]
Unlike in-band management, which relies on the active operating system and its network infrastructure, IPMI emphasizes out-of-band management via dedicated channels such as local area networks or serial connections, allowing access even when the host is powered off or unresponsive; this is typically orchestrated by a baseboard management controller as the core subsystem component.[4]
Development History
The Intelligent Platform Management Interface (IPMI) originated in 1998 through a collaborative effort by Intel Corporation, Hewlett-Packard Company, NEC Corporation, and Dell Computer Corporation, who announced the availability of the IPMI v1.0 specifications on September 16 at the Intel Developer Forum.[3] This initiative addressed the growing demands of data centers for reliable remote server management, particularly the limitations of in-band tools like SNMP, which require the operating system to be operational and thus fail to monitor hardware issues during system crashes or shutdowns.[3] The goal was to establish vendor-neutral, open specifications enabling out-of-band access to platform management functions, such as monitoring temperature, voltage, and fans, to predict hardware failures, improve diagnostics, and reduce total cost of ownership through interoperability across diverse systems.[3]
Version 1.5, released in February 2001, introduced support for IPMI over LAN, expanding remote management capabilities.[5] By the early 2000s, the IPMI standards had gained widespread adoption, with support from over 200 vendors ensuring broad interoperability in server ecosystems.[6] Notable participants included Cisco Systems and Supermicro Computer, which integrated IPMI into their hardware offerings, expanding its application beyond initial promoters to encompass a diverse range of enterprise and data center equipment. This growth reflected the forum's success in fostering a collaborative environment for ongoing refinements, culminating in subsequent specification releases that built on the foundational v1.0 framework.
No major specification changes occurred after 2015, though errata updates continued to address minor issues, such as parameter numbering in LAN configurations.[4] IPMI has since been complemented by DMTF's Redfish standard, which provides a RESTful API for scalable platform management and serves as a modern successor to legacy interfaces like IPMI.
Core Functionality
Monitoring Capabilities
The Intelligent Platform Management Interface (IPMI) provides comprehensive monitoring capabilities through a standardized set of sensor devices that track critical hardware parameters in real-time, independent of the host operating system. These sensors monitor parameters such as temperature thresholds, including CPU hotspots; voltage levels; fan speeds in revolutions per minute (RPM); power supply status; and chassis intrusion detection. Sensor data is abstracted and accessible via commands like Get Sensor Reading, allowing for threshold-based alerts when parameters exceed predefined limits, such as overheating or voltage instability.[1]
A key component of IPMI monitoring is the System Event Log (SEL), a non-volatile storage repository managed by the baseboard management controller (BMC) that records system events with detailed metadata. The SEL stores events such as overheat alerts and memory errors, each entry including a 32-bit timestamp (seconds since January 1, 1970), severity levels, and sensor-specific details for analysis. Events are retrieved using commands like Get SEL Entry or Read Event Message Buffer, supporting capacities of approximately 3-8 KB with unique record IDs to track the sequence and progression of issues.[1]
IPMI supports monitoring through both periodic polling and asynchronous event generation to ensure timely detection of anomalies. Periodic polling involves system software querying sensors at regular intervals using the Get Sensor Reading command, leveraging Sensor Data Records (SDRs) for configuration details like thresholds and units. Asynchronous events are generated proactively via Platform Event Messages (PETs) or System Event Messages (SEMs), which are queued in the Event Message Buffer or delivered over the Intelligent Platform Management Bus (IPMB) to notify remote managers without constant polling overhead.[1]
The specification defines up to 255 distinct sensor types, encompassing both discrete states—such as power-on/off transitions or button presses—and analog readings like continuous temperature or voltage values, which include conversion formulas for accurate interpretation. These sensor types are cataloged in the SDR Repository, enabling flexible event filtering and correlation to field-replaceable units (FRUs) for precise diagnostics. For instance, discrete sensors might report binary states like chassis intrusion, while analog ones provide numeric data with hysteresis to avoid event flooding.[1]
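Both the SEL timestamp format and the analog conversion described above lend themselves to direct computation; a minimal Python sketch of the spec's linear conversion formula, with hypothetical conversion factors as they would be read from an SDR:

import datetime

def analog_reading(raw, m, b, b_exp, r_exp):
    # SDR linear conversion: y = (M * raw + B * 10**Bexp) * 10**Rexp
    # (the optional linearization function is omitted in this sketch)
    return (m * raw + b * 10 ** b_exp) * 10 ** r_exp

# Hypothetical temperature sensor whose SDR gives M=1, B=0, Bexp=0, Rexp=0:
print(analog_reading(0x2E, m=1, b=0, b_exp=0, r_exp=0), "degrees C")  # 46

def sel_timestamp(seconds):
    # SEL timestamps count seconds since 1970-01-01 in a 32-bit field.
    return datetime.datetime.fromtimestamp(seconds, datetime.timezone.utc)

print(sel_timestamp(1_600_000_000))   # example 32-bit timestamp value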
Management Operations
The Intelligent Platform Management Interface (IPMI) provides a suite of remote management operations that enable administrators to control and configure server systems without direct physical access, leveraging the baseboard management controller (BMC) over interfaces such as LAN or serial connections. These operations build on collected monitoring data, such as event triggers from environmental sensors, to execute automated or manual actions for maintenance and recovery. Key capabilities include power cycling, hardware inventory management, boot configuration, console access, firmware maintenance, diagnostics, and chassis-level adjustments, all standardized through defined network functions (NetFNs) and commands to ensure interoperability across implementations.[1]
Remote power management in IPMI allows for precise control of system power states, including powering on, powering off, resetting, or initiating graceful shutdowns, which is essential for remote rebooting or recovery in data centers. This is achieved via the Chassis NetFN (0x00) with the Chassis Control command (0x02), supporting actions like power down, power up, power cycle, hard reset, diagnostic interrupt, and soft shutdown; these can be invoked over serial terminal modes with commands such as SYS POWER ON or SYS RESET. Additionally, platform event filtering (PEF) integrates power actions—such as power down (action 1), power cycle (2), or reset (3)—in response to predefined events, enhancing automated reliability without OS dependency.[1]
FRU inventory management facilitates the reading and writing of data on field replaceable units (FRUs), such as motherboards, power supplies, or chassis components, to support asset tracking, serialization, and configuration auditing. Using the Storage NetFN (0x0A), the Get FRU Inventory Area Info command (0x10) retrieves the size and location of FRU data areas, while Read FRU Data (0x11) and Write FRU Data (0x12) enable extraction or modification of structured information like part numbers, serial numbers, and manufacturing dates stored in non-volatile memory. This capability extends to private management buses via Master Write-Read commands under the Transport NetFN, allowing comprehensive hardware lifecycle management remotely.[1]
Boot device selection and console redirection provide pre-OS remote access akin to keyboard-video-mouse (KVM) functionality, enabling troubleshooting and configuration during system startup. The Chassis NetFN (0x00) with Set System Boot Options command (0x08) configures boot flags to prioritize devices like PXE, HDD, or BIOS setup, often set via serial commands like SYS SET BOOT in terminal mode. Serial over LAN (SOL) implements console redirection by activating a virtual serial port over the network using the Application NetFN (0x06) with Activate Payload (0x3A, payload type 0x01) and SOL-specific commands like Get SOL Configuration Parameters (Transport NetFN, 0x39), allowing bidirectional text-based access to the system's serial console for diagnostics or OS installation.[1]
Firmware updates and diagnostic runs support ongoing system integrity and fault isolation through remote execution. Firmware maintenance involves updating BMC or device firmware via implementation-defined or OEM commands, often under the Application NetFN or using Hot Plug Manager (HPM) extensions, paired with storage operations like entering SDR repository update mode (Storage NetFN, 0x12) and writing sensor data records (0x14) to incorporate new configurations.
Diagnostics are triggered using the Application NetFN (0x06) Set Watchdog Timer command (0x22) for timed interrupts or the Chassis NetFN Chassis Control (0x02) with the pulse diagnostic interrupt option (0x04), with results queried via Get Self-Test Results (Application NetFN, 0x04); additional tests can use standard commands like Get Self-Test Results for component-level checks.[1]
Chassis control operations allow adjustment of physical components, such as fan speeds, to maintain optimal operating conditions based on predefined thresholds derived from monitoring events. Through the Chassis NetFN (0x00), commands like Set Power Restore Policy (0x06) or Chassis Control (0x02) manage overall chassis state, while fan speed modifications are typically handled via Sensor NetFN (0x04) Set Sensor Hysteresis (0x25) or threshold settings (0x26) to enable dynamic responses, such as increasing RPM in response to temperature events. PEF configurations further automate these adjustments by linking chassis actions to event filters, ensuring proactive thermal and power management.[1]
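The NetFN and command codes cited above translate directly into request bytes on a host interface; a minimal Python sketch framing two of the requests described (the boot-flag data bytes follow the parameter layout summarized here and should be checked against the specification):

def request(netfn, cmd, *data, lun=0):
    # Frame an IPMI request as seen on a system interface such as KCS:
    # NetFn/LUN byte, command byte, then any data bytes.
    return bytes([(netfn << 2) | lun, cmd, *data])

# Chassis Control (NetFn 0x00, cmd 0x02): data 0x02 requests a power cycle.
print(request(0x00, 0x02, 0x02).hex())

# Set System Boot Options (NetFn 0x00, cmd 0x08): parameter 5 (boot flags),
# 0x80 = boot flags valid for next boot, 0x04 = force boot from PXE.
print(request(0x00, 0x08, 0x05, 0x80, 0x04, 0x00, 0x00, 0x00).hex())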
System Components
Baseboard Management Controller
The Baseboard Management Controller (BMC) serves as the central processing unit for Intelligent Platform Management Interface (IPMI) operations, functioning as a specialized microcontroller embedded directly on the motherboard of a server or computing system.[1] It operates independently of the host central processing unit (CPU), basic input/output system (BIOS), and operating system, relying on its own dedicated processor, firmware, and memory to ensure autonomous management capabilities.[1] This isolation allows the BMC to monitor and control system hardware continuously, even in failure scenarios affecting the primary system components.[1]
In processing IPMI commands, the BMC receives requests through various interfaces, including network connections such as local area network (LAN) over user datagram protocol (UDP) with internet protocol version 4 (IPv4) or version 6 (IPv6), as well as serial interfaces like keyboard controller style (KCS), system management interface chip (SMIC), block transfer (BT), or serial/modem.[1] It interfaces directly with system sensors and actuators to gather data on environmental factors—such as temperatures, voltages, and fan speeds—and to execute control actions, utilizing a sensor model to interpret and respond to these inputs.[1] The BMC then generates appropriate responses, including completion codes, which are routed back via mechanisms like the receive message queue or data output registers, enabling system management software to interact effectively.[1] For internal communication, it may utilize the Intelligent Platform Management Bus (IPMB).[1]
The BMC maintains key resource repositories to support its management functions, including Sensor Data Records (SDR) stored in non-volatile memory, which contain configurations for sensors such as their types, locations, event thresholds, and system-specific details.[1] Additionally, it stores Field Replaceable Unit (FRU) information, providing inventory data like serial numbers, part identifiers, device locations, and access specifications for replaceable components.[1] These repositories are accessible out-of-band, facilitating remote diagnostics and maintenance without relying on the host system.[1]
To enable persistent availability, the BMC operates within a separate power domain, drawing from standby power rails that remain active even when the main system is powered off or in low-power states such as advanced configuration and power interface (ACPI) S4 or S5.[1] This design supports out-of-band access for remote monitoring and control, ensuring the BMC can initiate recovery actions like power cycling or resets independently of the host's operational status.[1]
Intelligent Platform Management Bus
The Intelligent Platform Management Bus (IPMB) serves as the primary internal communication backbone within an IPMI-managed system, enabling the exchange of management information between the baseboard management controller (BMC) and various satellite controllers.[7] It operates as a multi-drop, two-wire serial bus that connects the BMC—acting as the bus master—to satellite controllers on components such as storage devices, I/O cards, and power supplies, facilitating distributed monitoring and control without relying on the host CPU.[7][8] IPMB is implemented as a subset of the I²C bus protocol, standardized by Philips (now NXP Semiconductors) and adapted by Intel for platform management, running at a typical speed of 100 kbps to balance reliability and performance in noisy environments.[7]
The protocol employs only master write transactions over I²C, where the BMC initiates all communications, ensuring deterministic access in multi-master scenarios.[7] Message framing begins with an IPMB connection header consisting of the target slave address (7-bit, with read/write bit always set to 0), the network function (netFn) and logical unit number (LUN) byte, and an 8-bit checksum, followed by the payload and a second checksum for the entire message.[7] Sequence numbers are incorporated via a 1-byte sequence field in the message header, incremented by the sender for each new request to allow receivers to match responses to specific instances and detect lost or duplicated packets.[7] Checksums use an 8-bit two's complement arithmetic, computed such that the sum of all bytes in the header or message (including the checksum itself) equals zero modulo 256, providing error detection for transmission integrity.[7]
Command and response formats are structured with fields for the requester's source address (rqSA), responder's source address (rsSA), LUN, command code, data bytes, and completion code, where requests use even netFn values and responses use the corresponding odd values (e.g., netFn 06h for request becomes 07h for response).[7]
The addressing scheme utilizes 7-bit I²C slave addresses, with IPMB reserving specific ranges for intelligent devices—such as 20h for the BMC and 30h–3Fh, B0h–BFh, and D0h–DEh for add-in controllers—allowing configurations that support up to 15 internal nodes per segment to accommodate typical server chassis designs.[10] For larger systems, bridging via dedicated bridge controllers (e.g., using address 22h for ICMB interfaces) enables interconnection of multiple IPMB segments through store-and-forward message relaying, where incoming requests are reformatted and retransmitted to the target segment without altering the core payload. This hierarchical structure supports scalability in multi-node enclosures like blade servers.
IPMB specifications include provisions for extensions, such as private buses attached behind satellite controllers, which allow vendors to implement proprietary I²C-based features for chassis-specific modularity while maintaining compatibility with the standard IPMB protocol on the main segment.[7][10] These private buses enable non-intelligent I²C devices to coexist without conflicting with IPMI traffic, promoting flexible integration of custom hardware in managed platforms.[7]
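The framing and checksum rules above are simple to state in code; a minimal Python sketch of an IPMB request frame:

def checksum(chunk):
    # 8-bit two's-complement checksum: the sum of all covered bytes plus
    # the checksum itself equals zero modulo 256.
    return (-sum(chunk)) & 0xFF

def ipmb_request(rs_sa, netfn, rq_sa, rq_seq, cmd, *data, lun=0, rq_lun=0):
    # Connection header (responder slave address, netFn/LUN) closed by its
    # own checksum, then requester address, sequence/LUN, command and data,
    # closed by a second checksum over the remainder of the message.
    head = [rs_sa, (netfn << 2) | lun]
    body = [rq_sa, (rq_seq << 2) | rq_lun, cmd, *data]
    return bytes(head + [checksum(head)] + body + [checksum(body)])

# Get Device ID (App netFn 06h, cmd 01h) sent to the BMC at address 20h
# from requester 81h with sequence number 3:
msg = ipmb_request(0x20, 0x06, 0x81, 3, 0x01)
assert sum(msg[:3]) % 256 == 0 and sum(msg[3:]) % 256 == 0
print(msg.hex())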
Specification Versions
IPMI 1.5
The Intelligent Platform Management Interface (IPMI) version 1.5 specification was released on February 21, 2001, extending the earlier v1.0 standard by introducing serial and LAN interfaces specifically designed for out-of-band access to system monitoring and control functions, independent of the host operating system or main CPU. This version established a foundational framework for remote platform management, supporting interfaces such as the Intelligent Platform Management Bus (IPMB), PCI Management Bus, and serial/modem connections, while supporting compatibility with ACPI power management for enterprise-class servers.[11]
A key enhancement in IPMI 1.5 over v1.0 was the addition of the Remote Management Control Protocol (RMCP), which encapsulates IPMI messages within UDP/IP packets for network-based command transmission, using UDP port 623 for primary communication and enabling pre-OS management scenarios. Authentication in this version relies on basic mechanisms, including straight password/key and MD5 challenge-response methods, applied per message or at the user level, with support for up to 64 User IDs per channel (implementation-dependent; commonly 16) and configurable privilege levels (user, operator, administrator). These features facilitated initial remote access without requiring dedicated hardware beyond the baseboard management controller (BMC).[11]
Despite these advances, IPMI 1.5 exhibited notable limitations, including weak security due to the absence of session integrity or confidentiality protections in RMCP, making it susceptible to replay attacks and man-in-the-middle interference despite authentication. The specification capped user support at a maximum of 64 per channel (implementation-dependent) and provided only basic platform event filtering (PEF) without advanced capabilities for complex event correlation or policy-based routing. IPMI 1.5 saw widespread adoption in early 2000s server platforms from vendors like Intel, HP, and Dell, serving as the initial standard for standardized remote monitoring and control in data centers before the enhanced security of v2.0.[11][12][13]
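For per-message authentication, v1.5 computes an MD5 AuthCode over the padded password, session ID, message payload and session sequence number; a minimal Python sketch of that construction as described in the specification (all field values here are arbitrary examples):

import hashlib
import struct

def v15_authcode(password, session_id, session_seq, payload):
    # AuthCode = MD5(password | session ID | payload | sequence | password),
    # with the password zero-padded to 16 bytes and integers packed LSB-first.
    pwd = password.ljust(16, b"\x00")
    return hashlib.md5(pwd + struct.pack("<I", session_id) + payload
                       + struct.pack("<I", session_seq) + pwd).digest()

print(v15_authcode(b"admin", 0x0200BEEF, 1, b"\x00" * 8).hex())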
IPMI 2.0
The Intelligent Platform Management Interface (IPMI) version 2.0 was initially released on June 1, 2004, as the second generation of the specification, building upon the foundational elements of earlier versions to enhance remote management capabilities.[1] It was later revised to version 1.1 on October 1, 2013, with subsequent errata updates issued through April 21, 2015, addressing clarifications, parameter corrections, and implementation guidance without introducing fundamental changes.[4]
A key advancement in IPMI 2.0 is the introduction of RMCP+ (Remote Management Control Protocol Plus), which establishes secure, encrypted communication sessions over LAN, replacing the less secure RMCP from prior versions and enabling robust out-of-band management even when the host system is powered off.[1] IPMI 2.0 significantly upgrades authentication mechanisms through the Remote Authenticated Key Exchange (RAKP) protocol, which supports multiple cipher suites including HMAC-SHA1 for message integrity and confidentiality, thereby mitigating risks associated with plaintext transmissions in remote access scenarios.[1] This protocol facilitates mutual authentication between the management controller and remote clients, using challenge-response methods to derive session keys without exposing passwords directly over the network.[14]
Additionally, the specification expands operational features to support multiple simultaneous remote sessions per channel (implementation-dependent; recommended minimum of 4), allowing multiple administrators to manage the platform concurrently without interference.[15] It also incorporates VLAN tagging for network isolation and segmentation, enabling IPMI traffic to be confined to specific virtual networks for improved security and efficiency.[16] Furthermore, integration with SNMP traps provides standardized alerting mechanisms, where the baseboard management controller can send asynchronous notifications to network management systems for events like hardware failures or threshold breaches.[17]
Following the 2015 errata, no major revisions to the IPMI 2.0 core specification have been released, positioning it as the enduring standard for platform management interfaces as of 2025, with development efforts shifting toward complementary standards such as the Data Center Manageability Interface (DCMI) version 1.1 and Redfish.[4][18]
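The RAKP exchange yields a Session Integrity Key (SIK) from which the integrity and confidentiality keys are derived; a minimal Python sketch of the HMAC-SHA1 derivation (all input values shown are placeholders):

import hashlib
import hmac

def rakp_keys(kg, r_console, r_bmc, role, username):
    # SIK = HMAC-KG(console nonce | BMC nonce | requested role |
    #               user-name length | user name)
    sik = hmac.new(kg, r_console + r_bmc + bytes([role, len(username)])
                   + username, hashlib.sha1).digest()
    k1 = hmac.new(sik, b"\x01" * 20, hashlib.sha1).digest()  # integrity key
    k2 = hmac.new(sik, b"\x02" * 20, hashlib.sha1).digest()  # confidentiality
    return sik, k1, k2[:16]   # AES-128 uses the first 128 bits of K2

sik, k1, aes_key = rakp_keys(b"K" * 20, b"C" * 16, b"B" * 16, 0x04, b"admin")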
Security Considerations
Known Vulnerabilities
In 2013, security researchers at Rapid7 identified significant exposure of Baseboard Management Controllers (BMCs) implementing the Intelligent Platform Management Interface (IPMI), revealing over 35,000 Supermicro IPMI interfaces accessible from the internet with default credentials such as ADMIN/ADMIN.[19] These weak defaults allowed unauthorized remote access, potentially enabling attackers to execute arbitrary code, reboot systems, or extract sensitive data from the BMC without authentication changes.[14]
The IPMI 1.5 specification introduced notable protocol weaknesses over LAN communications, including the transmission of passwords in clear text during user authentication and password changes, which exposed them to eavesdropping by network observers.[20] Additionally, the lack of encryption and session integrity in version 1.5 made it susceptible to replay attacks, where intercepted packets could be reused to impersonate legitimate users and issue unauthorized commands.[20]
Common misconfigurations in IPMI deployments have exacerbated risks, particularly leaving UDP port 623 open without firewall protections, which facilitates amplification distributed denial-of-service (DDoS) attacks through IPMI's support for broadcast messages that generate larger response traffic.[21] These broadcasts can overwhelm targets when spoofed with victim IP addresses.[21]
Post-2015, vulnerabilities in legacy IPMI systems have persisted, highlighting ongoing risks in unpatched or outdated deployments. For instance, in 2018, a SQL injection flaw in Cisco's Integrated Management Controller (IMC)—the BMC for Unified Computing System (UCS) servers—allowed unauthenticated remote attackers to execute arbitrary SQL commands via the web interface, potentially compromising system integrity (CVE-2018-15447).[22] Such incidents underscore the challenges of securing older IPMI implementations amid evolving threats, though version 2.0 addressed some issues like clear-text transmission through enhanced cipher support.[1]
More recent vulnerabilities include CVE-2023-28863, disclosed in 2023, which allows attackers with network access to bypass negotiated integrity and confidentiality in IPMI sessions, potentially enabling unauthorized commands.[23] In 2023, multiple critical flaws in Supermicro BMC IPMI firmware (e.g., ZDI-23-1200) permitted remote code execution and privilege escalation.[24] The 2024 AMI MegaRAC vulnerability (CVE-2024-54085) enables remote takeover and denial-of-service on affected BMCs.[25] As of 2025, Supermicro reported additional BMC IPMI issues, including a root-of-trust bypass (CVE-2025-7937) allowing malicious firmware injection.[26] These highlight the continued need for firmware updates and secure configurations.
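Administrators can audit their own systems for the cipher-zero weakness noted above by attempting a session with cipher suite 0, which negotiates no authentication; a minimal sketch using ipmitool (connection details are placeholders; this should only be run against systems one is authorized to test):

import subprocess

# Cipher suite 0 (-C 0) performs no authentication; if an arbitrary password
# is accepted, the BMC is exposed to the authentication-bypass attack.
result = subprocess.run(
    ["ipmitool", "-I", "lanplus", "-C", "0", "-H", "bmc.example.org",
     "-U", "ADMIN", "-P", "anything", "chassis", "status"],
    capture_output=True, text=True)
print("cipher 0 accepted - vulnerable" if result.returncode == 0
      else "cipher 0 rejected")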
Specification-Based Mitigations
The IPMI 2.0 specification introduces role-based access control through defined user privilege levels to mitigate unauthorized access risks. These levels include Callback (privilege 1h), which permits only basic callback initiation for remote session setup; User (privilege 2h), restricted to read-only operations such as retrieving sensor data and system event logs without modification capabilities; Operator (privilege 3h), allowing operational tasks like power control and monitoring but excluding configuration changes; and Administrator (privilege 4h), granting full access to all commands, including security settings and channel management. An optional OEM Proprietary level (privilege 5h) supports vendor-specific extensions. Privilege limits are enforced per channel and user via commands like Set Channel Access and Set User Access, ensuring the effective privilege is the minimum of the channel limit and user limit, thereby preventing privilege escalation.[1]
Encryption in IPMI 2.0 is provided through the RMCP+ protocol, which uses AES-128 in Cipher Block Chaining (CBC) mode for payload confidentiality, derived from a 128-bit Session Integrity Key (SIK) and a per-packet 16-byte initialization vector. This mechanism protects sensitive data, such as user credentials and management commands, during transmission over LAN channels. RMCP+ employs the Remote Authenticated Key-Exchange Protocol (RAKP) with HMAC-SHA1 or HMAC-SHA256 for mutual authentication and integrity, incorporating challenge-response exchanges, session sequence numbers, and a 32-entry sliding window to detect replays, thereby preventing man-in-the-middle attacks by verifying endpoint authenticity and data integrity. Cipher suites, configurable via Get Channel Cipher Suites, support AES-128 alongside other options, with encryption dynamically enabled or suspended per session.[1]
Alerting safeguards in the specification include configurable Platform Event Trap (PET) mechanisms for secure notifications, integrated with SNMP traps over UDP port 623, allowing policy-based event filtering and multiple destinations with retries and timeouts to ensure reliable delivery without flooding. The System Event Log (SEL) serves as an audit log, autonomously recording events including authentication attempts, sensor thresholds, and security-related incidents with timestamps and generator IDs, supporting commands like Get SEL Entry for retrieval and configurable thresholds for full/nearly full conditions. These features enable auditing of access attempts, such as failed logins from default credentials, to detect potential exploits.[1]
Compliance recommendations in IPMI errata emphasize robust key management, with the Set Channel Security Keys command enabling updates to RMCP+ keys (K_R for remote console and K_G for managed system) and optional locking to prevent further modifications, facilitating periodic rotation for enhanced security. While two-factor authentication is not mandated, the specification's enhanced authentication via pre-shared keys and challenge-response aligns with best practices for multi-layered protection, recommending cryptographically strong, unpredictable random values and full 160-bit keys for one-key logins to maintain integrity against brute-force attacks.[27]
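The role-based privilege enforcement described in this section reduces to taking a minimum across the configured limits; a small illustrative Python sketch:

CALLBACK, USER, OPERATOR, ADMINISTRATOR = 1, 2, 3, 4

def effective_privilege(channel_limit, user_limit, requested):
    # A session never operates above the lower of the channel privilege
    # limit and the user privilege limit, regardless of what is requested.
    return min(channel_limit, user_limit, requested)

# A user capped at Operator cannot gain Administrator rights on any channel:
assert effective_privilege(ADMINISTRATOR, OPERATOR, ADMINISTRATOR) == OPERATOR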
Implementations and Tools
Vendor-Specific Solutions
Major vendors have developed proprietary implementations of the Intelligent Platform Management Interface (IPMI) through integrated baseboard management controllers (BMCs), extending the standard protocol with custom features for enhanced remote management, security, and integration in enterprise environments. These solutions build on the core IPMI specifications while adding vendor-specific tools like graphical interfaces, automation capabilities, and APIs tailored to their hardware ecosystems.
Dell's Integrated Dell Remote Access Controller (iDRAC) serves as an embedded BMC that supports IPMI 2.0 for out-of-band management of PowerEdge servers, featuring a web-based graphical user interface (GUI) for real-time monitoring and control.[28] It includes virtual media redirection to mount ISO images remotely and the Lifecycle Controller, which automates firmware updates, hardware configuration, and diagnostics without host OS involvement.[29] iDRAC also enables IPMI over LAN for secure remote access, with configurable settings for channel access and user privileges.[30]
Hewlett Packard Enterprise (HPE) implements IPMI via the Integrated Lights-Out (iLO) advanced management processor in ProLiant servers, providing IPMI 2.0 compliance with extensions for scripting and multi-node orchestration.[31] iLO supports advanced scripting through its RESTful API and command-line interface, allowing automation of tasks like power cycling and sensor monitoring across distributed environments.[32] A key extension is iLO Federation, which enables peer-to-peer communication among iLO instances for centralized management of multiple servers in a group, with no specified limit on group size, including shared alert propagation and group policy enforcement without requiring a dedicated management server.[33]
Supermicro's BMC offerings integrate IPMI 2.0 in their server motherboards and systems, emphasizing cost-effective remote management with features like KVM-over-IP for virtual console access.[34] Recent models support an HTML5-based web console for browser-native remote control, eliminating the need for Java plugins and improving compatibility across devices.[35] The BMC also includes media redirection for virtual drives and serial-over-LAN (SOL) for text-based console access, alongside health monitoring for components like fans and power supplies.[36]
Cisco's Unified Computing System (UCS) incorporates IPMI through the Cisco Integrated Management Controller (CIMC) in C-Series rack servers and via the UCS Manager for B-Series blade servers, enabling standardized management with proprietary extensions.[37] CIMC supports IPMI over LAN for blade and standalone servers, with API extensions including a RESTful interface based on the Redfish standard for programmatic integration.[38] These APIs facilitate cloud orchestration by allowing UCS components to interface with platforms like VMware vCenter or AWS for automated provisioning and monitoring in hybrid environments.[39]
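Because these Redfish-style interfaces are plain HTTPS/JSON, they can be exercised with any HTTP client; a minimal sketch using Python's third-party requests package (endpoint and credentials are placeholders; certificate verification is disabled only for lab use):

import requests

BASE = "https://bmc.example.org"        # placeholder BMC address
AUTH = ("admin", "secret")              # placeholder credentials

# The Redfish service root is standardized at /redfish/v1.
systems = requests.get(BASE + "/redfish/v1/Systems",
                       auth=AUTH, verify=False).json()
for member in systems["Members"]:
    system = requests.get(BASE + member["@odata.id"],
                          auth=AUTH, verify=False).json()
    print(system.get("Name"), system.get("PowerState"))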
Open-Source Software
Open-source software plays a crucial role in enabling developers and system administrators to interact with IPMI interfaces without relying on proprietary tools, supporting both in-band and out-of-band management for tasks such as monitoring and control. These tools are typically implemented as libraries, utilities, and integrations that adhere to IPMI specifications, allowing for custom solutions in Linux environments and larger orchestration frameworks.[40][41][42]
OpenIPMI is a prominent open-source library designed to simplify the development of IPMI management applications by providing an abstraction layer over the IPMI protocol. It consists of a Linux kernel device driver, such as ipmi_si, which handles low-level communication with the baseboard management controller (BMC), and a user-level library that offers higher-level APIs for in-band and out-of-band access. This setup supports features like event-driven monitoring and command execution, making it suitable for integrating IPMI into custom software stacks. The project is hosted on SourceForge and actively maintained for compatibility with modern Linux kernels.[40][43]
FreeIPMI is a comprehensive GNU suite of tools and libraries for IPMI v1.5 and v2.0 compliance, focusing on in-band and out-of-band operations to manage remote systems. Key components include ipmidetect, which scans for BMCs on the network; bmc-info, for retrieving detailed BMC configuration and status; and libipmimonitoring with tools like ipmi-sel for parsing and managing system event logs (SEL). These utilities abstract IPMI details, enabling straightforward sensor monitoring, event interpretation, and chassis control without deep protocol knowledge. The suite is distributed under the GPL and available via official GNU repositories.[41][44]
IPMItool serves as a versatile command-line utility for direct interaction with IPMI-enabled devices, supporting both local kernel drivers and remote LAN interfaces over IPMI v1.5 and v2.0. It allows users to send raw IPMI commands, read sensor data repositories (SDR), monitor environmental sensors, and script operations like power cycling or field-replaceable unit (FRU) information retrieval. For instance, commands such as ipmitool sensor list provide real-time hardware status, while ipmitool chassis power enables automated power management in scripts. The tool is open-source, licensed under BSD, hosted on GitHub (archived as of 2023), and has broad adoption in server administration.[42][45]
These open-source tools integrate seamlessly with orchestration platforms to automate IPMI-based provisioning and management at scale. In Ansible, modules like community.general.ipmi_power facilitate power control and node management within playbooks, supporting idempotent operations for infrastructure as code. Similarly, OpenStack's Ironic service leverages IPMI drivers for bare-metal provisioning, using tools like IPMItool or FreeIPMI to handle PXE booting, power control, and hardware inspection across clusters.[46][47]
References
- https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/ipmp-spec-v1.0.pdf
