Intelligent Platform Management Interface

from Wikipedia

The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware (BIOS or UEFI) and operating system. IPMI defines a set of interfaces used by system administrators for out-of-band management of computer systems and monitoring of their operation. For example, IPMI provides a way to manage a computer that may be powered off or otherwise unresponsive by using a network connection to the hardware rather than to an operating system or login shell. Another use case may be installing a custom operating system remotely. Without IPMI, installing a custom operating system may require an administrator to be physically present near the computer, insert a DVD or a USB flash drive containing the OS installer and complete the installation process using a monitor and a keyboard. Using IPMI, an administrator can mount an ISO image, simulate an installer DVD, and perform the installation remotely.[1]

The specification is led by Intel and was first published on September 16, 1998. It is supported by more than 200 computer system vendors, such as Cisco, Dell,[2] Hewlett Packard Enterprise, and Intel.[3][4]

Functionality

Using a standardized interface and protocol allows systems-management software based on IPMI to manage multiple, disparate servers. As a message-based, hardware-level interface specification, IPMI operates independently of the operating system (OS) to allow administrators to manage a system remotely in the absence of an operating system or of the system management software. Thus, IPMI functions can work in any of three scenarios:

  • before an OS has booted (allowing, for example, the remote monitoring or changing of BIOS settings)
  • when the system is powered down
  • after OS or system failure – the key characteristic of IPMI compared with in-band system management is that it does not require a working operating system or a remote login shell (e.g., over SSH)

System administrators can use IPMI messaging to monitor platform status (such as system temperatures, voltages, fans, power supplies and chassis intrusion); to query inventory information; to review hardware logs of out-of-range conditions; or to perform recovery procedures, such as issuing power-down and reboot requests from a remote console through the same connections, or configuring watchdog timers. The standard also defines an alerting mechanism for the system to send a Simple Network Management Protocol (SNMP) platform event trap (PET).
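The monitoring flow just described—polling sensor values against pre-set limits and raising alerts—can be sketched in a few lines. The sensor names and threshold values below are invented for illustration; real limits come from the platform's sensor data records:

```python
# Illustrative sketch of IPMI-style threshold monitoring.
# Sensor names and limits are hypothetical, not taken from the IPMI spec.

THRESHOLDS = {
    "cpu_temp_c": (5.0, 90.0),        # (lower, upper) critical limits
    "fan1_rpm":   (1000.0, 20000.0),
    "psu_12v":    (11.4, 12.6),
}

def check_readings(readings):
    """Return PET-style alerts for readings outside their limits."""
    alerts = []
    for sensor, value in readings.items():
        lower, upper = THRESHOLDS[sensor]
        if value < lower:
            alerts.append((sensor, value, "lower-critical"))
        elif value > upper:
            alerts.append((sensor, value, "upper-critical"))
    return alerts

alerts = check_readings({"cpu_temp_c": 97.0, "fan1_rpm": 8200.0, "psu_12v": 12.1})
print(alerts)  # [('cpu_temp_c', 97.0, 'upper-critical')]
```

A BMC performs the equivalent comparison in firmware and forwards the resulting events as SNMP platform event traps.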

The monitored system may be powered off, but must be connected to a power source and to the monitoring medium, typically a local area network (LAN) connection. IPMI can also function after the operating system has started, exposing management data and structures to the system management software. IPMI prescribes only the structure and format of the interfaces as a standard, while detailed implementations may vary. An implementation of IPMI version 1.5 can communicate via a direct out-of-band LAN or serial connection, or via a side-band LAN connection, to a remote client. The side-band LAN connection utilizes the motherboard's network interface controller (NIC). This solution is less expensive than a dedicated LAN connection, but it also has limited bandwidth and raises security concerns.

Systems compliant with IPMI version 2.0 can also communicate via serial over LAN, whereby serial console output can be remotely viewed over the LAN. Systems implementing IPMI 2.0 typically also include KVM over IP, remote virtual media and out-of-band embedded web-server interface functionality, although strictly speaking, these lie outside of the scope of the IPMI interface standard.

DCMI (Data Center Manageability Interface) is a similar standard based on IPMI but designed to be more suitable for Data Center management: it uses the interfaces defined in IPMI, but minimizes the number of optional interfaces and includes power capping control, among other differences.

IPMI components

IPMI architecture diagram: interfaces to the baseboard management controller (BMC), including side-band access via SMBus.

An IPMI sub-system consists of a main controller, called the baseboard management controller (BMC), and other management controllers distributed among different system modules, referred to as satellite controllers. The satellite controllers within the same chassis connect to the BMC via the Intelligent Platform Management Bus (IPMB) – an enhanced implementation of I²C (Inter-Integrated Circuit). The BMC connects to a BMC in another chassis via the Intelligent Chassis Management Bus (ICMB). The BMC may be managed with the Remote Management Control Protocol (RMCP), a specialized wire protocol defined by the specification; RMCP+ (a UDP-based protocol with stronger authentication than RMCP) is used for IPMI over LAN.

Several vendors develop and market BMC chips. A BMC used for embedded applications may have limited memory and require optimized firmware code to implement the full IPMI functionality. Highly integrated BMCs can provide complex instructions and the complete out-of-band functionality of a service processor. The firmware implementing the IPMI interfaces is provided by various vendors. A field-replaceable unit (FRU) repository holds the inventory, such as vendor ID and manufacturer, of potentially replaceable devices. A sensor data record (SDR) repository provides the properties of the individual sensors present on the board. For example, the board may contain sensors for temperature, fan speed, and voltage.

Baseboard management controller

Fully integrated BMC as a single chip on a server motherboard

The baseboard management controller (BMC) provides the intelligence in the IPMI architecture. It is a specialized microcontroller embedded on the motherboard of a computer – generally a server. The BMC manages the interface between system-management software and platform hardware, and has its own dedicated firmware and RAM.

Different types of sensors built into the computer system report to the BMC on parameters such as temperature, cooling fan speeds, power status, operating system (OS) status, etc. The BMC monitors the sensors and can send alerts to a system administrator via the network if any of the parameters do not stay within pre-set limits, indicating a potential failure of the system. The administrator can also remotely communicate with the BMC to take some corrective actions – such as resetting or power cycling the system to get a hung OS running again. These abilities reduce the total cost of ownership of a system.

Physical interfaces to the BMC include SMBuses, an RS-232 serial console, address and data lines, and an IPMB, which enables the BMC to accept IPMI request messages from other management controllers in the system.

A direct serial connection to the BMC is typically unencrypted, on the assumption that the physical link itself is secure. Connections to the BMC over the LAN may or may not use encryption, depending on the security requirements of the user.

There are growing concerns about the security of BMCs as a closed, proprietary infrastructure.[5][6][7][8] OpenBMC is a Linux Foundation collaborative open-source BMC project.[9]

Security

Historical issues

On 2 July 2013, Rapid7 published a guide to security penetration testing of the latest IPMI 2.0 protocol and implementations by various vendors.[10]

Some sources in 2013 were advising against using the older version of IPMI,[5] due to security concerns related to the design and vulnerabilities of Baseboard Management Controllers (BMCs).[11][12]

However, like any other management interface, best security practices dictate the placement of the IPMI management port on a dedicated management LAN or VLAN restricted to trusted Administrators.[13]

Latest IPMI specification security improvements

The IPMI specification has been updated with RAKP+ and a stronger cipher that is computationally impractical to break.[14] Vendors as a result have provided patches that remediate these vulnerabilities.[citation needed]

The DMTF organization has developed a secure and scalable interface specification called Redfish to work in modern datacenter environments.[15]

Potential solutions

Some potential solutions exist outside of the IPMI standard, depending on proprietary implementations. Risks from default or short passwords and from "cipher 0" attacks can be mitigated by using a RADIUS server for authentication, authorization, and accounting (AAA) over SSL, as is typical in a datacenter or any medium-to-large deployment. The RADIUS server can be configured to store AAA data securely in an LDAP database using either FreeRADIUS/OpenLDAP or Microsoft Active Directory and related services.

Role-based access provides a way to respond to current and future security issues by granting progressively greater privileges to higher roles. Three roles are available: Administrator, Operator and User.

Overall, the User role has read-only access to the BMC and no remote-control abilities such as power cycling, nor the ability to view or log into the main CPU on the motherboard. An attacker who obtains only the User role therefore has no access to confidential information and no control over the system. The User role is typically used to monitor sensor readings after an SNMP alert has been received by network-monitoring software.

The Operator role is used in the rare event when a system is hung, to generate an NMI crash/core dump file and reboot or power cycle the system. In such a case, the Operator will also have access to the system software to collect the crash/core dump file.

The Administrator role is used to configure the BMC on first boot during the commissioning of the system when first installed.

Therefore, the prudent best practice is to disable the use of the Operator and Administrator roles in LDAP/RADIUS, and only enable them when needed by the LDAP/RADIUS administrator. For example, in RADIUS a role can have its setting Auth-Type changed to:

Auth-Type := Reject

Doing so will prevent RAKP hash attacks from succeeding since the username will be rejected by the RADIUS server.

Version history

The IPMI standard specification has evolved through a number of iterations:[16][17]

  • v1.0 was announced on September 16, 1998: base specification
  • v1.5, published on February 21, 2001: added features including IPMI over LAN, IPMI over Serial/Modem, and LAN Alerting
  • v2.0, published on February 12, 2004: added features including Serial over LAN, Group Managed Systems, Enhanced Authentication, Firmware Firewall, and VLAN Support
  • v2.0 revision 1.1, published on October 1, 2013: amended for errata, clarifications, and addenda, plus addition of support for IPv6 Addressing
  • v2.0 revision 1.1 Errata 7, published on April 21, 2015: amended for errata, clarifications, addenda[18]

from Grokipedia
The Intelligent Platform Management Interface (IPMI) is an open-standard, hardware-level interface specification that defines a set of computer interface protocols for an autonomous subsystem enabling management and monitoring of platform hardware independent of the host system's CPU, firmware (such as BIOS or UEFI), and operating system. This approach allows for remote access, control, and diagnostics even when the main system is powered off, unresponsive, or lacking an operational OS.

IPMI facilitates essential functions such as monitoring system health through sensors for temperature, voltage, fan speeds, and power supply status; logging events in a System Event Log (SEL); inventorying hardware via field-replaceable unit (FRU) information; and enabling recovery actions like power cycling, resets, or alerts via platform event traps (PET) over SNMP. It supports multiple communication channels, including local buses like the Intelligent Platform Management Bus (IPMB), serial/modem links, and notably LAN for remote management over IP networks, reducing the need for physical intervention in data centers and enterprise environments. At its core, IPMI relies on a Baseboard Management Controller (BMC), a dedicated microcontroller on the motherboard that handles these operations autonomously.

Developed collaboratively by Intel, Hewlett-Packard, NEC, and Dell, IPMI version 1.0 was first released on September 16, 1998, as a message-based protocol to standardize server platform management across vendors. Version 1.5, published February 21, 2001, introduced IPMI over LAN and serial/modem support for broader remote access. The current standard, version 2.0 (released February 12, 2004, with revisions up to 1.1 in 2013), added enhanced security features like the RMCP+ protocol, encryption support, stronger authentication (e.g., HMAC-SHA1), and Serial over LAN (SOL) for console redirection, while maintaining backward compatibility.

Overview

Definition and Purpose

The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independent of the host system's operating system, CPU, or firmware. Developed by the IPMI promoters—a group led by Intel, Hewlett-Packard, NEC, and Dell—IPMI standardizes hardware-level interfaces to ensure interoperability across platforms in enterprise and data-center environments.

The primary purposes of IPMI include remote monitoring of physical variables such as temperature, voltage levels, and fan speeds through integrated sensors, as well as event logging to record error conditions, out-of-range thresholds, and anomalies in a dedicated log for later analysis. It also supports control actions like power cycling, resets, and firmware updates, all executed without dependence on the main processor or operating system, thereby enabling proactive maintenance and diagnostics.

Key benefits of IPMI encompass pre-boot access for configuring system states prior to operating system loading, failure recovery through automated resets and diagnostics, and standardized alerting to enhance availability in high-availability setups. Unlike in-band management, which relies on the active operating system and its network infrastructure, IPMI emphasizes out-of-band management via dedicated channels such as local area networks or serial connections, allowing access even when the host is powered off or unresponsive; this is typically orchestrated by a baseboard management controller (BMC) as the core subsystem component.

Development History

The Intelligent Platform Management Interface (IPMI) originated in 1998 through a collaborative effort by Intel Corporation, Hewlett-Packard Company, NEC Corporation, and Dell Computer Corporation, who announced the availability of the IPMI v1.0 specifications on September 16 at the Intel Developer Forum. This initiative addressed the growing demands of data centers for reliable remote server management, particularly the limitations of in-band tools like SNMP agents, which require the operating system to be operational and thus fail to monitor hardware issues during system crashes or shutdowns. The goal was to establish vendor-neutral, open specifications enabling out-of-band access to platform management functions, such as monitoring temperature, voltage, and fans, to predict hardware failures, improve diagnostics, and reduce downtime through interoperability across diverse systems.

Version 1.5, released in February 2001, introduced support for IPMI over LAN, expanding remote management capabilities. By the early 2000s, the IPMI standards had gained widespread adoption, with support from over 200 vendors ensuring broad interoperability in server ecosystems. Notable participants included Cisco Systems and Supermicro Computer, which integrated IPMI into their hardware offerings, expanding its application beyond the initial promoters to a diverse range of enterprise and data-center equipment. This growth reflected the promoters' success in fostering a collaborative environment for ongoing refinements, culminating in subsequent specification releases that built on the foundational v1.0 framework.

No major specification changes occurred after the 2013 revision, though errata updates continued through 2015 to address minor issues, such as parameter numbering in LAN configurations. IPMI has since been complemented by DMTF's Redfish standard, which provides a RESTful API for scalable platform management and serves as a modern successor to legacy interfaces like IPMI.

Core Functionality

Monitoring Capabilities

The Intelligent Platform Management Interface (IPMI) provides comprehensive monitoring capabilities through a standardized set of sensor devices that track critical hardware parameters in real time, independent of the host operating system. These sensors monitor parameters such as temperature thresholds, including CPU hotspots; voltage levels; fan speeds in revolutions per minute (RPM); power supply status; and chassis intrusion detection. Sensor data is abstracted and accessible via commands like Get Sensor Reading, allowing for threshold-based alerts when parameters exceed predefined limits, such as overheating or voltage instability.

A key component of IPMI monitoring is the System Event Log (SEL), a non-volatile storage repository managed by the baseboard management controller (BMC) that records system events with detailed metadata. The SEL stores events such as overheat alerts and memory errors, each entry including a 32-bit timestamp (seconds since January 1, 1970), severity levels, and sensor-specific details for analysis. Events are retrieved using commands like Get SEL Entry or Read Event Message Buffer, with typical capacities of approximately 3–8 KB and unique record IDs to track the sequence and progression of issues.

IPMI supports monitoring through both periodic polling and asynchronous event generation to ensure timely detection of anomalies. Periodic polling involves querying sensors at regular intervals using the Get Sensor Reading command, leveraging Sensor Data Records (SDRs) for configuration details like thresholds and units. Asynchronous events are generated proactively via platform event messages, which are queued in the Event Message Buffer or delivered over the Intelligent Platform Management Bus (IPMB) to notify remote managers without constant polling overhead.
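The SEL timestamp format mentioned above (a 32-bit count of seconds since January 1, 1970) can be decoded with standard library calls; a minimal sketch:

```python
from datetime import datetime, timezone

def decode_sel_timestamp(ts: int) -> str:
    """Decode a 32-bit SEL timestamp (seconds since 1970-01-01 UTC).

    The IPMI spec reserves values below 0x20000000 for timestamps
    recorded relative to system initialization, before the BMC knows
    the wall-clock time.
    """
    if ts < 0x20000000:
        return f"pre-init: {ts} s after system initialization"
    return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

print(decode_sel_timestamp(0x5EB63D00))  # a 2020 wall-clock timestamp
print(decode_sel_timestamp(100))         # a pre-init relative timestamp
```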
The specification defines up to 255 distinct sensor types, encompassing both discrete states—such as power-on/off transitions or button presses—and analog readings like continuous temperature or voltage values, which include conversion formulas for accurate interpretation. These types are cataloged in the SDR Repository, enabling flexible event filtering and association with field-replaceable units (FRUs) for precise diagnostics. For instance, discrete sensors might report binary states like chassis intrusion, while analog ones provide numeric data with hysteresis around thresholds to avoid event flooding.
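One way to see why hysteresis avoids event flooding is to trace a reading that jitters around a threshold. The threshold and hysteresis values below are invented for illustration:

```python
class ThresholdSensor:
    """Sketch of threshold event generation with hysteresis.

    Illustrative values: assert an upper-critical event at 90, and only
    re-arm it once the reading falls below 88 (hysteresis of 2), so a
    reading oscillating around 90 does not flood the event log.
    """
    def __init__(self, threshold=90.0, hysteresis=2.0):
        self.threshold = threshold
        self.hysteresis = hysteresis
        self.asserted = False

    def update(self, value):
        events = []
        if not self.asserted and value >= self.threshold:
            self.asserted = True
            events.append("upper-critical asserted")
        elif self.asserted and value < self.threshold - self.hysteresis:
            self.asserted = False
            events.append("upper-critical deasserted")
        return events

s = ThresholdSensor()
log = []
for v in (89.5, 90.2, 89.9, 90.1, 87.5):  # jitter around the threshold
    log += s.update(v)
print(log)  # two events total, not one per excursion
```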

Management Operations

The Intelligent Platform Management Interface (IPMI) provides a suite of remote operations that enable administrators to control and configure server systems without direct physical access, leveraging the baseboard management controller (BMC) over interfaces such as LAN or serial connections. These operations build on collected monitoring data, such as event triggers from environmental sensors, to execute automated or manual actions for maintenance and recovery. Key capabilities include power control, hardware inventory management, boot configuration, console access, firmware maintenance, diagnostics, and chassis-level adjustments, all standardized through defined network functions (NetFns) and commands to ensure interoperability across implementations.

Remote power management in IPMI allows for precise control of system power states, including powering on, powering off, resetting, or initiating graceful shutdowns, which is essential for remote rebooting or recovery in data centers. This is achieved via the Chassis NetFn (0x00) with the Chassis Control command (0x02), supporting actions like power down, power up, power cycle, hard reset, pulse diagnostic interrupt, and soft shutdown; these can also be invoked in serial terminal mode with commands such as SYS POWER ON or SYS RESET. Additionally, platform event filtering (PEF) can trigger power actions—such as power down (action 1), power cycle (2), or reset (3)—in response to predefined events, enhancing automated recovery without OS dependency.

FRU inventory management facilitates the reading and writing of data on field replaceable units (FRUs), such as motherboards, power supplies, or memory components, to support asset tracking, serialization, and configuration auditing. Using the Storage NetFn (0x0A), the Get FRU Inventory Area Info command (0x10) retrieves the size and location of FRU data areas, while Read FRU Data (0x11) and Write FRU Data (0x12) enable extraction or modification of structured information like part numbers, serial numbers, and manufacturing dates stored in non-volatile memory. This capability extends to private management buses via Master Write-Read commands, allowing comprehensive hardware lifecycle management remotely.

Boot device selection and console redirection provide pre-OS remote access akin to keyboard-video-mouse (KVM) functionality, enabling troubleshooting and configuration during system startup. The Chassis NetFn (0x00) with the Set System Boot Options command (0x08) configures boot flags to prioritize devices like PXE, HDD, or BIOS setup, often set via serial commands like SYS SET BOOT in terminal mode. Serial over LAN (SOL) implements console redirection by activating a virtual serial session over the network using the Activate Payload command (payload type 0x01) together with the SOL configuration commands in the Transport NetFn, allowing bidirectional text-based access to the system's serial console for diagnostics or OS installation.

Firmware updates and diagnostic runs support ongoing system integrity and fault isolation through remote execution. Firmware maintenance involves updating BMC or device firmware via implementation-defined or OEM commands, often using Hot Plug Manager (HPM.1) extensions, paired with storage operations such as entering SDR repository update mode and writing sensor data records to incorporate new configurations. Diagnostics can be triggered with watchdog timer commands or the Chassis Control command's pulse-diagnostic-interrupt option (0x04), with results queried via Get Self-Test Results (Application NetFn, command 0x04) for component-level checks.

Chassis control operations allow adjustment of physical components, such as fan behavior, to maintain optimal operating conditions based on predefined thresholds derived from monitoring events. Through the Chassis NetFn (0x00), commands like Set Power Restore Policy (0x06) or Chassis Control (0x02) manage the overall chassis state, while fan behavior is typically influenced through the Sensor/Event NetFn (0x04) by adjusting sensor hysteresis or thresholds (Set Sensor Thresholds, 0x26), enabling dynamic responses such as increasing fan RPM after thermal events. PEF configurations further automate these adjustments by linking chassis actions to event filters, supporting proactive cooling and reliability.
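The Chassis Control encoding used for these power actions is compact enough to show directly. The action byte values below follow the IPMI 2.0 specification; the helper function is only an illustration of how a request's function, command, and data bytes fit together:

```python
# Chassis Control (Chassis NetFn 0x00, command 0x02) takes a single
# data byte selecting the action, per the IPMI 2.0 specification.
CHASSIS_NETFN = 0x00
CHASSIS_CONTROL_CMD = 0x02

ACTIONS = {
    "power-down": 0x00,
    "power-up": 0x01,
    "power-cycle": 0x02,
    "hard-reset": 0x03,
    "diagnostic-interrupt": 0x04,  # pulse diagnostic interrupt
    "soft-shutdown": 0x05,         # soft shutdown via ACPI
}

def chassis_control_request(action: str) -> bytes:
    """Return the (netfn, cmd, data) byte triple for a chassis control request."""
    return bytes([CHASSIS_NETFN, CHASSIS_CONTROL_CMD, ACTIONS[action]])

print(chassis_control_request("power-cycle").hex())  # '000202'
```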

System Components

Baseboard Management Controller

The Baseboard Management Controller (BMC) serves as the central intelligence for Intelligent Platform Management Interface (IPMI) operations, functioning as a specialized microcontroller embedded directly on the motherboard of a server or system. It operates independently of the host central processing unit (CPU), basic input/output system (BIOS), and operating system, relying on its own dedicated processor, firmware, and memory to ensure autonomous management capabilities. This isolation allows the BMC to monitor and control system hardware continuously, even in failure scenarios affecting the primary system components.

In processing IPMI commands, the BMC receives requests through various interfaces, including network connections such as local area network (LAN) over User Datagram Protocol (UDP) with Internet Protocol version 4 (IPv4) or version 6 (IPv6), as well as host interfaces like keyboard controller style (KCS), system management interface chip (SMIC), block transfer (BT), or serial/modem. It interfaces directly with system sensors and actuators to gather data on environmental factors—such as temperatures, voltages, and fan speeds—and to execute control actions, utilizing a sensor model to interpret and respond to these inputs. The BMC then generates appropriate responses, including completion codes, which are routed back via mechanisms like the receive message queue or data output registers, enabling system management software to interact effectively. For internal communication, it may utilize the Intelligent Platform Management Bus (IPMB).

The BMC maintains key resource repositories to support its management functions, including Sensor Data Records (SDRs) stored in non-volatile storage, which contain configurations for sensors such as their types, locations, event thresholds, and system-specific details. Additionally, it stores field-replaceable unit (FRU) information, providing inventory data like serial numbers, part identifiers, device locations, and access specifications for replaceable components. These repositories are accessible out-of-band, facilitating remote diagnostics and maintenance without relying on the host system.

To enable persistent availability, the BMC operates within a separate power domain, drawing from standby rails that remain active even when the main system is powered off or in low-power states such as Advanced Configuration and Power Interface (ACPI) S4 or S5. This design supports always-on access for remote monitoring and control, ensuring the BMC can initiate recovery actions like power cycling or resets independently of the host's operational status.

Intelligent Platform Management Bus

The Intelligent Platform Management Bus (IPMB) serves as the primary internal communication backbone within an IPMI-managed system, enabling the exchange of management information between the baseboard management controller (BMC) and various satellite controllers. It operates as a multi-drop, two-wire serial bus that connects the BMC—acting as the bus master—to satellite controllers on components such as storage devices, I/O cards, and power supplies, facilitating distributed monitoring and control without relying on the host CPU. IPMB is implemented as a subset of the I²C bus protocol, standardized by Philips (now NXP Semiconductors) and adapted by Intel for platform management, running at a typical speed of 100 kbps to balance reliability and performance in noisy environments. The protocol employs only master write transactions over I²C, where the BMC initiates all communications, ensuring deterministic access in multi-master scenarios. Message framing begins with an IPMB connection header consisting of the target slave address (7-bit, with read/write bit always set to 0), the network function (netFn) and logical unit number (LUN) byte, and an 8-bit checksum, followed by the payload and a second checksum for the entire message. Sequence numbers are incorporated via a 1-byte sequence field in the message header, incremented by the sender for each new request to allow receivers to match responses to specific instances and detect lost or duplicated packets. Checksums use an 8-bit two's complement arithmetic, computed such that the sum of all bytes in the header or message (including the checksum itself) equals zero modulo 256, providing error detection for transmission integrity. 
Command and response formats are structured with fields for the requester's and responder's slave addresses, LUNs, the command code, data bytes, and a completion code, where requests use even netFn values and responses use the corresponding odd values (e.g., netFn 06h for a request becomes 07h for the response). The addressing scheme utilizes 7-bit slave addresses, with IPMB reserving specific ranges for intelligent devices—such as 20h for the BMC and 30h–3Fh, B0h–BFh, and D0h–DEh for add-in controllers—allowing configurations that support up to 15 internal nodes per segment to accommodate typical server designs. For larger systems, bridging via dedicated bridge controllers (e.g., using address 22h for ICMB interfaces) enables interconnection of multiple IPMB segments through store-and-forward message relaying, where incoming requests are reformatted and retransmitted to the target segment without altering the core message content. This hierarchical structure supports scalability in multi-node enclosures like blade servers.

IPMB specifications include provisions for extensions, such as private buses attached behind satellite controllers, which allow vendors to implement proprietary I²C-based features for chassis-specific modularity while maintaining compatibility with the standard IPMB protocol on the main segment. These private buses enable non-intelligent devices to coexist without conflicting with IPMI traffic, promoting flexible integration of custom hardware in managed platforms.
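The framing and checksum rules above can be made concrete with a short sketch that builds an IPMB request. The Get Device ID command (App netFn 06h, command 01h) and the BMC address 20h come from the specification; the requester address 81h and sequence number are example values:

```python
def ipmb_checksum(data: bytes) -> int:
    """8-bit two's-complement checksum: the covered bytes plus the
    checksum itself sum to zero modulo 256."""
    return (-sum(data)) & 0xFF

def ipmb_request(rs_sa, netfn, rq_sa, rq_seq, cmd, data=b"", lun=0):
    """Frame an IPMB request per the connection-header format:
    rsSA | netFn/rsLUN | chk1 | rqSA | rqSeq/rqLUN | cmd | data | chk2."""
    header = bytes([rs_sa, (netfn << 2) | lun])
    chk1 = ipmb_checksum(header)
    body = bytes([rq_sa, (rq_seq << 2) | lun, cmd]) + data
    chk2 = ipmb_checksum(body)
    return header + bytes([chk1]) + body + bytes([chk2])

# Get Device ID (App netFn 0x06, cmd 0x01) addressed to the BMC at 0x20,
# from an example requester at 0x81 with sequence number 1:
frame = ipmb_request(0x20, 0x06, 0x81, 0x01, 0x01)
print(frame.hex())

# Both checksums verify: each covered run sums to 0 mod 256.
assert sum(frame[:3]) % 256 == 0 and sum(frame[3:]) % 256 == 0
```

A receiver repeats the same modulo-256 sums to detect corrupted headers or payloads before acting on the command.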

Specification Versions

IPMI 1.5

The Intelligent Platform Management Interface (IPMI) version 1.5 specification was released on February 21, 2001, extending the earlier v1.0 standard by introducing serial and LAN interfaces specifically designed for out-of-band access to system monitoring and control functions, independent of the host operating system or main CPU. This version established a foundational framework for remote platform management, supporting interfaces such as the Intelligent Platform Management Bus (IPMB), the PCI Management Bus, and serial/modem connections, while supporting compatibility with ACPI power management for enterprise-class servers.

A key enhancement in IPMI 1.5 over v1.0 was the addition of the Remote Management Control Protocol (RMCP), which encapsulates IPMI messages within UDP/IP packets for network-based command transmission, using UDP port 623 for primary communication and enabling pre-OS management scenarios. Authentication in this version relies on basic mechanisms, including straight password/key and challenge-response methods, applied per message or at the user level, with support for up to 64 user IDs per channel (implementation-dependent; commonly 16) and configurable privilege levels (user, operator, administrator). These features facilitated initial remote access without requiring dedicated hardware beyond the baseboard management controller (BMC).

Despite these advances, IPMI 1.5 exhibited notable limitations, including weak security due to the absence of session encryption and integrity protections in RMCP, making it susceptible to replay attacks and man-in-the-middle interference despite authentication. The specification capped user support at a maximum of 64 per channel (implementation-dependent) and provided only basic platform event filtering (PEF) without advanced capabilities for complex event correlation. IPMI 1.5 saw widespread adoption in early server platforms from major vendors, serving as the initial standard for remote monitoring and control in data centers before the enhanced security of v2.0.
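The RMCP encapsulation described above prefixes each UDP datagram (port 623) with a fixed 4-byte header; a sketch of its layout, with a placeholder payload standing in for the IPMI session data:

```python
import struct

RMCP_VERSION = 0x06      # ASF RMCP version 1.0
RMCP_CLASS_IPMI = 0x07   # message class: IPMI
RMCP_NOACK_SEQ = 0xFF    # sequence number 0xFF = no RMCP ACK requested

def rmcp_packet(ipmi_session_payload: bytes, seq: int = RMCP_NOACK_SEQ) -> bytes:
    """Prefix an IPMI session payload with the 4-byte RMCP header
    (version, reserved, sequence, message class) carried in UDP
    datagrams to port 623."""
    header = struct.pack("BBBB", RMCP_VERSION, 0x00, seq, RMCP_CLASS_IPMI)
    return header + ipmi_session_payload

pkt = rmcp_packet(b"\x00" * 10)  # placeholder session payload
print(pkt[:4].hex())  # '0600ff07'
```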

IPMI 2.0

The Intelligent Platform Management Interface (IPMI) version 2.0 was initially released on June 1, 2004, as the second generation of the specification, building upon the foundational elements of earlier versions to enhance remote management capabilities. It was later revised to version 1.1 on October 1, 2013, with subsequent errata updates issued through April 21, 2015, addressing clarifications, parameter corrections, and implementation guidance without introducing fundamental changes. A key advancement in IPMI 2.0 is the introduction of RMCP+ (Remote Management Control Protocol Plus), which establishes secure, encrypted communication sessions over LAN, replacing the less secure RMCP from prior versions and enabling robust out-of-band management even when the host system is powered off. IPMI 2.0 significantly upgrades authentication mechanisms through the Remote Authenticated Key Exchange () protocol, which supports multiple suites including HMAC-SHA1 for and confidentiality, thereby mitigating risks associated with transmissions in remote access scenarios. This protocol facilitates between the management controller and remote clients, using challenge-response methods to derive session keys without exposing passwords directly over the network. Additionally, the specification expands operational features to support multiple simultaneous remote sessions per channel (implementation-dependent; recommended minimum of 4), allowing multiple administrators to manage the platform concurrently without interference. It also incorporates tagging for network isolation and segmentation, enabling IPMI traffic to be confined to specific virtual networks for improved and efficiency. Furthermore, integration with SNMP traps provides standardized alerting mechanisms, where the management controller can send asynchronous notifications to systems for events like hardware failures or threshold breaches. 
Following the 2015 errata, no major revisions to the IPMI 2.0 core specification have been released, positioning it as the enduring standard for platform management interfaces as of 2025, with development efforts shifting toward complementary standards such as the Data Center Manageability Interface (DCMI) version 1.1 and the DMTF Redfish standard.

Security Considerations

Known Vulnerabilities

In 2013, security researchers at Rapid7 identified significant exposure of Baseboard Management Controllers (BMCs) implementing the Intelligent Platform Management Interface (IPMI), revealing over 35,000 IPMI interfaces accessible from the public Internet with default credentials such as ADMIN/ADMIN. These weak defaults allowed unauthorized remote access, potentially enabling attackers to execute arbitrary code, reboot systems, or extract sensitive data from the BMC. The IPMI 1.5 specification contained notable protocol weaknesses over LAN, including the transmission of passwords in clear text during user authentication and password changes, which exposed them to interception by network observers. Additionally, the lack of encryption and session integrity protection in version 1.5 made it susceptible to replay attacks, in which intercepted packets could be reused to impersonate legitimate users and issue unauthorized commands. Common misconfigurations in IPMI deployments have exacerbated these risks, particularly leaving UDP port 623 open without firewall protection, which facilitates amplification distributed denial-of-service (DDoS) attacks through IPMI's support for broadcast messages that generate larger response traffic; such broadcasts can overwhelm targets when spoofed with victim IP addresses. Post-2015, vulnerabilities in legacy IPMI systems have persisted, highlighting ongoing risks in unpatched or outdated deployments. For instance, a flaw in Cisco's Integrated Management Controller (IMC), the BMC for Unified Computing System (UCS) servers, allowed unauthenticated remote attackers to execute arbitrary SQL commands via the web interface, potentially compromising system integrity (CVE-2018-15447). Such incidents underscore the challenge of securing older IPMI implementations amid evolving threats, though IPMI 2.0 addressed some issues, such as clear-text transmission, through enhanced cipher support.
More recent vulnerabilities include CVE-2023-28863, disclosed in 2023, which allows attackers with network access to bypass negotiated session protections in IPMI sessions, potentially enabling unauthorized commands. The same year, multiple critical flaws in BMC IPMI implementations (e.g., ZDI-23-1200) permitted remote code execution. The AMI MegaRAC vulnerability CVE-2024-54085 enables remote takeover and denial of service on affected BMCs. In 2025, researchers reported additional BMC IPMI issues, including a root-of-trust bypass (CVE-2025-7937) allowing malicious firmware injection. These findings highlight the continued need for firmware updates and secure configurations.

Specification-Based Mitigations

The IPMI 2.0 specification introduces role-based access control through defined user privilege levels to mitigate unauthorized access risks. These levels include Callback (privilege 1h), which permits only basic callback initiation for remote session setup; User (privilege 2h), restricted to read-only operations such as retrieving sensor data and system event logs without modification capabilities; Operator (privilege 3h), allowing operational tasks such as power control and monitoring but excluding configuration changes; and Administrator (privilege 4h), granting full access to all commands, including security settings and channel configuration. An optional OEM Proprietary level (privilege 5h) supports vendor-specific extensions. Privilege limits are enforced per channel and per user via commands such as Set Channel Access and Set User Access, and the effective privilege is the minimum of the channel limit and the user limit, thereby preventing privilege escalation. Encryption in IPMI 2.0 is provided through the RMCP+ protocol, which uses AES-128 in Cipher Block Chaining (CBC) mode for payload confidentiality, with keys derived from a 128-bit Session Integrity Key (SIK) and a per-packet 16-byte initialization vector. This mechanism protects sensitive data, such as user credentials and management commands, during transmission over LAN channels. RMCP+ employs the Remote Authenticated Key-Exchange Protocol (RAKP) with HMAC-SHA1 or HMAC-SHA256 for mutual authentication and integrity, incorporating challenge-response exchanges, session sequence numbers, and a 32-entry sliding window to detect replays, thereby preventing man-in-the-middle attacks by verifying endpoint authenticity and data integrity. Cipher suites, discoverable via the Get Channel Cipher Suites command, support AES-128 alongside other options, with encryption dynamically enabled or suspended per session.
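The minimum-of-two-limits rule for privilege levels can be shown with a short sketch; the helper name and lookup table are illustrative, not taken from the specification.

```python
# Privilege level codes as defined by IPMI 2.0 (1h-5h).
PRIVILEGES = {
    0x1: "Callback",
    0x2: "User",
    0x3: "Operator",
    0x4: "Administrator",
    0x5: "OEM Proprietary",
}

def effective_privilege(channel_limit: int, user_limit: int) -> int:
    """A session operates at the lower of the channel privilege limit
    and the user privilege limit, per the IPMI 2.0 access model."""
    return min(channel_limit, user_limit)

# An Administrator (4h) user on a channel capped at Operator (3h)
# is limited to Operator-level commands.
assert PRIVILEGES[effective_privilege(0x3, 0x4)] == "Operator"
```

This is why capping a LAN channel's privilege limit is an effective hardening step: even a fully privileged account cannot exceed the channel's ceiling over that interface.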
Alerting safeguards in the specification include configurable Platform Event Trap (PET) mechanisms for secure notifications, integrated with SNMP traps over UDP port 623, allowing policy-based event filtering and multiple destinations with retries and timeouts to ensure reliable delivery without flooding. The System Event Log (SEL) serves as an audit log, autonomously recording events, including authentication attempts, threshold crossings, and security-related incidents, with timestamps and generator IDs; it supports commands such as Get SEL Entry for retrieval and configurable thresholds for full and nearly-full conditions. These features enable auditing of access attempts, such as failed logins with default credentials, to detect potential exploits. Compliance recommendations in the IPMI errata emphasize robust key management: the Set Channel Security Keys command enables updates to the RMCP+ keys (K_R for the remote console and K_G for the managed system) and optional locking to prevent further modification, facilitating periodic rotation for enhanced security. While two-factor authentication is not mandated, the specification's enhanced authentication via pre-shared keys and challenge-response aligns with best practices for multi-layered protection, recommending cryptographically strong, unpredictable random values and full 160-bit keys for one-key logins to maintain integrity against brute-force attacks.
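As a rough illustration of what SEL auditing tools work with, the sketch below unpacks the 16-byte standard (type 02h) SEL event record layout defined by the specification. The dictionary keys and the sample record contents are invented for the example; real tools also resolve sensor types and event data against the SDR.

```python
import struct
from datetime import datetime, timezone

def decode_sel_record(raw: bytes) -> dict:
    """Unpack a 16-byte standard (type 02h) SEL event record:
    record ID, record type, timestamp, generator ID, event-message
    revision, sensor type/number, event direction/type, event data."""
    (record_id, record_type, timestamp, generator_id, evm_rev,
     sensor_type, sensor_num, event_dir_type,
     d1, d2, d3) = struct.unpack("<HBIHBBBBBBB", raw)
    return {
        "record_id": record_id,
        "record_type": record_type,      # 02h = standard event record
        "time": datetime.fromtimestamp(timestamp, tz=timezone.utc),
        "generator_id": generator_id,
        "sensor_type": sensor_type,      # e.g. 01h = temperature
        "sensor_number": sensor_num,
        "assertion": (event_dir_type & 0x80) == 0,  # bit 7: 0 = assertion
        "event_data": (d1, d2, d3),
    }

# A fabricated temperature-threshold assertion event for demonstration.
raw = struct.pack("<HBIHBBBBBBB", 1, 0x02, 1700000000, 0x0020, 0x04,
                  0x01, 0x30, 0x01, 0x57, 0xFF, 0xFF)
rec = decode_sel_record(raw)
assert rec["sensor_type"] == 0x01 and rec["assertion"]
```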

Implementations and Tools

Vendor-Specific Solutions

Major vendors have developed proprietary implementations of the Intelligent Platform Management Interface (IPMI) through integrated baseboard management controllers (BMCs), extending the standard protocol with custom features for enhanced remote management, security, and integration in enterprise environments. These solutions build on the core IPMI specification while adding vendor-specific tools such as graphical interfaces, automation capabilities, and APIs tailored to their hardware ecosystems. Dell's Integrated Dell Remote Access Controller (iDRAC) serves as an embedded BMC that supports IPMI 2.0 for out-of-band management of servers, featuring a web-based graphical user interface (GUI) for real-time monitoring and control. It includes virtual media redirection to mount ISO images remotely and the Lifecycle Controller, which automates firmware updates, hardware configuration, and diagnostics without host OS involvement. iDRAC also enables IPMI over LAN for secure remote access, with configurable settings for channel access and user privileges. Hewlett Packard Enterprise (HPE) implements IPMI via the Integrated Lights-Out (iLO) management processor in ProLiant servers, providing IPMI 2.0 compliance with extensions for scripting and multi-node orchestration. iLO supports advanced scripting through its RESTful API and command-line interface, allowing automation of tasks such as power cycling and sensor monitoring across distributed environments. A key extension is iLO Federation, which enables peer-to-peer communication among iLO instances for centralized management of groups of servers, with no specified limit on group size, including shared alert propagation and group policy enforcement without requiring a dedicated management server. Supermicro's BMC offerings integrate IPMI in their server motherboards and systems, emphasizing cost-effective remote management with features such as KVM-over-IP for remote console access.
Recent models support an HTML5-based web console for browser-native remote control, eliminating the need for plugins and improving compatibility across devices. The BMC also includes media redirection for virtual drives and serial-over-LAN (SOL) for text-based console access, alongside health monitoring for components such as fans and power supplies. Cisco's Unified Computing System (UCS) incorporates IPMI through the Cisco Integrated Management Controller (CIMC) in C-Series rack servers and via UCS Manager for B-Series blade servers, enabling standardized management with proprietary extensions. CIMC supports IPMI over LAN for blade and standalone servers, with API extensions including a RESTful interface based on the Redfish standard for programmatic integration. These facilitate cloud orchestration by allowing UCS components to interface with platforms such as VMware vCenter or AWS for automated provisioning and monitoring in hybrid environments.

Open-Source Software

Open-source software plays a crucial role in enabling developers and system administrators to interact with IPMI interfaces without relying on proprietary vendor tools, supporting both in-band and out-of-band access for tasks such as monitoring and control. These tools are typically implemented as libraries, utilities, and integrations that adhere to the IPMI specifications, allowing custom solutions in standalone environments and in larger automation frameworks. OpenIPMI is a prominent open-source project designed to simplify the development of IPMI management applications by providing an abstraction layer over the IPMI protocol. It consists of a kernel device driver, such as ipmi_si, which handles low-level communication with the baseboard management controller (BMC), and a user-level library that offers higher-level APIs for in-band and out-of-band access. This setup supports features such as event-driven monitoring and command execution, making it suitable for integrating IPMI into custom software stacks. The project is hosted on SourceForge and is maintained for compatibility with modern kernels. FreeIPMI is a comprehensive GNU suite of tools and libraries for IPMI v1.5 and v2.0, covering in-band and out-of-band operations for managing remote systems. Key components include ipmidetect, which scans for BMCs on the network; bmc-info, for retrieving detailed BMC configuration and status; and libipmimonitoring with tools such as ipmi-sel for parsing and managing system event logs (SEL). These utilities abstract IPMI details, enabling straightforward monitoring, event interpretation, and chassis control without deep protocol knowledge. The suite is distributed under the GPL and available via the official GNU repositories. IPMItool serves as a versatile command-line utility for direct interaction with IPMI-enabled devices, supporting both local kernel drivers and remote LAN interfaces over IPMI v1.5 and v2.0. It allows users to send raw IPMI commands, read the sensor data repository (SDR), monitor environmental sensors, and script operations such as chassis power control or field-replaceable unit (FRU) inventory reads.
For instance, commands such as ipmitool sensor list provide real-time hardware status, while ipmitool chassis power enables automated power control in scripts. The tool is open-source, licensed under a BSD license, hosted on GitHub (archived as of 2023), and has broad adoption in server administration. These open-source tools integrate with automation platforms to drive IPMI-based provisioning and management at scale. In Ansible, modules such as community.general.ipmi_power facilitate power control and node management within playbooks, supporting idempotent operations. Similarly, OpenStack's Ironic service leverages IPMI drivers for bare-metal provisioning, using tools such as IPMItool or FreeIPMI to handle PXE booting, power control, and hardware inspection across clusters.
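When these tools build raw IPMI requests, each message section is protected by a simple two's-complement checksum: the checksum byte makes the modulo-256 sum of the covered bytes equal zero. A minimal sketch, where the framed bytes are an assumed Get Device ID request (NetFn 06h, command 01h) shown for illustration only:

```python
def ipmi_checksum(data: bytes) -> int:
    """Two's-complement checksum used in IPMI messages: chosen so the
    modulo-256 sum of the covered bytes plus the checksum is zero."""
    return (-sum(data)) & 0xFF

# Connection header: responder address 20h, NetFn 06h shifted into the
# upper six bits with responder LUN 0 in the lower two -> 18h.
header = bytes([0x20, 0x18])
# Body: requester address, rqSeq/rqLUN (arbitrary here), command 01h.
body = bytes([0x81, 0x04, 0x01])
msg = header + bytes([ipmi_checksum(header)]) + body + bytes([ipmi_checksum(body)])

# Receivers verify each section by summing it with its checksum byte.
assert (sum(header) + ipmi_checksum(header)) & 0xFF == 0
assert (sum(body) + ipmi_checksum(body)) & 0xFF == 0
```

The same verification runs on both the BMC and the management console, letting either side discard corrupted frames cheaply before any protocol processing.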

References

  1. https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/ipmp-spec-v1.0.pdf