Libvirt
| libvirt | |
|---|---|
| Developer | Red Hat |
| Initial release | December 19, 2005[1] |
| Stable release | 12.0.0[2] / 15 January 2026 |
| Repository | |
| Written in | C |
| Operating system | Linux, FreeBSD, Windows, macOS[3] |
| Type | Library |
| License | GNU Lesser General Public License |
| Website | libvirt |
libvirt is an open-source API, daemon and management tool for managing platform virtualization.[3] It can be used to manage KVM, Xen, VMware ESXi, QEMU and other virtualization technologies. Its APIs are widely used in the orchestration layer of cloud solutions to manage hypervisors.
Internals
libvirt is a C library with bindings in other languages, notably in Python,[4] Perl,[5] OCaml,[6] Ruby,[7] Java,[8] JavaScript (via Node.js)[9] and PHP.[10] The bindings for these languages are implemented as wrappers around a lower-level module, libvirtmod, whose implementation closely mirrors its C counterpart in syntax and functionality.
Supported Hypervisors
- LXC – lightweight Linux container system
- OpenVZ – lightweight Linux container system
- Kernel-based Virtual Machine/QEMU (KVM) – open-source hypervisor for Linux and SmartOS[11]
- Xen – bare-metal hypervisor
- User-mode Linux (UML) – paravirtualized kernel
- VirtualBox – hypervisor by Oracle (formerly by Sun) for Windows, Linux, macOS, and Solaris
- VMware ESXi and GSX – hypervisors for Intel hardware
- VMware Workstation and Player – hypervisors for Windows and Linux
- Hyper-V – hypervisor for Windows by Microsoft
- PowerVM – hypervisor by IBM for AIX, Linux and IBM i
- Bhyve – hypervisor for FreeBSD 10+[12] (support added with libvirt 1.2.2)
User Interfaces
Various virtualization programs and platforms use libvirt. Virtual Machine Manager, GNOME Boxes and others provide graphical interfaces. The most popular command-line interface is virsh; higher-level management tools such as oVirt also build on libvirt.[13]
Corporate
Development of libvirt is backed by Red Hat,[14] with significant contributions by other organisations and individuals. libvirt is available on most Linux distributions; remote servers are also accessible from Apple Mac OS X and Microsoft Windows clients.[15]
References
- ^ "0.0.1: Dec 19 2005". libvirt. 2017-06-16. Archived from the original on 2020-02-20. Retrieved 2017-06-16.
- ^ Jiri Denemark (15 January 2026). "Release of libvirt-12.0.0". Retrieved 16 January 2026.
- ^ a b "libvirt home page description".
- ^ "Python bindings".
- ^ "Perl bindings".
- ^ "OCaml bindings".
- ^ "Ruby bindings".
- ^ "Java bindings".
- ^ "Node.js module". 9 January 2017.
- ^ "PHP bindings".
- ^ "The Observation Deck » KVM on illumos". 15 August 2011.
- ^ "bhyve - FreeBSD Wiki". wiki.freebsd.org.
- ^ "oVirt Virtualization Management Platform".
- ^ "Innovation Without Disruption: Red Hat Enterprise Linux 5.4 Now Available".
- ^ "Windows availability".
Books
- Warnke, Robert; Ritzau, Thomas. qemu-kvm & libvirt (in German). Norderstedt, Germany: Books on Demand. ISBN 978-3-8370-0876-0.
Libvirt

Overview
Definition and Purpose
Libvirt is an open-source toolkit consisting of an API, daemon, and management tools designed for platform virtualization and management.[8] It serves as a software library that enables consistent interaction with various virtualization technologies through a stable, unified programming interface accessible from languages such as C, Python, Perl, and Go.[8] Licensed under the GNU Lesser General Public License version 2.1 or later, libvirt facilitates secure and efficient oversight of virtualization resources on a single node or remotely.[10]

The primary purpose of libvirt is to provide a common abstraction layer for managing virtual machines (VMs), storage pools and volumes, virtual networks, and physical host nodes across diverse hypervisors, including KVM and Xen.[11] This unified interface abstracts the complexities of underlying platforms, allowing administrators and applications to perform operations without needing hypervisor-specific knowledge.[1] Libvirt simplifies key aspects of VM lifecycle management, such as creation, startup, shutdown, migration, and deletion, while also supporting resource monitoring, configuration adjustments, and event handling for efficient virtualization operations.[3] By offering these capabilities, it reduces administrative overhead and enhances portability, making it a foundational building block for higher-level management applications and tools.[11] The latest stable release as of November 2025 is version 11.9.0, issued on November 3, 2025.[12]

History
Libvirt originated as a project initiated by Red Hat engineers in late 2005, aimed at creating a standardized API to manage diverse virtualization platforms and mitigate the challenges posed by fragmented management interfaces across emerging hypervisors. The project began with its first commit on November 2, 2005.[2] The project's first public release, version 0.1.0, arrived in 2006, introducing foundational support for Xen hypervisor domain management through a minimal set of APIs and the virsh command-line tool.[13] This initial version focused on basic operations like domain listing, starting, and stopping, establishing libvirt as an early abstraction layer for Xen-based virtualization.

Key milestones followed rapidly to broaden compatibility. Version 0.2.0, released in 2006, integrated support for KVM, enabling management of kernel-based virtual machines alongside Xen. By version 0.4.0 in 2007, initial QEMU emulation capabilities were added, enhancing flexibility for full-system simulation. Expansion to additional hypervisors occurred between 2008 and 2010, with version 0.6.0 introducing LXC container support and version 0.8.0 adding VMware ESX integration, allowing libvirt to handle a wider array of virtualization environments.

From 2011 onward, efforts emphasized API stabilization and robustness. The release of version 1.0.0 on November 2, 2012, represented a significant maturation point, coinciding with libvirt's seventh anniversary and incorporating refined support for multiple hypervisors, including improved Xen 4.2 compatibility and QEMU enhancements.[2] Libvirt's version progression has been consistent, evolving from 0.1.0 in 2006, centered on core hypervisor interactions, to 11.9.0 on November 3, 2025, with ongoing emphases on stability, security, and feature extensions like container management via LXC, which was pioneered in early releases.[12]

Originally driven primarily by Red Hat contributors, libvirt transitioned post-2010 to a more diverse open-source ecosystem, attracting contributions from multiple organizations and independent developers through its mailing lists and Git repository.[14][13] Recent developments from 2023 to 2025 have focused on modern architectures and security paradigms, including enhanced ARM support, such as the 'pauth' CPU feature in QEMU integrations, and advancements in confidential computing, like AMD SEV launch security and Intel SGX enclave handling in versions 8.10.0 and later.[12][15][16]

Architecture
Core Components
Libvirt's core architecture revolves around two primary components: the libvirt library and the libvirtd daemon, which together provide a unified interface for virtualization management. The libvirt library serves as the foundational C-based API, enabling programmatic access to virtualization resources through a modular design that incorporates pluggable drivers for various backend technologies, including hypervisors, storage pools, and networks.[1] This modularity allows the library to abstract complex operations into a consistent set of functions, facilitating interactions without direct dependency on specific implementations.[4]

The libvirtd daemon acts as the central management process in monolithic mode, responsible for handling virtual machine (VM) operations, monitoring events, and allocating resources across supported drivers. Libvirt can operate in a monolithic fashion using the libvirtd daemon or in a modular fashion using separate daemons like virtqemud for each driver. Modular daemons have become the default in recent distributions such as Fedora since 2021, and are the recommended approach going forward, providing benefits like improved process isolation, better crash resilience, and finer-grained resource management.[17][18] In system mode, libvirtd runs with root privileges to manage host-wide resources, whereas session mode supports non-privileged, per-user instances for lighter workloads. Event monitoring is achieved through dedicated sockets, allowing real-time notifications on VM state changes, resource usage, and errors.[19] Resource allocation is driver-mediated, with the daemon enforcing policies on CPU, memory, and I/O to prevent contention and optimize performance.[17]

Key modules within the libvirt library handle essential aspects of virtualization. Domain management oversees the full VM lifecycle, from definition and creation using XML descriptors to runtime operations like suspension, resumption, and destruction, represented through opaque pointers such as virDomainPtr.[1] Node information retrieval provides insights into host resources, including CPU topology, memory availability, and hardware capabilities via connection handles like virConnectPtr.[1] Secret management secures sensitive credentials, such as encryption keys or passwords, by storing and retrieving them through a dedicated driver that supports both system-wide and per-user modes, often integrated automatically with stateful hypervisors.[20]
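As a brief, hedged sketch of these lifecycle calls through the Python binding (the domain name 'demo-vm' is a placeholder, not a value from this article):

```python
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('demo-vm')  # 'demo-vm' is a placeholder name

dom.suspend()    # pause the guest's virtual CPUs; state stays in memory
dom.resume()     # continue execution
dom.shutdown()   # ask the guest OS for a graceful shutdown
# dom.destroy() would terminate the domain immediately instead
conn.close()
```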
Internally, client applications connect to the libvirtd daemon via socket-based communication, using URIs to specify the connection type (e.g., local UNIX sockets like /var/run/libvirt/libvirt-sock for read-write access). The daemon then proxies requests to appropriate backend drivers, abstracting their specifics and returning standardized results to the client, which ensures portability across different virtualization platforms.[1] This flow supports both local and remote access, with read-only sockets for monitoring to minimize security risks.[19]
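A minimal Python sketch of this connection flow, using common default URIs (the remote hostname is a placeholder):

```python
import libvirt

# Read-write connection to the system-level daemon over the local UNIX socket.
rw = libvirt.open('qemu:///system')

# Read-only connection via the restricted socket, suitable for monitoring.
ro = libvirt.openReadOnly('qemu:///system')

# Remote access uses the same API, with a transport encoded in the URI,
# e.g. tunnelled over SSH ('remote-host' is a placeholder):
# remote = libvirt.open('qemu+ssh://remote-host/system')

print('active domains:', rw.numOfDomains())
ro.close()
rw.close()
```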
Error handling in libvirt emphasizes flexibility and detail, allowing applications to retrieve comprehensive error information through structures like virErrorPtr, which include codes, domains, levels, and messages tied to specific connections or domains. Synchronous callbacks can be registered for immediate notification, while asynchronous reporting and reset functions enable robust integration in threaded environments.[21] Logging mechanisms are configurable via environment variables in the library (e.g., LIBVIRT_DEBUG for priority levels) and configuration files in the daemon (e.g., log_level in /etc/libvirt/libvirtd.conf), directing outputs to syslog, files, or the systemd journal with filters for categories and priorities to focus on critical events like warnings or errors.[22] These features include crash-time debug buffers to aid troubleshooting, ensuring operational reliability without overwhelming standard logs.[22]
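In the Python binding, for example, this error machinery is exposed through a registerable callback; the handler below (its name is illustrative) suppresses the default stderr reporting and routes errors to application code:

```python
import libvirt

def err_handler(ctx, err):
    # err is a list: err[0] is the error code, err[1] the error domain,
    # err[2] the human-readable message, err[3] the severity level.
    print('libvirt error (code %d): %s' % (err[0], err[2]))

# Replace the default handler, which prints every error to stderr.
libvirt.registerErrorHandler(err_handler, None)

conn = libvirt.openReadOnly('qemu:///system')
try:
    conn.lookupByName('no-such-domain')  # triggers the handler
except libvirt.libvirtError as e:
    print('caught:', e.get_error_code())
conn.close()
```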
API Design and Daemon
The libvirt API provides a unified and stable interface for managing virtualization resources across diverse hypervisors, abstracting platform-specific details to enable consistent operations. It exposes key resources such as domains (virtual machines), networks, storage pools, and volumes through objects like virDomainPtr, virNetworkPtr, virStoragePoolPtr, and virStorageVolPtr, allowing users to perform lifecycle management, configuration, and monitoring tasks.[1] This design supports both local access via direct driver connections and remote access through the libvirtd daemon using URIs, ensuring portability and scalability in multi-host environments.[1]
The API communicates via a custom RPC protocol that facilitates client-server interactions between applications and the libvirt daemon. This protocol operates over TCP for remote connections or Unix domain sockets for local ones, with structured data encoded in XDR format for interoperability and extensibility.[23] Authentication is handled externally through SASL for mechanisms like DIGEST-MD5 or GSSAPI, or TLS with x509 certificates for encryption, while the protocol itself focuses on message serialization, including calls, replies, events, and streams, without built-in security to avoid redundancy.[23] Tunneling options, such as over SSH, further enhance secure remote usage.[23]
The libvirt daemon, primarily libvirtd in monolithic mode or modular daemons like virtqemud, handles core operations by processing API requests and maintaining system state. It employs an event loop using poll(2) to monitor sockets for incoming client messages and domain events, dispatching them to a worker thread pool for concurrent handling to ensure responsiveness.[24] For live migration, the daemon supports managed direct and peer-to-peer modes via API calls like virDomainMigrate and virDomainMigrateToURI, coordinating VM state transfer across hosts with options for tunnelled encryption over the RPC protocol.[25] Snapshot management is facilitated through the daemon's tracking of virDomainSnapshotPtr objects, using APIs such as virDomainSnapshotCreateXML for creating disk-only, memory-inclusive, or full-system snapshots defined in XML format, with support for internal/external storage backends and reversion via virDomainRevertToSnapshot.[26]
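A brief Python sketch of the snapshot APIs described above (the domain and snapshot names are placeholders):

```python
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('demo-vm')  # 'demo-vm' is a placeholder name

# Minimal snapshot descriptor; libvirt fills in metadata such as timestamps.
snap_xml = '<domainsnapshot><name>before-upgrade</name></domainsnapshot>'
snap = dom.snapshotCreateXML(snap_xml, 0)

# ... later, roll the domain back to the captured state.
dom.revertToSnapshot(snap)
conn.close()
```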
Libvirt offers language bindings to extend its C API, enabling programmatic access in higher-level languages. The Python binding, libvirt-python, is installed via package managers like dnf install libvirt-python on Fedora or apt install python3-libvirt on Ubuntu, and provides classes mirroring the C API.[27] A basic usage example connects to the hypervisor and lists domains:
```python
import libvirt

conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to the hypervisor')
else:
    try:
        doms = conn.listAllDomains()
        for dom in doms:
            print(dom.name())
    finally:
        conn.close()
```

The example uses listAllDomains() for domain enumeration.[28]
For Perl, the Sys::Virt binding is installed using CPAN: cpan Sys::Virt, supporting modules for domains, networks, and storage.[27] A basic example establishes a connection and retrieves domain information:
```perl
use Sys::Virt;

my $conn = Sys::Virt->new(uri => 'qemu:///system');
my @domains = $conn->list_domains();
foreach my $dom (@domains) {
    print $dom->get_name(), "\n";
}
$conn->disconnect();
```

This uses list_domains() to fetch active domains by name.[29]
The Ruby binding, libvirt-ruby, installs via RubyGems: gem install ruby-libvirt, and wraps API calls in idiomatic Ruby style.[27] An introductory example connects and creates a domain from XML:
```ruby
require 'libvirt'

conn = Libvirt::open('qemu:///system')
xml = File.read('domain.xml')
dom = conn.create_domain_xml(xml)
puts dom.name
dom.destroy if dom.active?
conn.close
```

Here create_domain_xml instantiates a domain, with methods like active? for state checks.[30] Other bindings include those for C#, Go, Java, OCaml, and PHP, each maintained separately for integration into respective ecosystems.[27]
Libvirt enforces strict backward compatibility for its primary public API (libvirt.so and libvirt.h), promising indefinite ABI stability where functions, structs, enums, and constants are never removed or altered—only new elements are added.[31] Symbol versioning tags introductions, ensuring applications built against older releases remain functional, while hypervisor-specific APIs lack such guarantees and require validation per release.[31] Deprecations are minimized without a fixed cycle, prioritizing long-term stability for production use.[31]
Supported Technologies
Hypervisors
Libvirt provides comprehensive support for several primary hypervisors, enabling full lifecycle management of virtual machines (VMs) through its unified API. The KVM/QEMU driver offers the most extensive capabilities, handling QEMU emulators (version 6.2.0 and later) with support for software emulation, KVM hardware acceleration on Linux, and Hypervisor.framework on macOS.[32] This driver facilitates complete VM operations, including creation, startup, migration, and shutdown, across a wide range of architectures.[32] For Xen, libvirt's libxl driver manages both Domain-0 (dom0) host operations and Domain-U (domU) guest VMs, supporting paravirtualized (PV) and fully virtualized (HVM) modes via Xen's libxl toolstack.[33] VMware ESXi integration occurs through the vSphere API (version 2.5+), allowing remote management of ESXi 3.5/4.x/5.x hosts and vCenter servers without requiring a local libvirtd daemon.[34]

In addition to these primary hypervisors, libvirt supports secondary technologies with varying degrees of integration. LXC containers are managed as lightweight virtualization domains on Linux hosts, providing isolation through kernel namespaces and cgroups.[35] The bhyve hypervisor receives full support on FreeBSD, enabling VM lifecycle operations tailored to BSD environments.[35] The Cloud Hypervisor driver, introduced in version 9.1.0 (March 2023), supports basic lifecycle management and is under active development for this Rust-based VMM optimized for cloud workloads.[36] VirtualBox and Hyper-V offer more limited functionality; VirtualBox support is available but not commonly distributed in major Linux repositories, while the Hyper-V driver is client-side only, connecting via WS-Management (WinRM) over HTTP/S to manage Hyper-V 2012 R2 and newer servers without local daemon support.[35][37]

VM configurations across hypervisors are defined using XML-based domain descriptors, which standardize elements for resource allocation while accommodating driver-specific extensions. The <cpu> element specifies model, topology (sockets, cores, threads), and features like pinning or passthrough (e.g., mode='host-passthrough' for KVM/QEMU), allowing emulation of host CPUs or custom architectures.[7] Memory allocation is handled via <memory> for boot-time sizing (in KiB or MiB) and <memoryBacking> for advanced options like hugepages or locked pages, with hypervisor variations such as Xen's support for nosharepages.[7] Device passthrough, including PCI and USB, uses <hostdev> elements, enabling direct hardware access in KVM/QEMU and limited SCSI/network emulation in ESXi via <controller> and <interface> tags.[7]
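To show how such descriptors are consumed, the following hedged Python sketch defines and boots a minimal KVM domain; the name, disk path, and sizing are illustrative assumptions rather than values from this article:

```python
import libvirt

# Minimal KVM domain descriptor; name, disk path, and sizing are
# illustrative assumptions, not values taken from this article.
domain_xml = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <cpu mode='host-passthrough'/>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')
dom = conn.defineXML(domain_xml)  # persistent definition, not yet running
dom.create()                      # boot the defined domain
conn.close()
```

Here conn.defineXML registers a persistent configuration; conn.createXML would instead launch a transient domain that vanishes once shut down.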
Libvirt's hypervisor compatibility emphasizes Linux hosts for full local execution across supported drivers, with partial support for Windows and macOS primarily through remote connections or build toolchains like MinGW and Homebrew.[35] On Linux distributions such as Fedora, Ubuntu, and RHEL, all primary hypervisors operate natively; FreeBSD limits to bhyve and Xen, while macOS supports QEMU/HVF remotely or via builds.[35] Windows integration is constrained to Hyper-V remote management and older build targets (Vista/Server 2008).[35]
Recent evolutions have expanded libvirt's hypervisor capabilities to emerging hardware. ARM64 (AArch64) support was added in version 6.7.0 (August 2020), enabling QEMU/KVM operations on ARM-based hosts like those in cloud and edge computing.[12] In 2023, version 9.1.0 introduced initial support for the Cloud Hypervisor, with ongoing enhancements including networking (version 10.7.0, September 2024) and disk hotplug (version 11.8.0, October 2025). In 2024, version 10.5.0 introduced confidential computing extensions via SEV-SNP (Secure Encrypted Virtualization - Secure Nested Paging) as a <launchSecurity/> type, enhancing VM isolation on AMD platforms for KVM/QEMU and other drivers, with further extensions like SEV-SNP for Cloud Hypervisor in version 11.2.0 (April 2025).[12] These updates ensure libvirt remains adaptable to modern processor architectures and security requirements.[12]
Storage and Networking Management
Libvirt provides abstractions for managing storage resources on the host system, enabling the creation and oversight of storage pools and volumes that virtual machines (VMs) can utilize as disk images or block devices. A storage pool represents a designated quantity of storage, such as a directory, logical volume group, or iSCSI target, allocated for VM use. Supported pool types include directory pools, which manage files like raw or qcow2 images in a host filesystem (the default storage pool is a directory pool with the target path /var/lib/libvirt/images); logical volume manager (LVM) pools, which carve out thin or thick logical volumes from a volume group for efficient space allocation; and iSCSI pools, which connect to remote logical unit numbers (LUNs) on an iSCSI target without supporting on-the-fly volume creation.[38][39]

Storage volumes are subdivisions within these pools, formatted in types such as qcow2 for copy-on-write images supporting snapshots and thin provisioning, or raw for unformatted block access offering direct performance. Operations on volumes include creation via XML definitions, cloning to duplicate volumes efficiently across pools, and snapshotting to capture point-in-time states, which facilitates backup and recovery workflows. These capabilities ensure that storage remains available and optimized for VM attachment, with libvirt handling the underlying backend specifics uniformly through its API.[38][39]

For networking, libvirt abstracts virtual networks that provide connectivity for VMs, including virtual bridges for layer-2 switching, network address translation (NAT) for outbound internet access without exposing guests directly, and VLAN tagging for segmenting traffic on trunked host interfaces. The default NAT network, often implemented via the virbr0 bridge, exemplifies this abstraction: when starting a VM configured for this network, libvirt creates a vnetX tap interface for the VM and attaches it as a slave to the virbr0 bridge, bringing the bridge state to UP and enabling communication; the VM then receives an IP address, typically in the 192.168.122.x range, from the dnsmasq DHCP server, facilitating NAT connectivity.[40][41] Integration with host interfaces occurs via bridged mode, where a physical network interface is enslaved to a Linux bridge for direct VM access to the LAN, or through SR-IOV, which allows passthrough of virtual functions from a physical NIC to achieve near-native performance by bypassing the host kernel. VLAN support extends to both standard bridges and Open vSwitch configurations, enabling tagged traffic isolation.[40][41]

Management of these resources is facilitated by libvirt's C API, with functions such as virStoragePoolDefineXML for defining persistent storage pools from XML descriptions, including validation flags to ensure configuration integrity before activation. Similarly, virNetworkCreate launches a predefined virtual network, transitioning it to an active state with automatic firewall and DHCP setup, while virNetworkCreateXML allows immediate creation and startup from XML for dynamic environments. These APIs abstract backend differences, such as iptables for NAT rules or LVM commands for volume provisioning.[42][43]
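As an illustrative sketch of these calls using the Python binding, mirroring virsh pool-define, pool-start, and pool-autostart (the pool name and target path are assumptions for the example):

```python
import libvirt

# Directory-pool descriptor; the pool name and target path are examples.
pool_xml = """
<pool type='dir'>
  <name>demo-pool</name>
  <target><path>/var/lib/libvirt/demo-pool</path></target>
</pool>
"""

conn = libvirt.open('qemu:///system')
pool = conn.storagePoolDefineXML(pool_xml, 0)  # persistent definition
pool.create(0)        # start it, like `virsh pool-start`
pool.setAutostart(1)  # start on host boot, like `virsh pool-autostart`
# Networks are analogous: conn.networkCreateXML(net_xml) creates and
# starts a virtual network from an XML description.
conn.close()
```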
Advanced features enhance reliability in distributed setups, including persistent storage migration during VM live migration, achieved via the --copy-storage-all flag in virsh migrate, which copies non-shared disks synchronously to the destination host while supporting zero-block detection for sparsification since version 10.9.0 (2024). For networks, firewall rules are automatically generated using iptables or nftables backends (introduced in version 10.4.0, 2024), with enhancements like the libvirt-routed zone for firewalld integration (version 8.10.0, 2022) to permit inbound connections on routed networks without manual intervention.[25][12][44]
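At the API level, the same live storage migration can be requested through the Python binding's migrateToURI; the domain name and destination URI here are illustrative:

```python
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('demo-vm')  # placeholder domain name

# Live migration with a full copy of non-shared disks: the API-level
# counterpart of `virsh migrate --live --copy-storage-all`.
flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PEER2PEER
         | libvirt.VIR_MIGRATE_NON_SHARED_DISK)
dom.migrateToURI('qemu+ssh://dest-host/system', flags, None, 0)
conn.close()
```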
Best practices for storage and networking emphasize proactive monitoring to prevent resource exhaustion, such as regularly querying pool capacities with virStoragePoolGetInfo to avoid over-allocation in LVM or directory pools. It is essential to configure storage pools to autostart (virsh pool-autostart <pool-name>) to ensure they are available after a host reboot. Particular attention should be paid to the default storage pool, which must be active for tools like virt-install to validate installation media successfully. An inactive default pool or inaccessible target directory often leads to errors such as "Could not start storage pool: cannot open directory". To address this, verify pool status with virsh pool-list --all; if the 'default' pool is inactive, activate it using virsh pool-start default. If activation fails due to directory access issues, ensure the target directory exists (sudo mkdir -p /var/lib/libvirt/images), set appropriate ownership and permissions (typically sudo chown root:root /var/lib/libvirt/images and sudo chmod 755 /var/lib/libvirt/images, though some distributions use qemu:qemu or similar), and check for security module restrictions (e.g., SELinux or AppArmor). Review libvirt logs for further details using journalctl -u libvirtd or by examining /var/log/libvirt/libvirtd.log. For custom storage pools, confirm they are active and their target paths are accessible.[38][6]
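In the same spirit, the Python binding's equivalent of virStoragePoolGetInfo can back a simple capacity check; the pool name and the 90% threshold below are arbitrary illustrative choices:

```python
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
pool = conn.storagePoolLookupByName('default')

# info() returns (state, capacity, allocation, available), sizes in bytes.
state, capacity, allocation, available = pool.info()
print('capacity: %d GiB, free: %d GiB' % (capacity >> 30, available >> 30))

if available < capacity * 0.1:  # warn when over ~90% allocated
    print('WARNING: default pool is nearly exhausted')
conn.close()
```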
For networks, defining isolated modes or explicit VLANs minimizes exposure, while using managed SR-IOV pools prevents host interface conflicts. Administrators should validate XML configurations before definition to catch errors early and prefer thin-provisioned volumes in qcow2 format for scalable growth without immediate space commitment.[38]
User Interfaces
Command-Line Interfaces
Libvirt provides several command-line interfaces (CLIs) for managing virtual machines (domains), storage, networks, and other resources, enabling both interactive and automated administration. These tools interact with the libvirt daemon via its API, offering a text-based alternative to graphical interfaces for precise control in server environments or scripts.[45][46]

The primary CLI tool is virsh, a versatile shell and command-line utility for domain lifecycle management across supported hypervisors such as KVM, Xen, and LXC. It allows users to list active and inactive domains with virsh list --all, which displays details like domain ID, name, and state. A defined domain is started with virsh start domain-name, while immediate shutdown is achieved via virsh destroy domain-name, which forcefully terminates the guest without graceful shutdown. Configuration modifications, such as adjusting CPU or memory allocations, can be made persistently using virsh edit domain-name, which opens the domain's XML in the default editor for changes that take effect on the next boot.[45]
Virsh also supports advanced operations like live migration and snapshot management. The virsh migrate --live domain-name dest-uri command performs live migration of a running domain to a remote host, preserving its state with minimal downtime; for example, virsh migrate --live fedora qemu+ssh://remote-host/system transfers the "fedora" domain over SSH. Snapshot reversion is handled by virsh snapshot-revert domain-name snapshot-name, restoring the domain to a previous state from its snapshot list.[45]
Complementing virsh are specialized CLIs for common tasks. virt-install provisions new virtual machines from installation media, supporting KVM, Xen, or container guests; it automates XML generation and installation via options like --name, --ram, and --location for network-based OS trees, as in virt-install --name new-vm --ram 1024 --vcpus 2 --os-variant fedora42 --location http://example.com/os.[47][48]
Common Issues with virt-install
A common error when using virt-install is "Validating install media failed: Could not start storage pool: cannot open directory". This error occurs when libvirt cannot validate the installation media because the storage pool (typically the 'default' pool) cannot be started, usually due to the pool's target directory being inaccessible or non-existent (e.g., /var/lib/libvirt/images). Common causes:

- The storage pool is inactive.
- The pool's target directory does not exist.
- Incorrect permissions on the directory.
- Security modules (SELinux/AppArmor) blocking access.

To resolve the issue:

- Verify storage pools: virsh pool-list --all
- If 'default' is inactive, start it: virsh pool-start default
- If starting fails with "cannot open directory":
  - Ensure the directory exists: sudo mkdir -p /var/lib/libvirt/images
  - Set correct permissions (typically): sudo chown root:root /var/lib/libvirt/images and sudo chmod 755 /var/lib/libvirt/images
  - Or for the qemu user (depends on distro): sudo chown qemu:qemu /var/lib/libvirt/images
- Make the pool autostart: virsh pool-autostart default
- Check libvirt logs for details: journalctl -u libvirtd or /var/log/libvirt/libvirtd.log
- Retry the virt-install command.
virt-clone duplicates an existing virtual machine along with its disk images, as in virt-clone --original old-vm --name new-vm --file /path/to/new-disk.img, automatically updating UUID, MAC addresses, and names to avoid conflicts.[50][48]
For console access, virt-viewer connects to a domain's graphical console using VNC or SPICE protocols, providing a minimal viewer without full management features; invocation is simple, like virt-viewer domain-name, which leverages libvirt to establish the connection.[51][48]
These CLIs integrate well with scripting for batch operations and remote management. Virsh and related tools accept connection URIs to target local or remote daemons, such as qemu+ssh://host/system for secure access over SSH, enabling commands like virsh -c qemu+ssh://remote-host/system list to manage distant systems. In scripts, output can be parsed for automation, such as looping through virsh list --all to start multiple idle domains sequentially.[45][52]
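For instance, a short Python script (the SSH hostname is a placeholder) can replicate the shell loop described above by starting every defined-but-inactive domain on a remote daemon:

```python
import libvirt

# Connect to a remote daemon over SSH; 'remote-host' is a placeholder.
conn = libvirt.open('qemu+ssh://remote-host/system')

# Start every defined-but-inactive domain, mirroring a shell loop
# over the output of `virsh list --all`.
for dom in conn.listAllDomains():
    if not dom.isActive():
        print('starting', dom.name())
        dom.create()
conn.close()
```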
While powerful for automation, these interfaces lack graphical elements, relying on text output and requiring familiarity with XML and command syntax, making them ideal for headless servers but less intuitive for visual monitoring.[45][46]
Graphical and Web Interfaces
Libvirt provides several graphical and web-based interfaces that enable users, particularly non-programmers, to manage virtual machines (VMs) without relying on command-line tools. These frontends leverage libvirt's API to offer intuitive visualizations for VM lifecycle operations, monitoring, and configuration, facilitating easier adoption in desktop and server environments.[46]

Virtual Machine Manager (virt-manager) is a GTK-based desktop graphical user interface designed for comprehensive VM management through libvirt. It supports VM creation via wizards that configure resources and virtual hardware, real-time monitoring with live performance statistics, and embedded VNC or SPICE clients for graphical console access. Additional features include drag-and-drop attachment of ISO files for installation media, performance graphs displaying CPU, memory, disk, and network metrics, and support for remote connections to manage VMs across multiple hosts. Primarily targeted at Linux desktop users, virt-manager allows remote access from Windows or macOS clients via SSH or TLS-secured libvirt connections.[48][53]

GNOME Boxes offers a simplified graphical interface built on libvirt, libosinfo, and QEMU, aimed at desktop virtualization for end-users seeking minimal configuration overhead. It enables quick browsing and access to local or remote VMs and containers, with intuitive customization of machine preferences and performance monitoring through integrated views. Unlike more advanced tools, GNOME Boxes prioritizes ease of use by automating setups for testing operating systems or remote desktop scenarios, while sharing underlying code with virt-manager for robust libvirt integration. It is optimized for GNOME environments on Linux desktops.[54][55]

qt-virt-manager is a Qt-based graphical user interface for managing libvirt-based VMs, networks, storage, and other entities. It provides tools for creating, controlling, and monitoring virtual machines with a focus on cross-platform compatibility and modern Qt widgets.[56][46]

Karton, developed in 2025 as part of KDE's Google Summer of Code program, is a native Qt-Quick/Kirigami-based virtual machine manager for KDE Plasma. It offers a modern interface for listing, configuring, installing, and accessing libvirt-controlled VMs, with features like SPICE integration and streamlined creation wizards, providing a Qt-native alternative to GTK tools. As of November 2025, it is advancing toward integration in KDE environments.[57][46]

For web-based management, Cockpit serves as a browser-accessible graphical interface for Linux servers, incorporating libvirt support via the cockpit-machines module to handle VM creation, storage, networking, and logs. Users can install OS images, view performance graphs correlating CPU, memory, network, and disk activity, and access VM consoles directly in the browser. Cockpit facilitates multi-host management by connecting to remote systems over SSH from a single session, enabling oversight of VMs across distributed environments. It runs on major Linux distributions like Fedora, RHEL, and Ubuntu, with client access available from Windows, macOS, or mobile browsers.[58][59][60]

Kimchi, an older HTML5-based web interface, provides KVM guest management through libvirt as a plugin for the Wok framework. It allows users to create VMs from templates, view live screenshots of running guests, modify storage and network settings, and connect to displays via browser.
Accessed over HTTPS with PAM authentication, Kimchi targets single-host simplification but has seen limited updates since around 2017, making it less commonly used compared to modern alternatives. It supports Linux platforms with KVM hypervisors.[61][46]

Development and Community
Corporate Contributions
Red Hat has served as the primary developer and maintainer of libvirt since its inception in 2005, employing key contributors such as Daniel P. Berrangé and funding the core development team to ensure integration with Red Hat Enterprise Linux (RHEL) Virtualization.[62] This sponsorship model involves Red Hat dedicating engineering resources to upstream development, including feature requests and bug fixes driven by enterprise needs in RHEL.[62]

Other corporations have made targeted contributions to libvirt. IBM contributed to KVM hypervisor support and s390x architecture enhancements starting around 2010, including integration for IBM Secure Execution (introduced in 2019) to enable protected virtualization on IBM Z systems.[63] SUSE has contributed to libvirt's interoperability and daemon configurations, ensuring seamless integration with openSUSE and SUSE Linux Enterprise Server for virtualization management.[64] Starting in 2023, Google has supported cloud-related enhancements through funding student projects via Google Summer of Code, focusing on API improvements for scalable virtualization in cloud environments.[65]

Corporate involvement has directly influenced key features. For instance, Red Hat led the development of live storage migration in 2012, allowing seamless disk relocation for running virtual machines without downtime, a capability critical for enterprise high-availability setups.[66] Multi-vendor efforts, including contributions from Linaro and ARM ecosystem partners, enabled robust ARM architecture support around 2020, broadening libvirt's applicability to edge and mobile computing platforms.

In recent years, particularly by 2025, Intel has increased its contributions to libvirt for confidential computing, enhancing support for Intel Trust Domain Extensions (TDX) to facilitate hardware-encrypted virtual machines that protect against host-level attacks.[67] These efforts include upstream patches for TDX attestation and guest hardening, aligning with broader industry pushes for secure multi-tenant environments.[68]

Community and Governance
Libvirt is governed by a community of maintainers, committers, and contributors who guide its development through consensus. Maintainers oversee major areas like the API, drivers, and tools, while committers review and merge patches. Contributors include individuals and organizations submitting code, documentation, and bug reports. Communication occurs primarily via the project's mailing lists, with separate lists for development discussions and user support. The project participates in outreach programs like Google Summer of Code to engage new developers.[69][70]

Licensing and Distributions
Libvirt's core library is distributed under the GNU Lesser General Public License (LGPL) version 2.1 or later, which permits dynamic linking with proprietary software while requiring that any modifications to the library itself be made available under the same license.[71] In contrast, the accompanying tools, such as the virsh command-line interface and the libvirtd daemon, are licensed under the GNU General Public License (GPL) version 2.0 or later, ensuring that derivative works incorporating these components adhere to copyleft requirements.[10] This dual-licensing approach facilitates broad adoption by allowing integration into both open-source and closed-source environments without imposing restrictive obligations on users of the library alone.[3]

Libvirt is available as native packages across major Linux distributions, including Fedora, Ubuntu, Debian, and CentOS Stream, where it is maintained through official repositories to ensure compatibility with the host system's kernel and dependencies.[10] For instance, Fedora provides libvirt via RPM packages integrated with its virtualization stack, while Ubuntu and Debian offer DEB packages that align with their respective ecosystem standards.[72] Ports exist for non-Linux systems as well, such as FreeBSD through its ports collection and macOS via the Homebrew package manager, enabling cross-platform management of virtualization resources.[73][74]

Installation options for libvirt include pre-built packages from distribution repositories (RPM for Fedora and CentOS, DEB for Ubuntu and Debian), which handle automatic dependency resolution, and source compilation for custom configurations.[10] Source builds require tools like Meson and Ninja, along with key dependencies such as QEMU for hypervisor support and systemd for daemon management on compatible systems.[10] These variants ensure flexibility, with package managers preferred for production deployments to maintain security updates and system integration.[75]

Distribution versions of libvirt typically track stable upstream releases, with Ubuntu 24.04 LTS shipping version 10.0.0 and providing backports for long-term support releases to incorporate security fixes and enhancements without full upgrades. As of November 2025, the latest stable release is version 11.9.0, released on November 3, 2025. Similar alignment occurs in other distributions, where maintainers synchronize with libvirt's biannual release cycle to balance stability and new features.[12][35] As a free and open-source software (FOSS) project, libvirt adheres to principles of openness and interoperability, promoting no vendor lock-in by standardizing virtualization management across diverse platforms and hypervisors.[8]

Integrations and Applications
Cloud Orchestration
Libvirt plays a central role in cloud orchestration by serving as the virtualization backend for major platforms, enabling the management of virtual machines (VMs) at scale in distributed environments. In OpenStack, libvirt is the default and most commonly used driver for the Nova compute service, handling the creation, migration, and lifecycle management of KVM-based instances across clusters. This integration allows Nova to leverage libvirt's API for low-level hypervisor operations, ensuring consistent VM provisioning in multi-node deployments. Additionally, OpenStack's Heat orchestration engine supports the deployment of libvirt-managed resources through templates that define stacks including compute instances, storage, and networking, facilitating automated infrastructure-as-code workflows in cloud environments.[76][77]

For container-orchestration platforms like Kubernetes, libvirt integrates with extensions such as KubeVirt, a Kubernetes-native virtualization layer that embeds libvirt in pods to provide a dedicated API for VM management, enhancing orchestration for non-containerizable applications.[78][79]

Enterprise virtualization platforms further extend libvirt's orchestration capabilities. oVirt, an open-source management solution, relies on libvirt via its VDSM daemon to oversee KVM VMs across data centers, providing centralized control for storage, networking, and high-availability features. As the upstream project for Red Hat Enterprise Virtualization (RHEV), oVirt uses libvirt for core VM operations, enabling scalable deployments in enterprise settings with support for live migration and resource pooling. Proxmox VE, while primarily built on direct QEMU/KVM interactions, allows supplementary use of libvirt tools for advanced VM configurations on its platform, though it employs a custom toolkit for core management to simplify web-based orchestration.[80][46][81]

Libvirt's design supports scalability in cloud clusters through features like multi-tenant isolation and API controls. Multi-tenant isolation is achieved via sVirt, which integrates Mandatory Access Control (MAC) like SELinux to enforce domain separation between VMs from different tenants, preventing unauthorized access in shared environments such as OpenStack deployments. For large-scale operations, libvirt implements rate limiting in areas like network filter evaluations to mitigate denial-of-service risks from excessive packet processing, and ongoing developments include dirty page rate limits for efficient live migrations in clustered setups. These mechanisms ensure reliable performance in orchestrators handling thousands of VMs.[82][83]

In hybrid cloud scenarios, libvirt underpins integrations observed in deployments like AWS Outposts and Azure Stack. For instance, OpenShift clusters on AWS Outposts utilize libvirt-backed KVM for remote worker nodes, extending cloud-native orchestration to on-premises hardware while maintaining compatibility with AWS services.[84] These case studies highlight libvirt's versatility in providing consistent virtualization layers across hybrid boundaries.

Security and Best Practices
Libvirt provides a robust framework for securing virtualization environments through authentication, authorization, isolation mechanisms, and encrypted communications. Security begins at the daemon level, where libvirtd should run as a non-privileged user to minimize host exposure, with socket permissions restricted to authorized groups.[85] Mandatory access control systems like SELinux or AppArmor are enabled by default in many distributions to confine QEMU processes, preventing guest escapes or unauthorized host resource access.[86] Administrators must configure these policies carefully, such as labeling disk images with appropriate SELinux types like svirt_image_t for exclusive access.[87]
For remote management, Transport Layer Security (TLS) is recommended to encrypt connections and authenticate clients and servers using x509 certificates. Setup involves generating a Certificate Authority (CA) key and self-signed certificate, then signing server and client certificates for deployment in standardized paths like /etc/pki/libvirt/.[88] This prevents man-in-the-middle attacks on the libvirt RPC protocol, with best practices including using a central CA for simplified revocation and ensuring private keys remain protected.[89] Authentication can further integrate with SASL mechanisms, though insecure ones like plaintext should be disabled in libvirtd.conf.[90]
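As a hedged sketch of client-side authentication with the Python binding, libvirt.openAuth supplies credentials to SASL-style prompts over a TLS connection; the hostname, username, and password below are placeholders:

```python
import libvirt

def request_cred(credentials, user_data):
    # Fill in SASL-style prompts; the values here are placeholders.
    for cred in credentials:
        if cred[0] == libvirt.VIR_CRED_AUTHNAME:
            cred[4] = 'admin'
        elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
            cred[4] = 'secret'
    return 0

auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
        request_cred, None]

# x509/TLS-encrypted remote connection; 'remote-host' is a placeholder.
conn = libvirt.openAuth('qemu+tls://remote-host/system', auth, 0)
print(conn.getHostname())
conn.close()
```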
Authorization uses pluggable access control drivers, such as PolicyKit (PolKit), to enforce fine-grained permissions on API calls based on user identities and object attributes like domain UUIDs.[91] By default, connections are unauthenticated until verified, granting only read-only access post-authentication unless full privileges are assigned. Best practices include defining rules in PolKit configuration files to restrict operations like domain creation to specific users or groups, avoiding the insecure none driver.[92]
Virtual machine isolation is enhanced by avoiding automatic disk format detection on untrusted images to prevent exploitation via malicious headers that could leak host data.[85] Instead, explicitly specify formats like qcow2 and validate images for backing chains or oversized logical sizes before import. For storage, encrypt virtual disks using LUKS or similar to protect against physical theft or rogue admins, especially in shared environments. Networking should use isolated virtual networks with ebtables rules, restricting guest-to-host traffic beyond necessary migration ports.[85]
Device passthrough requires careful security labeling to mitigate risks of guest access to host resources; for instance, use SELinux contexts like virt_content_t for read-only host files passed to guests.[87] Disable confinement only when necessary via <seclabel model='none'/> in domain XML, but prefer mediated devices like VFIO for PCI passthrough to enforce IOMMU isolation. For confidential computing, AMD Secure Encrypted Virtualization (SEV) can be enabled in libvirt (version 4.5.0+) by configuring <launchSecurity type='sev'/> in the domain XML, requiring kernel parameters like mem_encrypt=on and attestation via tools like sevctl. Intel Trust Domain Extensions (TDX) is supported as of October 2025 with <launchSecurity type='tdx'/>, enabling hardware-isolated VMs on compatible Intel hardware for enhanced data protection in use. Validation tools like virt-qemu-sev-validate ensure measurement integrity before launch.[15][12][93]
Ongoing best practices include regular updates to libvirt and hypervisors to address vulnerabilities, monitoring via virt-host-validate for secure features, and reporting issues through the dedicated security team process rather than public bug trackers.[69] Avoid running unnecessary services on the host and use dedicated partitions for guest storage to limit scan scopes in LVM configurations.[91]