Proxmox Virtual Environment
| Proxmox Virtual Environment | |
|---|---|
| Screenshot | Proxmox VE 8.0 administration interface |
| Developer | Proxmox Server Solutions GmbH |
| Written in | Perl,[1] Rust[2] |
| OS family | Linux (Unix-like) |
| Working state | Current |
| Source model | Free and open source software |
| Initial release | 15 April 2008 |
| Latest release | 9.0[3] |
| Latest preview | 8.0 beta1[4] / 9 June 2023 |
| Repository | |
| Available in | 25 languages[5] |
| Update method | APT |
| Package manager | dpkg |
| Supported platforms | AMD64 |
| Kernel type | Monolithic (Linux) |
| Userland | GNU |
| Default user interface | Web-based |
| License | GNU Affero General Public License[6] |
| Official website | www.proxmox.com |
Proxmox Virtual Environment (PVE, or simply Proxmox) is a virtualization platform designed for the provisioning of hyper-converged infrastructure.
Proxmox allows deployment and management of virtual machines and containers.[7][8] It is based on a modified Ubuntu LTS kernel.[9] Two types of virtualization are supported: container-based virtualization with LXC (which, starting from version 4.0, replaced the OpenVZ used up to and including version 3.4[10]), and full virtualization with KVM.[11]
It includes a web-based management interface.[12][13] There is also a mobile application available for controlling PVE environments.[14]
Proxmox is released under the terms of the GNU Affero General Public License, version 3.
History
Development of Proxmox VE started in 2005 when Dietmar Maurer and Martin Maurer, two Linux developers, discovered OpenVZ had no backup tool or management GUI. KVM was also appearing at the same time in Linux, and was added shortly afterwards.[15]
The first public release took place in April 2008. It supported container and full virtualization, managed with a web-based user interface similar to other commercial offerings.[16]
Features
Proxmox VE is an open-source server virtualization platform that manages two virtualization technologies, Kernel-based Virtual Machine (KVM) for virtual machines and LXC for containers, from a single web-based interface.[11] It also integrates out-of-the-box tools for configuring high availability between servers, software-defined storage, networking, and disaster recovery.[17]
Proxmox VE supports live migration of guest machines between nodes within a single cluster, allowing them to be moved without interrupting their services.[18] Since PVE 7.3 there is an experimental feature for migration between unrelated nodes in different clusters.[19]
To authenticate users to the web GUI, Proxmox can use its own internal authentication database, PAM, OIDC, LDAP or Active Directory.[20] Multi-factor authentication is also available using TOTP, WebAuthn, or YubiKey OTP.[21]
Since PVE 8.1, a full Software-Defined Networking (SDN) stack is included, and the distribution is compatible with Secure Boot.[22]
Guest machine backups can be done using the included standalone vzdump tool.[23] PVE can also be integrated with a separate Proxmox Backup Server (PBS) using a web GUI,[24] or with the text-based Proxmox Backup Client application.[25]
Since PVE 8, alongside the standard graphical installer, a text-based (TUI) installer is integrated into the ISO image.[20] From PVE 8.2 it is possible to perform automated, scripted installations.[26]
High-Availability Cluster
Proxmox VE (PVE) can be clustered across multiple server nodes.[27]
Since version 2.0, Proxmox VE has offered a high availability option for clusters based on the Corosync communication stack. Starting from PVE 6.0, Corosync 3.x is used (not compatible with earlier PVE versions). Individual virtual servers can be configured for high availability using the integrated HA manager.[28][29] If a Proxmox node becomes unavailable or fails, its virtual machines can be automatically moved to another node and restarted.[30] The database- and FUSE-based Proxmox Cluster file system (pmxcfs[31]) replicates the configuration of each cluster node via the Corosync communication stack, using an SQLite engine.[13]
Another HA-related element in PVE is the distributed file system Ceph, which can be used as a shared storage for guest machines.[32]
There is also an independent tool available for rebalancing virtual machines and containers between nodes, called Prox Load Balancer (ProxLB).[33]
Virtual Appliances
Proxmox VE has pre-packaged server software appliances which can be downloaded via the GUI.[34]
Proxmox Datacenter Manager
At the end of 2024 it was announced that Proxmox Datacenter Manager (PDM) was being developed. Its role is to aggregate management of multiple PVE clusters or hosts, potentially thousands. The first release was labelled alpha, with beta and stable versions expected in 2025.[35] The beta version (0.9), released in September 2025, lacks notifications and console access for nodes, but allows pulling error logs from nodes and managing their updates. It enables simple migration of virtual machines and LXC containers between managed nodes.[36]
References
- ^ "Proxmox Manager Git Tree". Retrieved 4 March 2019.
- ^ "Proxmox VE Rust Git Tree". git.proxmox.com.
- ^ "Proxmox Virtual Environment 9.0 with Debian 13 released". 5 August 2025. Retrieved 5 August 2025.
- ^ "Roadmap". Proxmox. Retrieved 2014-12-03.
- ^ "projects / proxmox-i18n.git / tree". Retrieved 16 November 2022.
- ^ "Open Source – Proxmox VE". Proxmox Server Solutions. Retrieved 17 July 2015.
- ^ Simon M.C. Cheng (31 October 2014). Proxmox High Availability. Packt Publishing Ltd. pp. 41–. ISBN 978-1-78398-089-5.
- ^ Plura, Michael (July 2013). "Aus dem Nähkästchen". IX Magazin. 2013 (7). Heise Zeitschriften Verlag: 74–77. Retrieved July 20, 2015.
- ^ "Proxmox VE Kernel - Proxmox VE". pve.proxmox.com. Retrieved 2017-05-26.
- ^ "Proxmox VE 4.0 with Linux Containers (LXC) and new HA Manager released". Proxmox. 11 December 2015. Retrieved 12 December 2015.
- ^ a b Ken Hess (July 11, 2011). "Proxmox: The Ultimate Hypervisor". ZDNet. Retrieved September 29, 2021.
- ^ Vervloesem, Koen. "Proxmox VE 2.0 review – A virtualization server for any situation", Linux User & Developer, 11 April 2012. Retrieved on 16 July 2015.
- ^ a b Drilling, Thomas (May 2013). "Virtualization Control Room". Linux Pro Magazine. Linux New Media USA. Retrieved July 17, 2015.
- ^ "Proxmox Virtual Environment". Google Play. Proxmox Server Solutions GmbH. Retrieved 12 November 2023.
- ^ "Proxmox VE 1.5: combining KVM and OpenVZ". Linux Weekly News. Retrieved 2015-04-10.
- ^ Ken Hess (April 15, 2013). "Happy 5th birthday, Proxmox". ZDNet. Retrieved October 4, 2021.
- ^ "Features". www.proxmox.com. Retrieved 2019-05-12.
- ^ Rajvanshi, Akash. "Proxmox 101". Retrieved 12 November 2023.
- ^ "How to migrate VM from one PVE cluster to another". Proxmox Forums. Retrieved 12 November 2023.
- ^ a b Lee, Brandon. "Proxmox 8: New Features and Home Lab Upgrade Instructions". Retrieved 12 November 2023.
- ^ Lee, Brandon (2024-03-22). "First 10 Steps I do on Proxmox in 2024". Virtualization Howto. Retrieved 2024-03-28.
- ^ Borisov, Bobby. "Proxmox VE 8.1 Introduces Secure Boot Compatibility". Linuxiac. Retrieved 3 December 2023.
- ^ "Backup of a running container with vzdump". OpenVZ Wiki. Retrieved 12 November 2023.
- ^ "Getting Started With Proxmox Backup Server". Retrieved 12 November 2023.
- ^ "How To Use Proxmox Backup Client To Backup Files In Linux". Retrieved 12 November 2023.
- ^ Smith, Lyle. "Proxmox VE 8.2 Introduces VMware Import Wizard, Enhanced Backup Options, and Advanced GUI Features". StorageReview. Retrieved 24 April 2024.
- ^ Wasim Ahmedi (2014-07-14). Mastering Proxmox. Packt Publishing Ltd. pp. 99–. ISBN 978-1-78398-083-3.
- ^ "PVE HA Manager Source repository". Retrieved 2020-10-19.
- ^ "Proxmox VE documentation: High Availability". Retrieved 2020-10-19.
- ^ "High Availability Virtualization using Proxmox VE and Ceph". Jacksonville Linux Users' Group. Archived from the original on 2020-11-30. Retrieved 2017-12-15.
- ^ "Proxmox Cluster File System (pmxcfs)". Proxmox VE Administration Guide. Retrieved 15 November 2022.
- ^ Ladyzhenskyi, Pavel. "Setting up a Proxmox VE cluster with Ceph shared storage". Medium.com. Retrieved 12 November 2023.
- ^ "ProxLB - The Prox Load Balancer for Proxmox". ProxLB. Retrieved 2 April 2025.
- ^ "The next server operating system you buy will be a virtual machine". ZDNET. 15 October 2013. Retrieved 20 July 2015.
- ^ t.lamprecht (Dec 19, 2024). "Proxmox Datacenter Manager - First Alpha Release". Proxmox forum. Retrieved 9 July 2025.
- ^ Pande, Ayush. "Proxmox Datacenter Manager is an underrated tool for your PVE servers". XDA. Retrieved 31 October 2025.
Proxmox Virtual Environment
History
Origins and Development
Proxmox Virtual Environment (Proxmox VE) originated from the efforts of Proxmox Server Solutions GmbH, a company founded in February 2005 in Vienna, Austria, by brothers Martin Maurer and Dietmar Maurer.[6] The company's initial focus was on developing efficient, open-source, Linux-based software to enable secure, stable, and scalable IT infrastructures, addressing the high costs and licensing restrictions of proprietary virtualization platforms like VMware.[6] This motivation stemmed from the need for accessible alternatives that small and medium-sized enterprises could deploy without significant financial barriers, leveraging free and open-source technologies.[7]

Development of Proxmox VE began in 2007, with the project emphasizing the integration of container-based virtualization via OpenVZ and full virtualization through the KVM hypervisor, all managed via an intuitive web-based interface.[7] The first public release, version 0.9, arrived on April 15, 2008, marking the debut of this unified platform built directly on Debian GNU/Linux as its base distribution to ensure stability and broad hardware compatibility from the outset.[8] Key early contributors included the Maurer brothers, who led the core development, alongside an emerging community of open-source enthusiasts contributing to initial testing and refinements.[6]

Over time, Proxmox VE evolved from a purely community-driven initiative to a hybrid model supported by enterprise services from Proxmox Server Solutions GmbH, introducing subscription-based options for stable repositories, professional support, and enhanced features while maintaining the core codebase under the GNU Affero General Public License version 3 (AGPLv3).[9] This licensing choice, adopted early on, ensured that the software remained freely available for modification and redistribution, fostering widespread adoption and ongoing contributions from the open-source community.[7] The Debian foundation continued to underpin this growth, providing a reliable ecosystem for integrating virtualization tools without deviating from the project's open-source ethos.[2]

Major Releases and Milestones
Proxmox VE 2.0, released in March 2012, marked a significant evolution by introducing a REST API for programmatic management and improved clustering capabilities powered by Corosync for enhanced reliability.[8][10] In October 2015, Proxmox VE 4.0 shifted its base to Debian 8 (Jessie), replaced OpenVZ with LXC for container support, bolstered ZFS support with version 0.6.5.3 including root filesystem capabilities, and implemented initial high-availability fencing mechanisms via a new HA Manager that utilized watchdog-based isolation for streamlined cluster resilience.[11][12] Proxmox VE 6.0 arrived in July 2019 on Debian 10 (Buster), featuring integrated Ceph Nautilus storage with a dedicated dashboard for cluster-wide monitoring and a redesigned, modern graphical user interface that improved usability across web and mobile access.[13][14]

The June 2023 release of Proxmox VE 8.0, built on Debian 12 (Bookworm), advanced backup processes with native incremental support via Proxmox Backup Server integration, introduced VirtIO-fs for efficient shared filesystem access between host and guests, and refined software-defined networking (SDN) with better zone management and VLAN awareness.[15][16] Proxmox VE 9.0, released on August 5, 2025, and based on Debian 13 "Trixie", enabled VM snapshots on thick-provisioned LVM storage for greater flexibility in backup strategies, added SDN "fabrics" for simplified multi-tenant network orchestration, and incorporated high-availability affinity rules to optimize resource placement in clusters.

Proxmox VE provides an official in-place upgrade path from version 8 to 9, which requires first updating to the latest Proxmox VE 8.4, creating backups, running the pve8to9 checklist script, updating the APT repositories to use the "trixie" suite and the Proxmox VE 9 repositories, performing apt dist-upgrade, and rebooting afterward. It is recommended to always verify backups and test the upgrade in non-production environments first due to potential breaking changes or hardware compatibility issues.[17][18][4][19]
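In shell terms, that documented upgrade flow condenses to roughly the following steps. This is a minimal sketch assuming classic one-line APT source files (Proxmox VE 9 also ships deb822-style .sources files), so the exact repository paths are illustrative rather than authoritative:
# Bring the node to the latest Proxmox VE 8.4 first.
apt update && apt dist-upgrade
# Run the upgrade checklist script and fix any reported issues.
pve8to9 --full
# Switch the Debian suite and the Proxmox repositories from bookworm to trixie (illustrative paths).
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/pve-*.list
# Perform the major upgrade to Proxmox VE 9 and reboot.
apt update && apt dist-upgrade
systemctl reboot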
Proxmox VE 9.1, released on November 19, 2025, is a point release following Proxmox VE 9.0 (August 5, 2025), based on Debian 13.2 "Trixie" with Linux kernel 6.17. Key features include support for creating LXC containers from OCI images, TPM state storage in qcow2 format for snapshots, fine-grained nested virtualization control via vCPU flags, and enhanced SDN monitoring in the GUI (e.g., connected guests, EVPN zones, routes, and interfaces).[5][4][20]
Key milestones include the project's 10-year anniversary in April 2018, celebrating its growth from initial open-source roots to a robust enterprise platform with widespread adoption in virtualization environments.[8]
Overview
Core Functionality
Proxmox Virtual Environment (Proxmox VE) is an open-source virtualization platform based on Debian GNU/Linux, designed to manage KVM hypervisor-based virtual machines and LXC Linux containers within a single, unified interface. This integration allows administrators to deploy and oversee both full virtual machines and lightweight containers efficiently on the same host system, leveraging QEMU/KVM for hardware-assisted virtualization and LXC for OS-level containerization.[1][21]

At its core, Proxmox VE enables centralized orchestration of compute, storage, and networking resources, particularly in hyper-converged infrastructure configurations where these elements are consolidated on shared server nodes to simplify management and scale operations. This approach supports software-defined storage solutions like Ceph or ZFS and virtual networking via bridges or SDN, all coordinated through the platform's built-in services without requiring separate tools.[22][21]

The platform's primary management interface is a web-based graphical user interface (GUI), accessible securely via HTTPS on TCP port 8006 (https://<server-IP>:8006), which provides comprehensive dashboards for resource monitoring, VM/container creation, and configuration tasks. The web interface uses the username root and the password set by the user during installation; there is no predefined default password, for security reasons. Authentication is handled via PAM (Pluggable Authentication Modules) by default.

Common issues with the web GUI include a blank page accompanied by JavaScript errors related to ExtJS, frequently caused by browser extensions such as ad blockers (e.g., uBlock Origin, Privacy Badger, or NoScript) interfering with ExtJS resources. Other causes can include browser cache issues, strict tracking protection in browsers like Firefox, or outdated browser versions.[23] To resolve these client-side issues:
- Disable browser extensions or use incognito/private browsing mode (which often disables extensions by default).
- Clear browser cache and cookies for the Proxmox site.
- Try accessing the GUI with a different browser (e.g., switching from Firefox to Chrome).
- Ensure access via HTTPS using the correct hostname rather than just the IP address if certificate issues are present.
- Check the browser console (F12) for specific blocked URLs or error messages to identify the cause.
Target Use Cases
Proxmox Virtual Environment (VE) is deployed in enterprise data centers to provide cost-effective orchestration of virtual machines and containers, leveraging its open-source architecture under the GNU AGPLv3 license, which eliminates proprietary licensing expenses.[2] This makes it an attractive alternative to VMware ESXi for small and medium-sized businesses (SMBs), where budget constraints and the need for full-featured virtualization without ongoing costs drive migrations.[29] For instance, in Brazil, public health facilities rely on Proxmox VE to deliver enterprise-grade high-availability databases across distributed sites.[30]

In homelab and development setups, Proxmox VE enables users to test multi-operating-system environments efficiently, supported by its free licensing model that avoids fees associated with commercial hypervisors.[2] Its intuitive web-based interface facilitates rapid prototyping and experimentation on personal hardware, allowing developers to simulate complex infrastructures without financial barriers.[29] Additionally, Proxmox VE offers high flexibility for homelabs by supporting the execution of NAS virtual machines, such as TrueNAS, or disk passthrough for direct ZFS management, providing advantages over dedicated NAS systems through integrated virtualization and storage capabilities.[31][32]

For edge computing and remote sites, Proxmox VE's lightweight resource footprint, requiring minimal overhead for mixed workloads, and capabilities like serial console access for headless management suit environments with limited connectivity.[28][29] These attributes enable offline operation post-installation, ideal for distributed deployments such as manufacturing facilities or isolated outposts.[33] A notable example is Eomnia's renewable-powered data center in Antarctica, which uses a Proxmox VE cluster for storage and recovery in extreme remote conditions.[34]

Proxmox VE facilitates hybrid cloud integrations by allowing API-driven exports and backups of workloads to public clouds like AWS or Azure, enabling burst-capacity scaling during peak demands.[35] This approach supports seamless tiering of resources for off-site redundancy, combining on-premises control with cloud elasticity.[35]

Case studies highlight its adoption in education and healthcare by 2025. In university labs, the Austrian college HTL Leonding employs Proxmox VE to teach computer networking, supporting 450 students across 18 classes with a clustered environment for hands-on virtualization exercises.[36] Similarly, academic institutions in Ukraine have developed Proxmox-based clouds for training computer science educators, emphasizing scalable resource allocation for pedagogical simulations.[37] In healthcare, secure isolated workloads are managed effectively, as demonstrated by the Portuguese online pharmacy Farmácia Nova da Maia, which uses Proxmox VE with ZFS for high-availability operations ensuring data integrity and compliance.[38] By 2025, such implementations underscore its role in protecting sensitive patient data through containerized isolation and redundancy.[30]

Installation
Proxmox VE is installed directly on bare-metal hardware using a hybrid ISO image that includes a complete Debian-based system with all necessary Proxmox VE packages. The official installation guide is available at https://pve.proxmox.com/pve-docs/chapter-pve-installation.html, and the ISO image can be downloaded from https://www.proxmox.com/en/downloads.

Prerequisites include a 64-bit CPU with Intel VT-x or AMD-V support, at least 2 GB of RAM recommended (1 GB minimum for evaluation), a network card, and a backup of all data on the target disks, as the installation erases all existing data on the selected disks. The hardware requirements for Proxmox VE have remained consistent through 2025 and into 2026, with no major changes from prior versions. The official minimum/recommended requirements (applicable to the latest versions) are as follows:[39]
- CPU: 64-bit Intel or AMD64 with Intel VT/AMD-V support (required for KVM virtualization); VT-d/AMD-Vi for PCI(e) passthrough.
- Memory: Minimum 2 GB for OS and Proxmox VE services (1 GB for evaluation/testing only), plus additional for guests. For Ceph/ZFS, add ~1 GB per TB of storage.
- Storage: Fast and redundant (SSD recommended for best performance); OS on hardware RAID with BBU or non-RAID with ZFS; VM storage options include RAID, ZFS, Ceph (no hardware RAID with ZFS/Ceph).
- Network: At least one NIC (minimum); redundant Gbit NICs recommended for production, with dedicated networks for cluster/Ceph traffic.
- Minimum for evaluation only: 1 GB RAM, basic hard drive, one NIC.
The installation procedure is as follows (a minimal media-preparation example follows the list):
- Download the Proxmox VE ISO image.
- Prepare bootable USB/CD media from the hybrid ISO using tools such as dd, Etcher, or Rufus.
- Boot from the media (disable Secure Boot in BIOS/UEFI for versions prior to 8.1 if necessary).
- Select the target disk(s), noting that all data on them will be erased.
- Choose a filesystem (ext4, XFS, BTRFS, or ZFS).
- Configure timezone, language/keyboard layout, root password, email address for notifications, and network settings.
- Review the configuration and proceed with installation.
- Reboot the system upon completion.
- Access the web interface at https://[IP address]:8006 using the root username and the password set during installation.
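As a sketch of the media-preparation step on a Linux workstation, the hybrid ISO can be written to a USB stick with dd; the ISO filename and the /dev/sdX device name are placeholders, and everything on the target device is overwritten:
# Write the hybrid ISO to the USB device (destroys all data on /dev/sdX).
dd if=proxmox-ve_9.0-1.iso of=/dev/sdX bs=1M conv=fsync status=progress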
Architecture
Underlying Platform
Proxmox Virtual Environment (Proxmox VE) is built upon Debian GNU/Linux as its foundational operating system, providing a stable and widely supported base for virtualization and containerization tasks. The latest release, Proxmox VE 9.0, is based on Debian 13 "Trixie", incorporating updated packages and security enhancements from this Debian version to ensure compatibility and reliability in enterprise and homelab environments.[17][4]

At the kernel level, Proxmox VE employs a custom Linux kernel derived from upstream sources, with version 6.14.8-2 serving as the stable default in Proxmox VE 9.0. This kernel includes essential modules for hardware virtualization, such as KVM (Kernel-based Virtual Machine), enabling efficient hardware-assisted virtualization for guest operating systems. Additional patches in the Proxmox kernel optimize performance for storage, networking, and container isolation, while optional newer kernels such as 6.17 are available for advanced users seeking cutting-edge features.[40][41]

Core virtualization technologies are integrated directly into the platform without relying on external hypervisor abstractions. QEMU provides full virtual machine emulation, supporting a wide range of guest architectures and hardware passthrough, while LXC handles OS-level containerization for lightweight, efficient deployment of Linux-based workloads. These components leverage the kernel's capabilities to deliver both full isolation in VMs and shared-kernel efficiency in containers.[1][42]

Package management in Proxmox VE utilizes the Debian APT system, allowing seamless installation and updates of software components. Users can access the no-subscription community repository for free, timely updates derived directly from Debian and Proxmox testing, or opt for the enterprise repository, which offers rigorously tested packages with extended stability guarantees for production use. This dual-repository model ensures flexibility while prioritizing security and reliability in updates.[43]

Built-in security features enhance the underlying platform's robustness from the ground up. AppArmor mandatory access control profiles are enforced for LXC containers, restricting process capabilities and file access to mitigate potential exploits in containerized environments. Additionally, two-factor authentication (2FA) is natively supported via TOTP (Time-based One-Time Password) methods, such as those compatible with Google Authenticator, adding a critical layer of protection for administrative access to the Proxmox VE web interface and API.[42][44]
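As an illustration of the repository choice on a Proxmox VE 8 node (Debian 12 "Bookworm"), the free no-subscription repository can be enabled with a one-line APT source; Proxmox VE 9 uses the "trixie" suite and deb822-style .sources files instead, so the path and suite here are illustrative rather than authoritative:
# Enable the community (no-subscription) repository and refresh the package index.
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update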
Key Components and Services

The pve-manager service serves as the core management component in Proxmox Virtual Environment, responsible for orchestrating API requests, user authentication, and persistent configuration storage within the /etc/pve directory, which utilizes the Proxmox Cluster File System (pmxcfs) for replicated access across nodes.[45][46] This setup ensures centralized management of system settings, including user permissions and realm configurations, integrated with authentication backends like PAM or LDAP.

The pvedaemon operates as the privileged REST API daemon, running under root privileges to execute operations requiring elevated access, such as VM and container lifecycle management, including start, stop, and migration events, through integration with QEMU for virtual machines and LXC for containers via predefined hooks.[45][47] It delegates incoming requests to worker processes, typically three by default, to handle these tasks efficiently, ensuring secure and concurrent processing of privileged commands without direct exposure from the web interface.[47]

Complementing pvedaemon, the pveproxy daemon functions as the unprivileged API proxy, listening on TCP port 8006 over HTTPS and running as the www-data user with restricted permissions to serve the web-based management interface.[45][48] It forwards requests needing higher privileges to the local pvedaemon instance, maintaining security isolation while enabling cluster-wide API access from any node.[48] Server-side issues affecting the web GUI, such as failure to load properly, can often be resolved by restarting the pveproxy service with the command systemctl restart pveproxy.
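A minimal troubleshooting sketch for these daemons, using standard systemd tooling and the service names described above:
# Inspect the API proxy, the privileged daemon, and the status daemon on a node.
systemctl status pveproxy pvedaemon pvestatd
# Check the journal/syslog for recent pveproxy errors.
journalctl -u pveproxy --since "10 minutes ago"
# Restart the unprivileged proxy that serves the web GUI on port 8006.
systemctl restart pveproxy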
Storage backends in Proxmox VE are configured through datacenter-wide storage pools, supporting diverse types such as LVM-thin for thin-provisioned local block storage with snapshot and clone capabilities, directory-based storage for file-level access on existing filesystems, and iSCSI initiators for shared block-level storage over networks.[49][50] These backends are defined in /etc/pve/storage.cfg, allowing flexible pooling of local and remote resources for VM disks, container images, and ISO files without manual mounting in most cases.[49] For instance, LVM-thin pools enable overcommitment of storage space, while iSCSI supports dynamic discovery of targets via the pvesm tool.[51][52]
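For illustration, entries in /etc/pve/storage.cfg for the backend types named above might look like the following sketch; the first two blocks resemble a default installation, while the pool, volume-group, portal, and target names are placeholders rather than values to rely on. The dir block exposes a local directory for ISOs, templates, and backups; the lvmthin block holds thin-provisioned guest disks; the iscsi block attaches a remote target:
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

iscsi: san1
    portal 192.168.1.50
    target iqn.2003-01.org.example:storage.target1
    content images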
Logging and monitoring are facilitated by the pvestatd daemon, which continuously queries and aggregates real-time status data for all resources, including VMs, containers, and storage pools, broadcasting updates to connected clients for dashboard displays.[45] This service integrates with the system's syslog for event logging, capturing operational details like service starts, errors, and resource utilization to support troubleshooting and performance oversight.[45] Proxmox VE, built on Debian, leverages standard syslog facilities alongside pvestatd's metrics collection for comprehensive internal monitoring.[21]
Features
Virtualization Technologies
Proxmox Virtual Environment (Proxmox VE) supports two primary virtualization technologies: full virtualization via the Kernel-based Virtual Machine (KVM) hypervisor and OS-level virtualization via Linux Containers (LXC). These options allow users to deploy virtual machines (VMs) and containers tailored to different workload requirements, with KVM providing hardware emulation for broad OS compatibility and LXC offering lightweight isolation for Linux-based environments.[7]

KVM in Proxmox VE leverages QEMU for device emulation, enabling the creation of fully virtualized VMs that support a wide range of guest operating systems, including Windows, Linux distributions, and BSD variants. Proxmox VE 9.1 introduces fine-grained control over nested virtualization through new vCPU flags, allowing administrators to selectively enable specific CPU features for exposure to guest VMs to support nested hypervisor environments more precisely.[5] The hypervisor utilizes paravirtualized drivers, such as VirtIO for block, network, and other devices, which reduce emulation overhead by allowing guests to communicate directly with the host kernel for improved I/O performance. Live migration is supported for running VMs, facilitating seamless transfers between cluster nodes without downtime when shared storage is available. Additionally, snapshotting capabilities, including live snapshots that capture VM memory state, enable point-in-time backups and quick rollbacks. Proxmox VE 9.1 enhances snapshot functionality with support for storing TPM state in qcow2 format, allowing snapshots of virtual machines configured with emulated Trusted Platform Module (TPM) devices to preserve TPM state correctly.[5][53][54][55]

LXC provides lightweight, OS-level virtualization specifically for Linux guests, sharing the host kernel to minimize resource usage while isolating processes, filesystems, and network stacks. Proxmox VE's Proxmox Container Toolkit (pct) manages LXC instances, supporting unprivileged containers by default through user namespaces, which map container UIDs and GIDs to non-privileged ranges on the host for enhanced security against privilege escalation attacks. There is no "pct set password" command; the --password option is used with pct create to set the root password during LXC container creation (e.g., pct create 100 debian-12-standard --password mypass). For existing containers, access the container and change the password manually: run pct enter <CTID> (or pct exec <CTID> -- bash), then execute passwd and follow the prompts to set the root password. Bind mounts allow efficient sharing of host directories or resources with containers, enabling data access without full filesystem duplication. This setup suits applications requiring low-latency execution, such as web servers or microservices.[42][56][57]
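A sketch of that pct workflow; the VMID, template filename, storage name, hostname, and password are placeholders, and the template is assumed to have been downloaded to the local storage beforehand:
# Create an unprivileged container from a local template and set the root password at creation time.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname web01 --unprivileged 1 --storage local-lvm --password 'changeMe123'
pct start 200
# For an existing container, change the root password interactively from inside it.
pct enter 200
passwd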
Proxmox VE 9.1 integrates support for Open Container Initiative (OCI) images, allowing users to pull images from registries like Docker Hub or upload them manually to create LXC container templates. These images are automatically converted to the LXC format via the Proxmox Container Toolkit, enabling provisioning as full system containers or lightweight application containers optimized for microservices with minimal resource footprint. This facilitates seamless deployment of standardized applications, such as databases or API services, directly through the GUI or CLI, bridging container build pipelines with Proxmox environments.[5][57]
In comparison, KVM VMs offer stronger isolation suitable for heterogeneous or security-sensitive workloads but introduce some performance overhead in CPU and I/O operations relative to native execution, due to the emulation layer. LXC containers, by contrast, achieve performance close to native execution with lower overhead than full virtualization, though they are limited to Linux guests and provide process-level rather than hardware-level isolation.[7]
Proxmox VE includes a template system for rapid deployment, allowing users to create reusable templates from existing VMs or containers, which can then be cloned to instantiate multiple instances. Templates can be derived from ISO installations for OS images or imported from Open Virtualization Format (OVF) files for compatibility with other platforms, streamlining provisioning in both standalone and clustered setups.[58][59]
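A minimal sketch of this template-and-clone workflow using the qm tool; the VMIDs and the clone name are placeholders:
# Convert an existing VM into a template; its disks become read-only base images.
qm template 9000
# Create a new, independent VM from the template (a full clone rather than a linked clone).
qm clone 9000 123 --name web-clone --full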
Clustering and High Availability
Proxmox Virtual Environment supports multi-node clustering to enable centralized management and high availability for virtual machines (VMs) and containers. The cluster is built using Corosync for reliable cluster communication and membership, which handles heartbeat messaging and node synchronization across the network.[60] Proxmox VE's integrated HA manager acts as the cluster resource manager, overseeing the state of resources such as VMs and containers and coordinating their relocation or restart as needed.[61] For stable operation, clusters require at least three nodes to establish a reliable quorum, where a majority vote (e.g., two out of three nodes) prevents decisions in partitioned networks; two-node clusters can use a QDevice to provide an additional external vote and maintain quorum. This quorum mechanism ensures cluster integrity during node failures or network issues.[60]

Proxmox VE node names are the systems' hostnames and must be unique within the cluster. They follow standard Linux hostname rules: alphanumeric characters (a-z, 0-9), hyphens allowed but not at the start or end, no special characters (e.g., no @, spaces, or underscores), and a minimum of two characters. Node names must be resolvable across cluster nodes (via DNS or /etc/hosts). The cluster name follows the same rules as node names and cannot be changed after creation.[60][62] Best practices include using short, meaningful, consistent names (e.g., pve01, pve02, or location-based names like dc1-node1) for clarity, and avoiding complex names that can cause issues. Quorum requirements (a majority of nodes online, with at least three nodes recommended for reliable quorum in HA setups and an external QDevice for two-node clusters) are unrelated to node name format.

High availability (HA) in Proxmox VE allows configured VMs and containers to automatically restart on healthy nodes if the hosting node fails, limiting downtime to seconds or minutes depending on resource size. This process relies on fencing to isolate faulty nodes and prevent data corruption; supported methods include hardware watchdogs, which trigger automatic reboots of unresponsive nodes via integrated circuits on the motherboard, or external fencing devices such as IPMI (Intelligent Platform Management Interface) for remote power control.[61] Fencing configuration is performed via command-line tools, with each node requiring at least one verified fencing method to ensure reliable isolation.[63]

In setups using local ZFS storage without shared storage, high availability can be achieved through asynchronous replication of VM and container disks to other cluster nodes. Replication uses ZFS snapshots and incremental send/receive operations, with configurable schedules down to a minimum interval of one minute. Upon node failure, the HA manager can start the replicated instances on another node where a recent copy exists, providing failover and redundancy without requiring shared storage. Due to the asynchronous nature of replication, limited data loss is possible, corresponding to the time since the last successful sync. Proper operation requires a quorate cluster, typically with at least three nodes or a QDevice in two-node setups.[64][61]

Live migration enables seamless movement of running VMs and containers between nodes without interruption, provided shared storage is available for disk images. The network connecting the nodes should have low latency to maintain synchronization during the transfer of memory pages and CPU state. Migration time depends on the VM's RAM size, vCPU count, network bandwidth, and features like compression and deduplication.[61] To avoid split-brain scenarios where multiple nodes attempt concurrent access to shared resources, Proxmox VE implements uncorrelated fencing, ensuring that fencing actions on one node do not depend on communication from others. This is complemented by lease-based storage locks on shared storage systems, which grant exclusive access rights to a single node for a limited time, preventing conflicts during failover and enforcing data consistency.[63]
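A minimal sketch of forming such a cluster with Proxmox VE's pvecm command-line tool; the cluster name and IP address are placeholders:
# On the first node: create the cluster.
pvecm create demo-cluster
# On each additional node: join it, pointing at the first node's address.
pvecm add 192.168.10.11
# From any node: verify membership and quorum.
pvecm status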
Storage Management

Proxmox Virtual Environment (Proxmox VE) employs a flexible storage model that allows virtual machine (VM) images, container data, and other resources to be provisioned across local or shared backends, enabling efficient resource utilization in both single-node and clustered setups.[50] This model supports a variety of storage types, from simple directory-based to advanced distributed systems, with built-in tools for configuration via the web interface or command-line utilities like pvesm.[49]
Among the supported filesystems, ZFS stands out for its advanced capabilities in snapshotting, cloning, and deduplication, making it ideal for local storage needs. ZFS datasets are used to store VM images in raw format and container data volumes, providing efficient space management through features like copy-on-write snapshots that capture VM states without full duplication.[65] For distributed environments, Ceph integration offers robust block and object storage, utilizing RADOS Block Device (RBD) images for high-performance, scalable VM disks that support thin provisioning and replication across nodes. Ceph's architecture ensures data redundancy via placement groups and CRUSH maps, allowing Proxmox VE to manage Ceph clusters directly for hyper-converged deployments.[66]
Storage in Proxmox VE is organized into pools, which logically group underlying backends such as local directories, LVM volumes, or remote protocols including NFS, iSCSI, and GlusterFS. These pools enable unified access to diverse storage resources, with content types like VM disk images, ISO files, and backups selectable per pool to enforce access controls.
In clustered environments, the best practice for centralized storage of ISO images is to use shared file-based storage such as NFS or CIFS/SMB. In Proxmox VE 9, these can be added as storage pools in the Datacenter > Storage section of the web interface or via the CLI with pvesm add nfs or pvesm add cifs, setting the content type to include "ISO images" and enabling the "shared" property. This configuration allows all nodes to access the same ISOs without duplication. NFS is commonly recommended for Linux environments due to performance and simplicity; an external NFS server (e.g., in a VM or separate device) can export a directory for ISOs. Block-level storages like LVM or Ceph RBD should be avoided, as they do not support file-based content like ISOs. For small-scale or non-critical use, local Directory storage is simpler but not centralized.[50][67]
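A sketch of adding such a shared NFS pool for ISO images from the CLI; the storage ID, server address, and export path are placeholders:
# Register a shared NFS export as a storage pool for ISO images and container templates.
pvesm add nfs iso-share --server 192.168.10.20 --export /srv/nfs/iso --content iso,vztmpl
# Confirm the pool is visible and active on the node.
pvesm status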
Direct-attached storage (DAS) enclosures can also be integrated via USB or Thunderbolt ports, leveraging Linux kernel support to present as local storage devices. For enclosures configured with hardware RAID, they typically appear as a single disk, which can then be added to LVM, ZFS, or directory-based storage pools. In JBOD mode, individual disks from the enclosure can be utilized to create ZFS pools with RAID configurations directly in Proxmox VE. However, Thunderbolt and USB4 support may require additional setup, such as installing the bolt package for device recognition or configuring authorization policies via Polkit or udev rules, and functionality can vary depending on hardware compatibility and Proxmox VE version, potentially leading to recognition issues in some setups.[68][69][70] Thin provisioning is facilitated through LVM-thin pools, which allocate blocks only upon writing to optimize space usage on physical volumes, or via qcow2 image formats on directory or NFS backends that support dynamic growth up to predefined limits.[50][71]
Replication enhances data redundancy for local storage by periodically syncing guest volumes between cluster nodes, minimizing downtime during migrations or failures. This asynchronous feature leverages ZFS snapshots and incremental send/receive streams for efficient transfers of ZFS-based volumes, or block-level replication for LVM-thin and directory storages. It is designed for clusters with local ZFS storage on each node and supports replication to multiple target nodes. Jobs are scheduled via Proxmox VE's built-in job manager, accessible through the GUI or pvesr CLI tool, with configurable intervals ranging from a minimum of one minute to a maximum of once a week, such as hourly syncs. This feature can be combined with High Availability (HA) to enable automatic failover to replicated volumes on another node in the event of a node failure. However, because the replication is asynchronous, there is a risk of data loss for changes made after the last successful replication. HA requires a functioning cluster quorum; for two-node clusters, this typically necessitates the use of a QDevice to maintain quorum.[64][61]
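For example, a replication job for guest 100 could be defined with the pvesr tool as follows; the job ID suffix, target node name, schedule, and rate limit are placeholders:
# Replicate guest 100 to node pve02 every 15 minutes, limiting transfer bandwidth to 50 MB/s.
pvesr create-local-job 100-0 pve02 --schedule "*/15" --rate 50
# Show all replication jobs and the state of their last sync.
pvesr status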
For enhanced reliability with shared block storage like iSCSI, Proxmox VE supports Multipath I/O (MPIO) to aggregate multiple physical paths to a logical unit number (LUN), providing failover and load balancing. MPIO is configured using the multipath-tools package, which detects paths and applies policies like round-robin for I/O distribution, while tuning parameters such as queue depths (e.g., via queue_length in multipath.conf) allow optimization for performance under high load by controlling outstanding I/O requests per path. This setup ensures continuous access even if individual paths fail, commonly used with enterprise storage arrays.[72][73]
DAS enclosures can further support NAS-like sharing by configuring SMB or NFS protocols directly within Proxmox VE on the attached storage, or by passing through the enclosure via PCI or USB to a VM running dedicated software such as TrueNAS or OpenMediaVault for advanced management.[74] Proxmox VE can serve as an alternative to dedicated NAS operating systems like TrueNAS, offering advantages such as being free and open-source with top-tier virtualization capabilities via KVM and LXC containers, high flexibility for homelab environments, and the option to passthrough disks for ZFS management or run NAS-focused VMs like TrueNAS. However, it is not a dedicated NAS OS, often requires command-line interface (CLI) usage for advanced storage tasks, and its pure storage features are inferior to those of specialized systems like TrueNAS, which provide more comprehensive GUI-based ZFS management and file-sharing protocols.[32][75]
Backup and Restore
Proxmox Virtual Environment employs the vzdump tool as its primary mechanism for performing backups of virtual machines (VMs) and Linux containers (LXC). This integrated utility generates consistent full backups that encompass the complete configuration files and disk data for each VM or LXC, ensuring data integrity without partial captures. Backups can be initiated through the web-based graphical user interface (GUI) or the command-line interface (CLI) using the vzdump command, allowing administrators to schedule automated jobs or execute them manually as needed.[76][7]
The vzdump tool supports multiple backup modes to balance minimal disruption with data consistency: stop mode shuts down the VM or LXC before capturing the data; suspend mode temporarily freezes the guest for the duration of the backup; and snapshot mode enables live backups for running KVM-based VMs by leveraging storage-level snapshots, which is particularly useful for shared or block-based storages. For LXC, only stop or suspend modes are available, as they rely on freezing the container processes. Supported archive formats include .vma for VM backups, which is optimized for efficient storage of sparse files and out-of-order disk data, .tar for LXC and configuration archives, and qcow2 for individual disk images within VM backups. These archives can be directed to local storage, such as directories or LVM volumes, or remote storage options like NFS, iSCSI, or Ceph, selectable from configured storage pools.[76][77][78]
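A minimal CLI sketch of a snapshot-mode backup; the guest ID, storage name, and compression choice are placeholders:
# Live backup of VM 100 using a storage-level snapshot, compressed with zstd,
# written to the storage pool named "local".
vzdump 100 --mode snapshot --storage local --compress zstd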
Proxmox VE supports incremental backups when targeting Proxmox Backup Server (PBS) as the storage backend, utilizing dirty bitmap tracking within QEMU to identify and transfer only modified disk blocks after an initial full backup. This feature reduces data transfer volumes and storage requirements for subsequent backups compared to full backups. Traditional vzdump backups to non-PBS storages remain full only, but the incremental capability enhances efficiency for large-scale deployments.[4][59][79]
Integration with Proxmox Backup Server provides advanced data protection through chunk-based repositories that employ content-defined deduplication to eliminate redundant data across multiple backups, minimizing overall storage footprint. PBS supports client-side encryption using AES-256 in GCM mode, where encryption keys are managed on the Proxmox VE host before transmission, ensuring data remains secure even on untrusted storage. Additionally, client-side pruning automates the removal of obsolete backup snapshots based on configurable retention policies, such as keep-last or keep-hourly schemes, directly from the backup job configuration without server-side intervention. This seamless integration treats PBS as a native storage type in Proxmox VE, enabling unified management via the GUI.[80][7][76]
Restore operations in Proxmox VE allow for straightforward recovery of full VMs or LXC by importing the backup archive through the GUI or CLI tools like qm restore for VMs and pct restore for LXC, which recreates the original configuration and attaches restored disks to the target storage. Guest-driven restores leverage the qemu-guest-agent installed within the VM to facilitate file-level recovery or application-consistent restores, such as quiescing file systems during the process to avoid corruption. Post-restore verification employs checksum comparisons on the imported data chunks to confirm integrity, particularly with PBS backups where built-in validation ensures no transmission errors or corruption occurred. These processes support overwriting existing guests or creating new ones, with options for selective disk restoration.[81][4][7]
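A sketch of CLI restores from local backup archives; the VM/container IDs, archive names, and target storage are placeholders, and qmrestore is the VM-side counterpart of pct restore:
# Restore a VM from a vzdump archive onto the local-lvm storage.
qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.zst 100 --storage local-lvm
# Restore an LXC container from its archive.
pct restore 200 /var/lib/vz/dump/vzdump-lxc-200.tar.zst --storage local-lvm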
When GUI-based import of backup archives fails due to a reverse proxy configuration, alternative methods can be employed. Administrators may use a jumphost or management VM to perform an internal SCP transfer of the backup file to the /var/lib/vz/dump/ directory on the Proxmox node, after which the file will appear in the Backups tab for restoration via the GUI or CLI tools. Alternatively, temporary SSH access to the node's internal IP address can facilitate direct file transfer. To resolve GUI upload issues without these workarounds, the reverse proxy should be reconfigured; for Nginx, this includes setting client_max_body_size to 0 (unlimited) and proxy_request_buffering off to handle large file uploads effectively.[82][83][84]
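An illustrative nginx fragment implementing those two settings for a proxied Proxmox GUI; the server name and upstream address are placeholders, and certificate and other TLS details are omitted for brevity:
server {
    listen 443 ssl;
    server_name pve.example.com;

    client_max_body_size 0;          # do not cap upload size (large backup/ISO uploads)
    proxy_request_buffering off;     # stream uploads to the backend instead of buffering them

    location / {
        proxy_pass https://192.168.10.11:8006;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # keep noVNC/xterm.js websocket consoles working
        proxy_set_header Connection "upgrade";
    }
}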
Networking Capabilities
Proxmox Virtual Environment (VE) leverages the Linux network stack to provide flexible networking options for virtual machines (VMs) and containers, enabling configurations from basic bridging to advanced software-defined setups. Network interfaces are managed through the /etc/network/interfaces file or the web-based GUI, allowing administrators to define bridges, bonds, and VLANs directly on the host. This integration ensures seamless connectivity for guest systems while supporting high-performance passthrough to physical NICs.[85]
By default, Proxmox VE uses a bridged networking model, where a Linux bridge (typically vmbr0) is created during installation and linked to a physical network interface. Virtual machines and containers connect to this bridge, which functions as a virtual switch, allowing guests to share the host's network segment as if connected to the same physical switch. This enables VMs to appear as individual devices on the LAN with their own MAC addresses.

Network configuration is primarily performed through the web GUI under the Node > Network tab, where changes are staged and validated. Alternatively, administrators can manually edit /etc/network/interfaces. Changes are applied by clicking the "Apply Configuration" button in the GUI, which applies them live using the ifupdown2 tool (the default since Proxmox VE 7.0), or manually via the command ifreload -a. In some cases, a reboot may be required for the changes to take effect.

Although the default installation configures the management bridge vmbr0 with a static IP address, DHCP is supported. To configure vmbr0 for DHCP, the following example can be used in /etc/network/interfaces, replacing any existing static IP settings (the address, netmask, and gateway lines must be removed):
auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports eno1 # Replace with your actual physical NIC (e.g., eth0, enp3s0f0)
    bridge-stp off
    bridge-fd 0
Proxmox VE relies on Linux bridges such as vmbr0 to connect VMs and containers to the physical network. These bridges act as virtual switches, allowing guest network interfaces to be attached directly, with traffic appearing as originating from the guest's own MAC address. VLAN tagging (IEEE 802.1q) can be applied to any network device, including NICs, bonds, or bridges, to segment traffic without requiring separate physical interfaces; for instance, a VLAN interface such as vmbr0.10 isolates traffic on VLAN ID 10. Bonding for link aggregation, particularly using LACP (802.3ad mode), combines multiple NICs into a single logical interface for increased bandwidth and redundancy, though it requires a compatible switch configuration on the upstream side.[85][86]
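An illustrative /etc/network/interfaces fragment combining these features, an LACP bond of two NICs carrying a VLAN-aware bridge; the interface names and VLAN ID range are placeholders:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 # Physical NICs to aggregate (placeholders)
    bond-mode 802.3ad # LACP; requires matching configuration on the upstream switch
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes # Allow guests to use VLAN tags on this bridge
    bridge-vids 2-4094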
Available as an experimental feature since 2019 and enhanced in Proxmox VE 8.0, Software-Defined Networking (SDN) extends these capabilities with overlay networks for enhanced isolation and scalability across clusters. SDN supports EVPN-VXLAN configurations, where Ethernet VPN (EVPN) uses BGP for layer-3 routing over VXLAN tunnels, enabling multi-zone isolation that separates tenant networks without physical segmentation. Controllers such as EVPN handle dynamic routing and peering, while Open vSwitch (OVS) serves as an alternative to native Linux bridges for advanced features like flow-based forwarding. Configurations are stored in /etc/pve/sdn and synchronized across cluster nodes, allowing GUI-based zone and VNet definitions for automated deployment. Proxmox VE 9.1 further enhanced SDN monitoring in the GUI, adding support for viewing connected guests, EVPN zones, routes, and interfaces.[87][7][86][88][4]
Firewall integration enhances network security through nftables-based rules applied at multiple levels: datacenter-wide for cluster policies, host-specific for node traffic, and VM/container-level for granular control. The proxmox-firewall service, opt-in for nftables in recent versions, supports stateful filtering, rate limiting to prevent DoS attacks, and anti-spoofing measures via MAC and IP validation on ingress traffic. Rules can reference VNets in SDN setups, ensuring consistent enforcement across virtual networks.[89][4]
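For illustration, a guest-level rule file (stored under /etc/pve/firewall/) might look like the following sketch; the source subnet is a placeholder and the trailing comments only document intent:
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 192.168.10.0/24 # allow SSH only from the management subnet
IN ACCEPT -p tcp -dport 443 # allow HTTPS from anywhere
IN DROP # drop all other inbound traffic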
For cluster communication, Proxmox VE uses the Corosync protocol over dedicated interfaces to minimize latency and contention with guest traffic. Since version 6.0, the default transport via Kronosnet employs unicast UDP for reliable messaging, simplifying deployments in environments without multicast support; however, multicast or legacy unicast modes remain configurable for compatibility. A separate NIC is recommended for this traffic to ensure consistent low-latency performance essential for quorum and synchronization.[60][90][7]
