OpenNebula
| OpenNebula | |
|---|---|
| Developers | OpenNebula Systems, OpenNebula Community |
| Initial release | July 24, 2008 |
| Stable release | 7.0.1[1] / 27 October 2025 |
| Written in | C++, Ruby, Shell script, lex, yacc, JavaScript |
| Operating system | Linux |
| Platform | Hypervisors (VMware vCenter, KVM, LXD/LXC, and AWS Firecracker) |
| Available in | English, Czech, French, Slovak, Spanish, Chinese, Thai, Turkish, Portuguese, Russian, Dutch, Estonian, Japanese |
| Type | Cloud computing |
| License | Apache License version 2 |
| Website | opennebula.io |
OpenNebula is an open source cloud computing platform for managing heterogeneous data center, public cloud and edge computing infrastructure resources. OpenNebula manages on-premises and remote virtual infrastructure to build private, public, or hybrid implementations of infrastructure as a service (IaaS) and multi-tenant Kubernetes deployments. The two primary uses of the OpenNebula platform are data center virtualization and cloud deployments based on the KVM hypervisor, LXC system containers, and AWS Firecracker microVMs. The platform can also provide the cloud infrastructure necessary to operate a cloud on top of existing VMware infrastructure. In early June 2020, OpenNebula announced the release of a new Enterprise Edition for corporate users, along with a Community Edition.[2] OpenNebula CE is free and open-source software, released under the Apache License version 2. OpenNebula CE comes with free access to patch releases containing critical bug fixes but with no access to the regular EE maintenance releases. Upgrades to the latest minor/major version are only available to CE users with non-commercial deployments or with significant open source contributions to the OpenNebula Community.[3] OpenNebula EE is distributed under a closed-source license and requires a commercial subscription.[4]
History
The OpenNebula Project was started as a research venture in 2005 by Ignacio M. Llorente and Ruben S. Montero. The first public release of the software occurred in 2008. The goals of the research were to create efficient services for managing virtual machines on distributed infrastructures, and to ensure that these services could scale to high levels. Open-source development and an active community of developers have since helped mature the project. As the project matured, adoption grew, and in March 2010 the primary writers of the project founded C12G Labs, now known as OpenNebula Systems, which provides value-added professional services to enterprises adopting or utilizing OpenNebula.
Description
OpenNebula orchestrates storage, network, virtualization, monitoring, and security[5] technologies to deploy multi-tier services (e.g. compute clusters[6][7]) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies. According to the European Commission's 2010 report "... only few cloud dedicated research projects in the widest sense have been initiated – most prominent amongst them probably OpenNebula ...".[8]
The toolkit includes features for integration, management, scalability, security and accounting. The project also claims standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (Amazon EC2 Query, OGF Open Cloud Computing Interface and vCloud) and hypervisors (VMware vCenter, KVM, LXD/LXC and AWS Firecracker), and the ability to accommodate multiple hardware and software combinations in a data center.[9]
OpenNebula is sponsored by OpenNebula Systems (formerly C12G).
OpenNebula is widely used by a variety of industries, including cloud providers, telecommunications, information technology services, government, banking, gaming, media, hosting, supercomputing, research laboratories, and international research projects[citation needed].
Development
Major upgrades generally occur every 3-5 years, and each major version typically receives 3-5 minor updates. The OpenNebula project is mainly open-source and is made possible by an active community of developers and translators. Since version 5.12 the upgrade scripts are under a closed-source license, which makes upgrading between versions impossible without a subscription unless the user can demonstrate a non-profit cloud deployment or a significant contribution to the project.
Release history
- Version TP and TP2, technology previews, offered host and VM management features, based on the Xen hypervisor.
- Version 1.0 was the first stable release, introduced KVM and EC2 drivers, enabling hybrid clouds.
- Version 1.2 added new structure for the documentation and more hybrid functionality.
- Version 1.4 added public cloud APIs on top of oned to build public cloud and virtual network management.
- Version 2.0 added a MySQL backend, LDAP authentication, and management of images and virtual networks.
- Version 2.2 added integration guides, Ganglia monitoring and OCCI (converted to add-ons in later releases), Java bindings for the API and the Sunstone GUI.
- Version 3.0 added a migration path from previous versions, VLAN, ebtables and OVS integration for virtual networks, ACLs and accounting subsystem, VMware driver, Virtual Data Centers and federation across data centers.
- Version 3.2 added firewalling for VMs (deprecated later on by security groups).
- Version 3.4 introduced the iSCSI datastore, clusters as first-class citizens, and quotas.
- Version 3.6 added Virtual Routers, LVM datastores and the public OpenNebula marketplace integration.
- Version 3.8 added the OneFlow components for service management and OneGate for application insight.
- Version 4.0 added support for Ceph and Files datastore and the onedb tool.
- Version 4.2 added a new self service portal (Cloud View) and VMFS datastore.
- Version 4.4, released in 2014, brought a number of open cloud innovations, improved cloud bursting, and implemented the use of multiple system datastores for storage load policies.
- Version 4.6 allowed users to run different instances of OpenNebula in geographically dispersed data centers, known as an OpenNebula Federation. A new cloud portal for cloud consumers was also introduced, and the App market gained support for importing OVAs.
- Version 4.8 began offering support for Microsoft Azure and IBM SoftLayer, and continued evolving and improving the platform by incorporating support for OneFlow in the cloud view, letting end users define virtual machine applications and services elastically.
- Version 4.10 integrated the support portal with the Sunstone GUI. A login token was also developed, and support was provided for VMs and vCenter.
- Version 4.12 offered new functionality to implement security groups and improved vCenter integration. A showback model was also deployed to track and analyze cloud usage across different departments.
- Version 4.14 introduced a newly redesigned and modularized graphical interface code, Sunstone. This was intended to improve code readability and ease the task of adding new components.
- Version 5.0 'Wizard' introduced marketplaces as a means to share images across different OpenNebula instances, along with management of Virtual Routers through a network topology visual tool in Sunstone.
- Version 5.2 'Excession' added an IPAM subsystem to aid in network integrations, and also added LDAP group dynamic mapping.
- Version 5.4 'Medusa' introduced full storage and network management for vCenter, and support for VM Groups to define affinity between VMs and hypervisors. It also added its own Raft implementation for high availability of the controller.
- Version 5.6 'Blue Flash' focused on scalability and UX improvements.
- Version 5.8 'Edge' added support for LXD for infrastructure containers, automatic NIC selection and Distributed Datacenters (DDC), which is the ability to use bare metal providers to build remote clusters in edge and hybrid cloud environments.
- Version 5.10 'Boomerang' added NUMA and CPU pinning, NSX integration, a revamped hook subsystem based on ZeroMQ (0MQ), DPDK support and 2FA authentication for Sunstone.
- Version 5.12 'Firework' removed the open-source upgrade scripts, added support for AWS Firecracker micro-VMs, a new integration with Docker Hub, Security Group integration (NSX), several improvements to Sunstone, a revamped OneFlow component, and an improved monitoring subsystem.
- Version 6.0 'Mutara' introduced a new multi-cloud architecture based on "Edge Clusters", enhanced Docker and Kubernetes support, the new FireEdge webUI, a revamped OneFlow, and new backup capabilities.
- Version 6.2 'Red Square' brought improvements to the LXC driver, new support for workload portability, and a beta preview of the new Sunstone GUI.
- Version 6.4 'Archeon' added support for the automatic deployment and management of edge clusters based on Ceph using on-premises infrastructure or AWS bare-metal resources, the notion of network states, improvements to the new Sunstone GUI, the LXC driver, and the integration with VMware vCenter, and a new module for WHMCS (only for EE).
- Version 6.6 'Electra' added an integration of Prometheus for advanced monitoring combined with a new set of Grafana dashboards (only for EE), native support for incremental backups based on datastore back-ends, new drivers for restic (only for EE) and rsync, and several improvements for Telco Cloud environments, including enhanced management of virtual networks and VNFs.
- Version 6.8 'Rosette' added Virtual Data Center (VDC) and User tabs in the FireEdge Sunstone GUI (e.g. to display accounting and showback information), introduced backup jobs for creating unified backup policies across multiple VMs, and brought several improvements to the KVM driver (e.g. to fine-tune CPU flags, optimize disks, customize VM video, or boost Windows performance).
- Version 6.10 'Bubble' features enhanced backups (incremental backups, in-place restores, selective disk restore, custom locations), improved PCI passthrough (simplified device management, expanded GPU support), better recovery for powered-off or suspended VMs, multi-tenancy upgrades (custom quotas, restricted attributes), and support for Ubuntu 24.04 and Debian 12. Additional improvements include new components in the Community Edition (Prometheus integration and Restic backup support from the Enterprise Edition), simplified deployment (new playbooks and roles for easy OpenNebula cloud setup), and efficient VMware migration (an enhanced OneSwap tool for streamlined vCenter Server to OpenNebula Cloud migration). The FireEdge Sunstone UI has also been updated with advanced features and a modern tech stack.
- Version 7.0 'Phoenix' is a major upgrade for sovereign, AI-ready, and edge cloud infrastructures. It introduces AI-powered workload automation, VMware re-virtualization with enhanced backup support, and full NVIDIA vGPU compatibility for GPU-accelerated AI. Hybrid multi-cloud provisioning is improved with expanded provider support and ARM compatibility. The Sunstone UI is revamped for better usability and real-time metrics, while Kubernetes integration and networking features have been strengthened.
Internal architecture
Basic components
- Host: Physical machine running a supported hypervisor.
- Cluster: Pool of hosts that share datastores and virtual networks.
- Template: Virtual Machine definition.
- Image: Virtual Machine disk image.
- Virtual Machine: Instantiated Template. A Virtual Machine represents one life-cycle, and several Virtual Machines can be created from a single Template.
- Virtual Network: A group of IP leases that VMs can use to automatically obtain IP addresses. It allows the creation of virtual networks by mapping them over the physical ones; they are made available to the VMs through the corresponding bridges on hosts. A virtual network definition consists of three parts: the underlying physical network infrastructure, the logical address space available to the VMs, and context attributes (such as the network mask, gateway, or DNS servers).
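To make the Template/Virtual Machine relationship concrete, the sketch below registers a minimal template through OpenNebula's XML-RPC API and instantiates a VM from it. This is a minimal sketch, not the project's own tooling: the endpoint, credentials, and the "base-os" image and "private" network names are placeholder assumptions, while the one.template.* calls follow the published XML-RPC API.

```python
import xmlrpc.client

# Placeholder endpoint and credentials -- adjust for a real deployment.
server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # OpenNebula session string: "user:password"

# A minimal Template: 1 CPU, 512 MB of RAM, a disk from a registered
# image, and a NIC on an existing virtual network (names are assumed).
template = """
NAME   = "tiny-vm"
CPU    = 1
MEMORY = 512
DISK   = [ IMAGE = "base-os" ]
NIC    = [ NETWORK = "private" ]
"""

# Responses are arrays: [success, result or error message, error code, ...].
resp = server.one.template.allocate(session, template)
if not resp[0]:
    raise RuntimeError(resp[1])
template_id = resp[1]

# Several VMs can be created from one Template, each with its own life-cycle.
resp = server.one.template.instantiate(session, template_id, "tiny-vm-1",
                                       False, "", False)
print("VM ID:", resp[1])  # on failure, resp[1] holds the error message
```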
Components and deployment model
The OpenNebula Project's deployment model resembles a classic cluster architecture, which utilizes:
- A front-end (master node)
- Hypervisor enabled hosts (worker nodes)
- Datastores
- A physical network
Front-end machine
The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services. This is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine include the management daemon (oned), scheduler (sched), the web interface server (Sunstone server), and other advanced components. These services are responsible for queuing, scheduling, and submitting jobs to other machines in the cluster. The master node also provides the mechanisms to manage the entire system. This includes adding virtual machines, monitoring the status of virtual machines, hosting the repository, and transferring virtual machines when necessary. Much of this is possible due to a monitoring subsystem which gathers information such as host status, performance, and capacity use. The system is highly scalable and is only limited by the performance of the actual server.[citation needed]
Hypervisor-enabled hosts
The worker nodes, or hypervisor-enabled hosts, provide the actual computing resources needed for processing all jobs submitted by the master node. OpenNebula hypervisor-enabled hosts use a virtualization hypervisor such as VMware, Xen, or KVM. The KVM hypervisor is natively supported and used by default. Virtualization hosts are the physical machines that run the virtual machines, and various platforms can be used with OpenNebula. A Virtualization Subsystem interacts with these hosts to take the actions needed by the master node.
Storage
The datastores simply hold the base images of the Virtual Machines. The datastores must be accessible to the front-end; this can be accomplished by using one of a variety of available technologies such as NAS, SAN, or direct attached storage.
OpenNebula ships with three datastore classes: system datastores, image datastores, and file datastores. System datastores hold the images used for running the virtual machines. The images can be complete copies of an original image, deltas, or symbolic links depending on the storage technology used. The image datastores are used to store the disk image repository. Images from the image datastores are moved to or from the system datastore when virtual machines are deployed or manipulated. The file datastore is used for regular files and is often used for kernels, ram disks, or context files.
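As an illustration of how a datastore is defined, the sketch below registers an image datastore backed by a shared filesystem (for example an NFS mount) through the XML-RPC API. The endpoint and credentials are placeholders; the "fs" datastore driver and "shared" transfer driver are OpenNebula's stock drivers for this kind of setup.

```python
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

# An image datastore on a shared filesystem: the "fs" datastore driver
# manages the images, and the "shared" transfer driver assumes all hosts
# mount the same volume (e.g. over NFS).
ds_template = """
NAME   = "nfs_images"
TYPE   = "IMAGE_DS"
DS_MAD = "fs"
TM_MAD = "shared"
"""

# The last argument is the cluster ID; -1 selects the default cluster.
resp = server.one.datastore.allocate(session, ds_template, -1)
if resp[0]:
    print("Datastore ID:", resp[1])
else:
    print("Error:", resp[1])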
Physical networks
Physical networks are required to support the interconnection of storage servers and virtual machines in remote locations. It is also essential that the front-end machine can connect to all the worker nodes or hosts. At a minimum, two physical networks are required: a service network and an instance network. The instance network allows the virtual machines to connect across different hosts. The network subsystem of OpenNebula is easily customizable to allow adaptation to existing data centers.
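A hedged sketch of how an instance network might be described to OpenNebula follows: the template below defines a bridged virtual network with a small IPv4 address range. The bridge name and address range are assumptions for illustration, not recommended values.

```python
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

# A bridged virtual network with one IPv4 address range (AR). VMs attached
# to this network lease addresses from the range via the bridge on each host.
vnet_template = """
NAME   = "private"
VN_MAD = "bridge"
BRIDGE = "br0"
AR     = [ TYPE = "IP4", IP = "192.168.100.10", SIZE = "50" ]
"""

# The last argument is the cluster ID; -1 selects the default cluster.
resp = server.one.vn.allocate(session, vnet_template, -1)
print("Virtual network ID:" if resp[0] else "Error:", resp[1])
```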
References
- ^ OpenNebula's Release Schedule
- ^ "Introducing OpenNebula Enterprise Edition". OpenNebula website. 4 June 2020. Archived from the original on 16 June 2020. Retrieved 16 June 2020.
- ^ "Get Migration Packages". OpenNebula website. Retrieved 7 July 2020.
- ^ "Upgrade Your OpenNebula Cloud". OpenNebula website. Retrieved 7 July 2020.
- ^ "Key Features about OpenNebula". Discover OpenNebula. Retrieved 10 December 2019.
- ^ R. Moreno-Vozmediano, R. S. Montero, and I. M. Llorente. "Multi-Cloud Deployment of Computing Clusters for Loosely-Coupled MTC Applications", Transactions on Parallel and Distributed Systems. Special Issue on Many Task Computing (in press, doi:10.1109/TPDS.2010.186)
- ^ R. S. Montero, R. Moreno-Vozmediano, and I. M. Llorente. "An Elasticity Model for High Throughput Computing Clusters", J. Parallel and Distributed Computing (in press, doi:10.1016/j.jpdc.2010.05.005)
- ^ "The Future of Cloud Computing" (PDF). European Commission Expert Group Report. 25 January 2010. Retrieved 12 December 2017.
- ^ B. Sotomayor, R. S. Montero, I. M. Llorente, I. Foster. "Virtual Infrastructure Management in Private and Hybrid Clouds", IEEE Internet Computing, vol. 13, no. 5, pp. 14-22, September/October 2009. doi:10.1109/MIC.2009.119
OpenNebula
Introduction
Overview
OpenNebula is an open-source cloud management platform designed to orchestrate and manage heterogeneous resources across data centers, public clouds, and edge computing infrastructures.[8] It enables enterprises to deploy and operate private, hybrid, and edge clouds, primarily supporting Infrastructure as a Service (IaaS) models as well as multi-tenant Kubernetes environments for containerized workloads.[8] The platform emphasizes simplicity in deployment and operation, scalability to handle large-scale infrastructures, and vendor independence through its open architecture, allowing users to avoid lock-in to specific providers.[1] It unifies the agility of public cloud services with the security and control of private clouds, facilitating seamless integration of diverse environments.[8] OpenNebula supports a range of virtualization technologies, including KVM, LXC containers, AWS Firecracker for lightweight virtual machines, and VMware vCenter for hybrid setups.[8] Key benefits include robust multi-tenancy for isolated user environments, automatic provisioning of resources, elasticity to scale workloads dynamically, and on-demand management of compute, storage, and networking assets.[8] Originally developed as a research project, OpenNebula has evolved into a mature, production-ready platform widely adopted in enterprise settings.[3]
Editions
OpenNebula is available in two primary editions: the Community Edition and the Enterprise Edition, each tailored to different user needs in cloud management.[9] The Community Edition provides a free, open-source distribution under the Apache License 2.0, offering full core functionality for users managing their own deployments without commercial support.[9] It includes binary and source packages, with updates released every six months and patch versions addressing critical bug fixes, making it ideal for non-profit organizations, educational institutions, or testing environments.[9] Community support is available through public forums, but users must handle self-management and upgrades independently.[9]
In contrast, the Enterprise Edition is a commercial offering designed for production environments, incorporating the open-source core under Apache License 2.0 for source code while requiring a subscription for binary packages under commercial terms.[9] It delivers a hardened, tested version with additional bug fixes, minor enhancements, long-term support releases, and enterprise-specific integrations not available in the Community Edition.[10] Key differences include professional services such as SLA-based support, deployment assistance, training, consulting, technical account management, and priority access to maintenance updates and upgrades.[11] The Enterprise Edition also provides enhanced security features and warranty assurances, ensuring reliability for large-scale operations.[11]
Both editions can be downloaded from the official OpenNebula website, though the Enterprise Edition requires an active subscription, priced on a per-cloud basis with host-based licensing, for full access to upgrades, tools, and support.[12] From version 5.12 onward, major upgrades are restricted to Enterprise subscribers or qualified non-commercial users and significant community contributors, emphasizing the platform's sustainability strategy.[12]
History
Origins
OpenNebula originated as a research project in 2005 at the Distributed Systems Architecture (DSA) Research Group of the Universidad Complutense de Madrid in Spain, led by Ignacio M. Llorente and Rubén S. Montero.[13][14] The initiative stemmed from efforts to address challenges in virtual infrastructure management, with an initial emphasis on developing efficient and scalable services for deploying and managing virtual machines across large-scale distributed systems.[13] This work built on prior research in grid computing and virtualization, aiming to create decentralized tools that could handle dynamic resource allocation without relying on proprietary solutions.[15]
The project's transition to an open-source model occurred with its first public technology preview release in March 2008, under the Apache License 2.0, motivated by the burgeoning cloud computing paradigm's demand for flexible, vendor-agnostic platforms that avoided lock-in and supported heterogeneous environments.[13][15] This shift enabled broader collaboration among researchers and early adopters, fostering innovation in infrastructure-as-a-service (IaaS) technologies while aligning with European research initiatives like the RESERVOIR project, which sought to integrate cloud and grid computing.[13] By prioritizing modularity and extensibility, the open-source approach positioned OpenNebula as a foundational tool for academic and experimental cloud deployments in the late 2000s.[16]
To sustain development and provide enterprise-grade support, the original developers founded C12G Labs in March 2010 in Madrid, Spain, focusing on commercial services such as consulting, training, and customized integrations for OpenNebula users.[5] The company, initially headquartered in Madrid, was renamed OpenNebula Systems in September 2014 to better reflect its core technology and later expanded operations to include a headquarters in Burlington, Massachusetts, USA, enhancing its global reach.[5][17] This corporate backing marked the evolution from pure research to a supported ecosystem, while the project continued to grow internationally through community contributions.[4]
Key Milestones
OpenNebula's first stable release (version 1.0) in July 2008 marked a pivotal shift from its origins in academic research to a collaborative open-source community project, enabling broader adoption of cloud management technologies.[18][19] The project reached its 10th anniversary in November 2017, underscoring a decade of continuous innovation, community contributions, and the establishment of thousands of cloud infrastructures worldwide.[20] By 2025, OpenNebula had achieved significant adoption, powering clouds for more than 5,000 organizations globally across diverse sectors including research institutions, telecommunications providers, and financial services.[4]
Organizational growth accelerated through strategic affiliations, with OpenNebula Systems becoming a day-1 member of Gaia-X, a participant in the European Open Science Cloud (EOSC) and the Important Project of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), and a corporate member of the Linux Foundation, LF Edge, and Cloud Native Computing Foundation (CNCF); the company also serves as chair of the European Alliance for Industrial Data, Edge and Cloud.[4][7]
In recent years, OpenNebula has emphasized advancements in AI-ready hybrid clouds and edge computing strategies, prominently featured at events such as Data Center World 2025, where it demonstrated re-virtualization solutions for modern infrastructure.[21] Spanning over 10 years of dedicated research and development into the 2020s, OpenNebula has prioritized sovereign cloud solutions to enhance digital autonomy, including federated AI factories and reference architectures for European data sovereignty.[4][7]
Development
Release History
OpenNebula's initial development featured two technology previews in 2008. The first Technology Preview (TP) was released on March 26, 2008, providing basic host and virtual machine management capabilities based on the Xen hypervisor.[22] This was followed by TP2 on June 17, 2008, which expanded on these features for virtual infrastructure management.[22] The project's first stable release, version 1.0, arrived on July 24, 2008, introducing core cloud computing functionalities for data center virtualization and dynamic resource allocation.[23]
Early major versions followed a rapid cycle, with upgrades approximately every 1-2 years and 3-5 minor updates per major release to incorporate community feedback and stability improvements. For instance, version 2.0 launched in October 2010 as a significant upgrade, followed by 3.0 in October 2011.[22] This pattern continued through the 4.x and 5.x series, culminating in version 5.12 on July 21, 2020, the first Long Term Support (LTS) release, which received extended maintenance until its end-of-life on February 10, 2023.[24][25]
In recent years, the release cadence has shifted toward quarterly updates, with major versions emerging every 3-5 years to align with enterprise needs.[26] The latest major release, 7.0 "Phoenix", was issued on July 3, 2025, bringing advancements in AI workload support, edge computing, and hybrid cloud orchestration.[27] A subsequent patch, 7.0.1, followed on October 27, 2025, enhancing enterprise-grade features and AI cloud integrations.[28]
Changes to the release process began with version 5.12, where upgrade scripts for the Enterprise Edition became partially closed-source, accessible only via paid subscriptions to ensure professional support and security; the Community Edition, however, maintains fully open-source availability.[29] The post-7.0 roadmap prioritizes deeper containerization support, hybrid cluster management, and 5G integration to enable efficient telco edge deployments.[30][31]
| Version | Release Date | Key Notes | Type |
|---|---|---|---|
| TP1 | March 26, 2008 | Xen-based host/VM management | Development |
| TP2 | June 17, 2008 | Expanded virtual infrastructure | Development |
| 1.0 | July 24, 2008 | First stable, basic cloud features | Stable |
| 5.12 | July 21, 2020 | First LTS, supported to 2023 | LTS |
| 7.0 "Phoenix" | July 3, 2025 | AI, edge, hybrid enhancements | Major |
| 7.0.1 | October 27, 2025 | Enterprise and AI cloud updates | Patch |
Community and Ecosystem
OpenNebula's community is driven by a collaborative model involving developers, translators, and users who contribute through the project's GitHub repository, where code enhancements, documentation improvements, and localization efforts are submitted under the Apache License 2.0.[32] Users also participate by reporting bugs, requesting features, and providing feedback via GitHub issues, while translators support multilingual portal content to broaden accessibility.[32] The contributor base encompasses academics, enterprises, and non-profits, fostering over a decade of collaborative innovation since the project's inception.[33]
Notable contributors include organizations such as Telefónica I+D, universities like the University of Chicago and Clemson University, and research centers including SARA Supercomputing Center and INRIA, alongside individual champions from entities like AWS, Datadog, and academic institutions such as Ghent University.[33][34] These participants, recognized through the Champion Program, enhance community engagement by developing tools, offering technical support, and promoting adoption globally.[34]
The ecosystem features partnerships that integrate OpenNebula with broader open-source initiatives, including corporate membership in the Linux Foundation, Cloud Native Computing Foundation (CNCF), and Linux Foundation Europe, as well as Day-1 membership in Gaia-X and participation in the European Open Science Cloud (EOSC).[4][35] OpenNebula Systems has joined the Linux Foundation Edge Initiative and Project Sylva to advance edge and telco cloud technologies, supporting standards for federated, secure infrastructures in Europe.[36][37] The Connect Partner Program further enables technology and business collaborations, with third-party providers offering compatible hardware, software, and services to extend OpenNebula's capabilities.[38]
Support channels include the official community forum for discussions and troubleshooting, comprehensive documentation at docs.opennebula.io, and events such as the annual OpenNebulaConf conference since 2013, along with regular webinars and TechDays on topics like storage and integrations.[39][40] Non-profits and community users access upgrade tools through the free Community Edition, supplemented by volunteer-driven advice and the Champion Program for enhanced guidance.[40]
Adoption spans diverse environments, from research labs like RISE in Sweden for AI and edge infrastructure to telco clouds at Ooma for virtualization transitions, with deployments supporting hundreds of virtual machines in enterprises such as CEWE and weekly clusters in government settings like the Flemish Department of Environment and Spatial Planning.[41] This widespread use has spurred community extensions, including the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified distribution for seamless Kubernetes cluster deployment on OpenNebula.[42] Community contributions continue to influence release cycles, enabling iterative improvements based on user input.[32]
Features
Core Capabilities
OpenNebula provides robust resource orchestration capabilities, enabling the management of virtual machines, containers, and physical resources through automatic provisioning and elasticity mechanisms. This includes centralized governance for deploying and scaling workloads across clusters, with features like live migration and affinity rules to optimize performance and resource utilization. These functionalities ensure efficient handling of heterogeneous environments, supporting over 2,500 nodes in a single instance for enterprise-scale operations.[43]
Multi-tenancy in OpenNebula is achieved through secure isolation of resources and data between users and groups, incorporating role-based access control to manage permissions effectively. Administrators can define fine-grained access control lists (ACLs), quotas, and virtual data centers (VDCs) to enforce isolation and compliance for multiple teams or tenants. This setup allows for delegated administration, where specific users or groups are granted controlled access to subsets of infrastructure without compromising overall security.[44]
The platform supports hybrid cloud environments by facilitating seamless integration between on-premises infrastructure and public clouds, promoting workload portability and vendor independence. Users can provision and manage resources across federated zones, enabling burst capacity to public providers while maintaining unified control over hybrid setups. This approach simplifies migration and scaling of applications between local and remote resources. As of OpenNebula 7.0.1 (October 2025), hybrid cloud provisioning has been further simplified.[43][45]
For edge computing, OpenNebula enables the deployment of lightweight clusters in distributed environments, optimized for low-latency operations such as 5G edge nodes. It provides a unified framework for orchestrating multi-cluster setups, allowing efficient management of resources closer to end-users to reduce latency and enhance performance in telco and IoT scenarios. This capability supports scalable, vendor-agnostic edge infrastructure without requiring complex custom configurations.[46]
OpenNebula is designed with AI readiness in mind, offering built-in support for GPU orchestration to handle scalable AI workloads in hybrid configurations. Features like GPU passthrough and virtual GPU partitioning allow direct or shared access to accelerators for tasks such as model training and inference, ensuring high-performance computing while optimizing resource allocation. OpenNebula 7.0.1 enhances GPU acceleration for improved performance and cost-effective scaling in AI factories and sovereign AI deployments.[47][45]
Monitoring and automation tools are integrated natively into OpenNebula, providing capabilities for resource scaling, health checks, and policy-driven operations. Built-in telemetry tracks system metrics, enabling proactive adjustments through event-driven hooks and distributed resource scheduling. These features automate capacity management and ensure reliability, with support for overcommitment and predictive optimization to maintain operational efficiency.[43]
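As a sketch of the quota mechanism described in the multi-tenancy paragraph above, the snippet below applies a per-user quota over the XML-RPC API, capping VM count, CPU, memory, and image-datastore usage. The endpoint, credentials, user ID, and limits are illustrative assumptions; the quota template attributes follow OpenNebula's documented quota format.

```python
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

user_id = 5  # hypothetical user ID

# VM quota: at most 10 VMs, 8 CPUs and 16 GB of RAM in total;
# datastore quota: at most 50 GB and 20 images in datastore 1.
quota_template = """
VM        = [ VMS = 10, CPU = 8, MEMORY = 16384 ]
DATASTORE = [ ID = 1, SIZE = 51200, IMAGES = 20 ]
"""

resp = server.one.user.quota(session, user_id, quota_template)
print("Quota updated" if resp[0] else f"Error: {resp[1]}")
```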
Integrations and Extensibility
OpenNebula provides open APIs, including the primary XML-RPC interface, which enables programmatic control over core resources such as virtual machines, virtual networks, images, users, and hosts. This API allows developers to integrate OpenNebula with external applications for automation and orchestration tasks. Additionally, the OpenNebula Cloud API (OCA) offers simplified wrappers around the XML-RPC methods in multiple programming languages, including Ruby, Java, Python, and Go, facilitating easier integration while supporting JSON data exchange in client implementations. For web-based management, OpenNebula includes Sunstone, a graphical user interface that provides an intuitive dashboard for administrators and users to monitor and configure cloud resources without direct API calls.[48][49][50]
In terms of hypervisor compatibility, OpenNebula offers full support for KVM as its primary virtualization technology, enabling efficient management of virtual machines on Linux-based hosts. It also provides complete integration with LXC containers through LXD for lightweight system-level virtualization, VMware vCenter for leveraging existing enterprise VMware environments, and AWS Firecracker for microVM-based deployments in serverless, edge, and enterprise operations. These hypervisors allow OpenNebula to operate across diverse infrastructure setups, from bare-metal servers to virtualized clusters.[43][51][52]
For cloud federation and hybrid cloud capabilities, OpenNebula includes drivers that enable seamless integration with public cloud providers, supporting hybrid bursting where private resources extend to public infrastructure during peak loads. Specific drivers facilitate connections to Amazon Web Services (AWS) for EC2 instance provisioning and Microsoft Azure for virtual machines and storage, allowing users to define policies for automatic resource scaling across environments. This federation model uses a master-slave zone architecture to synchronize data centers, ensuring consistent management of users, groups, and virtual data centers (VDCs) across boundaries.[53][54][55]
OpenNebula enhances container orchestration through native integration with Kubernetes via the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified distribution based on RKE2 that deploys and manages Kubernetes clusters directly within OpenNebula environments. OneKE supports hybrid deployments, allowing clusters to span on-premises and edge resources while providing built-in monitoring and scaling. Furthermore, OpenNebula accommodates Docker containers for application packaging and Helm charts for simplified deployment of complex Kubernetes applications, enabling users to orchestrate containerized workloads alongside traditional VMs.[56][57][58]
The platform's extensibility is achieved through a modular plugin architecture that allows administrators to develop and integrate custom drivers for storage, networking, monitoring, and authorization systems. This driver-based design supports the addition of third-party infrastructure components without modifying the core codebase, promoting adaptability to specific data center needs. OpenNebula also features a marketplace system for distributing applications and blueprints, including public repositories like the official OpenNebula Marketplace with over 48 pre-configured appliances (e.g., for WordPress or Kubernetes setups) and private marketplaces for custom sharing via HTTP or S3 backends.
These elements enable rapid deployment of reusable cloud services and foster community-driven extensions.[59][55][60]
Regarding standards compliance, OpenNebula aligns with the Open Cloud Computing Interface (OCCI) through dedicated ecosystem projects that implement OCCI 1.1 for interoperable resource management across IaaS providers. This support enables standardized API calls for compute, storage, and network operations, enhancing portability in multi-cloud setups. Similarly, while not natively embedded, OpenNebula integrates with the Topology and Orchestration Specification for Cloud Applications (TOSCA) via model-driven tools in collaborative projects, allowing description and deployment of portable cloud applications that map to OpenNebula's infrastructure. These alignments ensure compatibility with broader cloud ecosystems and reduce vendor lock-in. OpenNebula 7.0.1 adds simplified SAML 2.0 integration for enhanced federated authentication compliance.[61][62][45]
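As a concrete example of the XML-RPC interface described at the start of this section, the sketch below lists the virtual machines in the pool using only Python's standard library. The endpoint and credentials are placeholders; the argument meanings follow the published one.vmpool.info documentation.

```python
import xmlrpc.client
import xml.etree.ElementTree as ET

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

# one.vmpool.info arguments: session, owner filter (-2 = all resources),
# start/end VM IDs (-1, -1 = whole range), state filter (-1 = any, not DONE).
resp = server.one.vmpool.info(session, -2, -1, -1, -1)
if not resp[0]:
    raise RuntimeError(resp[1])

# The payload is an XML document describing the VM pool.
pool = ET.fromstring(resp[1])
for vm in pool.findall("VM"):
    print(vm.findtext("ID"), vm.findtext("NAME"), vm.findtext("STATE"))
```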
Architecture
Core Components
Hosts in OpenNebula represent the physical machines that provide the underlying compute resources for virtualization, running hypervisors such as KVM or VMware to host virtual machines (VMs).[63] These hosts are registered in the system using their hostname and associated drivers for information management and VM execution, allowing OpenNebula to monitor their CPU, memory, and storage capacity while enabling the scheduler to deploy VMs accordingly.[63]
Clusters serve as logical groupings of multiple hosts within OpenNebula, facilitating resource pooling by sharing common datastores and virtual networks across the group. This organization supports efficient load balancing, high availability, and simplified management of distributed resources without altering the individual host configurations.
Virtual Machine Templates define reusable configurations for instantiating VMs, specifying attributes such as CPU count, memory allocation, disk attachments, and network interfaces.[64] These templates enable administrators to standardize VM deployments, allowing multiple instances to be created from a single definition while permitting user-specific customizations like varying memory sizes within predefined limits.[64]
Images in OpenNebula encapsulate the storage elements for VMs, functioning as disk files or block devices that hold operating systems, data, or configuration files, categorized into system images for bootable OS disks, regular images for persistent data storage, and file types for contextual elements like scripts.[65] They are managed within datastores or marketplaces, supporting persistency options where changes are either retained for exclusive VM use or discarded to allow multi-VM sharing, and progress through states like ready, used, or locked during operations.[65]
Virtual Machines (VMs) are the instantiable compute entities in OpenNebula, created from templates and managed throughout their lifecycle, which includes states such as pending (awaiting deployment), running (actively executing), stopped (powered off with state preserved), suspended (paused with files on host), and done (completed and archived).[66] This state machine governs operations like migration, snapshotting, and resizing, ensuring controlled resource allocation and recovery from failures.[66]
Virtual Networks provide the logical connectivity framework for VMs in OpenNebula, assigning IP leases and linking virtual network interfaces to physical host devices via modes like bridging or VXLAN for isolation and traffic management.[67] They integrate with security groups and IP address management to enable secure, scalable networking across clusters, supporting both public and private connectivity without direct exposure to underlying hardware details.[67]
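To illustrate the lifecycle states above, here is a minimal sketch that reads one VM's state over XML-RPC. The state-code mapping shows only a subset of the documented values, and the endpoint, credentials, and VM ID are assumptions.

```python
import xmlrpc.client
import xml.etree.ElementTree as ET

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

# A few of OpenNebula's numeric VM states (subset, for illustration).
STATES = {1: "PENDING", 3: "ACTIVE", 4: "STOPPED", 5: "SUSPENDED", 6: "DONE"}

vm_id = 42  # hypothetical VM ID
resp = server.one.vm.info(session, vm_id)
if resp[0]:
    vm = ET.fromstring(resp[1])
    state = int(vm.findtext("STATE"))
    print(f"VM {vm_id} is {STATES.get(state, state)}")
else:
    print("Error:", resp[1])
```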
Deployment Models
OpenNebula supports a range of deployment models tailored to different scales and environments, from small private clouds to large-scale enterprise and telco infrastructures. These models emphasize openness and flexibility, allowing deployment on any datacenter with support for both virtualized and containerized workloads across physical or virtual resources.[68]
The basic deployment model consists of a single front-end node managing worker hosts, suitable for small to medium private clouds with up to 2500 servers and 10,000 virtual machines. This topology uses local or shared storage and basic networking, providing a straightforward setup for testing or initial production environments without high availability requirements.[68][53]
For larger or mission-critical setups, the advanced reference architecture employs a Cloud Management Cluster comprising multiple front-end nodes for high availability, alongside a Cloud Infrastructure layer handling hosts, storage, and networks. This model reduces downtime through redundant core services and supports horizontal scaling, making it ideal for environments requiring robust fault tolerance.[68][69]
Hybrid and edge models enable federated clusters that span on-premises datacenters, public clouds, and edge sites, facilitating unified management and workload portability. These deployments support disaster recovery via mechanisms like Ceph storage mirroring and allow seamless integration of virtual machines and Kubernetes-based containers on bare-metal or virtualized resources, avoiding vendor lock-in.[70][53]
In telco cloud scenarios, OpenNebula integrates with 5G networks for network function virtualization (NFV), supporting both virtual network functions (VNFs) and containerized network functions (CNFs) through enhanced platform awareness (EPA), GPU acceleration, and technologies like DPDK and SR-IOV. Key architectures include highly distributed NFV deployments across geo-distributed points of presence (PoPs) and 5G edge setups for open radio access networks (O-RAN) and multi-access edge computing (MEC), enabling low-latency applications such as user plane functions (UPF) and content delivery networks (CDN).[71][72]
Scalability options range from single-node testing environments to multi-site enterprises via federated zones, where a master zone coordinates slave zones for shared user management and resource access. Blueprint guidance, such as the Open Cloud Reference Architecture, provides architects with recommended configurations for these models, including automation tools like OneDeploy for streamlined installation.[53][73][74]
Front-end Management
The front-end node serves as the central server in OpenNebula, orchestrating cloud operations by running essential services such as the core daemon (oned), schedulers, and various drivers responsible for resource monitoring and decision-making.[75] The oned daemon acts as the primary engine, managing interactions with cluster nodes, virtual networks, storage, users, and groups through an XML-RPC API exposed on port 2633.[76] Drivers, including those for virtual machine management (VMM), authentication (Auth), information monitoring (IM), marketplace (Market), datastores (Datastore), virtual network management (VNM), and transfer management (TM), are executed from the /var/lib/one/remotes/ directory to handle specific operational tasks.[75]
For high availability, OpenNebula supports multi-node front-end configurations using a distributed consensus protocol like Raft integrated into the oned daemon, which tolerates at least one failure across three or more nodes.[69] This setup requires an odd number of identical servers (typically three or five) with shared filesystems, a floating IP for the leader node, and database synchronization via tools like onedb backup and onedb restore for MySQL or MariaDB backends.[69] The system ensures continuous operation by electing a new leader if the current one fails, with replicated logs maintaining consistency across the cluster.[69]
User interactions with the front-end are facilitated through multiple management interfaces, including the Sunstone web UI (powered by FireEdge on port 2616), the command-line interface (CLI) via the one command suite, and programmatic APIs such as XML-RPC and REST.[77][75] Core services extend to the scheduler, which handles virtual machine placement using policies like the Rank Scheduler to optimize allocation on available hosts, and hooks that trigger custom scripts in response to resource state changes or API calls.[78][79]
Deployment on the front-end requires a Linux-based operating system with dependencies including Ruby for the oned daemon and XML libraries for data handling, alongside optional components like MySQL or MariaDB for the database.[75] Scalability is achieved through clustering in high-availability modes, allowing the front-end to manage larger infrastructures without single points of failure.[69]
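A minimal connectivity check against the front-end is sketched below, assuming the default oned XML-RPC endpoint on port 2633 and placeholder credentials:

```python
import xmlrpc.client

# oned listens for XML-RPC requests on port 2633 of the front-end.
server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

# one.system.version returns the version string of the running oned.
resp = server.one.system.version(session)
print("oned version:" if resp[0] else "Error:", resp[1])
```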
Compute Hosts
In OpenNebula, compute hosts serve as the physical worker nodes that provide the underlying resources for virtual machine execution, managed through supported hypervisors to enable virtualization on the cloud infrastructure.[63] These hosts are typically standard servers equipped with compatible hardware, where the primary hypervisor is KVM for full virtualization, alongside LXC for lightweight container-based workloads and integration with VMware for hybrid environments.[63] To set up a host, administrators add it to the system via the front-end using the onehost create command, specifying the hostname along with the information manager (IM) and virtualization manager (VM) drivers, such as --im kvm --vm kvm for KVM-based setups.[63]
Monitoring of compute hosts is handled by dedicated agents that run periodically on each node to gather resource utilization data, including CPU (e.g., total cores, speed in MHz, free/used percentages), memory (total, used, and free in KB), and storage metrics (e.g., read/write bytes and I/O operations).[80] These agents execute probes from directories like im/kvm-probes.d/host/monitor and transmit the collected data as structured messages (e.g., MONITOR_HOST) to the OpenNebula front-end via SSH, where it is stored in a time-series database for use in scheduling decisions and resource allocation.[80] The front-end then aggregates this information, allowing administrators to view host states and metrics through commands like onehost show or the Sunstone interface.[63]
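A sketch of reading the aggregated monitoring data for one host from the front-end follows; the HOST_SHARE fields shown are those commonly exposed in the host XML, and the endpoint, credentials, and host ID are assumptions.

```python
import xmlrpc.client
import xml.etree.ElementTree as ET

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

host_id = 0  # hypothetical host ID
resp = server.one.host.info(session, host_id)
if resp[0]:
    host = ET.fromstring(resp[1])
    share = host.find("HOST_SHARE")
    print("Total CPU (100 = one core):", share.findtext("TOTAL_CPU"))
    print("Total memory (KB):", share.findtext("TOTAL_MEM"))
else:
    print("Error:", resp[1])
```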
Hosts can be organized into clusters to facilitate efficient resource management, with grouping achieved by adding hosts to a cluster using onecluster host-add or via the web interface.[63] This clustering supports load balancing by enabling the scheduler to distribute virtual machines across hosts based on availability and policies, while also enhancing fault tolerance through organized failover and high-availability configurations that maintain operations during node failures. OpenNebula accommodates heterogeneous hardware in clusters, allowing nodes with varying CPU architectures, memory capacities, or hypervisors to coexist without requiring uniform setups.
The lifecycle of compute hosts is managed entirely through the OpenNebula front-end, supporting dynamic addition and removal to scale the infrastructure as needed.[63] New hosts are enabled with onehost enable after creation, entering an active state for VM deployment, while existing ones can be temporarily disabled (onehost disable) for maintenance or fully removed (onehost delete) without disrupting the overall system, provided no running VMs are present.[63] States such as offline can also be set for long-term decommissioning, ensuring seamless integration of diverse hardware during expansions.[63]
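The host lifecycle operations described above can also be driven programmatically; below is a hedged sketch with a hypothetical hostname, using the documented status codes (0 enabled, 1 disabled, 2 offline).

```python
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

# Register a new KVM host; "node1" is hypothetical, -1 = default cluster.
resp = server.one.host.allocate(session, "node1", "kvm", "kvm", -1)
if not resp[0]:
    raise RuntimeError(resp[1])
host_id = resp[1]

# Status codes: 0 = ENABLED, 1 = DISABLED, 2 = OFFLINE.
server.one.host.status(session, host_id, 1)  # disable for maintenance
server.one.host.status(session, host_id, 0)  # re-enable for VM deployment
```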
Security for compute hosts relies on secure communication channels and hypervisor-enforced isolation to protect the cloud environment.[81] OpenNebula uses passwordless SSH for front-end to host interactions, configured by generating and distributing the oneadmin user's public key to target hosts, ensuring encrypted and authenticated connections without interactive prompts.[63] Additionally, hypervisors like KVM provide inherent VM isolation through hardware-assisted virtualization features, such as EPT for memory protection, preventing inter-VM interference on the same host.[80]
Storage
OpenNebula manages storage through datastores, which are logical storage units configured to handle different types of data for virtual machines (VMs). There are three primary datastore types: the System datastore, which stores the disks of running VMs cloned from images; the Image datastore, which holds the repository of base operating system images, persistent data volumes, and CD-ROM images; and the File datastore, which manages plain files such as kernels, ramdisks, and contextualization files for VMs.[82]
These datastores can be implemented using various backends, including NFS for shared file-based storage accessible across hosts, Ceph for distributed object storage that enables scalability and high availability, and LVM for block-based storage on SAN environments with support for thin provisioning. For instance, NFS setups typically mount shared volumes on the front-end and hosts, while Ceph utilizes RADOS Block Devices (RBD) pools for VM disks, and LVM creates logical volumes from shared LUNs to optimize I/O performance.[82][83][84][85]
Image management in OpenNebula allows administrators to upload images via the CLI using commands like oneimage create --path <file> --datastore <ID>, supporting formats such as QCOW2 for thin provisioning and RAW for direct block access. Cloning operations, performed with oneimage clone <name> <new_name> --datastore <ID>, enable duplication of images to new datastores or for creating persistent copies, while snapshotting facilitates backups and rollbacks through commands like oneimage snapshot-flatten to merge changes or snapshot-revert for restoration, though images with active snapshots cannot be cloned until flattened.[86]
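The same kind of image registration shown above with oneimage create can be performed over the XML-RPC API; a sketch with placeholder name, path, and datastore ID:

```python
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")
session = "oneadmin:password"  # placeholder credentials

# Register a QCOW2 OS image; PATH must be readable from the front-end.
# Name, path, and driver are illustrative values.
image_template = """
NAME   = "base-os"
TYPE   = "OS"
PATH   = "/var/tmp/base-os.qcow2"
DRIVER = "qcow2"
"""

datastore_id = 1  # the default image datastore
resp = server.one.image.allocate(session, image_template, datastore_id)
print("Image ID:" if resp[0] else "Error:", resp[1])
```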
Integration with shared storage is handled via specialized drivers: the Ceph driver supports distributed setups with replication for disaster recovery (DR) mirroring across pools, ensuring data redundancy; NFS drivers enable seamless access for live migration and high availability; and LVM drivers provide block-level operations with thin snapshots for efficient space usage in DR scenarios. These drivers ensure compatibility with VM attachment on hosts, allowing disks to be provisioned dynamically.[84][83][85]
Allocation in datastores supports dynamic provisioning through quotas set in the datastore template, such as SIZE in MB to limit total storage (e.g., 20480 for 20 GB) and IMAGES for the maximum number of images, applied per user or group to prevent overuse. Transfers between datastores are achieved by cloning images to a target datastore or using the Marketplace app as an intermediary for moving files across different backends, with usage tracked via attributes like DATASTORE_USED.[87][88]
Best practices recommend separating System datastores from Image and File datastores to enhance performance, as System datastores handle high-I/O runtime disks while Image datastores focus on static repositories, reducing contention during VM deployments.[89]
