OpenNebula
from Wikipedia
OpenNebula
Developers: OpenNebula Systems, OpenNebula Community
Initial release: July 24, 2008
Stable release: 7.0.1[1] / 27 October 2025
Repository
Written in: C++, Ruby, Shell script, lex, yacc, JavaScript
Operating system: Linux
Platform: Hypervisors (VMware vCenter, KVM, LXD/LXC, and AWS Firecracker)
Available in: English, Czech, French, Slovak, Spanish, Chinese, Thai, Turkish, Portuguese, Russian, Dutch, Estonian, Japanese
Type: Cloud computing
License: Apache License version 2
Website: opennebula.io

OpenNebula is an open source cloud computing platform for managing heterogeneous data center, public cloud and edge computing infrastructure resources. OpenNebula manages on-premises and remote virtual infrastructure to build private, public, or hybrid implementations of infrastructure as a service (IaaS) and multi-tenant Kubernetes deployments. The two primary uses of the OpenNebula platform are data center virtualization and cloud deployments based on the KVM hypervisor, LXC system containers, and AWS Firecracker microVMs. The platform can also provide the cloud infrastructure necessary to operate a cloud on top of existing VMware infrastructure. In early June 2020, OpenNebula announced the release of a new Enterprise Edition for corporate users, along with a Community Edition.[2] OpenNebula CE is free and open-source software, released under the Apache License version 2. OpenNebula CE comes with free access to patch releases containing critical bug fixes, but with no access to the regular EE maintenance releases. Upgrades to the latest minor/major version are only available to CE users with non-commercial deployments or with significant open source contributions to the OpenNebula Community.[3] OpenNebula EE is distributed under a closed-source license and requires a commercial subscription.[4]

History


The OpenNebula Project was started as a research venture in 2005 by Ignacio M. Llorente and Ruben S. Montero. The first public release of the software occurred in 2008. The goal of the research was to create efficient services for managing virtual machines on distributed infrastructures, with the ability to scale to high levels. Open-source development and an active community of developers have since helped mature the project. As the project matured, adoption grew, and in March 2010 the primary authors of the project founded C12G Labs, now known as OpenNebula Systems, which provides value-added professional services to enterprises adopting or utilizing OpenNebula.

Description


OpenNebula orchestrates storage, network, virtualization, monitoring, and security[5] technologies to deploy multi-tier services (e.g. compute clusters[6][7]) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies. According to the European Commission's 2010 report "... only few cloud dedicated research projects in the widest sense have been initiated – most prominent amongst them probably OpenNebula ...".[8]

The toolkit includes features for integration, management, scalability, security and accounting. It also claims standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (Amazon EC2 Query, OGF Open Cloud Computing Interface and vCloud) and hypervisors (VMware vCenter, KVM, LXD/LXC and AWS Firecracker), and can accommodate multiple hardware and software combinations in a data center.[9]

OpenNebula is sponsored by OpenNebula Systems (formerly C12G).

OpenNebula is widely used by a variety of industries, including cloud providers, telecommunication, information technology services, government, banking, gaming, media, hosting, supercomputing, research laboratories, and international research projects[citation needed].

Development


Major upgrades generally occur every 3-5 years, and each major version typically receives 3-5 minor updates. The OpenNebula project is mainly open source and is made possible by the active community of developers and translators supporting it. Since version 5.12 the upgrade scripts are under a closed-source license, which makes upgrading between versions impossible without a subscription unless the operator runs a non-profit cloud or has made a significant contribution to the project.

Release history

  • Versions TP and TP2, technology previews, offered host and VM management features based on the Xen hypervisor.
  • Version 1.0 was the first stable release, introduced KVM and EC2 drivers, enabling hybrid clouds.
  • Version 1.2 added new structure for the documentation and more hybrid functionality.
  • Version 1.4 added public cloud APIs on top of oned for building public clouds, and virtual network management.
  • Version 2.0 added a MySQL backend, LDAP authentication, and management of images and virtual networks.
  • Version 2.2 added integration guides, Ganglia monitoring and OCCI support (both converted to add-ons in later releases), Java bindings for the API, and the Sunstone GUI.
  • Version 3.0 added a migration path from previous versions, VLAN, ebtables and OVS integration for virtual networks, ACLs and accounting subsystem, VMware driver, Virtual Data Centers and federation across data centers.
  • Version 3.2 added firewalling for VMs (deprecated later on by security groups).
  • Version 3.4 introduced iSCSI datastore, cluster as a first class citizen and quotas.
  • Version 3.6 added Virtual Routers, LVM datastores and the public OpenNebula marketplace integration.
  • Version 3.8 added the OneFlow components for service management and OneGate for application insight.
  • Version 4.0 added support for Ceph and Files datastore and the onedb tool.
  • Version 4.2 added a new self service portal (Cloud View) and VMFS datastore.
  • Version 4.4, released in 2014, brought a number of innovations in Open Cloud, improved cloud bursting, and implemented the use of multiple system datastores for storage load policies.
  • Version 4.6 allowed users to run different instances of OpenNebula in geographically dispersed data centers, known as an OpenNebula Federation. A new cloud portal for cloud consumers was also introduced, and AppMarket support was added for importing OVAs.
  • Version 4.8 added support for Microsoft Azure and IBM SoftLayer. Development also continued on the platform by incorporating OneFlow support into Cloud View, meaning end users could now define virtual machine applications and services elastically.
  • Version 4.10 integrated the support portal with the Sunstone GUI. Login tokens were also developed, and support was provided for VMs and vCenter.
  • Version 4.12 added new functionality to implement security groups and improved vCenter integration. A showback model was also deployed to track and analyze cloud usage by different departments.
  • Version 4.14 introduced a newly redesigned and modularized graphical interface code, Sunstone. This was intended to improve code readability and ease the task of adding new components.
  • Version 5.0 'Wizard' introduced marketplaces as a means to share images across different OpenNebula instances, and added management of Virtual Routers with a network topology visual tool in Sunstone.
  • Version 5.2 'Excession' added an IPAM subsystem to aid in network integrations, and also added LDAP group dynamic mapping.
  • Version 5.4 'Medusa' introduced full storage and network management for vCenter, support for VM Groups to define affinity between VMs and hypervisors, and its own implementation of Raft for high availability of the controller.
  • Version 5.6 'Blue Flash' focused on scalability improvements, as well as UX improvements.
  • Version 5.8 'Edge' added support for LXD for infrastructure containers, automatic NIC selection and Distributed Datacenters (DDC), which is the ability to use bare metal providers to build remote clusters in edge and hybrid cloud environments.
  • Version 5.10 'Boomerang' added NUMA and CPU pinning, NSX integration, a revamped hook subsystem based on ZeroMQ (0MQ), DPDK support and 2FA authentication for Sunstone.
  • Version 5.12 'Firework' removed the upgrade scripts from the open-source distribution, added support for AWS Firecracker micro-VMs, a new integration with Docker Hub, Security Group integration (NSX), several improvements to Sunstone, a revamped OneFlow component, and an improved monitoring subsystem.
  • Version 6.0 'Mutara' introduced a new multi-cloud architecture based on "Edge Clusters", enhanced Docker and Kubernetes support, the new FireEdge web UI, a revamped OneFlow, and new backup capabilities.
  • Version 6.2 'Red Square' brought improvements to the LXC driver, new support for workload portability, and a beta preview of the new Sunstone GUI.
  • Version 6.4 'Archeon' added support for the automatic deployment and management of edge clusters based on Ceph using on-premises infrastructure or AWS bare-metal resources, introduced the notion of network states, and brought improvements to the new Sunstone GUI, the LXC driver and the integration with VMware vCenter, plus a new module for WHMCS (only for EE).
  • Version 6.6 'Electra' new integration of Prometheus for advanced monitoring combined with a new set of Grafana dashboards (only for EE), new native support for incremental backups based on datastore back-ends and the development of new drivers for restic (only for EE) and rsync, and several improvements for Telco Cloud environments, including enhanced management of virtual networks and VNFs.
  • Version 6.8 'Rosette' added new Virtual Data Center (VDC) and User tabs in the FireEdge Sunstone GUI (e.g. to display accounting and showback information), introduced backup jobs for creating unified backup policies across multiple VMs, and brought several improvements in the KVM driver (e.g. to fine-tune CPU flags, optimize disks, customize VM video, or boost Windows performance).
  • Version 6.10 'Bubble' featured enhanced backups (incremental backups, in-place restores, selective disk restore, custom locations), improved PCI passthrough (simplified device management, expanded GPU support), better recovery for powered-off or suspended VMs, multi-tenancy upgrades (custom quotas, restricted attributes), and support for Ubuntu 24.04 and Debian 12. Additional improvements included new components in the Community Edition (Prometheus integration and Restic backup support from the Enterprise Edition), simplified deployment (new playbooks and roles for easy OpenNebula cloud setup), and efficient VMware migration (an enhanced OneSwap tool for streamlined vCenter Server to OpenNebula migration). In addition, the FireEdge Sunstone UI was updated with advanced features and a modern tech stack.
  • Version 7.0 'Phoenix' is a major upgrade for sovereign, AI-ready, and edge cloud infrastructures. It introduces AI-powered workload automation, VMware re-virtualization with enhanced backup support, and full NVIDIA vGPU compatibility for GPU-accelerated AI. Hybrid multi-cloud provisioning is improved with expanded provider support and ARM compatibility. The Sunstone UI is revamped for better usability and real-time metrics, while Kubernetes integration and networking features have been strengthened.

Internal architecture


Basic components

OpenNebula Internal Architecture
  • Host: Physical machine running a supported hypervisor.
  • Cluster: Pool of hosts that share datastores and virtual networks.
  • Template: Virtual Machine definition.
  • Image: Virtual Machine disk image.
  • Virtual Machine: Instantiated Template. A Virtual Machine represents one life-cycle, and several Virtual Machines can be created from a single Template.
  • Virtual Network: A group of IP leases that VMs can use to automatically obtain IP addresses. Virtual networks are created by mapping onto the physical ones and are made available to the VMs through the corresponding bridges on the hosts. A virtual network is defined by three parts (a template sketch follows this list):
  1. The underlying physical network infrastructure.
  2. The logical address space available (IPv4, IPv6, or dual stack).
  3. Context attributes (e.g. netmask, DNS, gateway). OpenNebula also ships a Virtual Router appliance to provide networking services such as DHCP and DNS.
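
A minimal sketch of how these three parts are typically expressed as a virtual network template and registered through the XML-RPC interface, here using Python's standard library; the front-end address, credentials, bridge name and addresses are illustrative assumptions, not values taken from this article.

    import xmlrpc.client

    # Assumed front-end endpoint and credentials; oned listens on port 2633 by default.
    session = "oneadmin:password"
    one = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")

    # Template text combining the three parts described above: the physical
    # underlay, the logical address space, and context attributes for the VMs.
    vnet_template = """
    NAME         = "private-net"
    VN_MAD       = "bridge"        # driver mapping the virtual network onto the physical one
    BRIDGE       = "br0"           # bridge on the hosts through which VMs attach
    AR           = [ TYPE = "IP4", IP = "192.168.100.10", SIZE = "100" ]  # IPv4 address range
    NETWORK_MASK = "255.255.255.0" # context attributes passed to the VMs
    GATEWAY      = "192.168.100.1"
    DNS          = "192.168.100.1"
    """

    # one.vn.allocate(session, template, cluster_id); -1 selects the default cluster.
    resp = one.one.vn.allocate(session, vnet_template, -1)
    if resp[0]:
        print("Created virtual network with ID", resp[1])
    else:
        print("Error:", resp[1])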

Components and deployment model

OpenNebula Deployment Model

The OpenNebula Project's deployment model resembles a classic cluster architecture, which utilizes:

  • A front-end (master node)
  • Hypervisor enabled hosts (worker nodes)
  • Datastores
  • A physical network

Front-end machine


The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services. This is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine include the management daemon (oned), scheduler (sched), the web interface server (Sunstone server), and other advanced components. These services are responsible for queuing, scheduling, and submitting jobs to other machines in the cluster. The master node also provides the mechanisms to manage the entire system. This includes adding virtual machines, monitoring the status of virtual machines, hosting the repository, and transferring virtual machines when necessary. Much of this is possible due to a monitoring subsystem which gathers information such as host status, performance, and capacity use. The system is highly scalable and is only limited by the performance of the actual server.[citation needed]
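
Because every service runs on the front-end, clients interact with the cloud by calling oned's XML-RPC endpoint on that machine. The short sketch below, with an assumed hostname and password, simply checks that the daemon answers; it is illustrative rather than an official snippet.

    import xmlrpc.client

    # oned exposes its XML-RPC API on the front-end, by default on port 2633.
    one = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")  # assumed hostname
    session = "oneadmin:password"                                             # assumed credentials

    # one.system.version returns the version of the running OpenNebula daemon.
    resp = one.one.system.version(session)
    if resp[0]:
        print("oned is reachable, version:", resp[1])
    else:
        print("error:", resp[1])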

Hypervisor enabled-hosts


The worker nodes, or hypervisor enabled-hosts, provide the actual computing resources needed for processing all jobs submitted by the master node. OpenNebula hypervisor enabled-hosts use a virtualization hypervisor such as VMware, Xen, or KVM. The KVM hypervisor is natively supported and used by default. Virtualization hosts are the physical machines that run the virtual machines, and various platforms can be used with OpenNebula. A Virtualization Subsystem interacts with these hosts to take the actions needed by the master node.

Storage

OpenNebula Storage

The datastores simply hold the base images of the Virtual Machines. The datastores must be accessible to the front-end; this can be accomplished by using one of a variety of available technologies such as NAS, SAN, or direct attached storage.

OpenNebula includes three datastore classes: system datastores, image datastores, and file datastores. System datastores hold the images used for running the virtual machines. The images can be complete copies of an original image, deltas, or symbolic links depending on the storage technology used. The image datastores are used to store the disk image repository. Images from the image datastores are moved to or from the system datastore when virtual machines are deployed or manipulated. The file datastore is used for regular files and is often used for kernels, ram disks, or context files.
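
As a hedged illustration of how a datastore is defined in practice, the sketch below registers an image datastore backed by shared storage through the XML-RPC API; the driver names (fs/shared) are typical values, and the endpoint and credentials are assumptions.

    import xmlrpc.client

    one = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")  # assumed front-end
    session = "oneadmin:password"                                             # assumed credentials

    # An image datastore on shared storage (e.g. an NFS mount visible to front-end and hosts).
    ds_template = """
    NAME   = "nfs-images"
    TYPE   = "IMAGE_DS"
    DS_MAD = "fs"       # datastore driver
    TM_MAD = "shared"   # transfer driver for shared filesystems
    """

    # one.datastore.allocate(session, template, cluster_id); -1 selects the default cluster.
    resp = one.one.datastore.allocate(session, ds_template, -1)
    print("Datastore ID:" if resp[0] else "Error:", resp[1])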

Physical networks


Physical networks are required to support the interconnection of storage servers and virtual machines in remote locations. It is also essential that the front-end machine can connect to all the worker nodes or hosts. At least two physical networks are required, as OpenNebula needs a service network and an instance network. The instance network allows the virtual machines to connect across different hosts. The network subsystem of OpenNebula is easily customizable to allow easy adaptation to existing data centers.

from Grokipedia
OpenNebula is an open-source cloud management platform that provides unified management for heterogeneous infrastructures, including virtualized data centers, public clouds, and edge environments, enabling the orchestration of compute, storage, and networking resources. It is available in a free Community Edition and a supported Enterprise Edition for mission-critical deployments. The project originated in 2005 as an internal research initiative at the Complutense University of Madrid's Distributed Systems Architecture Research Group and evolved into an open-source effort with the launch of OpenNebula.org in 2007, culminating in its first software release in 2008. In 2010, OpenNebula Systems (formerly C12G Labs) was founded by the project's creators in Madrid, Spain, to support its commercial development and enterprise adoption, marking over 15 years of continuous innovation by 2025. Key features of OpenNebula include support for virtual machines via hypervisors like KVM, container orchestration with tools such as Docker and Kubernetes, and serverless computing, all managed through a single control panel that facilitates private, hybrid, and multi-cloud deployments. It emphasizes vendor neutrality, multi-tenancy, federation across sites, and automatic provisioning, allowing users to integrate with existing infrastructures without lock-in. The platform's open cloud architecture unifies public cloud simplicity with private cloud security and control, supporting scalability for hundreds of thousands of cores. OpenNebula has been adopted by over 5,000 organizations worldwide, including enterprises like EveryMatrix and academic institutions, for building sovereign and efficient cloud solutions. As the only major open-source cloud management platform developed in Europe, it plays a pivotal role in initiatives like Gaia-X, the European Open Science Cloud (EOSC), and the Important Project of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), where it contributes to digital sovereignty. The project is actively maintained on GitHub with contributions from 171 developers and is backed by European research efforts such as ONEnextgen and SovereignEdge.Cognit.

Introduction

Overview

OpenNebula is an open-source cloud management platform designed to orchestrate and manage heterogeneous resources across data centers, public clouds, and edge infrastructures. It enables enterprises to deploy and operate private, hybrid, and edge clouds, primarily supporting infrastructure as a service (IaaS) models as well as multi-tenant environments for containerized workloads. The platform emphasizes simplicity in deployment and operation, scalability to handle large-scale infrastructures, and vendor independence through its open architecture, allowing users to avoid lock-in to specific providers. It unifies the agility of public cloud services with the security and control of private clouds, facilitating seamless integration of diverse environments. OpenNebula supports a range of virtualization technologies, including KVM, LXC containers, AWS Firecracker for lightweight virtual machines, and VMware for hybrid setups. Key benefits include robust multi-tenancy for isolated user environments, automatic provisioning of resources, elasticity to scale workloads dynamically, and on-demand management of compute, storage, and networking assets. Originally developed as a research project, OpenNebula has evolved into a mature, production-ready platform widely adopted in enterprise settings.

Editions

OpenNebula is available in two primary editions: the Community Edition and the Enterprise Edition, each tailored to different user needs in cloud management. The Community Edition provides a free, open-source distribution under the Apache License 2.0, offering full core functionality for users managing their own deployments without commercial support. It includes binary and source packages, with updates released every six months and patch versions addressing critical bug fixes, making it ideal for non-profit organizations, educational institutions, or testing environments. Community support is available through public forums, but users must handle self-management and upgrades independently.

In contrast, the Enterprise Edition is a commercial offering designed for production environments, incorporating the open-source core under the Apache License 2.0 while requiring a subscription for binary packages under commercial terms. It delivers a hardened, tested version with additional bug fixes, minor enhancements, long-term support releases, and enterprise-specific integrations not available in the Community Edition. Key differences include value-added services such as SLA-based support, deployment assistance, consulting, technical account management, and priority access to maintenance updates and upgrades. The Enterprise Edition also provides enhanced security features and warranty assurances, ensuring reliability for large-scale operations. Both editions can be downloaded from the official OpenNebula website, though the Enterprise Edition requires an active subscription (priced on a per-cloud basis with host-based licensing) for full access to upgrades, tools, and support. From version 5.12 onward, major upgrades are restricted to Enterprise subscribers or qualified non-commercial users and significant community contributors, emphasizing the platform's open-core strategy.

History

Origins

OpenNebula originated as a research project in 2005 at the Distributed Systems Architecture (DSA) Research Group of the Universidad Complutense de Madrid in Spain, led by Ignacio M. Llorente and Rubén S. Montero. The initiative stemmed from efforts to address challenges in virtual infrastructure management, with an initial emphasis on developing efficient and scalable services for deploying and managing virtual machines across large-scale distributed systems. This work built on prior research in grid computing and virtualization, aiming to create decentralized tools that could handle dynamic resource allocation without relying on proprietary solutions.

The project's transition to an open-source model occurred with its first public technology preview release in March 2008, under the Apache License 2.0, motivated by the burgeoning cloud computing paradigm's demand for flexible, vendor-agnostic platforms that avoided lock-in and supported heterogeneous environments. This shift enabled broader collaboration among researchers and early adopters, fostering innovation in infrastructure-as-a-service (IaaS) technologies while aligning with European research initiatives such as the RESERVOIR project. By prioritizing interoperability and extensibility, the open-source approach positioned OpenNebula as a foundational tool for academic and experimental deployments in the late 2000s.

To sustain development and provide enterprise-grade support, the original developers founded C12G Labs in March 2010 in Madrid, Spain, focusing on commercial services such as consulting, training, and customized integrations for OpenNebula users. The company was renamed OpenNebula Systems in September 2014 to better reflect its core technology and later expanded operations internationally, enhancing its global reach. This corporate backing marked the evolution from pure research to a supported ecosystem, while the project continued to grow through community contributions.

Key Milestones

OpenNebula's first stable release (version 1.0) in July 2008 marked a pivotal shift from its origins in academic research to a collaborative open-source project, enabling broader adoption of cloud technologies. The project reached its 10th anniversary in November 2017, underscoring a decade of continuous innovation, community contributions, and the establishment of thousands of cloud infrastructures worldwide. By 2025, OpenNebula had achieved significant adoption, powering clouds for more than 5,000 organizations globally across diverse sectors. Organizational growth accelerated through strategic affiliations, with OpenNebula Systems becoming a day-1 member of Gaia-X, a participant in the European Open Science Cloud (EOSC) and the Important Project of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), and a corporate member of the Linux Foundation, LF Edge, and the Cloud Native Computing Foundation (CNCF); the company also chairs the European Alliance for Industrial Data, Edge and Cloud. In recent years, OpenNebula has emphasized advancements in AI-ready hybrid clouds and re-virtualization strategies, prominently featured at events such as Data Center World 2025, where it demonstrated re-virtualization solutions for modern infrastructure. Spanning more than a decade of dedicated research and development into the 2020s, OpenNebula has prioritized sovereign cloud solutions to enhance digital autonomy, including federated AI factories and related reference architectures for European cloud infrastructure.

Development

Release History

OpenNebula's initial development featured two technology previews in 2008. The first Technology Preview (TP) was released on March 26, 2008, providing basic host and VM management capabilities based on the Xen hypervisor. This was followed by TP2 on June 17, 2008, which expanded on these features for virtual infrastructure management. The project's first stable release, version 1.0, arrived on July 24, 2008, introducing core functionalities for virtual machine management and dynamic resource allocation. Early major versions followed a rapid cycle, with upgrades approximately every 1-2 years and 3-5 minor updates per major release to incorporate community feedback and stability improvements. For instance, version 2.0 launched in October 2010 as a significant upgrade, followed by 3.0 in October 2011. This pattern continued through the 4.x and 5.x series, culminating in version 5.12 on July 21, 2020, the first long-term support (LTS) release, which received extended maintenance until its end-of-life on February 10, 2023.

In recent years, the release cadence has shifted toward quarterly updates, with major versions emerging every 3-5 years to align with enterprise needs. The latest major release, 7.0 "Phoenix", was issued on July 3, 2025, bringing advancements in AI workload support, edge computing, and hybrid cloud orchestration. A subsequent patch, 7.0.1, followed on October 27, 2025, enhancing enterprise-grade features and AI cloud integrations. Changes to the release process began with version 5.12, where upgrade scripts for the Enterprise Edition became partially closed-source, accessible only via paid subscriptions to ensure professional support and security; the Community Edition, however, remains fully open source. The post-7.0 roadmap prioritizes hybrid cluster management and integrations that enable efficient telco edge deployments.
Version       | Release Date     | Key Notes                        | Type
TP1           | March 26, 2008   | Xen-based host/VM management     | Development
TP2           | June 17, 2008    | Expanded virtual infrastructure  | Development
1.0           | July 24, 2008    | First stable, basic features     | Stable
5.12          | July 21, 2020    | First LTS, supported to 2023     | LTS
7.0 "Phoenix" | July 3, 2025     | AI, edge, hybrid enhancements    | Major
7.0.1         | October 27, 2025 | Enterprise and AI updates        | Patch

Community and Ecosystem

OpenNebula's community is driven by a collaborative model involving developers, translators, and users who contribute through the project's GitHub repository, where code enhancements, documentation improvements, and localization efforts are submitted under the Apache License 2.0. Users also participate by reporting bugs, requesting features, and providing feedback via GitHub issues, while translators support multilingual portal content to broaden accessibility. The contributor base encompasses academics, enterprises, and non-profits, fostering over a decade of collaborative innovation since the project's inception. Notable contributors include organizations such as Telefónica I+D, universities, and research centers including SARA Supercomputing Center and INRIA, alongside individual champions from industry and academia. These participants, recognized through the Champion Program, enhance the ecosystem by developing tools and promoting adoption globally.

The ecosystem features partnerships that integrate OpenNebula with broader open-source initiatives, including corporate membership in the Linux Foundation, the Cloud Native Computing Foundation (CNCF), and LF Edge, as well as Day-1 membership in Gaia-X and participation in the European Open Science Cloud (EOSC). OpenNebula Systems has also joined edge initiatives such as Project Sylva to advance edge and telco cloud technologies, supporting standards for federated, secure infrastructures in Europe. The Connect Partner Program further enables technology and business collaborations, with third-party providers offering compatible hardware, software, and services to extend OpenNebula's capabilities.

Support channels include the official forum for discussions and troubleshooting, comprehensive documentation at docs.opennebula.io, and events such as the annual OpenNebulaConf conference since 2013, along with regular webinars and TechDays on topics like storage and integrations. Non-profits and community users access upgrade tools through the free Community Edition, supplemented by volunteer-driven advice and the Champion Program for enhanced guidance.

Adoption spans diverse environments, from research labs like RISE in Sweden for AI and edge computing, to telco clouds at Ooma, with deployments supporting hundreds of virtual machines in enterprises such as CEWE and clusters in government settings like the Flemish Department of Environment and Spatial Planning. This widespread use has spurred community extensions, including the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified Kubernetes distribution for seamless cluster deployment on OpenNebula. Community contributions continue to influence release cycles, enabling iterative improvements based on user input.

Features

Core Capabilities

OpenNebula provides robust resource orchestration capabilities, enabling the management of virtual machines, containers, and physical resources through automatic provisioning and elasticity mechanisms. This includes centralized scheduling for deploying and scaling workloads across clusters, with features such as affinity rules to optimize performance and resource utilization. These functionalities ensure efficient handling of heterogeneous environments, supporting over 2,500 nodes in a single instance for enterprise-scale operations.

Multi-tenancy in OpenNebula is achieved through secure isolation of resources and data between users and groups, incorporating role-based permissions to manage access effectively. Administrators can define fine-grained access control lists (ACLs), quotas, and virtual data centers (VDCs) to enforce isolation and compliance for multiple teams or tenants (a quota template sketch is shown at the end of this section). This setup allows for delegated administration, where specific users or groups are granted controlled access to subsets of infrastructure without compromising overall security.

The platform supports hybrid cloud environments by facilitating seamless integration between on-premises infrastructure and public clouds, promoting workload portability and vendor independence. Users can provision and manage resources across federated zones, enabling burst capacity to public providers while maintaining unified control over hybrid setups. This approach simplifies migration and scaling of applications between local and remote resources. As of OpenNebula 7.0.1 (October 2025), hybrid cloud provisioning has been further simplified.

For edge computing, OpenNebula enables the deployment of lightweight clusters in distributed environments, optimized for low-latency operations at edge nodes. It provides a unified framework for orchestrating multi-cluster setups, allowing efficient management of resources closer to end-users to reduce latency and enhance performance in telco and IoT scenarios. This capability supports scalable, vendor-agnostic edge infrastructure without requiring complex custom configurations.

OpenNebula is designed with AI readiness in mind, offering built-in support for GPU acceleration to handle scalable AI workloads in hybrid configurations. Features like GPU passthrough and virtual GPU partitioning allow direct or shared access to accelerators for tasks such as model training and inference across on-premises and cloud resources. OpenNebula 7.0.1 enhances GPU acceleration for improved performance in AI factories and edge AI deployments, enabling cost-effective scaling.

Monitoring and automation tools are integrated natively into OpenNebula, providing capabilities for resource scaling, health checks, and policy-driven operations. Built-in telemetry tracks system metrics, enabling proactive adjustments through event-driven hooks and distributed resource scheduling. These features automate capacity management and ensure reliability, with support for overcommitment and predictive optimization to maintain operational efficiency.
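
As a rough illustration of how the per-tenant limits mentioned above are usually expressed, the snippet below builds a quota template of the kind applied to a user or group (for example through the oneuser quota or onegroup quota CLI workflow); the numbers are arbitrary assumptions.

    # A quota template of the kind OpenNebula accepts for users or groups (values are illustrative).
    quota_template = """
    VM = [
      VMS    = "20",       # at most 20 virtual machines for this tenant
      CPU    = "40",       # at most 40 virtual CPUs in total
      MEMORY = "65536"     # at most 64 GB of RAM, expressed in MB
    ]
    DATASTORE = [
      ID     = "1",        # datastore the limit applies to
      SIZE   = "204800",   # at most 200 GB of images, expressed in MB
      IMAGES = "50"
    ]
    """

    # The template would normally be fed to the CLI or API; here we only print it.
    print(quota_template)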

Integrations and Extensibility

OpenNebula provides open APIs, including the primary XML-RPC interface, which enables programmatic control over core resources such as virtual machines, virtual networks, images, users, and hosts. This allows developers to integrate OpenNebula with external applications for automation tasks. Additionally, the OpenNebula Cloud API (OCA) offers simplified wrappers around the XML-RPC methods in multiple programming languages, including Ruby, Java, Python, and Go, facilitating easier integration while supporting data exchange in client implementations (a short binding sketch appears at the end of this subsection). For web-based management, OpenNebula includes Sunstone, a web GUI that provides an intuitive interface for administrators and users to monitor and configure cloud resources without direct API calls.

In terms of hypervisor compatibility, OpenNebula offers full support for KVM as its primary virtualization technology, enabling efficient management of virtual machines on Linux-based hosts. It also provides integration with system containers through LXD for lightweight system-level virtualization, with VMware vCenter for leveraging existing enterprise VMware environments, and with AWS Firecracker for microVM-based deployments in serverless, edge, and enterprise operations. These hypervisors allow OpenNebula to operate across diverse setups, from bare-metal servers to virtualized clusters.

For cloud federation and hybrid cloud capabilities, OpenNebula includes drivers that enable seamless integration with public cloud providers, supporting hybrid bursting where private resources extend to public infrastructure during peak loads. Specific drivers facilitate connections to Amazon Web Services (AWS) for EC2 instance provisioning and to Microsoft Azure for virtual machines and storage, allowing users to define policies for automatic resource scaling across environments. This federation model uses a master-slave zone architecture to synchronize data centers, ensuring consistent management of users, groups, and virtual data centers (VDCs) across boundaries.

OpenNebula enhances container orchestration through native integration with Kubernetes via the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified distribution based on RKE2 that deploys and manages clusters directly within OpenNebula environments. OneKE supports hybrid deployments, allowing clusters to span on-premises and edge resources while providing built-in monitoring and scaling. Furthermore, OpenNebula accommodates Docker containers for application packaging and Helm charts for simplified deployment of complex applications, enabling users to orchestrate containerized workloads alongside traditional VMs.

The platform's extensibility is achieved through a modular plugin architecture that allows administrators to develop and integrate custom drivers for storage, networking, monitoring, and authentication systems. This driver-based design supports the addition of third-party components without modifying the core codebase, promoting adaptability to specific needs. OpenNebula also features a marketplace system for distributing applications and blueprints, including public repositories like the official OpenNebula Marketplace with over 48 pre-configured appliances and private marketplaces for custom sharing via HTTP or S3 backends. These elements enable rapid deployment of reusable cloud services and foster community-driven extensions.

Regarding standards compliance, OpenNebula aligns with the Open Cloud Computing Interface (OCCI) through dedicated ecosystem projects that implement OCCI 1.1 for interoperable resource management across IaaS providers. This support enables standardized API calls for compute, storage, and network operations, enhancing portability in multi-cloud setups. Similarly, while not natively embedded, OpenNebula integrates with the Topology and Orchestration Specification for Cloud Applications (TOSCA) via model-driven tools in collaborative projects, allowing description and deployment of portable cloud applications that map to OpenNebula's infrastructure. These alignments ensure compatibility with broader cloud ecosystems and reduce vendor lock-in. OpenNebula 7.0.1 adds simplified integration for enhanced federated authentication compliance.
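
A small sketch of what the Python wrapper typically looks like, assuming the community pyone package provides the OCA binding; the endpoint, credentials, and object IDs are illustrative, and the attribute names may differ between versions.

    import pyone  # assumed Python OCA binding

    # Connect to the front-end's XML-RPC endpoint with an assumed user:password session.
    one = pyone.OneServer("http://frontend.example.com:2633/RPC2",
                          session="oneadmin:password")

    # List registered hosts with a couple of attributes gathered by monitoring.
    hostpool = one.hostpool.info()
    for host in hostpool.HOST:
        print(host.ID, host.NAME, host.STATE)

    # The same wrapper exposes the rest of the API, e.g. fetching one VM's details.
    vm = one.vm.info(0)   # VM with ID 0, assuming it exists
    print(vm.NAME, vm.LCM_STATE)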

Architecture

Core Components

Hosts in OpenNebula represent the physical machines that provide the underlying compute resources for virtualization, running hypervisors such as KVM or LXC to host virtual machines (VMs). These hosts are registered in the system using their hostname and associated drivers for monitoring and VM execution, allowing OpenNebula to monitor their CPU, memory, and storage capacity while enabling the scheduler to deploy VMs accordingly.

Clusters serve as logical groupings of multiple hosts within OpenNebula, facilitating resource pooling by sharing common datastores and virtual networks across the group. This organization supports efficient load balancing, high availability, and simplified management of distributed resources without altering the individual host configurations.

Virtual Machine Templates define reusable configurations for instantiating VMs, specifying attributes such as CPU count, memory allocation, disk attachments, and network interfaces. These templates enable administrators to standardize VM deployments, allowing multiple instances to be created from a single definition while permitting user-specific customizations like varying memory sizes within predefined limits (a template sketch follows at the end of this subsection).

Images in OpenNebula encapsulate the storage elements for VMs, functioning as disk files or block devices that hold operating systems, persistent data, or configuration files, categorized into system images for bootable OS disks, regular images for persistent storage, and file types for contextual elements like scripts. They are managed within datastores or marketplaces, supporting persistency options where changes are either retained for exclusive VM use or discarded to allow multi-VM sharing, and progress through states like ready, used, or locked during operations.

Virtual Machines (VMs) are the instantiable compute entities in OpenNebula, created from templates and managed throughout their lifecycle, which includes states such as pending (awaiting deployment), running (actively executing), stopped (powered off with state preserved), suspended (paused with files on the host), and done (completed and archived). This state machine governs operations like migration, snapshotting, and resizing, ensuring controlled transitions and recovery from failures.

Virtual Networks provide the logical connectivity framework for VMs in OpenNebula, assigning IP leases and linking virtual network interfaces to physical host devices via modes like bridging or VXLAN for isolation. They integrate with security groups to enable secure, scalable networking across clusters, supporting both public and private connectivity without direct exposure to underlying hardware details.
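
To make the template-to-VM relationship concrete, the sketch below registers a minimal template and instantiates two independent VMs from it over XML-RPC; the endpoint, credentials, image and network names are assumptions rather than values from this article.

    import xmlrpc.client

    one = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")  # assumed front-end
    session = "oneadmin:password"                                             # assumed credentials

    # A minimal VM Template: capacity plus references to an existing image and virtual network.
    vm_template = """
    NAME   = "web-server"
    CPU    = "1"
    VCPU   = "2"
    MEMORY = "2048"
    DISK   = [ IMAGE = "ubuntu-base" ]      # assumed image name
    NIC    = [ NETWORK = "private-net" ]    # assumed virtual network name
    """

    # Register the template, then create two VMs (two separate life-cycles) from it.
    resp = one.one.template.allocate(session, vm_template)
    if resp[0]:
        template_id = resp[1]
        for name in ("web-1", "web-2"):
            # one.template.instantiate(session, template_id, vm_name, hold, extra_template, persistent)
            one.one.template.instantiate(session, template_id, name, False, "", False)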

Deployment Models

OpenNebula supports a range of deployment models tailored to different scales and environments, from small private clouds to large-scale enterprise and telco infrastructures. These models emphasize openness and flexibility, allowing deployment on any datacenter with support for both virtualized and containerized workloads across physical or virtual resources.

The basic deployment model consists of a single front-end node managing worker hosts, suitable for small to medium private clouds with up to 2,500 servers and 10,000 virtual machines. This uses local or shared storage and basic networking, providing a straightforward setup for testing or initial production environments without high-availability requirements. For larger or mission-critical setups, the advanced reference architecture employs a Management Cluster comprising multiple front-end nodes for high availability, alongside a Cloud Infrastructure layer handling hosts, storage, and networks. This model reduces downtime through redundant core services and supports horizontal scaling, making it ideal for environments requiring robust fault tolerance.

Hybrid and edge models enable federated clusters that span on-premises datacenters, public clouds, and edge sites, facilitating unified management and workload portability. These deployments support disaster recovery via mechanisms like Ceph storage mirroring and allow seamless integration of virtual machines and Kubernetes-based containers on bare-metal or virtualized resources, avoiding vendor lock-in.

In telco cloud scenarios, OpenNebula integrates with 5G networks for network functions virtualization (NFV), supporting both virtual network functions (VNFs) and containerized network functions (CNFs) through enhanced platform awareness (EPA), GPU acceleration, and technologies like DPDK and SR-IOV. Key architectures include highly distributed NFV deployments across geo-distributed points of presence (PoPs) and edge setups for open radio access networks (O-RAN) and multi-access edge computing (MEC), enabling low-latency applications such as user plane functions (UPF) and content delivery networks (CDN).

Scalability options range from single-node testing environments to multi-site enterprises via federated zones, where a master zone coordinates slave zones for shared user management and resource access. Blueprint guidance, such as the Open Cloud Reference Architecture, provides architects with recommended configurations for these models, including automation tools like OneDeploy for streamlined installation.

Front-end Management

The front-end node serves as the central server in OpenNebula, orchestrating cloud operations by running core services such as the OpenNebula daemon (oned), schedulers, and various drivers responsible for resource monitoring and decision-making. The oned daemon acts as the primary engine, managing interactions with cluster nodes, virtual networks, storage, users, and groups through an XML-RPC API exposed on port 2633. Drivers, including those for virtual machine management (VMM), authentication (Auth), information and monitoring (IM), marketplaces (Market), datastores (Datastore), virtual network management (VNM), and transfer management (TM), are executed from the /var/lib/one/remotes/ directory to handle specific operational tasks.

For high availability, OpenNebula supports multi-node front-end configurations using a distributed consensus protocol like Raft integrated into the oned daemon, which tolerates at least one failure across three or more nodes. This setup requires an odd number of identical servers (typically three or five) with shared filesystems, a floating IP for the leader node, and database synchronization via tools like onedb backup and onedb restore for the supported database backends. The system ensures continuous operation by electing a new leader if the current one fails, with replicated logs maintaining consistency across the cluster.

User interactions with the front-end are facilitated through multiple management interfaces, including the Sunstone web UI (powered by FireEdge on port 2616), the command-line interface (CLI) via the one* command suite, and programmatic APIs such as XML-RPC and the OCA bindings. Core services extend to the scheduler, which handles VM placement using policies like the Rank Scheduler to optimize allocation on available hosts, and hooks that trigger custom scripts in response to resource state changes or API calls.

Deployment on the front-end requires a Linux-based operating system, with dependencies such as XML libraries for data handling, alongside optional components such as MySQL or MariaDB for the database backend. Scalability is achieved through clustering in high-availability modes, allowing the front-end to manage larger infrastructures without single points of failure.

Compute Hosts

In OpenNebula, compute hosts serve as the physical worker nodes that provide the underlying resources for virtual machine execution, managed through supported hypervisors to enable virtualization across the cloud infrastructure. These hosts are typically standard servers equipped with compatible hardware, where the primary hypervisor is KVM for full virtualization, alongside LXC for lightweight container-based workloads and integration with VMware vCenter for hybrid environments. To set up a host, administrators add it to the system via the front-end using the onehost create command, specifying the hostname along with the information manager (IM) and virtualization manager (VM) drivers, such as --im kvm --vm kvm for KVM-based setups (an API sketch equivalent to this workflow appears at the end of this subsection).

Monitoring of compute hosts is handled by dedicated agents that run periodically on each node to gather utilization metrics, including CPU (e.g., total cores, speed in MHz, free/used percentages), memory (total, used, and free in KB), and storage metrics (e.g., read/write bytes and I/O operations). These agents execute probes from directories like im/kvm-probes.d/host/monitor and transmit the collected data as structured messages (e.g., MONITOR_HOST) to the OpenNebula front-end via SSH, where it is stored in a time-series database for use in scheduling decisions. The front-end then aggregates this information, allowing administrators to view host states and metrics through commands like onehost show or the Sunstone web interface.

Hosts can be organized into clusters to facilitate efficient resource pooling, with grouping achieved by adding hosts to a cluster using onecluster host-add or via the web interface. This clustering supports load balancing by enabling the scheduler to distribute virtual machines across hosts based on availability and policies, while high-availability configurations help maintain operations during node failures. OpenNebula accommodates heterogeneous hardware in clusters, allowing nodes with varying CPU architectures, capacities, or hypervisors to coexist without requiring uniform setups.

The lifecycle of compute hosts is managed entirely through the OpenNebula front-end, supporting dynamic addition and removal to scale the infrastructure as needed. New hosts are enabled with onehost enable after creation, entering an active state for VM deployment, while existing ones can be temporarily disabled (onehost disable) for maintenance or fully removed (onehost delete) without disrupting the overall system, provided no running VMs are present. States such as offline can also be set for long-term decommissioning, ensuring seamless integration of diverse hardware during expansions.

Security for compute hosts relies on secure communication channels and hypervisor-enforced isolation to protect the environment. OpenNebula uses passwordless SSH for front-end-to-host interactions, configured by generating and distributing the oneadmin user's public key to target hosts, ensuring encrypted and authenticated connections without interactive prompts. Additionally, hypervisors like KVM provide inherent VM isolation through hardware-assisted virtualization features, such as Extended Page Tables (EPT) for memory virtualization, preventing inter-VM interference on the same host.
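
The host lifecycle described above maps onto a handful of API calls; the sketch below is roughly equivalent to onehost create followed by onehost show, with an assumed hostname, endpoint and credentials.

    import xmlrpc.client

    one = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")  # assumed front-end
    session = "oneadmin:password"                                             # assumed credentials

    # Register a new KVM host (comparable to: onehost create node1 --im kvm --vm kvm).
    # one.host.allocate(session, hostname, im_mad, vm_mad, cluster_id); -1 = default cluster.
    resp = one.one.host.allocate(session, "node1.example.com", "kvm", "kvm", -1)
    if resp[0]:
        host_id = resp[1]
        # Read the host entry back; the body is XML that includes the host state and the
        # HOST_SHARE metrics (total/used CPU and memory) collected by the monitoring probes.
        info = one.one.host.info(session, host_id)
        print(info[1] if info[0] else "error: " + str(info[1]))
    else:
        print("error:", resp[1])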

Storage

OpenNebula manages storage through datastores, which are logical storage units configured to handle different types of data for virtual machines (VMs). There are three primary datastore types: the System datastore, which stores the disks of running VMs cloned from images; the Image datastore, which holds the repository of base operating system images, persistent data volumes, and CD-ROM images; and the File datastore, which manages plain files such as kernels, ramdisks, and contextualization files for VMs. These datastores can be implemented using various backends, including NFS for shared file-based storage accessible across hosts, Ceph for distributed object storage that enables scalability and redundancy, and LVM for block-based storage in SAN environments. For instance, NFS setups typically mount shared volumes on the front-end and hosts, while Ceph utilizes RADOS Block Device (RBD) pools for VM disks, and LVM creates logical volumes from shared LUNs to optimize I/O performance.

Image management in OpenNebula allows administrators to register images via the CLI using commands like oneimage create --path <file> --datastore <ID>, supporting formats such as QCOW2 for copy-on-write images and RAW for direct block access (a registration sketch appears at the end of this subsection). Cloning operations, performed with oneimage clone <name> <new_name> --datastore <ID>, enable duplication of images to new datastores or for creating persistent copies, while snapshotting facilitates backups and rollbacks through commands like oneimage snapshot-flatten to merge changes or snapshot-revert for restoration, though images with active snapshots cannot be cloned until flattened.

Integration with shared storage is handled via specialized drivers: the Ceph driver supports distributed setups with replication and disaster recovery (DR) mirroring across pools; NFS drivers enable seamless shared access across hosts; and LVM drivers provide block-level operations with thin snapshots for efficient space usage in DR scenarios. These drivers ensure compatibility with VM attachment on hosts, allowing disks to be provisioned dynamically.

Allocation in datastores supports dynamic provisioning through quotas set in the datastore template, such as SIZE in MB to limit total storage (e.g., 20480 for 20 GB) and IMAGES for the maximum number of images, applied per user or group to prevent overuse. Transfers between datastores are achieved by cloning images to a target datastore or using a marketplace app as an intermediary for moving files across different backends, with usage tracked via attributes like DATASTORE_USED. Best practices recommend separating System datastores from Image and File datastores to enhance performance, as System datastores handle high-I/O runtime disks while Image datastores focus on static repositories, reducing contention during VM deployments.
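
A hedged sketch of the registration workflow mentioned above, performing the equivalent of oneimage create through the XML-RPC API; the image path, names, datastore ID and credentials are assumptions.

    import xmlrpc.client

    one = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")  # assumed front-end
    session = "oneadmin:password"                                             # assumed credentials

    # Register a QCOW2 base image in an image datastore.
    image_template = """
    NAME        = "ubuntu-base"
    PATH        = "/var/tmp/ubuntu.qcow2"   # assumed file path on the front-end
    TYPE        = "OS"
    FORMAT      = "qcow2"
    DESCRIPTION = "Base OS image"
    """

    datastore_id = 1   # default image datastore in a stock installation (assumption)
    resp = one.one.image.allocate(session, image_template, datastore_id)
    print("Image ID:" if resp[0] else "Error:", resp[1])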

Networking

OpenNebula's networking subsystem enables the configuration of virtual and physical networks to facilitate VM connectivity while ensuring service isolation. Virtual networks serve as logical overlays that abstract the underlying infrastructure, allowing administrators to define isolated environments for virtual machines (VMs). These networks provide dynamic IP and MAC address leases through address ranges (ARs), where administrators specify IPv4 or IPv6 pools, and OpenNebula automatically generates MAC addresses for Ethernet-based ranges. For example, an AR might allocate addresses from 10.0.0.150 to 10.0.0.200 with a size of 51, ensuring efficient resource utilization without manual assignment.

Isolation in virtual networks is achieved through technologies such as VLANs via the 802.1Q driver or VXLAN overlays, which encapsulate traffic to segment VMs across physical hosts (see the sketch at the end of this subsection). In 802.1Q mode, OpenNebula assigns VLAN IDs from a configured pool (e.g., starting from ID 2, excluding reserved ranges like 0, 1, 4095), tagging ports on bridges like virbr0 for secure separation. VXLAN extends this by creating overlay networks with virtual network identifiers (VNIs) from a similar pool, supporting live updates to attributes like MTU and enabling scalable isolation in large deployments. These mechanisms ensure VM traffic remains contained, preventing unauthorized inter-VM communication while allowing contextualization features like DNS server assignment (e.g., 10.0.0.23) for seamless integration.

Physical networks form the foundational layer in OpenNebula, comprising the hardware-level connections on compute hosts that underpin virtual overlays. These are typically divided into segments for external access, bridged directly to internet-facing NICs, and internal segments for private VM-to-VM interactions, configured via parameters in the oned.conf file such as NETWORK_SIZE (default 254 for sizing) and MAC_PREFIX (e.g., 02:00 for generated addresses). External networks often use the host's primary physical interface (PHYDEV) for outbound traffic, while internal ones rely on isolated bridges to maintain separation, with OpenNebula monitoring and mapping virtual demands to these physical resources.

OpenNebula supports multiple networking models to accommodate diverse environments, including flat (direct attachment without encapsulation), bridged (using Linux bridges for VM traffic passthrough), and software-defined networking (SDN) via Open vSwitch for advanced isolation. In bridged mode, VM interfaces connect transparently to the host's bridge (e.g., onebr0), enabling straightforward external connectivity without additional overhead. Open vSwitch integration provides VLAN tagging on ports and basic filtering rules, ideal for multi-tenant setups. Network security is enforced through security groups, which apply firewall rules (e.g., iptables-based) to network interfaces, and access control lists (ACLs) that restrict operations like NIC attachment based on user permissions. For instance, a default security group is applied to networks upon creation, filtering traffic at the VM level to prevent spoofing of IP/MAC addresses.

Integration with physical NICs occurs through drivers specified in virtual network templates, where attributes like PHYDEV designate the uplink interface (e.g., eth0) for bridging or VLAN tagging. OpenNebula also supports hybrid cloud extensions, allowing private virtual networks to interconnect with providers like AWS or Azure, enabling seamless bursting of workloads across on-premises and remote infrastructures.

Advanced features include virtual routers, deployed as appliances to handle routing between virtual networks and provide load balancing for services; for example, a service virtual router can manage IP forwarding and NAT across multiple VNETs, distributing traffic to backend VMs. Quality of service (QoS) parameters, such as OUTBOUND_AVG_BW (e.g., 1000 Kbps), further optimize bandwidth allocation on these connections.
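
As a sketch of how a VLAN-backed, isolated network with a basic security group might be defined through the API, following the 802.1Q model described above; the interface name, VLAN ID, addresses and credentials are assumptions.

    import xmlrpc.client

    one = xmlrpc.client.ServerProxy("http://frontend.example.com:2633/RPC2")  # assumed front-end
    session = "oneadmin:password"                                             # assumed credentials

    # A security group allowing inbound SSH and HTTP only.
    sg_template = """
    NAME = "web-sg"
    RULE = [ PROTOCOL = "TCP", RULE_TYPE = "inbound", RANGE = "22" ]
    RULE = [ PROTOCOL = "TCP", RULE_TYPE = "inbound", RANGE = "80" ]
    """
    sg = one.one.secgroup.allocate(session, sg_template)
    sg_id = sg[1] if sg[0] else 0   # fall back to the default security group (ID 0)

    # An 802.1Q (VLAN-tagged) virtual network on physical interface eth0, with an
    # IPv4 address range and the security group attached.
    vnet_template = f"""
    NAME            = "tenant-a-net"
    VN_MAD          = "802.1Q"
    PHYDEV          = "eth0"
    VLAN_ID         = "100"
    AR              = [ TYPE = "IP4", IP = "10.0.0.150", SIZE = "51" ]
    SECURITY_GROUPS = "{sg_id}"
    """

    resp = one.one.vn.allocate(session, vnet_template, -1)
    print("Network ID:" if resp[0] else "Error:", resp[1])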
