Cloud computing
from Wikipedia

Cloud computing metaphor: the group of networked elements providing services does not need to be addressed or managed individually by users; instead, the entire provider-managed suite of hardware and software can be thought of as an amorphous cloud.

Cloud computing is "a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand," according to ISO.[1] It is commonly referred to as "the cloud".[2]

Characteristics


In 2011, the National Institute of Standards and Technology (NIST) identified five "essential characteristics" for cloud systems.[3] Below are the exact definitions according to NIST:[3]

  • On-demand self-service: "A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider."
  • Broad network access: "Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations)."
  • Resource pooling: "The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand."
  • Rapid elasticity: "Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time."
  • Measured service: "Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service."

By 2023, the International Organization for Standardization (ISO) had expanded and refined the list.[4]

History


The history of cloud computing extends to the 1960s, with the initial concepts of time-sharing becoming popularized via remote job entry (RJE). The "data center" model, where users submitted jobs to operators to run on mainframes, was predominantly used during this era. This was a time of exploration and experimentation with ways to make large-scale computing power available to more users through time-sharing, optimizing the infrastructure, platform, and applications, and increasing efficiency for end users.[5]

The "cloud" metaphor for virtualized services dates to 1994, when it was used by General Magic for the universe of "places" that mobile agents in the Telescript environment could "go". The metaphor is credited to David Hoffman, a General Magic communications specialist, based on its long-standing use in networking and telecom.[6] The expression cloud computing became more widely known in 1996 when Compaq Computer Corporation drew up a business plan for future computing and the Internet. The company's ambition was to supercharge sales with "cloud computing-enabled applications". The business plan foresaw that online consumer file storage would likely be commercially successful. As a result, Compaq decided to sell server hardware to internet service providers.[7]

In the 2000s, the application of cloud computing began to take shape with the establishment of Amazon Web Services (AWS) in 2002, which allowed developers to build applications independently. In 2006, Amazon released the Amazon Simple Storage Service (S3) and the Amazon Elastic Compute Cloud (EC2). In 2008, NASA developed the first open-source software for deploying private and hybrid clouds.[8][9]

The following decade saw the launch of various cloud services. In 2010, Microsoft launched Microsoft Azure, and Rackspace Hosting and NASA initiated an open-source cloud-software project, OpenStack. IBM introduced the IBM SmartCloud framework in 2011, and Oracle announced the Oracle Cloud in 2012. In December 2019, Amazon launched AWS Outposts, a service that extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities.[10][11]

Value proposition


Cloud computing can enable shorter time to market by providing pre-configured tools, scalable resources, and managed services, allowing users to focus on their core business value instead of maintaining infrastructure. Cloud platforms can enable organizations and individuals to reduce upfront capital expenditures on physical infrastructure by shifting to an operational expenditure model, where costs scale with usage. Cloud platforms also offer managed services and tools, such as artificial intelligence, data analytics, and machine learning, which might otherwise require significant in-house expertise and infrastructure investment.[12][13][14]

While cloud computing can offer cost advantages through effective resource optimization, organizations often face challenges such as unused resources, inefficient configurations, and hidden costs without proper oversight and governance. Many cloud platforms provide cost management tools, such as AWS Cost Explorer and Azure Cost Management, and frameworks like FinOps have emerged to standardize financial operations in the cloud. Cloud computing also facilitates collaboration, remote work, and global service delivery by enabling secure access to data and applications from any location with an internet connection.[12][13][14]

Cloud providers offer various redundancy options for core services, such as managed storage and managed databases, though redundancy configurations often vary by service tier. Advanced redundancy strategies, such as cross-region replication or failover systems, typically require explicit configuration and may incur additional costs or licensing fees.[12][13][14]

Cloud environments operate under a shared responsibility model, where providers are typically responsible for infrastructure security, physical hardware, and software updates, while customers are accountable for data encryption, identity and access management (IAM), and application-level security. These responsibilities vary depending on the cloud service model—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS)—with customers typically having more control and responsibility in IaaS environments and progressively less in PaaS and SaaS models, often trading control for convenience and managed services.[12][13][14]

Adoption and suitability


The decision to adopt cloud computing or maintain on-premises infrastructure depends on factors such as scalability, cost structure, latency requirements, regulatory constraints, and infrastructure customization.[15][16][17][18]

Organizations with variable or unpredictable workloads, limited capital for upfront investments, or a focus on rapid scalability benefit from cloud adoption. Startups, SaaS companies, and e-commerce platforms often prefer the pay-as-you-go operational expenditure (OpEx) model of cloud infrastructure. Additionally, companies prioritizing global accessibility, remote workforce enablement, disaster recovery, and leveraging advanced services such as AI/ML and analytics are well-suited for the cloud. In recent years, some cloud providers have started offering specialized services for high-performance computing and low-latency applications, addressing some use cases previously exclusive to on-premises setups.[15][16][17][18]

On the other hand, organizations with strict regulatory requirements, highly predictable workloads, or reliance on deeply integrated legacy systems may find cloud infrastructure less suitable. Businesses in industries like defense, government, or those handling highly sensitive data often favor on-premises setups for greater control and data sovereignty. Additionally, companies with ultra-low latency requirements, such as high-frequency trading (HFT) firms, rely on custom hardware (e.g., FPGAs) and physical proximity to exchanges, which most cloud providers cannot fully replicate despite recent advancements. Similarly, tech giants like Google, Meta, and Amazon build their own data centers due to economies of scale, predictable workloads, and the ability to customize hardware and network infrastructure for optimal efficiency. However, these companies also use cloud services selectively for certain workloads and applications where it aligns with their operational needs.[15][16][17][18]

In practice, many organizations are increasingly adopting hybrid cloud architectures, combining on-premises infrastructure with cloud services. This approach allows businesses to balance scalability, cost-effectiveness, and control, offering the benefits of both deployment models while mitigating their respective limitations.[15][16][17][18]

Challenges and limitations


One of the main challenges of cloud computing, in comparison to more traditional on-premises computing, is data security and privacy. Cloud users entrust their sensitive data to third-party providers, who may not have adequate measures to protect it from unauthorized access, breaches, or leaks. Cloud users also face compliance risks if they have to adhere to certain regulations or standards regarding data protection, such as GDPR or HIPAA.[19]

Another challenge of cloud computing is reduced visibility and control. Cloud users may not have full insight into how their cloud resources are managed, configured, or optimized by their providers. They may also have limited ability to customize or modify their cloud services according to their specific needs or preferences.[19] Complete understanding of all technology may be impossible, especially given the scale, complexity, and deliberate opacity of contemporary systems; however, there is a need for understanding complex technologies and their interconnections to have power and agency within them.[20] The metaphor of the cloud can be seen as problematic as cloud computing retains the aura of something noumenal and numinous; it is something experienced without precisely understanding what it is or how it works.[21]

Additionally, cloud migration is a significant challenge. This process involves transferring data, applications, or workloads from one cloud environment to another, or from on-premises infrastructure to the cloud. Cloud migration can be complicated, time-consuming, and expensive, particularly when there are compatibility issues between different cloud platforms or architectures. If not carefully planned and executed, cloud migration can lead to downtime, reduced performance, or even data loss.[22]

Cloud migration challenges


According to the 2024 State of the Cloud Report by Flexera, approximately 50% of respondents identified the following top challenges when migrating workloads to public clouds:[23]

  1. "Understanding application dependencies"
  2. "Comparing on-premise and cloud costs"
  3. "Assessing technical feasibility."

Implementation challenges


Applications hosted in the cloud are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.[24]

Cloud cost overruns


In a report by Gartner, a survey of 200 IT leaders revealed that 69% experienced budget overruns in their organizations' cloud expenditures during 2023. Conversely, 31% of IT leaders whose organizations stayed within budget attributed their success to accurate forecasting and budgeting, proactive monitoring of spending, and effective optimization.[25]

The 2024 Flexera State of Cloud Report identifies the top cloud challenges as managing cloud spend, followed by security concerns and lack of expertise. Public cloud expenditures exceeded budgeted amounts by an average of 15%. The report also reveals that cost savings is the top cloud initiative for 60% of respondents. Furthermore, 65% measure cloud progress through cost savings, while 42% prioritize shorter time-to-market, indicating that cloud's promise of accelerated deployment is often overshadowed by cost concerns.[23]

Service Level Agreements


Cloud providers' Service Level Agreements (SLAs) typically do not encompass all forms of service interruption. Common exclusions include planned maintenance, downtime caused by external factors such as network issues, human errors such as misconfigurations, natural disasters, force majeure events, and security breaches. Customers usually bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. They should also be aware of how deviations from SLAs are calculated, as these parameters may vary by service. These requirements can place a considerable burden on customers. Additionally, SLA percentages and conditions can differ across services within the same provider, with some services lacking any SLA altogether. In cases of service interruptions due to hardware failures at the cloud provider, the company typically does not offer monetary compensation; instead, eligible users may receive credits as outlined in the corresponding SLA.[26][27][28][29]
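
To illustrate how an SLA percentage translates into an allowable downtime budget and how credit tiers typically work, here is a minimal sketch; the credit thresholds are hypothetical placeholders, since real SLAs define their own tiers and measurement windows.

```python
def monthly_downtime_budget(sla_percent: float, minutes_per_month: float = 30 * 24 * 60) -> float:
    """Maximum minutes of downtime per month permitted by an SLA uptime percentage."""
    return minutes_per_month * (1 - sla_percent / 100)

def service_credit(measured_uptime_percent: float) -> int:
    """Illustrative credit tiers (percent of the monthly bill); real SLAs vary by provider."""
    if measured_uptime_percent >= 99.9:
        return 0
    if measured_uptime_percent >= 99.0:
        return 10
    if measured_uptime_percent >= 95.0:
        return 25
    return 100

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% uptime allows about {monthly_downtime_budget(sla):.1f} minutes of downtime per month")
```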

Leaky abstractions


Cloud computing abstractions aim to simplify resource management, but leaky abstractions can expose underlying complexities. These variations in abstraction quality depend on the cloud vendor, service and architecture. Mitigating leaky abstractions requires users to understand the implementation details and limitations of the cloud services they utilize.[30][31][32]

Service lock-in within the same vendor


Service lock-in within the same vendor occurs when a customer becomes dependent on specific services within a cloud vendor, making it challenging to switch to alternative services within the same vendor when their needs change.[33][34]

Security and privacy

Cloud suppliers' security and privacy agreements must be aligned with customers' requirements and applicable regulations.

Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information.[35] Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services. Solutions to privacy include policy and legislation as well as end-users' choices for how data is stored.[35] Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access.[35] Identity management systems can also provide practical solutions to privacy concerns in cloud computing. These systems distinguish between authorized and unauthorized users and determine the amount of data that is accessible to each entity.[36] The systems work by creating and describing identities, recording activities, and getting rid of unused identities.
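
As a concrete illustration of encrypting data before it reaches the cloud, the following is a minimal sketch using the third-party `cryptography` package; the sample record is a placeholder, and a real deployment would also need proper key management (for example, a key management service or HSM) rather than keeping the key next to the data.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice this key would be stored in a key
# management service, never alongside the encrypted data.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer record: alice@example.com"
ciphertext = cipher.encrypt(plaintext)   # this ciphertext is what gets uploaded to cloud storage

# Only holders of the key can recover the original data, even if the provider is compromised.
assert cipher.decrypt(ciphertext) == plaintext
```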

According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25% and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities. On a cloud provider platform shared by different users, information belonging to different customers may reside on the same data server. Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud. "There are some real Achilles' heels in the cloud infrastructure that are making big holes for the bad guys to get into." Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack, a process he called "hyperjacking". Examples include the Dropbox security breach and the 2014 iCloud leak.[37] Dropbox was breached in October 2014, with over seven million of its users' passwords stolen by hackers seeking to monetize them for Bitcoin (BTC). With these passwords, attackers can read private data and have it indexed by search engines, making the information public.[37]

There is the problem of legal ownership of the data (if a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership.[38] Physical control of the computer equipment (private cloud) is more secure than having the equipment off-site and under someone else's control (public cloud). This provides a strong incentive for public cloud computing service providers to prioritize building and maintaining strong management of secure services.[39] Some small businesses that lack expertise in IT security may find that it is more secure for them to use a public cloud. There is a risk that end users do not understand the issues involved when signing on to a cloud service (people sometimes do not read the many pages of the terms of service agreement and just click "Accept" without reading). This is important now that cloud computing is common and required for some services to work, for example for an intelligent personal assistant (Apple's Siri or Google Assistant). Fundamentally, private cloud is seen as more secure and as giving the owner higher levels of control; however, public cloud is seen as more flexible and as requiring less time and money investment from the user.[40]

The attacks that can be made on cloud computing systems include man-in-the-middle attacks, phishing attacks, authentication attacks, and malware attacks. One of the largest threats is considered to be malware attacks, such as Trojan horses. Recent research conducted in 2022 has revealed that the Trojan horse injection method is a serious problem with harmful impacts on cloud computing systems.[41]

Service models

Comparison of on-premise, IaaS, PaaS, and SaaS
Cloud computing service models arranged as layers in a stack

The National Institute of Standards and Technology recognized three cloud service models in 2011: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).[3] The International Organization for Standardization (ISO) later identified additional models in 2023, including "Network as a Service", "Communications as a Service", "Compute as a Service", and "Data Storage as a Service".[4]

Infrastructure as a service (IaaS)


Infrastructure as a service (IaaS) refers to online services that provide high-level APIs used to abstract various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup, etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. The use of containers offers higher performance than virtualization because there is no hypervisor overhead. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[42]

The NIST's definition of cloud computing describes IaaS as "where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."[3]

IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the number of resources allocated and consumed.[43]
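
As a sketch of the on-demand, utility-billed IaaS workflow described above, the following uses the AWS boto3 SDK to request a virtual machine. The AMI ID, instance type, and key pair name are placeholders, and credentials and region configuration are assumed to be set up separately; other IaaS providers expose analogous APIs.

```python
import boto3

# Assumes AWS credentials and a default region are already configured.
ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder OS image selected by the cloud user
    InstanceType="t3.micro",           # instance size determines the hourly utility charge
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder SSH key for administering the guest OS
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}; the user now patches and maintains its OS and applications.")
```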

Platform as a service (PaaS)


The NIST's definition of cloud computing defines Platform as a Service as:[3]

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

PaaS vendors offer a development environment to application developers. The provider typically develops toolkits and standards for development, as well as channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, programming-language execution environment, database, and web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying compute and storage resources scale automatically to match application demand, so the cloud user does not have to allocate resources manually.[44][need quotation to verify]
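
To make the PaaS division of labor concrete, here is a minimal sketch of the kind of application a platform would run; Flask is used purely for illustration, and the assumption is that the platform, not the developer, supplies the operating system, runtime patching, web server configuration, and scaling.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Application code is all the developer manages; the PaaS provider handles the
    # OS, runtime, load balancing, and (on some platforms) autoscaling.
    return "Hello from a PaaS-hosted app"

if __name__ == "__main__":
    # Locally the developer runs the app directly; on a PaaS the platform's
    # process manager or web server launches it instead.
    app.run()
```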

Some integration and data management providers also use specialized applications of PaaS as delivery models for data. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute and govern integration flows.[45] Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware.[46] dPaaS delivers integration and data-management products as a fully managed service.[47] Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of programs by building data applications for the customer. dPaaS users access data through data-visualization tools.[48]

Software as a service (SaaS)


The NIST's definition of cloud computing defines Software as a Service as:[3]

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee.[49] In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand.[50] Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user,[51] so prices become scalable and adjustable if users are added or removed at any point. It may also be free.[52] Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and from personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS comes with storing the users' data on the cloud provider's server. As a result,[citation needed] there could be unauthorized access to the data.[53] Examples of applications offered as SaaS are games and productivity software like Google Docs and Office Online. SaaS applications may be integrated with cloud storage or file hosting services, which is the case with Google Docs being integrated with Google Drive, and Office Online being integrated with OneDrive.[54]

Serverless computing


Serverless computing allows customers to use various cloud capabilities without the need to provision, deploy, or manage hardware or software resources, apart from providing their application code or data. ISO/IEC 22123-2:2023 classifies serverless alongside Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) under the broader category of cloud service categories. Notably, while ISO refers to these classifications as cloud service categories, the National Institute of Standards and Technology (NIST) refers to them as service models.[3][4]
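
The following hedged sketch shows the shape of a typical serverless function: the provider invokes the handler in response to an event and bills per execution, so the customer provisions no servers. The handler signature follows the common AWS Lambda Python convention; the event fields used here are hypothetical.

```python
import json

def handler(event, context):
    """Entry point invoked by the serverless platform for each event."""
    # 'event' carries the trigger payload (e.g., an HTTP request or queue message);
    # the field names below are illustrative only.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```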

Deployment models

Cloud computing types

"A cloud deployment model represents the way in which cloud computing can be organized based on the control and sharing of physical or virtual resources."[4] Cloud deployment models define the fundamental patterns of interaction between cloud customers and cloud providers. They do not detail implementation specifics or the configuration of resources.[4]

Private


Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally.[3] Undertaking a private cloud project requires significant engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. It can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers[55] are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management,[56] essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".[57][58]

Public


Cloud services are considered "public" when they are delivered over the public Internet, and they may be offered as a paid subscription, or free of charge.[59] Architecturally, there are few differences between public- and private-cloud services, but security concerns increase substantially when services (applications, storage, and other resources) are shared by multiple customers. Most public-cloud providers offer direct-connection services that allow customers to securely link their legacy data centers to their cloud-resident applications.[60][61]

Several factors, such as the functionality of the solutions, cost, integration and organizational aspects, and safety and security, influence the decision of enterprises and organizations to choose a public cloud or an on-premises solution.[62]

Hybrid


Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premises resources,[63][64] that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed or dedicated services with cloud resources.[3] Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers.[65] A hybrid cloud service crosses isolation and provider boundaries so that it cannot be simply put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.

Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service.[66] This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.[67]

Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that can not be met by the private cloud.[68] This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.[3] Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed.[69] Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and use cloud resources from public or private clouds, during spikes in processing demands.[70]
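
A simplified sketch of the cloud-bursting decision described above: when private-cloud utilization crosses a threshold, extra capacity is requested from a public cloud and released once demand subsides. The thresholds and the rebalancing function are hypothetical placeholders for provider-specific provisioning APIs.

```python
BURST_THRESHOLD = 0.85    # hypothetical utilization level that triggers bursting
RELEASE_THRESHOLD = 0.60  # hypothetical level below which public capacity is released

def rebalance(private_utilization: float, public_instances: int) -> int:
    """Return the desired number of public-cloud instances for the current load."""
    if private_utilization > BURST_THRESHOLD:
        return public_instances + 1      # burst: rent extra capacity from the public cloud
    if private_utilization < RELEASE_THRESHOLD and public_instances > 0:
        return public_instances - 1      # demand subsided: stop paying for extra capacity
    return public_instances

# Example: a traffic spike pushes the private cloud to 92% utilization.
print(rebalance(0.92, public_instances=0))  # -> 1
```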

Community


Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party, and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only a portion of the potential cost savings of cloud computing is realized.[3]

Multi cloud


According to ISO/IEC 22123-1: "multi-cloud is a cloud deployment model in which a customer uses public cloud services provided by two or more cloud service providers".[71] Poly cloud refers to the use of multiple public clouds for the purpose of leveraging specific services that each provider offers. It differs from multi-cloud in that it is not designed to increase flexibility or mitigate against failures but is rather used to allow an organization to achieve more than could be done with a single provider.[72]

Market


According to International Data Corporation (IDC), global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025.[73] Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023.[74] According to a McKinsey & Company report, cloud cost-optimization levers and value-oriented business use cases foresee more than $1 trillion in run-rate EBITDA across Fortune 500 companies as up for grabs in 2030.[75] In 2022, more than $1.3 trillion in enterprise IT spending was at stake from the shift to the cloud, growing to almost $1.8 trillion in 2025, according to Gartner.[76]

The European Commission's 2012 Communication identified several issues that were impeding the development of the cloud computing market.[77]: Section 3 

The Communication set out a series of "digital agenda actions" which the Commission proposed to undertake in order to support the development of a fair and effective market for cloud computing services.[77]: Pages 6–14 

Cloud computing vendors


As of 2025, the three largest cloud computing providers by market share, commonly referred to as hyperscalers, are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.[78][79] These companies dominate the global cloud market due to their extensive infrastructure, broad service offerings, and scalability.

In recent years, organizations have increasingly adopted alternative cloud providers, which offer specialized services that distinguish them from hyperscalers. These providers may offer advantages such as lower costs, improved cost transparency and predictability, enhanced data sovereignty (particularly within regions such as the European Union to comply with regulations like the General Data Protection Regulation (GDPR)), stronger alignment with local regulatory requirements, or industry-specific services.[80]

Alternative cloud providers are often part of multi-cloud strategies, where organizations use multiple cloud services—both from hyperscalers and specialized providers—to optimize performance, compliance, and cost efficiency. However, they do not necessarily serve as direct replacements for hyperscalers, as their offerings are typically more specialized.[80]

Similar concepts


The goal of cloud computing is to allow users to take benefit from all of these technologies, without the need for deep knowledge about or expertise with each one of them. The cloud aims to cut costs and helps the users focus on their core business instead of being impeded by IT obstacles.[81] The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.[81]

Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.[81]

Cloud computing shares characteristics with:

  • Client–server model – Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).[82]
  • Computer bureau – A service bureau providing computer services, particularly from the 1960s to 1980s.
  • Grid computing – A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
  • Fog computing – Distributed computing paradigm that provides data, compute, storage and application services closer to the client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client-side (e.g. mobile devices), instead of sending data to a remote location for processing.
  • Utility computing – The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[83][84]
  • Peer-to-peer – A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).
  • Cloud sandbox – A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.

from Grokipedia
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable resources—such as networks, servers, storage, applications, and services—that can be rapidly provisioned and released with minimal management effort or service provider interaction. This shifts computing from locally managed hardware to remote, elastic infrastructure, primarily delivered via the Internet, allowing users to scale resources dynamically without owning physical assets. The essential characteristics include on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service, enabling efficient utilization through multi-tenancy and pay-per-use economics.

Cloud services are categorized into three main models: Infrastructure as a Service (IaaS), which provides virtualized computing resources like servers and storage; Platform as a Service (PaaS), offering development platforms with the underlying infrastructure abstracted; and Software as a Service (SaaS), delivering fully managed applications accessible via the web. These models facilitate deployment types such as public, private, hybrid, and multi-cloud environments, with public clouds dominating due to their scalability and cost-effectiveness.

Modern cloud computing traces its practical origins to the mid-2000s, with Amazon Web Services (AWS) launching Elastic Compute Cloud (EC2) in 2006, marking the commercialization of on-demand infrastructure, followed by Microsoft's Azure in 2010 and Google Cloud Platform's expansion. By 2025, the global cloud infrastructure market is led by AWS with approximately 31-32% share, Microsoft Azure at 20-23%, and Google Cloud at 11-13%, reflecting rapid adoption driven by digital transformation and the COVID-19 acceleration of remote work.

While cloud computing achieves significant efficiencies through economies of scale and innovation in distributed systems, it introduces risks including data breaches from misconfigurations, account hijacking, insecure APIs, and privacy concerns arising from data centralization in third-party facilities subject to varying jurisdictional controls. Vendor lock-in and dependency on a concentrated set of providers further amplify systemic vulnerabilities, such as widespread outages or geopolitical data access disputes, underscoring the trade-offs between convenience and control.

Fundamentals

Definition and Essential Characteristics

Cloud computing is a way to use computer services—like storing files, running apps, or using powerful computers—over the internet instead of on your own device. Compare it to electricity: you plug in and use it without knowing how the power plant works or owning one. The "cloud" refers to large remote servers in data centers managed by companies like Google, Amazon, or Microsoft. This allows access from any internet-connected device, makes things easier and cheaper (pay only for what you use), and provides scalable power without buying hardware.

More formally, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable resources—such as networks, servers, storage, applications, and services—that can be rapidly provisioned and released with minimal management effort or service provider interaction. This definition, established by the National Institute of Standards and Technology (NIST) in 2011, emphasizes the delivery of resources over the network without requiring users to own or manage the underlying hardware. The model is defined by five essential characteristics according to NIST:

  1. On-demand self-service, whereby consumers provision resources unilaterally without human intervention;
  2. Broad network access, enabling access via standard mechanisms from diverse devices such as laptops and mobile phones;
  3. Resource pooling, where providers pool computing resources to serve multiple consumers, with resources dynamically assigned and reassigned according to demand;
  4. Rapid elasticity, allowing resources to scale out or in automatically to match demand; and
  5. Measured service, providing transparency in resource usage via metering for pay-per-use billing.

Many sources cite a sixth characteristic by treating multi-tenancy—the secure sharing of infrastructure among multiple users or organizations with isolation to prevent interference—as a distinct feature, although it is incorporated within resource pooling in the NIST definition. These traits enable elasticity in practice, as resources adjust in near real-time to workload fluctuations, in contrast with rigid traditional setups.

In distinction from traditional on-premises infrastructure, cloud computing shifts costs from capital expenditures (CapEx) on hardware purchases to operational expenditures (OpEx) for usage-based consumption, eliminating the need for upfront ownership of physical assets and enabling global reach. Typical intra-region network latency remains under 100 milliseconds, supporting responsive applications, while service level agreements (SLAs) from major providers guarantee up to 99.99% uptime, equating to at most 4.38 minutes of monthly downtime.

Underlying Technologies

Virtualization forms the foundational abstraction layer in cloud computing, enabling the creation of multiple virtual machines (VMs) on a single physical server by emulating hardware resources through a hypervisor. This technology partitions physical compute, memory, and storage, allowing efficient resource utilization via time-sharing and isolation mechanisms. Type-1 hypervisors, which run directly on hardware, include proprietary solutions like VMware vSphere, introduced in the late 1990s for server consolidation, and open-source options such as Kernel-based Virtual Machine (KVM), integrated into the Linux kernel to leverage hardware-assisted virtualization extensions like Intel VT-x. Cloud data centers primarily utilize CPU-centric servers, such as x86 architectures, with virtualization supporting general-purpose workloads including web services, databases, and broad data processing; AI capabilities are supplementary via optional GPU instances like the AWS EC2 P series. In contrast, AI data centers prioritize accelerators such as GPUs (e.g., NVIDIA H100 or Blackwell series) and TPUs (e.g., Google TPU), deployed in specialized systems including NVIDIA DGX servers and SuperPOD clusters, with high-density racks optimized for parallel processing and matrix computations essential to AI training and inference.

Containerization extends virtualization principles with operating-system-level isolation, packaging applications and dependencies into lightweight, portable units without full OS emulation, thus reducing overhead compared to traditional VMs. Docker, released as open-source software in 2013, popularized this approach by standardizing container formats, using Linux kernel features like cgroups and namespaces for isolation and resource limits. Container orchestration tools automate deployment, scaling, and management of these containers across clusters; Kubernetes, open-sourced by Google in 2014 based on its internal Borg system, provides declarative configuration for container lifecycle management, service discovery, and fault tolerance through components like pods, nodes, and controllers.

Networking in cloud infrastructures relies on software-defined networking (SDN), which decouples the control plane—handling routing decisions—from the data plane of physical switches, enabling centralized, programmable configuration via APIs for dynamic traffic management. SDN facilitates virtual overlays, such as VXLAN for Layer 2 extension across data centers, and integrates with load balancers that distribute incoming requests across backend instances using algorithms like round-robin or least connections to prevent bottlenecks.

Storage systems underpin data persistence with distinct paradigms: block storage, which exposes raw volumes for high-performance I/O suitable for databases via protocols like iSCSI; object storage, exemplified by Amazon S3, launched in 2006, storing unstructured data as immutable objects with metadata for scalable, distributed access; and distributed file systems like the Hadoop Distributed File System (HDFS) or cloud-native equivalents for shared POSIX-compliant access.

Hyperscale data centers, housing millions of servers in facilities exceeding 100 megawatts, incorporate redundancy architectures such as N+1 configurations, where an additional power supply, cooling unit, or generator backs up the minimum required (N) components to tolerate single failures without downtime. These setups employ uninterruptible power supplies (UPS), diesel generators, and cooling systems like chillers in fault-tolerant topologies, ensuring continuous operation amid hardware faults or maintenance.
Automation via RESTful APIs, often following standards like OpenAPI, allows programmatic provisioning of these resources, integrating with infrastructure-as-code tools such as Terraform, which defines resources declaratively to support multiple cloud providers, and Pulumi, which uses general-purpose programming languages for cloud-agnostic infrastructure management.
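
Since the passage mentions Pulumi's use of general-purpose languages for infrastructure as code, a minimal hedged sketch in Python follows; it assumes the `pulumi` and `pulumi_aws` packages, configured AWS credentials, and an existing Pulumi project, and the bucket name is a placeholder.

```python
import pulumi
import pulumi_aws as aws

# Declarative description of a single storage resource; running `pulumi up`
# would ask the Pulumi engine to create or update it to match this definition.
bucket = aws.s3.Bucket("example-logs-bucket")

# Export the provider-assigned identifier so other tools or stacks can reference it.
pulumi.export("bucket_name", bucket.id)
```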

Historical Evolution

Precursors and Early Concepts

The concept of shared computing resources emerged in the early 1960s through time-sharing systems, which allowed multiple users to access a single mainframe interactively via terminals, contrasting with batch processing. This approach was pioneered by systems like the Compatible Time-Sharing System (CTSS), demonstrated in 1961 at MIT by Fernando Corbató and colleagues, enabling efficient resource utilization amid scarce hardware. Time-sharing laid foundational principles for shared compute power, influencing later distributed architectures.

In 1961, John McCarthy proposed organizing computation as a public utility akin to the telephone system or electricity, suggesting that excess capacity could be sold on demand to optimize usage and reduce costs for users without dedicated machines. Concurrently, J. C. R. Licklider envisioned interconnected networks of computers facilitating seamless data and resource sharing, as outlined in his work on man-computer symbiosis and intergalactic networks. These ideas gained infrastructural support with ARPANET's first successful connection on October 29, 1969, establishing packet-switching networking as a precursor to wide-area resource distribution.

By the 1990s, grid computing extended these principles to harness distributed, heterogeneous resources across networks for large-scale computations, often analogized to electrical grids for on-demand power. Projects like SETI@home, launched on May 17, 1999, exemplified this by aggregating idle CPUs worldwide to analyze radio telescope signals for signs of extraterrestrial intelligence, demonstrating scalable, pay-per-use-like resource pooling without centralized ownership.

Early experiments foreshadowed commercial viability: Salesforce, founded in March 1999 by Marc Benioff, adopted a software-as-a-service (SaaS) model delivering customer relationship management software via the Internet, eliminating on-premises installations. Similarly, Amazon developed internal infrastructure in the early 2000s, including automated scaling to manage e-commerce traffic spikes, which evolved from proprietary tools into reusable components before external commercialization.

Commercial Emergence (2006–2010)

Amazon Web Services (AWS) marked the commercial inception of modern cloud computing with the launch of Amazon Simple Storage Service (S3) on March 14, 2006, which provided developers with durable, scalable object storage accessible via web services APIs on a pay-per-use pricing model. This service addressed longstanding challenges in data storage by eliminating the need for upfront hardware investments and enabling effectively unlimited scalability without capacity planning. Five months later, on August 25, 2006, AWS introduced Elastic Compute Cloud (EC2) in beta, offering resizable compute instances that allowed users to rent computing resources on demand, further solidifying the infrastructure-as-a-service (IaaS) paradigm. Together, S3 and EC2 demonstrated a viable business model for commoditizing compute and storage, shifting from capital-intensive on-premises infrastructure to operational expenditure-based consumption.

Competitive responses followed as major technology firms recognized the potential. Google launched App Engine on April 7, 2008, in limited preview, introducing a platform-as-a-service (PaaS) offering that enabled developers to build and host web applications on Google's infrastructure without managing underlying servers, initially supporting Python runtimes with automatic scaling. Microsoft entered the fray with Windows Azure, announcing platform availability in November 2009 and reaching general availability on February 1, 2010, which provided a hybrid-compatible environment for deploying .NET and other applications across virtual machines and storage services. These launches validated the market for abstracted cloud services, though adoption remained nascent, with AWS maintaining primacy in IaaS due to its earlier availability and developer-friendly APIs.

A landmark validation of cloud reliability occurred through Netflix's migration to AWS, initiated in August 2008 following a severe database corruption incident that exposed vulnerabilities in its on-premises systems. By 2010, Netflix had transitioned substantial portions of its streaming and backend operations to EC2 and S3, achieving high availability through automated failover and elastic scaling that handled surging demand without downtime, thereby establishing empirical benchmarks for production-grade cloud workloads in media delivery. This shift underscored causal advantages in scalability and cost efficiency, as Netflix reported reduced infrastructure overhead while serving millions of subscribers, influencing enterprise perceptions of cloud viability.

Rapid Expansion (2011–2020)

The period from 2011 to 2020 marked a phase of rapid scaling in cloud computing, driven by technological advancements enabling hybrid deployments and the proliferation of platform as a service (PaaS) and software as a service (SaaS) models. Global end-user spending on cloud services expanded significantly, rising from approximately $40.7 billion in 2011 to $241 billion by 2020, reflecting widespread enterprise adoption amid improving reliability and cost efficiencies. This growth was fueled by the integration of on-premises systems with public clouds in hybrid architectures, which allowed organizations to retain control over sensitive data while leveraging scalable external resources.

Key open-source milestones facilitated this expansion. OpenStack, initially released in October 2010, gained traction for building private and hybrid clouds, with its modular components enabling customizable infrastructure management for enterprises wary of full public cloud reliance. Docker's launch in 2013 introduced lightweight containerization, simplifying application portability and deployment across hybrid environments, which accelerated adoption and reduced overhead. Complementing this, Kubernetes was announced by Google in June 2014, providing orchestration for containerized workloads; its integration into the Cloud Native Computing Foundation (CNCF), formed in July 2015, standardized cloud-native practices and boosted hybrid scalability.

Major vendors advanced enterprise offerings during this decade. IBM introduced SmartCloud in April 2011, emphasizing secure, hybrid cloud services for enterprise workloads and infrastructure. Oracle followed with initial cloud platform services in June 2012, delivered in PaaS formats to bridge legacy systems with cloud agility. These initiatives, alongside AWS and Azure expansions, shifted focus toward PaaS for developer productivity—evidenced by PaaS revenues surpassing $171 billion globally by the late 2010s—and SaaS for end-user applications, which dominated market segments with annual growth rates exceeding 30% in some regions.

The COVID-19 pandemic in 2020 catalyzed a surge in adoption, as remote work demands necessitated rapid scaling of resources for collaboration and data access, with public cloud spending projected to grow 18% amid lockdowns. Hybrid models proved resilient, enabling seamless bursting to public clouds during peak loads while maintaining private control over sensitive workloads, solidifying cloud computing's role in operational continuity.

Maturation and Recent Advances (2021–2025)

Following the accelerated cloud migrations during the COVID-19 pandemic, the period from 2021 to 2025 saw refinements in cloud architectures emphasizing efficiency, scalability, and integration with emerging workloads. Global spending on cloud infrastructure services reached $106.9 billion in the third quarter of 2025, reflecting a 28% year-over-year increase primarily driven by AI and machine learning (AI/ML) demands. AI/ML-specific cloud services generated $47.3 billion in revenue for 2025, up 19.6% from the prior year, as enterprises shifted compute-intensive tasks to cloud platforms for faster model training and inference. This growth underscored a maturation in which cloud providers optimized for generative AI, with hyperscalers like AWS, Microsoft Azure, and Google Cloud investing heavily in specialized accelerators and APIs.

Serverless computing advanced significantly, with platforms like AWS Lambda evolving to support longer execution times, enhanced concurrency, and tighter integration with AI services; by 2025, serverless adoption grew 3-7% across major providers, enabling developers to deploy event-driven applications without infrastructure provisioning. Hybrid edge-cloud models emerged as a key refinement, processing data closer to its sources via edge devices and IoT integrations to reduce latency, with edge computing projected to expand rapidly for real-time applications in industrial automation and autonomous systems. Kubernetes solidified its dominance in container orchestration, with over 60% of enterprises adopting it by 2025 as the de facto standard for managing hybrid and multi-cloud workloads, supported by tools for AI-driven autoscaling and edge deployments.

Multi-cloud strategies became ubiquitous, with 92-93% of organizations employing them across an average of 4.8 providers to mitigate vendor lock-in and optimize costs, though this complexity contributed to operational challenges. Gartner reported rising dissatisfaction, predicting that 25% of organizations would face significant issues with cloud adoption by 2028 due to unrealistic expectations and escalating cost pressures, prompting a focus on FinOps practices for better cost governance. These trends highlighted a shift toward pragmatic maturation, balancing scalability with cost and data sovereignty concerns in regulated sectors.

Technical Models

Service Models

Comparison of on-premise, IaaS, PaaS, and SaaS

Cloud computing service models delineate the degrees of abstraction provided by cloud providers, ranging from raw infrastructure to fully managed applications, with corresponding shifts in responsibility for management, configuration, and control between the provider and consumer. The U.S. National Institute of Standards and Technology (NIST) formalized three primary models—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)—in its Special Publication 800-145, published on September 28, 2011. These models embody causal trade-offs: greater abstraction eases operational burdens and accelerates deployment but diminishes user control over underlying components, potentially constraining customization and optimization while increasing dependency on provider capabilities.

IaaS delivers fundamental computing resources such as virtual machines (VMs), storage, and networking on a pay-as-you-go basis, allowing consumers to provision and manage operating systems, applications, and data while the provider handles physical hardware and virtualization. Prominent examples include Amazon Web Services (AWS) Elastic Compute Cloud (EC2), launched in 2006, Microsoft Azure Virtual Machines, and Google Compute Engine. This model affords the highest degree of control over the software stack, enabling fine-tuned configurations akin to on-premises environments, yet it demands substantial expertise in system administration, patching, and scaling, which can elevate complexity and resource overhead compared to higher abstractions.

PaaS extends abstraction by supplying a managed runtime environment, including operating systems, middleware, databases, and development tools, permitting consumers to focus on application code and data without provisioning or maintaining the underlying infrastructure. Key providers include Heroku, acquired by Salesforce in 2010; Google App Engine, introduced in 2008; and comparable offerings from the major providers. By offloading server management and auto-scaling to the provider, PaaS reduces deployment times and operational costs for developers, but it limits control over runtime specifics, potentially hindering integration with legacy systems or low-level optimizations.

SaaS furnishes complete, multi-tenant applications accessible via the internet, with the provider assuming responsibility for all layers from infrastructure to software updates, security, and maintenance, leaving consumers to handle only user access and configuration. Exemplars include Microsoft 365, formerly Office 365, and Salesforce CRM, which dominate enterprise adoption. This model maximizes ease and accessibility for end-users, obviating hardware investments and maintenance, though it yields minimal customization latitude and exposes users to vendor-specific limitations in functionality and data portability.

Function as a service (FaaS), often termed serverless computing, represents an evolution beyond traditional models by enabling event-driven code execution without provisioning or managing servers, with providers automatically handling invocation, scaling, and billing per execution duration. AWS Lambda, debuted in 2014, exemplifies this paradigm, alongside Azure Functions and Google Cloud Functions. Adoption surged in the 2020s, with the global serverless market valued at USD 24.51 billion in 2024 and projected to reach USD 52.13 billion by 2030, driven by cost efficiencies for sporadic workloads and event-driven architectures. FaaS further abstracts infrastructure management, minimizing idle capacity costs but introducing cold-start latencies and constraints on execution timeouts, which can complicate stateful or long-running applications.

Deployment Models

Public cloud deployment involves provisioning resources from third-party providers using shared, multi-tenant infrastructure accessible over the internet, exemplified by Amazon Web Services (AWS) public regions. This model achieves cost efficiency through pay-as-you-go pricing and resource pooling, making it empirically suitable for workloads with variable or unpredictable demands, as elasticity allows scaling without overprovisioning dedicated hardware.

Private cloud deployment dedicates infrastructure to a single organization, either on-premises via software such as OpenStack or hosted by a third-party provider, ensuring isolated, single-tenant environments. It suits regulated sectors such as finance and healthcare, where compliance requirements demand granular control over data locality, configurations, and auditability, though at higher upfront costs due to the absence of shared economies.

Hybrid cloud deployment orchestrates public and private clouds into an integrated system, enabling seamless data transfer and workload orchestration, such as bursting non-sensitive tasks to public resources during demand spikes while retaining sensitive operations privately. This approach addresses trade-offs in cost and control, with survey evidence showing 73% of organizations adopting it by 2024 to optimize for both scalability and regulatory adherence.

Multi-cloud deployment spans multiple public cloud providers, such as combining AWS for compute with Google Cloud for analytics, to enhance resilience against provider outages and mitigate lock-in risks through diversified dependencies. While providing resilience through best-of-breed services and bargaining power on pricing, it increases complexity in governance, integration, and skill requirements, necessitating robust management tooling to avoid fragmented operations.
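
The workload-bursting behavior described for hybrid deployments can be illustrated with a minimal placement rule. The sketch below is a simplified Python example; the `Workload` fields, thresholds, and placement logic are assumptions for illustration, not a production scheduler.

```python
# Illustrative hybrid-cloud placement rule: sensitive workloads stay on the
# private cloud, and non-sensitive workloads "burst" to the public cloud when
# private capacity runs low. All fields and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    sensitive: bool      # e.g., regulated data that must remain private
    cpu_cores: int


def place(workload: Workload, private_free_cores: int, burst_threshold: int = 8) -> str:
    if workload.sensitive:
        return "private"                                   # compliance: never leaves private cloud
    if private_free_cores - workload.cpu_cores < burst_threshold:
        return "public"                                    # burst to public cloud under pressure
    return "private"                                       # default to private while capacity lasts


if __name__ == "__main__":
    jobs = [Workload("payroll", True, 4), Workload("batch-render", False, 32)]
    for job in jobs:
        print(job.name, "->", place(job, private_free_cores=16))
```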

Economic and Operational Benefits

Core Value Propositions

Cloud computing's core economic value derives from shifting capital expenditures (capex) on dedicated hardware to operational expenditures (opex) aligned with actual usage, thereby minimizing waste from underutilized on-premises servers. On-premises data centers typically achieve server utilization rates of 10-15%, as organizations provision for peak loads that occur infrequently, leaving capacity idle for extended periods. In contrast, cloud providers leverage multi-tenancy and workload consolidation to maintain utilization rates exceeding 70-80%, distributing costs across numerous customers and reducing per-unit expenses through efficient resource pooling. The pay-per-use model further enhances efficiency by eliminating payments for idle resources, while elasticity enables automatic scaling to match demand fluctuations, such as traffic surges during Black Friday sales. This capability prevents the overprovisioning costs associated with anticipating unpredictable spikes, allowing systems to provision additional compute or storage capacity dynamically without manual intervention.

Operationally, cloud environments accelerate resource provisioning from the months required for on-premises hardware procurement and setup to minutes via self-service APIs and automation. Providers also offer built-in global redundancy across distributed data centers, enhancing availability and disaster recovery compared to localized on-premises setups vulnerable to single-site failures. Empirical analyses support these propositions, with studies indicating total cost of ownership (TCO) reductions of 20-30% for migrated workloads suited to cloud architectures, driven by lower maintenance, energy, and staffing overheads. Independent research attributes similar 30-40% TCO savings to optimized resource utilization and the avoidance of upfront investments. These gains hold for variable workloads but require careful workload selection to avoid inefficiencies for fixed, predictable use cases.
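
The utilization argument can be made concrete with back-of-the-envelope arithmetic: dividing an hourly cost by the utilization rate gives the effective cost per hour of useful work. The prices below are hypothetical; only the utilization ranges echo the figures cited above.

```python
# Back-of-the-envelope comparison of effective cost per useful compute-hour
# under low on-premises utilization versus higher cloud utilization.
# Hourly prices are hypothetical, chosen only to illustrate the arithmetic.

def effective_cost_per_useful_hour(hourly_cost: float, utilization: float) -> float:
    """Cost per hour of *useful* work: idle capacity inflates the effective price."""
    return hourly_cost / utilization


on_prem = effective_cost_per_useful_hour(hourly_cost=1.00, utilization=0.12)  # ~12% utilized
cloud = effective_cost_per_useful_hour(hourly_cost=1.20, utilization=0.75)    # ~75% utilized

print(f"on-premises: ${on_prem:.2f} per useful hour")   # ~$8.33
print(f"cloud:       ${cloud:.2f} per useful hour")     # ~$1.60
```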

Drivers of Adoption

Cloud adoption among enterprises reached 94% by 2025, driven primarily by demand for scalable resources to support data analytics and AI applications, which require elastic compute and storage capacities beyond traditional on-premises limitations. Cloud platforms enable elastic scaling, allowing organizations to process petabyte-scale datasets and train complex models without upfront hardware investments, as resources provision dynamically based on workload fluctuations. This technical elasticity underpins digital transformation initiatives, where firms leverage integrated services for real-time data ingestion and processing, reducing latency in AI-driven decision-making.

DevOps practices have accelerated through cloud-native tools, such as container orchestration and continuous integration/continuous delivery (CI/CD) pipelines, which streamline code deployment and automate infrastructure management across hybrid environments. These capabilities cut deployment cycles from weeks to hours, fostering iterative development suited to fast-paced software innovation, particularly in sectors where rapid updates are essential for competitiveness.

For startups and small-to-medium enterprises (SMEs), cloud services provide operational agility by eliminating capital expenditures on servers and enabling quick pivots to market demands without physical constraints. Adoption among SMEs surpassed 82% in 2025, attributed to pay-as-you-go pricing models that offer cost predictability for variable workloads, shifting from unpredictable capital outlays to operational expenses aligned with usage. This model mitigates financial risks from fluctuating demand, such as seasonal spikes, while supporting lean teams in scaling applications globally.

The surge in remote work following 2020 further catalyzed adoption, as cloud architectures facilitated secure, location-independent access to shared resources and collaboration tools, with cloud spending rising 37% in the first quarter of that year alone. Enterprises integrated virtual desktops and SaaS applications to maintain productivity amid distributed workforces, enabling seamless collaboration and reducing downtime from on-site dependencies. This shift underscored the cloud's role in operational resilience, allowing firms to sustain continuity without geographic ties.

Risks and Criticisms

While cloud computing resolves traditional infrastructure constraints through scalability, cost efficiency, agility, and reduced management overhead enabled by on-demand provisioning and pay-as-you-go models, it introduces new governance challenges, including data sovereignty and the delineation of shared responsibility; compliance burdens tied to disparate regulatory and jurisdictional frameworks; and digital forensics obstacles such as data volatility, impediments to evidence acquisition amid multi-tenancy, and jurisdictional restrictions on access.

Security and Privacy Issues

Cloud computing's shared responsibility model delineates duties between providers, who secure the underlying infrastructure, and customers, who manage data, applications, and configurations; however, frequent customer-side failures, such as inadequate identity management and oversight gaps, expose systemic flaws where assumptions of provider omnipotence leave vulnerabilities unaddressed. Misconfigurations, often resulting from incomplete implementation of this model, accounted for 23% of incidents in recent analyses, underscoring how customer-configured access controls and storage buckets remain primary breach vectors rather than inherent provider defects. Compromised credentials emerged as the leading cloud security threat in 2025, driving up to 67% of major data breaches through tactics such as phishing and credential stuffing, with a reported 300% surge in credential theft incidents enabling unauthorized access to cloud environments. API vulnerabilities and insider threats compound this, as exposed interfaces and privileged user actions facilitate lateral movement; for instance, the Cloud Security Alliance's 2025 report identifies identity and access management failures, alongside misconfigurations, as recurrent patterns in real-world breaches such as the 2024 Snowflake incident. Data breaches constituted 21% of reported cloud incidents in 2024, predominantly from these customer-managed lapses rather than infrastructure faults.

Privacy concerns arise from data sovereignty constraints and inadequate multi-tenant isolation, where shared environments risk cross-tenant data leakage despite virtualization safeguards; historical examples include the 2021 Azure Cosmos DB ChaosDB vulnerability, which allowed arbitrary account access via misconfigured roles and highlighted persistent isolation enforcement challenges. Regulations such as the EU's GDPR enforce data residency and transfer requirements to mitigate extraterritorial access risks, such as those under the U.S. CLOUD Act, with non-compliance fines reaching 4% of global annual revenue or €20 million; cumulative fines exceeded €5.65 billion across 2,245 violations by March 2025, some tied to cloud mishandling of personal data transfers.

Mitigations emphasize customer adoption of zero-trust architectures, which verify all access regardless of origin, and encryption to protect data in transit and at rest; yet the Cloud Security Alliance notes that supply chain attacks persist as a top 2025 threat, exploiting third-party dependencies in customer ecosystems and evading traditional perimeters. Effective implementation requires continuous monitoring and automated configuration auditing to bridge shared responsibility gaps, as evolving attacker tactics continue to outpace static defenses.
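
A minimal sketch of the automated configuration auditing recommended above is shown below, using hypothetical resource records rather than a provider's API; a real audit would pull inventory data from the cloud provider or a configuration-management tool.

```python
# Simplified sketch of customer-side configuration auditing under the shared
# responsibility model. Bucket records and policy rules are hypothetical.

buckets = [
    {"name": "public-assets", "public_read": True, "encrypted": True},
    {"name": "customer-exports", "public_read": True, "encrypted": False},  # misconfigured
    {"name": "audit-logs", "public_read": False, "encrypted": True},
]


def audit(bucket: dict) -> list[str]:
    findings = []
    if bucket["public_read"] and "public" not in bucket["name"]:
        findings.append("unexpected public read access")
    if not bucket["encrypted"]:
        findings.append("encryption at rest disabled")
    return findings


for b in buckets:
    issues = audit(b)
    if issues:
        print(f"{b['name']}: " + "; ".join(issues))
```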

Cost Control Failures

Organizations deploying cloud computing frequently encounter substantial cost overruns, with estimates indicating that 32% of cloud budgets are wasted annually due to inefficient utilization. This waste equates to approximately 21% of enterprise cloud spending, projected at $44.5 billion globally in 2025, primarily from underutilized resources. Primary causes include over-provisioning, where teams allocate excess capacity in anticipation of peak demands that rarely materialize, and the persistence of idle or unused instances that continue accruing charges. Shadow IT exacerbates these issues, as unauthorized deployments by non-IT personnel lead to fragmented, unmonitored sprawl outside central governance.

Unpredictable billing structures contribute to widespread dissatisfaction, as usage-based pricing models often produce bills that deviate sharply from initial forecasts, eroding anticipated savings. Gartner research highlights that such discrepancies stem from inadequate metering and forecasting, with 25% of organizations expected to report significant dissatisfaction with cloud initiatives by 2028 due to these unmet expectations. Surveys indicate that 84% of organizations identify managing cloud spend as their primary challenge, reflecting a lag in implementing robust cost controls despite initial migration benefits. FinOps practices, which integrate financial accountability into cloud operations through ongoing optimization and collaboration, have gained traction but remain inconsistently adopted, with many firms still grappling with forecasting shortfalls and governance gaps. Without disciplined oversight, early cost advantages from migration diminish as expenditures balloon from unchecked sprawl, underscoring the need for proactive metering and rightsizing to sustain economic viability.
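
A FinOps-style rightsizing pass can be sketched in a few lines: flag resources whose observed utilization falls below a threshold and total the potential savings. The instance records, threshold, and costs below are hypothetical.

```python
# Sketch of a FinOps-style rightsizing pass: flag instances whose average CPU
# utilization stays below a threshold so they can be downsized or terminated.
# Instance data and monthly rates are hypothetical.

instances = [
    {"id": "web-1", "avg_cpu": 0.62, "monthly_cost": 180.0},
    {"id": "etl-2", "avg_cpu": 0.04, "monthly_cost": 420.0},   # effectively idle
    {"id": "dev-3", "avg_cpu": 0.09, "monthly_cost": 95.0},
]

IDLE_THRESHOLD = 0.10

flagged = [i for i in instances if i["avg_cpu"] < IDLE_THRESHOLD]
potential_savings = sum(i["monthly_cost"] for i in flagged)

for inst in flagged:
    print(f"{inst['id']}: avg CPU {inst['avg_cpu']:.0%}, candidate for rightsizing")
print(f"potential monthly savings: ${potential_savings:.2f}")
```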

Vendor Dependencies and Lock-in

Vendor lock-in in cloud computing arises from customers' reliance on proprietary technologies, services, and ecosystems offered by dominant providers, creating significant barriers to switching or exiting. Key mechanisms include data gravity, where the accumulation of large data volumes and associated applications generates immense transfer costs and downtime risks, making migration prohibitive, and API incompatibilities, as providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud develop unique service interfaces that resist straightforward portability.

Efforts to mitigate lock-in through multi-cloud strategies and cloud-agnostic architectures have proliferated, with 89% of enterprises adopting such approaches by 2025 to distribute workloads across providers. Cloud-agnostic practices emphasize transferable core skills in compute virtualization, storage systems such as object storage, networking, and orchestration, facilitating portability and reducing dependency on vendor-specific implementations. Complementary tools, such as Kubernetes for container orchestration and infrastructure-as-code frameworks, provide vendor-neutral abstractions that enhance flexibility. However, these introduce operational complexities, such as inconsistent tooling, heightened overhead, and integration challenges that often erode anticipated benefits like cost savings or resilience. Persistent complaints highlight that multi-cloud does not fully eliminate dependencies, as workloads optimized for one provider's ecosystem remain tethered to it, and egress fees (though waived by Google Cloud in January 2024 and AWS in March 2024) previously amplified exit barriers. Switching costs exacerbate these issues, frequently reported as substantially higher than initial deployment costs due to refactoring applications, retraining staff, and migration logistics, with some analyses indicating that cloud operational expenses can exceed on-premises equivalents by factors of up to 5x in unmanaged scenarios. This dependency fosters pricing power for incumbents, enabling gradual cost escalations post-adoption.

Critics argue that concentration among the "Big Three" providers (AWS, Azure, and Google Cloud) diminishes competition by entrenching proprietary standards that hinder smaller entrants and innovation. Regulatory bodies have intensified scrutiny, exemplified by the UK Competition and Markets Authority's (CMA) 2025 investigation into AWS and Microsoft for practices reinforcing lock-in, including restrictive licensing that impedes multi-cloud viability. Such centralization not only amplifies systemic risks from outages but also prompts calls for interoperability mandates to restore market dynamism.
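
Cloud-agnostic design typically isolates provider-specific code behind small interfaces so application logic stays portable. The sketch below illustrates the pattern with a hypothetical object-storage interface and an in-memory stand-in adapter; the class and method names are assumptions, not any vendor's SDK.

```python
# Sketch of a cloud-agnostic storage abstraction: application code depends on a
# small interface, and each provider becomes an interchangeable adapter.

from abc import ABC, abstractmethod


class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in adapter; real adapters would wrap a provider's object-storage SDK."""
    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def archive_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # Application logic never references a specific provider, easing migration.
    store.put(f"reports/{report_id}", body)


if __name__ == "__main__":
    store = InMemoryStore()
    archive_report(store, "2025-q3", b"quarterly figures")
    print(store.get("reports/2025-q3"))
```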

Environmental Impacts

Data centers supporting cloud computing consumed approximately 460 terawatt-hours (TWh) of electricity globally in 2022, equivalent to about 2% of worldwide electricity use. Projections for 2025 indicate continued growth, with estimates placing the sector's share at 2-3% amid rising demand from AI and data-intensive applications, though this remains a modest fraction compared to sectors like transportation or industry. In the United States, data centers accounted for 4% of total electricity consumption in 2024, with hyperscale facilities, operated by major cloud providers, driving much of the increase due to their scale and density.

Cooling requirements add a water consumption dimension, particularly for hyperscalers. U.S. data centers used 66 billion liters of water in 2023, with hyperscale operations comprising 84% of that total, primarily for evaporative cooling in cooling towers. A single hyperscale facility can consume millions of liters daily, comparable to a small city, though much of this water is withdrawn and partially returned after evaporation losses.

Cloud architectures mitigate some impacts through higher utilization. Server utilization in large-scale cloud environments reaches 65%, compared to 12-15% in typical on-premises setups, enabling workload consolidation that reduces overall hardware needs and energy per computation. Major providers have also accelerated renewable energy adoption; Amazon Web Services (AWS) achieved 100% renewable energy matching for its operations in 2023, seven years ahead of its 2030 target, via investments in over 500 solar and wind projects. This shift enhances carbon efficiency, as cloud providers procure renewables to offset grid-supplied power, contrasting with on-premises reliance on local utilities.

Despite these advances, rapid cloud growth challenges green technology deployment. Data center demand is projected to double electricity needs in some regions by 2030, potentially outpacing renewable capacity additions and straining grids. Regional variations exacerbate these issues; in coal- and gas-heavy areas such as the U.S. Midwest, where regional grid mixes remain roughly 61% fossil-based, data centers draw from high-emission sources unless offset by off-site renewables. Such dependencies highlight that while the cloud enables efficiency gains, unchecked expansion without localized clean energy can perpetuate fossil fuel lock-in in underdeveloped grids.

Geopolitical Vulnerabilities

The dominance of U.S.-based cloud providers, which control over 60% of the global infrastructure-as-a-service market as of mid-2025, exposes users to geopolitical risks stemming from American legal jurisdiction. The Clarifying Lawful Overseas Use of Data (CLOUD) Act, enacted in 2018, empowers U.S. authorities to compel providers such as AWS, Microsoft Azure, and Google Cloud to disclose data stored anywhere in the world, regardless of local laws, potentially overriding foreign jurisdictions. This has fueled concerns among non-U.S. governments about compelled data access for law enforcement or intelligence purposes, particularly in allied nations wary of shifts in U.S. policy.

In response, the European Union has advanced data localization mandates and digital sovereignty initiatives to mitigate reliance on U.S. hyperscalers. Regulations such as the EU Data Act (effective 2025) and GDPR enforcement emphasize data residency within EU borders, requiring providers to ensure metadata, backups, and logs remain under European control to prevent foreign access. These measures aim to counter extraterritorial reach, with initiatives like the Gaia-X project promoting EU-centric clouds, though implementation lags due to technical and economic hurdles.

Sanctions illustrate acute disruptions from geopolitical tensions, as seen in Russia's experience following the 2022 invasion of Ukraine. U.S. and allied restrictions, including the U.S. Treasury's June 2024 determination limiting IT and cloud services exports to Russia, severed access for thousands of Russian firms to hyperscale platforms, forcing abrupt migrations to domestic alternatives amid operational blackouts. Similarly, espionage risks amplify vulnerabilities, with state actors exploiting hardware and software dependencies in cloud infrastructure for cyber intrusions, as evidenced by documented campaigns targeting global providers for data theft.

Gartner's 2025 analysis identifies digital sovereignty as a pivotal trend, predicting that over 50% of multinational organizations will adopt sovereign cloud strategies by 2029, up from under 10% currently, driven by AI regulations, privacy laws, and escalating geopolitical frictions with the United States. Non-U.S. providers such as Alibaba Cloud, holding approximately 4-5% of global share, have gained regional traction, with Alibaba reporting 18% revenue growth in Q1 2025, yet they trail U.S. leaders in scale, ecosystem breadth, and maturity. This lag perpetuates dependencies, underscoring the causal reality that fragmented alternatives struggle against the network effects of U.S.-dominated standards.

Mitigating Trade-offs in Scalability, Security, and Performance

Management approaches to alleviate compromises in scalability, security, and performance in cloud systems include adopting cloud-native architectures such as microservices, containers, Kubernetes, and serverless computing for elastic scaling and efficient resource utilization; implementing zero-trust security models and DevSecOps practices to integrate security throughout the development lifecycle without impeding performance; employing auto-scaling, load balancing, and AI-driven predictive scaling for dynamic resource adjustment; and utilizing continuous monitoring, observability tools, and FinOps practices for proactive optimization and cost governance. These strategies minimize inherent trade-offs by embedding security, automation, and financial accountability into scalable designs, as promoted by industry frameworks from organizations like the Cloud Native Computing Foundation and the FinOps Foundation.
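
A simplified target-tracking auto-scaling rule, of the kind referenced above, can be expressed as scaling the replica count in proportion to the ratio of observed load to a target; the constants and function below are illustrative assumptions.

```python
# Minimal sketch of a target-tracking auto-scaling rule: keep average CPU near a
# target by adding or removing instances. Constants are illustrative assumptions.

import math


def desired_replicas(current_replicas: int, avg_cpu: float,
                     target_cpu: float = 0.60, min_r: int = 2, max_r: int = 20) -> int:
    """Scale proportionally to the ratio of observed load to the target."""
    if avg_cpu <= 0:
        return min_r
    proposed = math.ceil(current_replicas * (avg_cpu / target_cpu))
    return max(min_r, min(max_r, proposed))


print(desired_replicas(current_replicas=4, avg_cpu=0.90))  # scale out to 6
print(desired_replicas(current_replicas=6, avg_cpu=0.25))  # scale in to 3
```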

Market Realities

Leading Providers and Shares

In the third quarter of 2025, global enterprise spending on cloud infrastructure services reached $107 billion, a 28% increase year-over-year driven primarily by demand for AI workloads and additional compute capacity. This spending encompasses infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and hosted private cloud services, with the top providers capturing the majority of the market through scale, breadth of services, and ecosystem integration.

Amazon Web Services (AWS) maintained its position as the leading provider with approximately 30% market share in Q3 2025, generating revenue of around $30 billion for the quarter. Launched in 2006 as the first major public cloud offering, AWS pioneered scalable IaaS with services like Elastic Compute Cloud (EC2) and Simple Storage Service (S3), establishing dominance through early-mover advantage and extensive global infrastructure spanning over 100 availability zones. Microsoft Azure followed with about 22-23% share, benefiting from seamless integrations with enterprise software such as Office 365 and Active Directory, which facilitate hybrid cloud deployments for large organizations. Google Cloud Platform (GCP) held roughly 12-13% share, leveraging Google's strengths in machine learning tools like TensorFlow and BigQuery for data analytics, appealing particularly to AI-focused developers and tech firms. Other notable providers include Alibaba Cloud at around 4% share, concentrated in Asia-Pacific with strengths in e-commerce and cross-border data services, and Oracle Cloud Infrastructure at about 3%, emphasizing database compatibility and performance for enterprise migrations. Together, AWS, Azure, and GCP accounted for 63% of the market, underscoring their dominance through superior capital expenditures on data centers and AI accelerators, which smaller competitors struggle to match.

| Provider | Q3 2025 market share | Key strengths |
|---|---|---|
| Amazon Web Services (AWS) | ~30% | Scalability, global reach, IaaS pioneer |
| Microsoft Azure | ~22-23% | Enterprise hybrid integration |
| Google Cloud Platform (GCP) | ~12-13% | AI/ML tools, data analytics |
| Alibaba Cloud | ~4% | Asia-Pacific dominance, e-commerce |
| Oracle Cloud Infrastructure | ~3% | Database optimization, performance |

Growth Metrics and Projections

The global cloud computing market reached an estimated USD 912.77 billion in 2025, reflecting robust historical growth driven by expanding digital adoption and hyperscale investments. Cloud infrastructure revenues for the full year 2025 exceeded $400 billion. For leading providers such as Amazon Web Services (AWS) and Microsoft Azure, primary growth drivers include cloud infrastructure expansion, AI integrations, recurring revenues from subscription-based services, and high operating margins from scale advantages. These figures mark a continuation of compound annual growth rates (CAGR) in the range of 18-20% over the preceding years, fueled in part by artificial intelligence (AI) workloads that have prompted hyperscale providers to allocate hundreds of billions in capital expenditures for expanded compute capacity.

Projections indicate the market will expand to between USD 1.6 trillion and USD 2.4 trillion by 2030, with CAGRs forecast at 17-21% depending on whether the scope covers public cloud only or total cloud services; nearer term, public cloud revenues are projected to reach $1.19 trillion in 2026. Enterprise adoption underpins much of this trajectory, with 94% of enterprises utilizing cloud services as of 2025, while small and medium-sized businesses (SMBs) have shifted over 63% of their workloads to cloud environments, often through software-as-a-service (SaaS) models that lower entry barriers. Regional dynamics show the Asia-Pacific region leading in growth velocity, with a projected CAGR of 22.2% through 2028, outpacing more mature markets due to rapid digitization. However, inefficiencies temper net gains: surveys reveal that 30% or more of cloud expenditures are wasted on underutilized resources and poor optimization, potentially inflating costs and constraining realized returns on investment. This waste, exacerbated by unchecked scaling, underscores the need for disciplined cost management to sustain projected expansions.
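
The projection arithmetic can be checked by compounding the 2025 estimate at the forecast CAGR range; note that published totals use differing scopes (public-only versus total cloud services), so the bases behind individual forecasts vary.

```python
# Worked arithmetic behind the projections: compounding the 2025 market estimate
# forward at the forecast CAGR range. Reported totals differ by scope, so bases vary.

def compound(base: float, rate: float, years: int) -> float:
    return base * (1 + rate) ** years


base_2025 = 912.77  # USD billion (total-market estimate cited above)
for rate in (0.17, 0.21):
    total_2030 = compound(base_2025, rate, years=5) / 1000
    print(f"{rate:.0%} CAGR for 5 years -> ~${total_2030:.2f} trillion by 2030")
```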

Competitive and Regulatory Dynamics

The cloud computing market features intense rivalry among the dominant providers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), driving price reductions and rapid feature development to capture market share. In response to competitive pressures, Google Cloud has frequently offered lower pricing for compute and storage services than AWS and Azure, with analyses showing GCP's on-demand instances up to 20-30% cheaper in certain configurations as of 2025. This pricing dynamic, coupled with aggressive discounts for committed usage, has compelled AWS and Azure to match or undercut rivals in high-demand areas like AI-optimized instances, fostering a cycle of iterative improvements in scalability and performance.

Open-source initiatives, such as the OpenStack platform, serve as a counterforce to proprietary lock-in by enabling organizations to deploy customizable private or hybrid clouds without dependency on a single provider. Launched in 2010 and maintained by a global community, OpenStack allows users to manage infrastructure via interoperable APIs, reducing migration costs and promoting multi-cloud strategies that dilute the dominance of hyperscalers. Adoption persists in enterprises seeking flexibility, with deployments supporting hybrid environments to avoid the technical debt of locked-in systems.

Regulatory interventions increasingly shape competition, with the European Union's Data Act, fully applicable since September 2025, mandating data portability and fair contract terms for cloud services to curb lock-in and ease switching between providers. This complements the Digital Markets Act (DMA), which designates certain platforms as gatekeepers and prompts calls to scrutinize cloud hyperscalers for anti-competitive bundling, though no cloud-specific designations had occurred by late 2025. In the United States, the Federal Trade Commission (FTC) initiated a broad antitrust probe into Microsoft in November 2024, examining Azure's licensing practices and potential abuse of market power in cloud services and product bundling. Similar scrutiny targets practices that may entrench incumbents, but enforcement remains ongoing without finalized remedies. These regulations impose compliance costs that disproportionately burden smaller entrants, as incumbents leverage scale to absorb legal and auditing expenses, potentially slowing overall innovation by diverting resources from R&D to regulatory adherence. Empirical assessments indicate that proving compliance under stringent standards can raise barriers equivalent to millions in upfront investments for new providers, favoring established players with dedicated compliance teams.

Despite this, niche challengers like CoreWeave have emerged, specializing in GPU-intensive AI workloads with purpose-built infrastructure that outperforms general-purpose clouds in performance and efficiency. Valued for its software optimizations and flexible capacity, CoreWeave powers major AI firms and captures demand unmet by hyperscalers' broader offerings.

Future Trajectories

Integration with Emerging Technologies

Cloud computing platforms have increasingly integrated with artificial intelligence (AI) and machine learning (ML) workloads, enabling scalable deployment of compute-intensive tasks through specialized GPU clusters. Major providers offer access to NVIDIA's high-performance GPUs, such as the H100 and A100 series, optimized for training large language models and other AI applications. For instance, NVIDIA DGX Cloud provides multi-node GPU scaling across leading hyperscalers, facilitating production-ready AI training. Industry forecasts suggest that AI/ML demand will drive increased cloud compute usage, with 50% of cloud compute projected to stem from such workloads by 2029. This integration is further propelled by multi-cloud strategies, where AI requirements encourage organizations to combine providers for optimal resource allocation, as highlighted in Gartner's 2025 cloud trends.

Edge computing extends cloud capabilities to the periphery, addressing low-latency needs for Internet of Things (IoT) applications by processing data closer to the source. This hybrid model reduces transmission delays to milliseconds, which is essential for real-time analytics in industrial IoT and autonomous systems. In 2025, edge-cloud convergence is expected to enhance IoT scalability, with localized processing minimizing bandwidth strain on central clouds while maintaining centralized management. Providers like AWS and Azure support this through edge services that federate with core cloud infrastructure for seamless data flow.

Emerging pilots in quantum computing leverage cloud platforms for accessible experimentation, with hyperscalers such as AWS, Microsoft Azure, and Google Cloud offering quantum-as-a-service. Azure, for example, emphasizes hybrid quantum-classical applications in 2025 to build quantum readiness through skilling and experimentation access. These efforts focus on complementing classical cloud computing for optimization problems unsolvable by traditional methods. Blockchain hybrids integrate decentralized ledgers with cloud services for enhanced data integrity in multi-cloud environments, using tools like Google Cloud's Blockchain Node Engine for managed node hosting. Serverless architectures further support these emerging technology stacks, automating scaling for event-driven workloads and improving efficiency in AI/ML operations.
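
The edge-versus-cloud trade-off described above often reduces to a latency-budget decision. The sketch below is a hypothetical routing rule; the latency figures are illustrative assumptions.

```python
# Sketch of an edge-versus-cloud routing decision for an IoT reading: handle it
# locally when the latency budget is tight, otherwise forward it to the central
# cloud for heavier analytics. Latency figures are illustrative assumptions.

def route(latency_budget_ms: float, edge_latency_ms: float = 5.0,
          cloud_round_trip_ms: float = 80.0) -> str:
    if latency_budget_ms < cloud_round_trip_ms:
        return "edge"        # real-time control loops cannot wait for the cloud
    return "cloud"           # batch analytics tolerate the extra round trip


print(route(latency_budget_ms=20))    # -> edge
print(route(latency_budget_ms=500))   # -> cloud
```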

Sustainability and Efficiency Efforts

Major cloud providers pursue energy efficiency through metrics such as power usage effectiveness (PUE), with modern facilities achieving values below 1.2. Google reports a trailing twelve-month PUE of 1.09 across its fleet of large-scale data centers. Industry analyses indicate that high-efficiency setups reach 1.2 or lower, outperforming broader averages of 1.55 to 1.59 reported since 2020.

Renewable energy commitments form a core component of these efforts, including carbon tracking and procurement strategies. Google targets 24/7 carbon-free energy operations by 2030, having matched 100% of its global electricity use with renewables for the eighth year in 2024. Amazon Web Services (AWS) aims for 100% renewable energy by 2025 via investments in wind and solar projects totaling 20 GW of capacity. Microsoft Azure focuses on carbon-neutral grids and efficiency enhancements, though third-party evaluations note gaps in emissions and water metrics relative to policy claims. By 2025, tools like auto-scaling enable dynamic resource allocation, reducing waste by provisioning compute capacity in response to real-time demand and scaling down during low usage. AI integration further optimizes this by predicting loads and enhancing hardware utilization, contributing to measurable per-unit reductions amid expanding demand.

These optimizations, however, occur against rising absolute emissions driven by demand growth; Amazon's carbon footprint increased 6% in 2024 despite efficiency gains in data centers and AI chips. Projections suggest data center energy use could rise 20% by 2030, with greenhouse gas emissions up 13%, underscoring that relative improvements do not offset scale expansion without broader demand management. Skepticism persists regarding greenwashing, as providers' self-reported claims often lack independent verification, with accusations of opaque emissions data and fossil fuel ties undermining some neutrality pledges. Empirical audits and standardized metrics are essential to distinguish substantive progress from promotional narratives.
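
PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, so values near 1.0 indicate little overhead for cooling and power distribution. The facility figures below are hypothetical, chosen only to echo the cited ranges.

```python
# Power usage effectiveness (PUE) is total facility energy divided by the energy
# delivered to IT equipment; values approaching 1.0 mean less overhead for
# cooling and power distribution. Facility figures are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh


# A hyperscale-class facility versus a typical enterprise server room.
print(f"hyperscale-class: PUE ~{pue(10_900, 10_000):.2f}")   # ~1.09
print(f"typical facility: PUE ~{pue(15_800, 10_000):.2f}")   # ~1.58
```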

Persistent Challenges and Innovations

Persistent challenges in cloud computing include acute talent shortages and accumulating complexity debt. As of 2025, over 90% of organizations face IT skills shortages projected to persist through 2026, potentially costing $5.5 trillion globally due to gaps in cloud expertise. Similarly, 87% of enterprises report insufficient specialized talent for cloud operations, exacerbating deployment delays and operational inefficiencies. Complexity debt arises from suboptimal architectural choices, legacy integrations, and unmanaged cloud waste, which reached $260 billion in 2024 (nearly one-third of total spend), manifesting as technical debt that hinders agility and inflates maintenance costs.

Security threats have evolved with AI integration, outpacing defenses in cloud environments. In 2025, 76% of organizations report being unable to match the speed of AI-powered attacks, including automated exploits targeting cloud misconfigurations. Approximately 16% of reported cyber incidents now involve AI tools for evasion or attack generation, amplifying risks from weak development pipelines and unremediated vulnerabilities.

Innovations aim to mitigate these issues through autonomous systems and enhanced cost controls, though adoption remains constrained by integration hurdles. Autonomous cloud management leverages AI for self-healing infrastructure and predictive optimization, with projections indicating that over 80% of routine operations could be automated by 2030, reducing manual effort and complexity. FinOps practices have advanced by codifying cost policies into engineering workflows, potentially unlocking $120 billion in value through real-time visibility and AI-driven allocation across cloud, SaaS, and data centers. Decentralized processing via edge computing extends workloads to local nodes, bridging central clouds and devices to alleviate latency and central points of failure, though it introduces new coordination challenges.

Despite projected market growth to $1.6 trillion by the late 2020s, cloud adoption is entering a phase of disillusionment in which early hype yields to scrutiny over elusive ROI. Many enterprises report persistent underperformance in returns, with FinOps confidence not translating into consistent savings amid waste and overprovisioning. Verifiable ROI, grounded in empirical cost-benefit analyses rather than vendor promises, will determine sustained traction, as organizations prioritize measurable outcomes over expansive migrations.
