The CIO of a major European logistics firm described her board presentation in early 2024 this way: "We showed them what it cost to do nothing. Not the capex of a migration — the compounding cost of staying where we were." The board approved the transformation programme within two weeks. Eighteen months later, the firm had reduced infrastructure operating costs by 41%, cut deployment cycles from six weeks to four hours, and retired eleven data centres.

That story is now common enough to be unremarkable. What is remarkable is how many organisations are still making the opposite calculation — treating cloud migration as a future initiative rather than an immediate economic imperative. The evidence from 2024 and 2025 is unambiguous: the cost of staying on legacy infrastructure is rising faster than the cost of migrating from it.

This report examines why that is the case, what good cloud transformation actually looks like, and where the most common — and most expensive — failure modes occur.

The scale of cloud adoption in 2025

The aggregate numbers from 2024 establish the scale of what is already underway. Global cloud spending reached $678 billion in 2024, according to Synergy Research Group — a figure that represents a 21% increase year-on-year and a near-doubling from $350 billion in 2020. Gartner projects that number will exceed $1.1 trillion by 2028, driven by AI infrastructure demand, enterprise workload migration, and the continued expansion of cloud-native application development.

By the end of 2025, IDC estimates that 85% of enterprise workloads will be hosted in cloud environments — up from 58% in 2021. The shift is not uniform across organisation types or geographies, but the direction is unambiguous. Organisations that are not actively executing cloud transformation programmes today are not holding position — they are falling behind a market that has already moved.

• $678B: global cloud spending in 2024, up 21% year-on-year (Source: Synergy Research Group, 2024)
• $1.1T: projected global cloud spend by 2028 as AI and workload migration accelerate (Source: Gartner Cloud Forecast, 2025)
• 85%: enterprise workloads cloud-hosted by end of 2025, up from 58% in 2021 (Source: IDC Enterprise Cloud Adoption Report, 2025)
• 66%: global hyperscaler market share held by AWS, Azure, and Google Cloud combined (Source: Synergy Research Group, Q4 2024)

Among hyperscalers, the competitive landscape has stabilised into a clear tier structure. AWS commands approximately 31% of global cloud infrastructure market share, maintaining its position as the dominant provider by revenue, breadth of services, and enterprise penetration. Microsoft Azure holds around 25% — with particular strength in organisations already running Microsoft-stack environments — and has seen the fastest enterprise growth rate of the three, driven substantially by its deep integration with Microsoft 365 and its early positioning of Azure OpenAI services. Google Cloud (GCP) sits at approximately 10%, having grown its market share by 3 percentage points since 2022 on the strength of its data analytics, AI/ML tooling, and Kubernetes leadership. The remaining 34% is distributed across Oracle Cloud Infrastructure, IBM Cloud, Alibaba Cloud (dominant in APAC), and a range of regional providers.

What the market share figures obscure is the accelerating pace of multi-cloud adoption. Flexera's 2025 State of the Cloud Report found that 89% of enterprises now have a multi-cloud strategy — using two or more public cloud providers alongside, in many cases, private cloud or on-premise infrastructure. The architectural complexity this introduces is a direct driver of the FinOps discipline discussed later in this report.

The real cost of staying on legacy infrastructure

The most persistent obstacle to cloud transformation programmes is a cost comparison that is framed incorrectly. IT leaders present migration costs — the services, labour, and transition disruption of moving — against a status quo that appears stable. But legacy infrastructure is not a stable cost. It is a compounding one.

The true total cost of ownership (TCO) of on-premise infrastructure includes categories that rarely appear in the headline budget line. Physical footprint costs — data centre space, power, cooling, and physical security — typically represent 30–40% of the real cost of running on-premise compute, but are often absorbed into facilities budgets that IT leadership does not fully control or see. Power and cooling alone can account for 1.5–2× the cost of the hardware itself over a standard five-year hardware lifecycle, particularly in older data centres with power usage effectiveness (PUE) ratios above 1.8 — compared to hyperscaler facilities that routinely operate at PUE 1.1 to 1.3.
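That PUE difference translates directly into the energy bill. A minimal sketch of the arithmetic, with an assumed electricity price and IT load (both figures are illustrative, not from the report):

```python
def annual_power_cost(it_load_kw, pue, price_per_kwh=0.15):
    """Facility energy cost for a year of running a given IT load.

    PUE = total facility power / IT equipment power, so every kW of IT
    load draws `pue` kW from the grid. The $/kWh price is an assumption.
    """
    hours_per_year = 24 * 365
    return it_load_kw * pue * hours_per_year * price_per_kwh

# The same 100 kW IT load in an older enterprise DC vs a hyperscaler facility
legacy = annual_power_cost(100, pue=1.8)
hyperscale = annual_power_cost(100, pue=1.2)
print(f"legacy ${legacy:,.0f}/yr vs hyperscale ${hyperscale:,.0f}/yr")
```

At these assumed rates, the PUE gap alone removes roughly a third of the annual energy cost before any compute efficiency gains are counted.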

Hardware refresh cycles are the single largest source of capital expenditure in legacy environments. Enterprise server hardware depreciates over three to five years, meaning organisations are committing to five-year capital cycles at precisely the moment cloud providers are delivering new generations of compute, storage, and networking on a continuous basis. A server purchased in 2022 will be two generations behind the equivalent cloud compute available in 2027, with no option to upgrade short of another full capital cycle.

Specialist staffing is the cost category that catches most organisations off-guard in TCO analyses. Maintaining on-premise infrastructure requires system administrators, storage engineers, network engineers, and security specialists whose skills are increasingly scarce and whose compensation is rising. Hyperscalers handle the physical layer, the hypervisor layer, the network fabric, and the storage management — eliminating entire job categories from the on-premise estate. The talent cost of maintaining legacy infrastructure is not just the salaries: it is the opportunity cost of those engineers not working on application development, automation, and business capability.

Perhaps the most underestimated cost is provisioning lead time. In a typical enterprise data centre environment, provisioning a new server — from hardware order to production-ready infrastructure — takes four to six weeks. In cloud, the same resource is available in minutes. The business value destroyed by six-week provisioning lead times is invisible in the infrastructure budget but very visible in the revenue and opportunity cost of projects delayed.

What the Research Shows on Legacy TCO

A 2024 Forrester Total Economic Impact study commissioned by AWS found that enterprises migrating to cloud infrastructure cut per-unit compute cost by a factor of 3–4 within 24 months of migration completion. IDC research published in 2025 found that organisations running predominantly on-premise workloads spent an average of 72% of their IT budget on maintenance and operations — leaving only 28% for innovation and new capability development. Cloud-native organisations inverted that ratio, with 55–60% of IT spend directed toward new capability. The McKinsey Global Institute's 2024 cloud economics report estimated that enterprises delaying cloud migration beyond 2025 face a cumulative competitive cost disadvantage of $1.2 trillion collectively by 2030 — driven by slower product velocity, higher infrastructure costs, and talent retention challenges.

Legacy infrastructure also compounds technical debt in ways that are distinct from application-layer debt. Aging operating systems require increasingly expensive extended support agreements (Microsoft's Extended Security Updates for Windows Server 2012 R2, for example, reached $0.023 per core-hour for cloud instances — a cost that disappears entirely when the workload is migrated and modernised). Unsupported firmware, end-of-life storage arrays, and aging network hardware create risk profiles that require expensive mitigations — additional backup infrastructure, manual patching processes, and specialist contractors for hardware that mainstream support no longer covers.

Why lift-and-shift fails — and what works instead

The most common and most costly failure mode in cloud migration is not failed migration — it is successful migration that delivers none of the expected benefits. This is the lift-and-shift trap: moving on-premise workloads to cloud virtual machines without re-architecting them for the cloud environment. The workload moves. The costs don't.

Lift-and-shift — sometimes called "rehosting" — takes a virtual machine running on-premise and runs it on a cloud VM of equivalent specification. The application is unchanged. The operating system is unchanged. The architecture is unchanged. What changes is the billing model: from a capital expense to an operational one, and from a utilisation-independent cost to a utilisation-dependent one. For workloads that ran at 15% average CPU utilisation on-premise (a typical figure for enterprise servers, according to the Uptime Institute), the organisation is now paying cloud compute rates around the clock for capacity that sits 85% idle.
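The arithmetic of that trap is easy to sketch. The vCPU counts, the $/vCPU-hour rate, and the business-hours schedule below are illustrative assumptions, not price-list figures:

```python
def monthly_cost(vcpus, rate_per_vcpu_hour, hours=730):
    """Cloud compute bill for a month (~730 hours). Rates are assumed."""
    return vcpus * rate_per_vcpu_hour * hours

RATE = 0.05  # $/vCPU-hour, an illustrative figure

lifted     = monthly_cost(16, RATE)            # rehosted as-is, running 24/7
rightsized = monthly_cost(4, RATE)             # sized to observed peak demand
scheduled  = monthly_cost(4, RATE, hours=300)  # also stopped outside business hours

print(lifted, rightsized, scheduled)
```

The lifted VM pays for its on-premise over-provisioning every hour of the month; rightsizing and scheduling, neither of which requires re-architecting the application, recover most of the difference.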


The failure patterns are consistent across industries. Organisations that lift-and-shift without rightsizing end up running cloud instances that are dramatically over-specified for actual workload demands. Organisations that lift-and-shift without eliminating redundant infrastructure find their cloud bill includes servers that existed on-premise solely for fault tolerance in a physical environment — fault tolerance that cloud architecture handles differently and more efficiently. Organisations that lift-and-shift applications with persistent local storage requirements face unexpected egress costs when those applications begin moving data between availability zones at enterprise scale.

"Moving your servers to the cloud without re-architecting them is not cloud transformation — it is cloud co-location. You get the bill without the benefit."

The corrective framework that has become industry standard is the 6Rs of cloud migration, originally developed by Gartner and subsequently refined by hyperscaler professional services teams. Each workload in an enterprise estate should be assessed against all six disposition options before migration begins:

Retain — Keep the workload on-premise. Not everything should move to cloud, particularly workloads with regulatory data residency requirements, extremely low latency requirements met only by on-premise proximity, or hardware dependencies that have no cloud equivalent. A disciplined migration programme identifies what to retain explicitly — not by default.

Retire — Decommission the workload entirely. Enterprise estates routinely contain applications that are no longer actively used, maintained by institutional habit rather than business need. A workload assessment typically identifies 10–20% of the estate as candidates for retirement — generating immediate cost savings before any migration work begins.

Rehost — Lift-and-shift, but only for workloads where it makes economic and architectural sense. Large, monolithic applications with complex interdependencies that would require 18+ months to refactor may be better candidates for rehosting as a first step, with modernisation planned as a second phase once the data centre footprint is reduced.

Replatform — Move to a managed cloud service without re-architecting the application logic. A self-managed MySQL database becomes Amazon RDS or Azure Database for MySQL. A self-managed message queue becomes Amazon SQS or Azure Service Bus. The application code is largely unchanged, but the operational overhead of managing the underlying service is eliminated.

Refactor — Re-architect the application to be cloud-native. This is the highest-effort option and the highest-value one for applications that justify the investment. Breaking a monolith into microservices, containerising workloads with Docker and Kubernetes, adopting event-driven architecture, and leveraging platform-native services like AWS Lambda or Azure Functions. The cost and velocity gains from refactoring are substantially larger than from any other disposition — but the timeframe and investment are also substantially larger.

Repurchase — Replace the existing application with a SaaS alternative. An on-premise CRM becomes Salesforce. An on-premise ERP becomes SAP S/4HANA Cloud. An on-premise HR system becomes Workday. For commodity business applications, repurchase frequently delivers faster time-to-value than migration and eliminates ongoing infrastructure management entirely.
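A first-pass assessment over the six dispositions can be sketched as a rule chain. Every attribute name and threshold below is a hypothetical simplification of what a real assessment weighs; the point is the ordering, with the cheapest outcomes checked first:

```python
def disposition(w):
    """Assign one of the 6Rs to a workload described as a dict of
    (hypothetical) assessment attributes. Cheapest outcomes win first."""
    if not w["actively_used"]:
        return "retire"                      # immediate saving, no migration work
    if w["residency_or_hw_locked"]:
        return "retain"                      # explicit decision to stay on-premise
    if w["commodity_saas_available"]:
        return "repurchase"                  # e.g. on-prem CRM -> SaaS CRM
    if w["managed_service_equivalent"]:
        return "replatform"                  # e.g. self-managed MySQL -> managed DB
    if w["refactor_effort_months"] > 18:
        return "rehost"                      # move now, modernise in phase two
    return "refactor"                        # re-architect as cloud-native

legacy_crm = {"actively_used": True, "residency_or_hw_locked": False,
              "commodity_saas_available": True, "managed_service_equivalent": False,
              "refactor_effort_months": 24}
print(disposition(legacy_crm))  # -> repurchase
```

A real programme would score each workload against many more dimensions, but the structure is the same: retire and retain decisions made explicitly, before any migration effort is committed.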

The organisations that execute cloud transformation successfully treat workload assessment and sequencing as the programme's most critical phase — not as a preamble to the real work, but as the work itself. The sequencing of which workloads migrate in which wave, and in which disposition mode, determines the cost trajectory and benefit realisation timeline of the entire programme.

Cloud-native architecture: what it actually means and why it matters

Cloud-native architecture is one of the most frequently used and least precisely defined terms in enterprise technology. In practice, it refers to a set of architectural patterns and engineering practices that are designed specifically to exploit the properties of cloud infrastructure — elasticity, managed services, global distribution, and pay-per-use economics — rather than simply tolerating them.

The core components of cloud-native architecture are not individually new. What is new is their combination as a coherent system, and the degree to which hyperscaler platform services have made them accessible to organisations that could not previously have built and operated them independently.

Auto-Scaling & Elasticity

Cloud-native applications are designed to scale horizontally — adding and removing compute instances in response to real-time demand signals rather than provisioning for peak capacity. AWS Auto Scaling, Azure Virtual Machine Scale Sets, and GCP Managed Instance Groups can respond to load changes in under 60 seconds, matching capacity to demand continuously. Organisations that previously provisioned for Black Friday peak at 365-day cost now pay for Black Friday compute only during Black Friday. A major UK retailer reduced compute costs by 62% in 12 months using auto-scaling after migrating from fixed on-premise infrastructure.
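Conceptually, a target-tracking scaling policy resizes the fleet in proportion to the gap between observed and target utilisation. A minimal sketch of that proportional rule, where the 60% CPU target is an assumed setpoint rather than a figure from any provider's documentation:

```python
import math

def desired_capacity(current, observed_cpu, target_cpu=60.0):
    """Proportional target-tracking rule: resize the fleet so average CPU
    lands near the target, never dropping below one instance."""
    return max(1, math.ceil(current * observed_cpu / target_cpu))

print(desired_capacity(10, 90))  # scale out under load -> 15
print(desired_capacity(10, 30))  # scale in when quiet  -> 5
```

Real scaling services add cooldowns, warm-up periods, and min/max bounds around this core rule, but the economics follow from the proportionality: capacity tracks demand rather than peak forecasts.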

DevOps & CI/CD Pipeline

Continuous Integration and Continuous Deployment pipelines automate the process of testing, building, and releasing software — eliminating the manual steps that extend deployment cycles to weeks in traditional enterprise environments. Cloud-native CI/CD tooling (GitHub Actions, AWS CodePipeline, Azure DevOps, Google Cloud Build) enables organisations to move from quarterly software releases to multiple deployments per day. Amazon famously deploys to production every 11.7 seconds on average. The business value is not speed for its own sake — it is the ability to respond to market signals, fix defects, and ship features at a pace that on-premise development cycles structurally prevent.

Container Orchestration

Docker containers package application code and its dependencies into portable, consistent units that run identically across development, testing, and production environments — eliminating the "works on my machine" class of deployment failures. Kubernetes, now the de facto standard for container orchestration, manages the scheduling, scaling, networking, and health management of containerised workloads across clusters of compute nodes. All three major hyperscalers offer managed Kubernetes services (Amazon EKS, Azure AKS, Google GKE) that eliminate the operational overhead of managing the Kubernetes control plane. Containerisation is the enabling technology for microservices architecture and is increasingly the default packaging format for new enterprise application development.

Serverless Architecture

Serverless computing — AWS Lambda, Azure Functions, Google Cloud Functions — executes application code in response to events without the need to provision, manage, or pay for an always-on server. Functions execute in milliseconds, scale from zero to millions of concurrent executions without configuration, and charge only for the compute time consumed during execution. For event-driven workloads, API backends, data processing pipelines, and scheduled tasks, serverless architecture reduces infrastructure cost to near-zero for idle periods and eliminates operational overhead entirely. Organisations migrating suitable workloads from always-on EC2 instances to Lambda typically see 70–85% reductions in compute cost for those specific workloads.
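The pay-per-execution model is easiest to see in the billing arithmetic. The per-GB-second and per-million-request rates below match AWS's long-published US-East Lambda figures, but treat them as illustrative; the workload profile is an assumed example:

```python
def lambda_monthly_cost(invocations, avg_duration_ms, memory_gb,
                        gb_second_rate=0.0000166667, per_million_requests=0.20):
    """Approximate serverless bill: a duration charge (GB-seconds consumed)
    plus a request charge. No charge at all for idle time."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * memory_gb
    return gb_seconds * gb_second_rate + invocations / 1e6 * per_million_requests

# 5M requests/month at 200 ms and 512 MB: single-digit dollars, zero idle cost
print(round(lambda_monthly_cost(5_000_000, 200, 0.5), 2))
```

The same traffic served by an always-on instance would be billed for every hour of the month regardless of request volume, which is where the 70–85% reductions cited above come from for bursty, event-driven workloads.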

The deployment cycle is where the cumulative impact of cloud-native architecture becomes most visible. Traditional enterprise release cycles — constrained by manual testing gates, change advisory board approvals, and deployment windows that avoid business-hours risk — measure deployment frequency in weeks or months. A cloud-native engineering organisation with mature CI/CD pipelines, automated testing, feature flags, and blue/green deployment capability measures deployment frequency in hours or days. The DORA (DevOps Research and Assessment) metrics, now embedded in engineering culture at leading technology organisations, show that elite engineering teams deploy 973× more frequently than low performers — and have 6,570× faster recovery from failures.

Infrastructure-as-code (IaC) — the practice of defining infrastructure configuration in version-controlled code files using tools like Terraform, AWS CloudFormation, or Pulumi — is the connective tissue that makes cloud-native architecture repeatable and governable at scale. When infrastructure is defined as code, environments can be spun up and torn down in minutes, configuration drift is eliminated, and the entire infrastructure estate becomes auditable through version control history. For regulated industries, IaC is increasingly a compliance requirement as much as an engineering best practice.
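The mechanic underneath IaC tooling is a diff between the state declared in code and the state actually deployed. A toy version of that plan step, in the spirit of `terraform plan`, with made-up resource names:

```python
def plan(desired, actual):
    """Compute create/update/destroy actions from declared vs deployed state."""
    actions = []
    for name, cfg in sorted(desired.items()):
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != cfg:
            actions.append(("update", name))   # configuration drift detected
    for name in sorted(actual):
        if name not in desired:
            actions.append(("destroy", name))  # resource no longer in code
    return actions

desired = {"web_vm": {"size": "large"}, "queue": {"retention_days": 14}}
actual  = {"web_vm": {"size": "xlarge"}, "old_cache": {"size": "small"}}
print(plan(desired, actual))
```

Because the desired state lives in version control, every change to it is reviewed, attributable, and reversible, which is what makes the auditability claim above concrete.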

Before and after: what the migration shift looks like

The following comparison reflects patterns observed across enterprise cloud transformation engagements — from financial services and retail to manufacturing and professional services. The contrast is not theoretical: these figures represent the operational reality before and after a disciplined, cloud-native migration programme.

• Provisioning time. Legacy: 4–6 weeks from hardware order to production-ready server, including procurement, physical installation, OS configuration, and network integration. Cloud-native: minutes, with compute, storage, and networking provisioned via API or console and available immediately; no procurement cycle, no physical installation.
• Cost model. Legacy: capital expenditure on a 3–5 year hardware refresh cycle, sized for peak demand and consuming budget regardless of utilisation (typically 10–20% average utilisation on provisioned capacity). Cloud-native: operational expenditure matched to actual consumption; auto-scaling eliminates idle capacity cost, and Reserved Instances and Savings Plans reduce on-demand pricing by 30–72% for predictable workloads.
• Deployment frequency. Legacy: weekly to quarterly releases gated by manual testing, change advisory board approvals, and scheduled deployment windows outside business hours. Cloud-native: multiple deployments per day via CI/CD pipelines with automated testing, feature flags, and zero-downtime blue/green or canary deployment strategies.
• Disaster recovery. Legacy: a geographically separate DR data centre at 50–100% of primary infrastructure cost, tested quarterly, with RTO of hours to days and RPO measured in hours for most workloads. Cloud-native: multi-AZ and multi-region resilience built into the architecture, with RTO and RPO measured in seconds to minutes for critical workloads and DR cost absorbed into standard infrastructure pricing with no separate facility overhead.
• Security patching. Legacy: manual patch management across a heterogeneous hardware and OS estate, typically on monthly cycles, with significant lag between vulnerability disclosure and patch deployment. Cloud-native: managed services patched automatically by the hyperscaler; immutable infrastructure patterns eliminate configuration drift, and AWS Systems Manager, Azure Update Manager, and GCP OS Config automate patch deployment at scale.
• Team productivity. Legacy: infrastructure engineers spend 60–70% of their time on maintenance, hardware management, and operational support, with limited capacity for automation, tooling, or capability development. Cloud-native: infrastructure-as-code and managed services shift the team's focus from maintenance to automation and architecture; organisations report 40–50% improvement in engineering throughput within 12 months of cloud-native adoption.

The productivity figure deserves additional context. The shift in engineering team capacity — from infrastructure maintenance to application and automation work — is consistently cited as the most significant long-term benefit of cloud transformation by CIOs who have completed migrations. The cost savings from compute efficiency are real and measurable from day one. The compounding value of an engineering organisation that ships product faster and more reliably is harder to measure but orders of magnitude larger over a three-to-five year horizon.

The FinOps discipline: why cloud cost engineering matters more than migration

Cloud migration without FinOps practice is the second most common failure mode in enterprise cloud programmes — and unlike lift-and-shift, it afflicts organisations that have successfully migrated and re-architected their workloads. The pattern is consistent: cloud spend grows rapidly in the first 12–18 months post-migration as workloads move and new cloud-native capabilities are adopted. Without active cost engineering, that growth continues beyond the point that business value justifies it.

The scale of cloud waste is striking. Flexera's 2025 State of the Cloud Report found that organisations estimate they waste an average of 32% of their cloud spend — a figure that has remained stubbornly consistent for four consecutive years. For an enterprise spending $50 million per year on cloud infrastructure, that represents $16 million in annual waste. The Gartner Infrastructure and Operations research team has separately estimated that organisations without mature FinOps practices overspend their optimal cloud budget by 30–35% within 24 months of their initial migration.

The Scale of Cloud Cost Waste

Flexera's 2025 State of the Cloud Report surveyed 753 cloud decision-makers across enterprises with 1,000+ employees. 32% of cloud spend is estimated as wasted — driven by idle resources, oversized instances, unattached storage volumes, and unoptimised data transfer costs. Among the same respondents, optimising existing cloud use was ranked the top cloud initiative for the fourth consecutive year, above migrating more workloads or adopting new cloud services. AWS itself has estimated that the average enterprise customer is running instances at 40% of provisioned capacity — meaning 60% of purchased compute is generating no business value. Correcting this through rightsizing alone typically delivers 20–30% cost reduction within 90 days of implementation.

FinOps — a portmanteau of Finance and DevOps, formalised by the FinOps Foundation — is the organisational practice of bringing financial accountability to cloud spending through cross-functional collaboration between engineering, finance, and business stakeholders. It is not a tool or a platform, though tooling supports it. It is an operating model that makes cloud spending visible, attributable, and optimisable.

The core FinOps practices that deliver the most immediate cost reduction are well-established. Rightsizing — adjusting instance types and sizes to match actual workload requirements rather than provisioned specifications — is typically the first exercise and delivers 20–30% cost reduction on compute spend within 90 days. Cloud cost management platforms (AWS Cost Explorer, Azure Cost Management, Google Cloud Cost Management, and third-party tools including Apptio Cloudability, CloudHealth by VMware, and Spot.io) provide rightsizing recommendations based on actual utilisation data that most organisations have never reviewed systematically.
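The logic of a rightsizing recommendation can be sketched in a few lines. The size-to-vCPU table, the p95 threshold, and the 30% headroom factor are all illustrative simplifications; real tools also weigh memory, network, and burst behaviour:

```python
SIZES = {"small": 2, "medium": 4, "large": 8, "xlarge": 16}  # vCPUs, simplified

def rightsize(current, p95_cpu_pct, headroom=1.3):
    """Recommend the smallest size whose capacity covers observed p95
    load plus a safety headroom. Works in both directions: it can also
    recommend upsizing a genuinely hot instance."""
    needed_vcpus = SIZES[current] * (p95_cpu_pct / 100) * headroom
    for size, vcpus in sorted(SIZES.items(), key=lambda kv: kv[1]):
        if vcpus >= needed_vcpus:
            return size
    return current

print(rightsize("xlarge", 20))  # p95 of 20% on 16 vCPUs -> "large"
```

The striking part in practice is not the rule but the data: most estates have never had their utilisation history run through even a rule this simple.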

Reserved Instances and Savings Plans — AWS's pricing models for committing to a consistent level of compute usage over one or three years — reduce on-demand pricing by 30–72% depending on commitment term and flexibility. Azure offers Reserved VM Instances with equivalent savings. For predictable baseline workloads, the decision to purchase reserved capacity is straightforward arithmetic, but executing it requires the cross-functional FinOps process: engineering teams need to commit to usage levels, finance teams need to approve multi-year commitments, and procurement teams need to manage the purchasing cycle.
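The underlying arithmetic is a break-even utilisation: a commitment bills for every hour of the term, while on-demand bills only for hours actually used. A sketch with assumed hourly rates:

```python
def breakeven_utilisation(on_demand_hourly, committed_hourly):
    """Fraction of the term a workload must actually run for a commitment
    to beat on-demand pricing. Below this, on-demand is cheaper."""
    return committed_hourly / on_demand_hourly

# An assumed 40% discount pays off once the workload runs more than 60% of hours
print(breakeven_utilisation(0.10, 0.06))
```

This is why commitments suit steady baseline workloads and not bursty ones: a workload running 40% of hours under the rates above is cheaper on-demand despite the headline discount.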

Spot Instances (AWS) and Spot VMs (Azure) offer access to spare hyperscaler compute capacity at 60–90% discounts versus on-demand pricing, with the trade-off that they can be reclaimed at short notice (two minutes for AWS Spot Instances, 30 seconds for Azure Spot VMs). For fault-tolerant, interruption-tolerant workloads — batch processing, data analytics, CI/CD build pipelines, and machine learning training jobs — Spot pricing is transformative. Organisations running data engineering pipelines on Spot infrastructure routinely achieve 70–80% compute cost reductions versus on-demand equivalents.
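For a checkpointed batch job, the cost of interruptions is just a small amount of re-run work, which is why the Spot discount dominates. A sketch with illustrative figures (the discount, interruption count, and re-run time are all assumptions):

```python
def batch_cost(work_hours, hourly_rate, interruptions=0, redo_hours=0.1):
    """Cost of a checkpointed batch job: each reclaim re-runs only the
    work since the last checkpoint. All figures are illustrative."""
    return (work_hours + interruptions * redo_hours) * hourly_rate

on_demand = batch_cost(100, 1.00)                   # uninterrupted baseline
spot      = batch_cost(100, 0.25, interruptions=8)  # assumed 75% discount, 8 reclaims
print(on_demand, spot)
```

Even with eight interruptions, the re-run overhead is under 1% of the job, against a 75% price reduction; the calculus only turns against Spot for workloads that cannot checkpoint at all.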

Tagging strategy and cost allocation are the governance foundations of FinOps. Without consistent resource tagging — associating every cloud resource with the business unit, application, environment, and team that owns it — cost data is an undifferentiated mass that no one is accountable for. With consistent tagging, every dollar of cloud spend is attributable to a cost centre, enabling showback (showing teams what they are spending) and chargeback (charging teams for what they consume). The cultural shift from shared infrastructure cost pools to team-level accountability for cloud spending is where FinOps delivers its most durable behavioural change.
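Tag governance is usually enforced by exactly this kind of check, run continuously against the estate. The required tag keys and resource records below are hypothetical examples:

```python
REQUIRED = {"business_unit", "application", "environment", "owner"}

def untagged(resources):
    """IDs of resources missing any required cost-allocation tag.
    Such resources cannot be attributed to a cost centre."""
    return [r["id"] for r in resources
            if not REQUIRED <= set(r.get("tags", {}))]

estate = [
    {"id": "vm-001", "tags": {"business_unit": "retail", "application": "checkout",
                              "environment": "prod", "owner": "team-payments"}},
    {"id": "vol-042", "tags": {"environment": "dev"}},  # partially tagged volume
    {"id": "db-007"},                                   # no tags at all
]
print(untagged(estate))  # -> ['vol-042', 'db-007']
```

In mature FinOps practices this check runs as a policy gate at provisioning time, so untagged resources never reach the bill in the first place.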


The maturity model for FinOps practice, as defined by the FinOps Foundation, runs from Crawl (basic visibility, reactive cost management) through Walk (proactive optimisation, reserved capacity management, regular reviews) to Run (real-time cost optimisation, automated rightsizing, engineering teams with cost accountability embedded in development workflows). Most enterprises begin at Crawl. The organisations achieving the largest cost reductions — and sustaining them — are those that have progressed through Walk to Run, where cost engineering is a first-class engineering discipline rather than a finance-team report.

Key Takeaways from This Report

  • Global cloud spending reached $678B in 2024 and is projected to exceed $1.1T by 2028 — organisations not executing cloud transformation are falling behind a market that has already moved
  • The true TCO of legacy infrastructure includes physical footprint, power, cooling, hardware refresh cycles, specialist staffing, and provisioning lead times that rarely appear in the headline IT budget — typically making on-premise compute 3–4× more expensive per unit than cloud equivalents
  • Lift-and-shift is the most common cloud migration failure mode: moving workloads without re-architecting them transfers cost without transferring benefit — organisations must assess every workload against the 6Rs (Retain, Retire, Rehost, Replatform, Refactor, Repurchase)
  • Cloud-native architecture — microservices, containers, Kubernetes, serverless, CI/CD, and infrastructure-as-code — reduces deployment cycles from weeks to hours and enables engineering teams to redirect 40–50% more capacity toward product delivery
  • The before/after contrast is stark: provisioning from 6 weeks to minutes; DR from hours to seconds; deployment frequency from quarterly to multiple per day; security patching from manual to automated
  • 32% of enterprise cloud spend is wasted — FinOps practice (rightsizing, reserved capacity, Spot instances, tagging, cost allocation) is the discipline that converts cloud migration into sustained financial performance
  • The organisations achieving the most durable cloud ROI are those that treat FinOps as a first-class engineering and business discipline — not as a cost-reduction project, but as an ongoing operating model

Our view

The question of whether to pursue cloud transformation has been settled by the economics. The questions that remain — and where we see organisations most consistently struggle — are questions of sequencing, discipline, and governance. Cloud transformation programmes that fail do not fail because the technology doesn't work. They fail because the workload assessment was rushed, the migration was sequenced for speed rather than value, lift-and-shift was treated as good enough, and FinOps was treated as someone else's responsibility. The gap between organisations that achieve substantial, sustained cloud ROI and those that achieve the cloud bill without the cloud benefit is almost entirely a gap in programme discipline — not a gap in technology capability.

What separates the organisations that succeed is a clarity of purpose that runs from the board level to the engineering team. Cloud transformation is not an IT infrastructure project. It is a business transformation programme that happens to involve IT infrastructure. The organisations we see achieving the most significant outcomes are those where cloud strategy is owned by the executive team, where success is measured in business outcomes rather than workload migration percentages, where FinOps is embedded in engineering culture rather than delegated to a finance team, and where the migration programme is designed around value sequencing — moving the workloads that deliver the most business benefit earliest, generating the financial returns that fund subsequent waves.

The imperative is real, and the window for capturing the full competitive advantage of cloud transformation is not indefinitely open. As cloud-native capabilities — particularly AI infrastructure, real-time data platforms, and global edge compute — become embedded in the operating models of leading enterprises, the gap between cloud-native and legacy-bound organisations will become increasingly structural. The organisations that act with discipline now — not with haste, but with the rigour that the investment demands — are the ones that will define competitive advantage in their markets over the next decade.