Cloud Repatriation: Striking a Strategic Balance

January 14, 2025
Cloud

Public cloud computing promised us a revolution of cost savings, scalability, and agility. For years, we’ve embraced a cloud-first strategy, moving workloads, applications, and data to public clouds. Now the hype cycle has come full circle, and a new trend, public cloud repatriation, is emerging. As organizations seek to optimize the value of their technology investments, many of us are evaluating the appropriateness of this “cloud-last” approach. In this blog, we will explore what cloud repatriation is, what’s driving it, and how we can leverage what we’ve learned from our mistakes to strike a strategic balance moving forward.

What is Cloud Repatriation?

Chances are you’ve heard the term used in a variety of circumstances, but what does cloud repatriation actually mean? Well, it means something different to each organization (at least it should), but we’ll get into that later. For definition purposes, cloud repatriation refers to migrating workloads out of public cloud environments back to on-premises infrastructure or private data centers. Ultimately, I think the biggest question to answer is: where should these workloads live? Before we explore workload strategies, let’s first examine what’s driving organizations to consider cloud repatriation.

What’s Driving It?

Cost/Budget – At the top of the list, no surprise here, is cost. My professional advice is to first turn your thoughts inward and get honest about what you are evaluating from a cost perspective. A Total Cost of Ownership (TCO) analysis should extend far beyond the compute and storage components. Cloud-first was, in part, a response to the opportunity cost of the status quo: traditional technology procurement was time-consuming, lacked scale and predictability, and did little to foster innovation. By accounting for all the components, including data center facilities, WAN, LAN, compute, storage, security and compliance, backup and disaster recovery, as well as the operational staff and programs needed to maintain 24/7 operations, you gain a more accurate picture of your TCO.

The real goal of the cloud was to give us back time to reinvest in further data and application modernization, as opposed to saving money on hardware. My observation is that few organizations have been successful in executing that vision. Many stopped at a lift and shift from on-premises infrastructure to a public cloud that wasn’t purpose-built for that use case. In many circumstances, the approach has resulted in unpredictable cost and performance, forcing customers to evaluate their choices.
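To make a TCO comparison concrete, here’s a minimal sketch of the roll-up. The line items mirror the components listed above; every dollar figure is a hypothetical placeholder, not a benchmark.

```python
# Illustrative TCO roll-up. Every dollar figure below is a hypothetical
# placeholder; substitute your organization's actual annual costs.
tco_components = {
    "data_center_facilities": 120_000,  # power, cooling, floor space
    "wan": 48_000,
    "lan": 24_000,
    "compute": 300_000,
    "storage": 90_000,
    "security_and_compliance": 75_000,
    "backup_and_disaster_recovery": 60_000,
    "operational_staff_24x7": 450_000,  # staffing, training, on-call programs
}

annual_tco = sum(tco_components.values())
print(f"Annual TCO: ${annual_tco:,}")
for item, cost in sorted(tco_components.items(), key=lambda kv: -kv[1]):
    print(f"  {item:<30} ${cost:>9,}  ({cost / annual_tco:.0%})")
```

Run the same roll-up for each candidate environment, public cloud included, and the comparison becomes apples to apples.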

Strategically, we anticipate Artificial Intelligence (AI) becoming another driver within the budget category. Organizations are increasingly reallocating budget dollars toward AI initiatives to stay competitive and innovate. By optimizing costs in other areas, such as cloud computing, businesses can free up funding for AI projects.

Performance & Latency – Another side effect some organizations have encountered in the cloud is added latency that degrades performance for certain workloads. That latency stems mainly from the geographic distance between users and cloud regions, though shared networking within the public cloud can also impact workload performance. Moving these workloads closer to the point of use, in private data centers or edge environments, can improve response times and provide more consistent performance. Better control over the infrastructure also allows administrators to fine-tune the environment for latency-sensitive workloads.
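Before moving anything, it’s worth measuring. Below is a minimal sketch that uses TCP connect time as a rough latency proxy; the hostnames are hypothetical placeholders for your own cloud and on-premises endpoints.

```python
# Rough latency check via TCP connect time. The hostnames below are
# hypothetical placeholders; swap in your own cloud and on-prem endpoints.
import socket
import time

def connect_time_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average time to open a TCP connection, a rough latency proxy."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            total += time.perf_counter() - start
    return (total / samples) * 1000

for endpoint in ["app.cloud-region.example.com", "app.onprem.example.com"]:
    print(f"{endpoint}: {connect_time_ms(endpoint):.1f} ms avg connect time")
```

A few milliseconds per round trip compounds quickly in chatty applications, which is exactly where repatriating to the edge tends to pay off.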

Security & Compliance – Enhanced control over sensitive data and the ability to meet regulatory requirements are the story here. Public cloud environments, while secure, operate on shared infrastructure that may not provide the transparency and granularity of control some organizations desire. By bringing those workloads and data back to private data centers or on-premises infrastructure, organizations can implement access controls and custom security protocols of their choosing. It should be noted that this practice often introduces the need for additional tooling, staffing, and expertise in the InfoSec discipline. Remember, a true TCO requires an honest assessment of all costs!

Use Case – Public cloud environments weren’t intended to host virtual machines the same way we hosted them on private infrastructure. The public cloud was meant for refactoring workloads: modifying an application to take advantage of cloud-native features such as serverless computing and containerization. Rehosting, the “lift and shift” approach, often results in over-provisioned instances to ensure performance, which drives up costs. Accepting that we used the wrong tool for the job by rehosting workloads into the public cloud is the first step in striking a strategic balance for the future.
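For a sense of what refactoring looks like in practice, here’s a minimal sketch of a piece of application logic rewritten in the AWS Lambda handler style; the event fields and business logic are hypothetical.

```python
# Minimal serverless-style handler (AWS Lambda signature). Instead of a
# VM running 24/7 waiting for requests, the platform invokes this function
# on demand and bills per invocation. The event fields are hypothetical.
import json

def handler(event, context):
    order_id = event.get("order_id", "unknown")
    # ...business logic that previously lived on an always-on VM...
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "processed"}),
    }
```

The point isn’t this particular function; it’s that a refactored workload consumes capacity only when it runs, while a rehosted VM pays for idle headroom around the clock.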

The Path Forward!

The future is hybrid, and at US Signal, we’ve been hard at work building a collection of services to complement this strategic direction. If you are repatriating, take the time to analyze the needs of each workload and where it should live. At a minimum, examine each workload’s requirements for cost, security and compliance, performance, resiliency, and operational impact. Most organizations will likely find that their workloads are best distributed across various environments, including on-premises or edge locations, private clouds or colocation facilities, and public clouds, with some workloads repurchased as SaaS solutions. It’s untenable to think we should return to on-premises only; many organizations today lack the infrastructure readiness, staffing, and expertise to take on that task. Why take two steps backward?
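One way to keep that per-workload analysis honest is a simple weighted scorecard over the criteria above. The sketch below is purely illustrative: the venues, weights, and 1-to-5 scores are hypothetical and should reflect your own priorities.

```python
# Hypothetical workload-placement scorecard. Criteria, weights, and scores
# are placeholders; rate each workload 1-5 per criterion for each venue.
CRITERIA = ["cost", "security_compliance", "performance", "resiliency", "operations"]
WEIGHTS = {"cost": 0.3, "security_compliance": 0.25, "performance": 0.2,
           "resiliency": 0.15, "operations": 0.1}

def best_venue(scores_by_venue: dict[str, dict[str, int]]) -> str:
    """Return the venue with the highest weighted score for one workload."""
    def weighted(scores: dict[str, int]) -> float:
        return sum(WEIGHTS[c] * scores[c] for c in CRITERIA)
    return max(scores_by_venue, key=lambda v: weighted(scores_by_venue[v]))

# Example: sample scores for a single hypothetical ERP workload.
erp_scores = {
    "on_premises":   {"cost": 3, "security_compliance": 5, "performance": 4,
                      "resiliency": 3, "operations": 2},
    "private_cloud": {"cost": 4, "security_compliance": 4, "performance": 4,
                      "resiliency": 4, "operations": 4},
    "public_cloud":  {"cost": 2, "security_compliance": 3, "performance": 3,
                      "resiliency": 5, "operations": 4},
}
print(best_venue(erp_scores))  # -> "private_cloud" with these sample numbers
```

A spreadsheet works just as well; what matters is scoring every workload against the same criteria before deciding where it lives.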

Hybrid cloud is about striking a strategic balance. Don’t make the same mistake twice. Choose a service provider, like US Signal, that offers comprehensive solutions tailored to your needs, including edge computing, private cloud, colocation, public cloud, backup and disaster recovery, managed and professional services, and robust managed security. We’ve got the high-speed fiber-optic network to support you from your edge, through our network of 16 private data centers in the U.S., and into the public cloud. From our new open-source cloud offering, OpenCloud, to traditional private cloud services powered by Nutanix and VMware, to SASE and our partnership with Microsoft as a Direct CSP, we are ready to meet our customers where their workloads need to live.