Developers love the cloud. According to a recent O’Reilly study, 88 percent of developers have already moved at least some of their workloads to the cloud, while 25 percent plan to move all their applications over time.

As companies make these moves, teams will adopt multiple cloud providers to get the best fit for their needs. This can be part of a thought-out, joined-up strategy, or it can happen through haphazard adoption, where individual teams pick a cloud provider that suits them without talking to other departments. Either way, DevOps teams then have to build a more deliberate, planned approach around all those components and choices. In a survey conducted by Accelerated Strategies, roughly 20 percent of the companies polled currently have a multi-cloud strategy in place, and more than 50 percent are considering adding one or more cloud infrastructure providers to the mix in 2021.

The three drivers for multi-cloud strategies

So how will this trend affect the market? There are several areas to consider.

The first is how DevOps teams and software developers make choices around their applications. Developers want to build new applications and get them running; DevOps teams have to support those services and keep them running. For DevOps teams, adopting a multi-cloud strategy means understanding all the moving parts that currently exist and what might be needed in the future. Using multiple suppliers lets them access specific technologies from different cloud providers, as well as providing more flexibility over where new application components run.

The second is cost control. Rather than spending on commodity services with a single provider, spreading workloads across multiple providers can reduce the overall bill over time. According to IBM research, 66 percent of companies currently save money by working with smaller hyperscale providers rather than relying solely on the larger hyperscalers that focus on enterprise customers.

The third is that developers and IT teams choose to work with multiple cloud companies to avoid lock-in to a single provider. According to Flexera, 68 percent of CIOs are concerned about lock-in to a single cloud provider’s tools. Similarly, 83 percent of users running on multiple clouds want the freedom to move workloads between cloud providers as and when they want to, based on research from Turbonomic. While there might be some specialist services that are only available from a specific cloud, the core primitives of cloud services are comparable and compatible across large and small hyperscale providers. This compatibility helps companies achieve their goals and stay flexible.

How DevOps teams can drive better results

With these trends in mind, how can companies see the right results from implementing a multi-cloud strategy? It starts with design: setting the right goals for each team involved.

For instance, scaling up an application should include planning how to use multiple providers alongside each other from the start. However, this is not about the technical achievement of running across different providers; approaching it as a purely technology-led project misses the point. Instead, the first goal should be retaining flexibility and control rather than handing them over to a third party.

The second goal should be the potential cost savings. Using multiple providers can reduce spending compared to relying on a single hyperscale provider alone. From a technical perspective, some cloud services, such as storage and compute, have been commoditized. These are broadly compatible, so you can encourage your team to evaluate equivalent services from different cloud providers side by side for the same function.

For example, some cloud providers have variable pricing for different locations or regions, while others adopt a flat-rate structure regardless of which cloud location you choose. Picking a flat-rate provider can therefore reduce overall costs when workloads run in regions where variable-priced providers charge a premium.
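As a back-of-the-envelope illustration of that comparison (all rates and region names here are made-up placeholders, not any real provider's pricing), the calculation might look like this:

```python
# Hypothetical monthly cost comparison: one provider with per-region
# pricing versus one with a single flat rate. All figures are
# illustrative placeholders, not real cloud provider rates.

variable_rates = {"us-east": 0.08, "eu-west": 0.10, "ap-south": 0.12}  # $/GB by region
flat_rate = 0.09  # $/GB, the same in every region

# Planned monthly usage per region, in GB
usage_gb = {"us-east": 500, "eu-west": 300, "ap-south": 400}

cost_variable = sum(usage_gb[r] * variable_rates[r] for r in usage_gb)
cost_flat = sum(gb * flat_rate for gb in usage_gb.values())

print(f"variable-pricing provider: ${cost_variable:.2f}")  # $118.00
print(f"flat-rate provider:        ${cost_flat:.2f}")      # $108.00
```

In this made-up scenario the flat-rate provider comes out cheaper because most of the usage sits in the more expensive regions; shift the usage toward the cheapest region and the result flips, which is exactly why the comparison needs to be run against your own workload mix.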

Another goal to consider is portability: workloads that can be moved from one place to another and run without any changes to the application. Software containers are designed to run on a range of services, from internal data center deployments and private cloud through to public cloud. Gartner predicted that by 2022, more than 75 percent of global organizations would be running containerized applications in production.

However, while the potential is there, these implementations are more complex, with many more moving parts. DevOps teams can use orchestration tools like Kubernetes to manage that complexity.

Kubernetes is the most popular container orchestration tool, automating many of the processes required around container management while retaining the ability to run in multiple places. Many cloud providers now offer their own managed Kubernetes services to simplify this further, so DevOps teams can concentrate on supporting their applications rather than wrestling with the infrastructure directly. This also supports portability for containers between cloud services and between public and private cloud instances.
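A minimal Kubernetes Deployment manifest illustrates that portability: the same file can be applied unchanged to a managed service or a self-hosted cluster. The application name, image, and replica count below are illustrative placeholders.

```yaml
# A minimal Deployment; the same manifest runs on any conformant
# Kubernetes cluster, managed or self-hosted. Names and image are
# placeholders for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` works the same way against any conformant cluster, which is what makes moving a containerized workload between providers a realistic goal rather than a rewrite.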

For developers and DevOps teams, the future of IT is about finding the best fit for their workloads. For some, this will mean going ‘all in’ on one of the giant hyperscalers. For others, especially those at smaller companies with less complex needs, smaller hyperscale providers will be the better fit. Either way, according to Accelerated Strategies, more than half of companies are due to add another cloud provider to their list of suppliers in the next year.

To get the right strategy in place, teams have to understand their technical goals, all the providers that could be involved (not just the big hyperscalers), and those providers’ cost models. This can help more developers move their applications to the cloud without unnecessary spending or lock-in to a single provider.
