IT organizations are good at evaluating and selecting hardware and software. Acquisition of infrastructure and applications for traditional data center deployment follows a rigorous process that examines every bit, byte, and bolt to ascertain exactly what is provided, how it will fit into the existing computing environment, and how it will perform to meet business and technical needs.

But as enterprises adopt cloud computing and create hybrid clouds that link on-premises private clouds with public cloud services, many fail to put the public portion of the environment under the same microscope. As a result, they risk failing to achieve the performance, maintain the security, and save the money they expect.

Hybrid clouds aren’t just a mash-up of on-premises computing with public infrastructure-as-a-service (IaaS) and software-as-a-service (SaaS) offerings. They are unified computing environments. Private cloud infrastructure and services link to public clouds and share data, applications, tools, and processes. Platform middleware abstracts the underlying infrastructure, so application developers don’t have to adapt apps to the specific deployment environment or even know what the eventual deployment location will be.

DevOps pipelines flow new and changed apps into any part of the hybrid environment, so workload placement decisions are made based on business needs and economics. And if the needs or economics change, the app can be redeployed to a different part of the hybrid cloud.

Such an environment would have been a pipe dream a few years ago, but today hybrid clouds can be assembled from open source components or implemented via off-the-shelf software or services from several providers. At Intel, our IT organization took the open source approach to create an application platform that lets us deploy apps anywhere in a multi-cloud environment, and we currently have more than 3,500 instances on the new platform.

Existing commercial solutions can link your private cloud to Microsoft Azure, Amazon Web Services, IBM Cloud, Google Cloud Platform, Rackspace’s managed cloud, and others to create the kind of computing environment described above.

But while any IT infrastructure guru would put new hardware through its paces before buying it, in the cloud, details like processor hardware, network topology and bandwidth, security protections, performance benchmarks, scalability, vendor lock-ins, and cost controls are often obscured behind generalized service descriptions. Achieving the results you want requires drilling deeper.

Get the performance you need

The availability of powerful, workload-optimized processors is central to the success of growing workloads like high performance computing (HPC), analytics, image processing, and machine learning. And the ability to order up a cluster of high performance processors is attractive to scientists, engineers, and analysts who need instant startup at low up-front cost.

To assess the suitability of the cloud for scientific computing, Exabyte performed a study running the Linpack benchmark optimized for 32-node HPC configurations in a number of public clouds and compared the results to an onsite supercomputer. The good news: they concluded the cloud offered a viable alternative to the acquisition of costly onsite systems for scientific computing. The bad news: measured benchmark results varied widely across the tested cloud providers, ranging from less than 2.5 teraflops to more than 17 teraflops - a factor of seven.
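The published study used the full Linpack (HPL) suite, but even a rough single-node measurement can expose large differences between instance types before you commit. The sketch below is only an illustration of that idea, not the HPL benchmark: it times a dense LU-based solve with NumPy and converts the elapsed time into an approximate GFLOPS figure using the standard ~(2/3)n³ operation count for LU factorization.

```python
# Rough single-node floating-point throughput check (NOT the HPL benchmark).
# Times a dense LU-based solve and estimates GFLOPS from the ~(2/3)*n^3
# operation count for LU factorization. Run the same script on each
# candidate instance type and compare the numbers.
import time
import numpy as np

def estimate_gflops(n: int = 8000, trials: int = 3) -> float:
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        np.linalg.solve(a, b)          # LU factorization + triangular solves
        best = min(best, time.perf_counter() - start)
    flops = (2.0 / 3.0) * n ** 3       # dominant term; ignores O(n^2) work
    return flops / best / 1e9

if __name__ == "__main__":
    print(f"~{estimate_gflops():.1f} GFLOPS on this instance")
```

Differences of the magnitude the study reported will usually show up even in a crude test like this, though network fabric and multi-node scaling still require the real benchmark.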

Clearly, finding the best cloud home for an HPC workload requires careful examination of what you’re buying. That’s also true for the new generation of applications powering the digital transformation enterprises face - big data analytics, streaming data from the Internet of Things, and the application of artificial intelligence to improve efficiency and competitiveness in almost every industry. As processor suppliers like Intel embed accelerators and optimizations for analytics, machine learning, and encryption in processors, running these new apps on hardware lacking the optimizations could render them essentially unusable.

When examining your growth path, determine whether workloads can be shifted to more powerful but compatible processors. Service providers typically offer instances spanning a range of performance options. Consider not just processor speed; look for accelerators specific to your big data or AI workload, and determine whether third-party software and underlying libraries have been optimized for them. Also determine how you will scale out. That requires suitable network bandwidth, and peak per-node performance and simple, low-cost administration are easier to maintain when the workload can be distributed across a network of compatible platforms.
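As one illustration of that drill-down, the hedged sketch below uses the AWS SDK for Python (boto3) to pull processor, memory, and network details for a few candidate instance types so they can be compared side by side. The instance-type names are placeholders, not recommendations, and other providers expose comparable catalog APIs.

```python
# Sketch: compare a few candidate instance types side by side using the
# EC2 DescribeInstanceTypes API (boto3). Instance-type names below are
# placeholders; substitute the families you are actually evaluating.
import boto3

CANDIDATES = ["c5.18xlarge", "m5.24xlarge", "r5.12xlarge"]  # illustrative only

def describe_candidates(region: str = "us-east-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instance_types(InstanceTypes=CANDIDATES)
    for it in resp["InstanceTypes"]:
        proc = it.get("ProcessorInfo", {})
        net = it.get("NetworkInfo", {})
        print(
            it["InstanceType"],
            it["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
            it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB,",
            proc.get("SustainedClockSpeedInGhz"), "GHz,",
            net.get("NetworkPerformance"),
        )

if __name__ == "__main__":
    describe_candidates()
```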

Maintain security and compliance

Data privacy and security are probably the biggest concerns when enterprises consider moving data and applications into public clouds - and for good reason. To achieve efficiency, public IaaS and SaaS services rely on multi-tenancy - multiple customers sharing the same infrastructure and, in the case of SaaS, probably the same database. The result is a larger, more vulnerable attack surface, much of it outside your control.

Fortunately, public cloud providers recognize this and strive to provide extensive security protections and processes. Most can make a good case that the security they provide is more extensive than what you provide in your private data centers. Even if they’re right, you’re still delegating data security and compliance responsibilities - for which you remain ultimately responsible - to them. For some apps, that’s a showstopper, and the solution is simply to run the application in your private cloud. But understanding the security the service provider offers, and having a clear definition of their responsibilities and yours, is essential when you put any kind of corporate data in someone else’s hands.

When evaluating cloud solutions, determine if hardware-based security is in place to ensure operating systems, hypervisors, and your VMs launch on a trusted platform that hasn’t been compromised. Drill down to learn what security monitoring tools and processes the provider uses. Discover how they respond to incidents, and make sure you know what your responsibilities are. Find out how you will link alerts and incident response activities into your own security operations center, so you can assure a coordinated response that protects your customers and your brand. Learn how your data can be encrypted - both at rest and in transit through the network. Determine if underlying processors offer accelerators for encryption to maintain peak performance. And understand how you will manage the keys.
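Client-side encryption is one way to keep control of data you place in someone else’s infrastructure. The sketch below, using the widely available Python cryptography package, shows the envelope-encryption pattern in miniature: data is encrypted with a locally generated data key, and in a real deployment that data key would itself be wrapped by a key-management service you control rather than held in plain form as it is here for illustration.

```python
# Minimal envelope-encryption sketch using the "cryptography" package.
# The data key here is generated and kept locally for illustration only;
# in practice it would be wrapped (encrypted) by a KMS you control, and
# only the wrapped form would travel with the ciphertext.
from cryptography.fernet import Fernet

def encrypt_for_upload(plaintext: bytes) -> tuple[bytes, bytes]:
    data_key = Fernet.generate_key()              # per-object data key
    ciphertext = Fernet(data_key).encrypt(plaintext)
    # TODO: wrap data_key with your key-management service before storing it.
    return ciphertext, data_key

def decrypt_after_download(ciphertext: bytes, data_key: bytes) -> bytes:
    return Fernet(data_key).decrypt(ciphertext)

if __name__ == "__main__":
    ct, key = encrypt_for_upload(b"customer record destined for object storage")
    print(decrypt_after_download(ct, key))
```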

Control the costs

One of the compelling features of public cloud services is elasticity - the ability to scale utilization up or down as business needs dictate - coupled with a pay-for-what-you-use cost model. But The Wall Street Journal reports that Boris Goldberg, co-founder of Microsoft subsidiary Cloudyn, believes 60 percent of cloud servers could be downsized or terminated simply because organizations have oversubscribed. To avoid overspending, you must monitor and control utilization.

The provider should offer instance types whose performance and capacity match your workload. Drill down to understand the pricing structure, and learn how you will monitor utilization and costs. In addition to the monitoring tools available from the provider, explore monitoring and optimization tools from third parties. And be sure to factor in the cost of upgrading your network to ensure adequate connectivity between your private network and public cloud services.
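As a concrete example of the kind of monitoring this implies, the hedged sketch below uses boto3 to pull two weeks of average CPU utilization from CloudWatch for each running EC2 instance and flags instances below a threshold as rightsizing candidates. The 10 percent threshold and 14-day window are arbitrary starting points, not recommendations, and other providers expose comparable metrics APIs.

```python
# Sketch: flag lightly used EC2 instances as rightsizing candidates.
# Pulls 14 days of average CPUUtilization from CloudWatch for each running
# instance. The 10% threshold and 14-day window are arbitrary examples.
from datetime import datetime, timedelta, timezone
import boto3

THRESHOLD_PCT = 10.0
LOOKBACK_DAYS = 14

def underused_instances(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    cw = boto3.client("cloudwatch", region_name=region)
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=LOOKBACK_DAYS)
    flagged = []
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            points = cw.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=3600,
                Statistics=["Average"],
            )["Datapoints"]
            if points:
                avg = sum(p["Average"] for p in points) / len(points)
                if avg < THRESHOLD_PCT:
                    flagged.append(inst["InstanceId"])
    return flagged

if __name__ == "__main__":
    print("Rightsizing candidates:", underused_instances())
```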

Hybrid clouds give you the flexibility to deploy apps in whichever part of your environment best meets business needs at the lowest cost. So maintain a program to reevaluate workload placement decisions periodically to determine if changing economics or business needs suggest applications deployed into public clouds should be moved into your data centers. And make sure you understand what obstacles and costs you’ll encounter when re-hosting the application. The services you use in public clouds will tend to lock you into that provider to some extent, so be sure you know what the lock-ins are and have a plan to deal with them.
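One way to keep those placement reviews honest is to reduce them to a simple, repeatable cost comparison. The sketch below is a deliberately simplified model with invented example numbers: it compares the effective monthly cost of running a workload in a public cloud against on-premises hosting, amortizing an assumed one-time re-hosting cost over a planning horizon.

```python
# Deliberately simplified placement model with invented example numbers.
# Compares monthly public-cloud cost against on-premises cost, amortizing
# an assumed one-time re-hosting (migration) cost over a planning horizon.
from dataclasses import dataclass

@dataclass
class PlacementOption:
    name: str
    monthly_compute: float   # instance or host cost
    monthly_egress: float    # data-transfer charges, if any
    rehost_cost: float       # one-time cost to move the app here

def effective_monthly(opt: PlacementOption, horizon_months: int = 24) -> float:
    return opt.monthly_compute + opt.monthly_egress + opt.rehost_cost / horizon_months

if __name__ == "__main__":
    options = [
        PlacementOption("public-cloud", monthly_compute=4200, monthly_egress=800, rehost_cost=0),
        PlacementOption("on-premises", monthly_compute=3600, monthly_egress=0, rehost_cost=25000),
    ]
    for opt in options:
        print(f"{opt.name}: ${effective_monthly(opt):,.0f}/month effective")
    print("Preferred placement over a 24-month horizon:",
          min(options, key=effective_monthly).name)
```

Even a toy model like this makes lock-in visible: the re-hosting cost line captures how hard it will be to move the workload later, which is exactly the obstacle the paragraph above warns about.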

Focus on the business

At Intel, we aligned our internal cloud strategy with our business needs. We created an application platform that lets us place applications where they deliver the most business value, and we undertook an application rationalization program to determine the best home for each major app. It’s working because we created a uniform application environment spanning our on-premises private cloud and multiple public cloud services, and because we learned to examine public cloud offerings under the same microscope we apply in our own data centers, to be sure we maintain the compatibility and the value we need.