The partnership between Intel and Oracle that produced a model of Xeon processor specially designed – for now – for use in Oracle’s Exadata servers is clearly intended as a prototype for future partnerships and exclusive product SKUs.

This was the message delivered by Intel during an event at its Jones Farm campus in Oregon in July.

Your new tailor
Intel is seeing the emergence of what it calls ‘workload environments’: effectively, classes of use that demand specific, unique configurations of storage, memory and communications bandwidth. In some cases, the granularity of these configurations will reach into the processor itself.

“These workload environments are the things that, increasingly, customers are coming to us and saying ‘I really want to do something in the de-dupe environment. What can I do to optimize that?’” Dylan Larson, director of Intel’s Datacenter Group, says. Dedicated hosting is another example. “‘I want high density, I want to have lots of nodes and I want to do it as inexpensively as possible’. And then you’ve got these things [related to] the E7 product line, these massive analytics and in-memory architectures. Those places are all areas where our customers, and I think the end users, are looking for a more tailored opportunity.”

Since their inception, the objective of general-purpose processors has been to support as many varieties of workload as possible, with a feature set that covers the greatest possible number of common use cases. But with the PC market waning (a fact that the Intel representatives on hand repeatedly passed up opportunities to deny), the company finds itself faced with the task of tailoring its products to these use cases in response to customer requests.

How many customers are we talking about? When asked this question directly, no Intel representative was willing to discuss numbers.

When the subject was left on the table to linger for a while, however, they characterized the number as something even they are having difficulty coming to terms with.

In prior years it was the operating system that set the stage for the workloads run on processors. But virtualization created a layer of abstraction between the ‘user application’, as we used to call it, and the processor.

As a result, the applications with which people directly interact run in envelopes like virtual machines, or more recently within Docker containers.
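As a purely illustrative aside – nothing in this article describes Intel or Oracle doing this – the ‘envelope’ idea can be sketched in a few lines of Python using the Docker SDK (docker-py). The image and command below are placeholders:

```python
# Minimal sketch: launching an application inside a Docker container from Python.
# The application runs inside the container "envelope" and sees only the
# resources the container is granted, not the host's full environment.
import docker

client = docker.from_env()  # connect to the local Docker daemon

output = client.containers.run(
    "python:3.11-slim",  # placeholder base image
    ["python", "-c", "print('hello from inside the envelope')"],
    remove=True,         # clean up the container once it exits
)
print(output.decode())
```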

Expediting the provisioning of server workloads
Workloads, on the other hand, are broader: not specific applications but big categories such as data warehousing, analytics, software-defined networking (SDN) and business process management (BPM). Workloads are easier to characterize; each has its own texture. Unlike the case with making Web servers work better on Linux or ERP applications work better on Windows, it becomes feasible to change the processor itself to better facilitate the workload.

Now, we could imagine the culmination of this phenomenon being ‘flavors’ of server processors that are specially geared to a narrow, manageable variety of workloads.

But more likely we’ll see Intel partnering with the vendors that are leaders, or significant challengers, in their respective workload categories, and we may see branded servers emerge that play to those categories, rather than specialized processors being built into servers after the fact.

While virtualization has managed to drive up utilization rates in the data center, Larson says its principal purpose is changing to address the automation of workload provisioning: of taking a server and adapting it to the purpose of running a class of application.

“I think [virtualization] is becoming the de facto provisioning approach for new services capabilities,” he says.

“And I think we’re going to continue to see that effort go [in that direction]. At the same time we still see, especially [among] folks from EMEA (Europe, Middle East and Africa), [a] one-node-per-customer kind of environment, [rather than] going to that virtualization environment. There is a big focus there in some of the web hosters in the EMEA region. And I think that is an interesting phenomenon. It speaks well to adding new, more customizable designs into the infrastructure.”

The topic of general discussion among these customers, he says, is software-defined infrastructure – or, as Cisco might put it, ‘software-defined everything’.

“But I think from our perspective, the point is, what can we do at the lowest levels of the platform to expose that information northbound so that we can optimize the way services can provision it, and be able to provide higher levels of assurance, higher levels of security, higher levels of awareness so that you can manage the way the workload gets provisioned?” Larson says.
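To make that idea a little more tangible, here is a generic, hypothetical sketch – not Intel’s implementation, and not anything Larson describes – of a tiny service that exposes platform telemetry ‘northbound’ over HTTP for a provisioning layer to poll. The choice of metrics, the port and the psutil dependency are all assumptions for illustration:

```python
# Illustrative sketch only: publish low-level platform telemetry over HTTP
# so that a higher-level ("northbound") provisioning service could read it
# before deciding where to place a workload.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import psutil


class TelemetryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Gather a few platform-level metrics a scheduler might care about.
        payload = {
            "cpu_percent": psutil.cpu_percent(interval=0.1),
            "memory_available_bytes": psutil.virtual_memory().available,
            "load_average": psutil.getloadavg(),
        }
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # A provisioning service would poll this endpoint when placing workloads.
    HTTPServer(("0.0.0.0", 8080), TelemetryHandler).serve_forever()
```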

One of the biggest categories requiring some form of infrastructure provisioning automation, Larson says, is big data.

“We’ve got to help people unlock the data. The computational problem that we face is massive,” he says.

“As you look at the size of the data and how much data is being saved, stored, [and] provisioned across these disparate data center architectures, we’ll continue to put an emphasis on how to unlock this potential, whether it’s with optimizations that we do in the microarchitecture [or] in the way we work with the software community and things we do to bring these products to market on a global scale.”

This article first appeared in FOCUS issue 37, out this month.