FOCUS: EMC is talking about the beginning of its journey to the Third Platform. Can you explain what this vision for Platform 3 is?
Amitabh Srivastava: If I use the terminology IDC uses for Platforms 1, 2 and 3: Platform 1 is the mainframe, Platform 2 is the PC and client-server, and Platform 3 is where you get mobile, social, big data and the other pieces that are now coming. The main difference between Platform 2 and 3 is that with 3 you see massive scale in the number of users and the number of apps. You will see billions of users and hundreds of millions of apps.

There are certain companies that say they are already Platform 3 companies and started out that way (think web-scale providers like eBay and Facebook). But the majority of our customers are enterprises, and they have very complex environments. They have to worry about compliance, and they are building large infrastructures running mission-critical applications. At the same time they are feeling the same thing – that data is exploding at a massive rate and, as a result, their operational expenses are increasing at a rate that is not sustainable.

They are watching Platform 3 providers, like the public cloud providers, and seeing the economics on offer there – simplicity and a lower cost of operation. This means that from a Platform 2 perspective they have a problem to solve. We talk to a lot of customers. All of them essentially say “don’t give me a rip-and-replace model to move to Platform 3, because that is not very sustainable”. They want a path to Platform 3 instead.

From the EMC perspective, we are saying we will produce technology and tools that help customers solve the problems they have today on Platform 2, while at the same time laying the foundation for them to transform to Platform 3 at their own pace.

How does the hybrid cloud play into this shift from Platform 2 (what we now view as traditional enterprise IT) to Platform 3 (a web-scale approach)?
There are certain things enterprise customers want to keep inside their own four walls for compliance and security reasons. But they also have certain amounts of data they don’t mind putting elsewhere. From an EMC perspective that could be with any service provider. The customer should be able to move their data back and forth in a very seamless way. That is where the hybrid cloud comes in.

For EMC this is not going to be just one vendor or type of system. We believe it doesn’t have to be EMC hardware for your storage system. And storage can reside inside a customer’s own data center, with a service provider, or in VMware’s vCloud Hybrid Service. EMC wants to give customers the automation and tools that will lower their OPEX even as data grows, while allowing this choice and providing them with a path to the new platform.

We are talking about completely new ways EMC will approach the market then, with a key focus on services. In September last year you announced ViPR (EMC’s platform for software-defined storage) and that Project Nile was in development. Nile has been described as a complete web-scale storage infrastructure that will allow cloud environments to be competitively priced with Amazon Web Services and Microsoft Azure. Does this mean EMC is changing the way it looks at its own business as we move towards Platform 3?
If you look at the Nile box, it is literally the same commodity hardware [EMC offers now], but the software on top can convert it into a block array, file array, HDFS (Hadoop Distributed File System) array or object array. It will be customized: we can go back to the customer and ask whether they want the cheapest hardware or more capability – some SSD or flash to create a more powerful system – because we are innovating at both the hardware and software layers now.

We did say Nile will offer cloud storage cheaper than Amazon, so the model that will come out (in my opinion) will be that even though the different layers can get commoditized, the higher-level data services we provide on top won’t, necessarily.
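As a rough illustration of the idea Srivastava describes – one pool of commodity hardware taking on different storage personalities purely through the software layered on top – here is a minimal toy sketch in Python. It is not EMC’s Nile or ViPR API; the class, fields and method names are invented for illustration only.

```python
# Toy sketch only: "same commodity hardware, different personality via software".
# This is NOT EMC's actual API; all names here are invented for illustration.
from dataclasses import dataclass
from enum import Enum


class Personality(Enum):
    BLOCK = "block"
    FILE = "file"
    HDFS = "hdfs"
    OBJECT = "object"


@dataclass
class CommodityPool:
    """A pool of identical commodity nodes; only the software layer changes."""
    nodes: int
    flash_tier: bool = False          # optional SSD/flash for a more powerful system
    personality: Personality = Personality.OBJECT

    def expose_as(self, personality: Personality) -> str:
        # The hardware stays the same; the software stack on top decides whether
        # the pool behaves as a block, file, HDFS or object array.
        self.personality = personality
        return f"{self.nodes}-node pool now exposed as a {personality.value} array"


pool = CommodityPool(nodes=8, flash_tier=True)
print(pool.expose_as(Personality.HDFS))
print(pool.expose_as(Personality.BLOCK))
```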

We are moving to become a services player. I worked at Microsoft, so I will use that as an example. It moved from a software model to a services model, transforming its entire business. Now the same push will happen here, because the margins are very high for giving people software services, though the increase in software buying is large too. But even as we transform, I don’t believe everything will shift to the other side. There will still be workloads that require the arrays. Look at Microsoft: demand for its Windows servers is still increasing.

We look at Platform 3 as a growth area for EMC. It won’t be that Platform 2 will go away. It is just that Platform 3 is growing at a much faster rate. 

This must affect your focus on innovation?
There will be a whole lot of innovation still at the hardware layer – things like flash and server-side storage architectures. And the software layers we are now building also have a lot of intelligence, because they run on commodity hardware and otherwise. I see it as two layers of innovation now, and innovation will happen at both, giving us different ways to leverage one for the other.

And I think because we will be playing at both layers we will be able to build much more powerful systems.

Getting back to Nile, how exactly could it be cheaper than storage offered over AWS using a utility pricing model?
The design point Nile is addressing is that it will be simple to use and you can run it in house. With Nile, pay-per-use – which is just one model you could have – becomes a question of the business model around how we sell. If we can do the math and work out the economics to show all the pieces are there, we believe we can get you the cheaper model. Whether this will be on a pay-per-use basis, I am not saying. Essentially, we think we can produce a storage system that costs less than AWS or Azure.

Will Nile be completely reliant on your software-defined storage platform ViPR?
We are clearly going to use ViPR, but remember, ViPR by itself is a very general system. It can manage all the arrays, plug into any system, and so on. Nile goes to the other extreme. It will be very customized and we want to make it very simple. So yes, we are going to use ViPR as one of the core technologies in Nile, but we are doing a lot more work to cut out a lot of the choices. We are thinking of it like an appliance that you plug in and you are up and ready to go.

You recently added a Hadoop Distributed File System Service (HDFSS) to ViPR. Why is this particular upgrade one of the first you have done?
Today, all analytics are done using HDFS as the standard interface. People build these big Hadoop clusters, but the problem is you could have a Hadoop cluster of 100 petabytes, which means you have to move 100 petabytes of data every time you want to analyse it. Our approach is different. We come back and we say ‘what if the data just stayed where it is?’. With ViPR, the way our HDFS works is we lay it out across all your storage arrays, which means that regardless of where data is residing, it is available for analytics. It can cover anything that speaks HDFS – Cloudera, Pivotal – in any data center. It means that when you lay data out you don’t have to move everything to a specialised cluster.

Customers say that 70% to 80% of the time they spend on big data analytics isn’t spent on the analytics itself. It is spent on the process of sifting through all their data.
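To illustrate the idea of analysing data in place through an HDFS-compatible interface, rather than copying it into a dedicated cluster first, here is a minimal Python sketch. It is not EMC’s API; the endpoint host, port and file path are hypothetical, and it assumes the pyarrow library with native HDFS support is available.

```python
# Minimal sketch: read data in place through an HDFS-compatible endpoint
# instead of staging a copy on a dedicated Hadoop cluster.
# The host, port and path below are hypothetical; this is not EMC's API.
import pyarrow.csv as pacsv
import pyarrow.fs as pafs

# Connect to the HDFS-compatible interface layered over existing storage.
hdfs = pafs.HadoopFileSystem(host="hdfs-endpoint.example.internal", port=8020)

# The data stays on the arrays where it already lives; the analytics job
# simply reads it through the HDFS interface.
with hdfs.open_input_stream("/analytics/transactions.csv") as stream:
    table = pacsv.read_csv(stream)

print(f"Analysed {table.num_rows} rows without moving the dataset")
```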
 
This article first appeared in FOCUS issue 34.