DCD FOCUS: How have you seen connectivity needs inside the data center change over the last year?
I’ve been seeing several changes in connectivity. First, more people are abandoning the large main distribution area (MDA) in favor of end-of-row, middle-of-row or top-of-rack cable plant designs. These designs allow the end user to scale out quickly and simplify cable plant management. They normally use more fiber optics to the main network and less copper from the server to the access layer switch. Second, with the growth of virtualization, I’m seeing the need for more bandwidth. We’ve been tracking three trends as end users move to 10GbE.

1) More servers are being attached to the network at 10Gbps. Switch manufacturers have been relying on small form-factor pluggable plus (SFP+) ports on their 10Gbps switches to give end users flexibility and to produce a single switch that fits all requirements. End users can mix and match fiber and direct attach copper (DAC) cable assemblies on the same switch. This flexibility comes at a high cost, however. Proprietary DAC cables are embedded with some type of intelligence that tells the switch it is a ‘brand X’ approved cable, and such a cable costs two to three times as much as a non-proprietary passive cable. Help desks are instructed to have the customer replace all non-proprietary DAC cables before troubleshooting customer problems. I don’t know of many IT departments that will risk the extra time to do this, so they pay the high price of these ‘special’ cables. (Remember, this is passive technology; no intelligence is needed to pass data over these cables.)

2) The electronics for fiber are much more costly if you plan to upgrade all connectivity to fiber. The good news is that a few manufacturers have been offering 10GBASE-T connectivity, which is more cost effective, standards based and vendor agnostic. Many data centers already have a cable plant that can support 10GbE. It’s going to take customers demanding that a 10GBASE-T (RJ45) solution be made available.

3) Uplinks from the access layer switches now use some form of 40Gbps backbone to carry this traffic to the main network. This is driving a higher installation rate of OM3 and OM4 fiber optic cable terminated with MPO connectors. More than ever, understanding the insertion loss budget is an important part of using this technology (a worked example follows below).
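
To make that concrete, here is a minimal sketch (Python, with illustrative figures) of how an insertion loss budget check works: the total channel loss is the fiber attenuation plus every mated MPO connection, and it has to stay inside the application’s budget, roughly 1.9 dB for 40GBASE-SR4 over 100 m of OM3. The link length and connector losses below are hypothetical examples, not recommendations.

```python
# Illustrative insertion loss budget check for a 40GBASE-SR4 uplink.
# The 1.9 dB budget and 3.5 dB/km attenuation are typical published
# figures for OM3 at 850 nm; the link layout itself is hypothetical.

FIBER_LOSS_DB_PER_KM = 3.5   # multimode attenuation at 850 nm (worst case)

def channel_loss_db(length_m, connector_losses_db):
    """Total loss = fiber attenuation plus every mated MPO connection."""
    fiber_db = FIBER_LOSS_DB_PER_KM * (length_m / 1000.0)
    return fiber_db + sum(connector_losses_db)

BUDGET_DB = 1.9  # 40GBASE-SR4 over OM3 (100 m maximum reach)

# 80 m of OM3 with two low-loss MPO connections (0.35 dB each)
loss = channel_loss_db(80, [0.35, 0.35])
print(f"two connections: {loss:.2f} dB vs {BUDGET_DB} dB budget "
      f"-> {'OK' if loss <= BUDGET_DB else 'over budget'}")

# Add a standard-loss cross-connect (a third mated pair at 0.75 dB)
# and most of the headroom is gone -- every connection must be counted.
loss = channel_loss_db(80, [0.35, 0.35, 0.75])
print(f"with cross-connect: {loss:.2f} dB vs {BUDGET_DB} dB budget")
```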

How much has the software-defined data center (SDDC) contributed to changes we are seeing today?
I haven’t seen too many people deploying software-defined networking (SDN) or software-defined storage in the data center space yet, but I have heard of people trying out and experimenting with the technology. I think we’re 12 to 18 months away from a higher rate of adoption.

A lot of people say the network could be a bottleneck for the SDDC. Do you think this will be the case?
A flatter network is supposed to mean lower latency and fewer bottlenecks, and from everything I’ve read and seen, the network won’t be a bottleneck if it is designed properly. I’ve heard experts state that the server and network teams must work together to better understand the traffic patterns and bandwidth needs of the applications. Taking a holistic approach and using DevOps techniques is a key to success.

Is it only the SDDC? Big data requirements must also bring about network changes?
East-west bandwidth is increasing at a terrific rate, and I don’t believe it is just SDN. Virtualization is driving use of the full network pipe. I’m sure that if we looked at non-virtualized servers, we would find the average bandwidth need is about 10% of the connection to the access layer switch. Virtualized servers use much more of the bandwidth to the access layer switch, which in turn means higher bandwidth backbones are needed to move the data around (the sketch below illustrates the arithmetic).
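
As a back-of-the-envelope illustration of that arithmetic, here is a minimal Python sketch. The 10% average utilization figure is the one mentioned above; the rack size, the virtualized utilization and the two 40GbE uplinks are hypothetical round numbers chosen only to show how quickly higher per-server utilization drives up the backbone requirement.

```python
# Rough uplink arithmetic for one rack of 10GbE-attached servers.
# Only the ~10% non-virtualized figure comes from the interview;
# everything else is a hypothetical assumption for illustration.

def offered_load_gbps(servers, access_gbps, avg_utilization):
    """Average east-west traffic a rack of servers offers to its uplinks."""
    return servers * access_gbps * avg_utilization

RACK_SERVERS = 40
ACCESS_GBPS = 10
UPLINK_GBPS = 2 * 40   # two 40GbE uplinks from the access layer switch

for label, util in [("non-virtualized (~10%)", 0.10),
                    ("virtualized (assume ~40%)", 0.40)]:
    demand = offered_load_gbps(RACK_SERVERS, ACCESS_GBPS, util)
    print(f"{label}: {demand:.0f} Gbps offered vs {UPLINK_GBPS} Gbps of uplink "
          f"(ratio {demand / UPLINK_GBPS:.1f}:1)")
```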

What would the ideal network for dealing with these next-gen technologies be?
I think copper connectivity is dying and fiber optic connectivity is the future. Convergence and SDN bring a lot of promise, but I believe the industry needs to agree on some standardization so there is a common method or protocol for delivering the network of the future. Businesses will rely on their data centers to create revenue, which will drive up bandwidth needs. Mobility and the cloud will change our view of the world, as they already have with their promise of any data, anytime, anywhere, on any device.

How are you seeing the way the network is considered in data center design change? And at what point is the network being considered?
I think it has always been a driver of the cable plant design and cabinet layout. Power and cooling requirements will always be leading design criteria, but I believe the type of network, the connectivity requirements and the needs of the business are just as crucial. I’ve stressed for several years now that businesses should take a holistic approach to designing data centers to suit the needs of the business.

Who is driving many of these network decisions today? We hear SDN and other efforts are sometimes being led by server teams. This must change the dynamics between vendors and end users somewhat?
I think successful decisions are made by taking a holistic approach to design. Putting the needs of the business ahead of the ego-driven design of a single silo is becoming the norm, and many companies are forcing these silos to work together. In my experience, server teams and networking teams have worked well together most of the time.

Are we seeing changes from new design efforts like modularity and consolidation, rising density and energy efficiency concerns?
It is fantastic that we have finally matured enough to begin designing data centers that are efficient; applying lean principles increases profitability. The strides the industry has made over the past ten years are amazing. Clearing out the waste has built momentum and is now the ‘standard’ method.

Energy efficiency concerns are a driver of new designs. Using the right mix of new energy efficiency strategies at the level of risk the business will accept is important; one size does not fit all. Setting goals for efficiency and understanding the true ROI and true cost of operating some of the new technology is also important. I believe almost all large data centers can take advantage of modularity: build as you grow, with smaller data halls, and operate at loads above 50% of the design load. Understanding at what capacity level the business kicks off the next module is key and should be well known to the business (the simple sketch below illustrates the idea).
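
As a simple illustration of that trigger point, here is a small Python sketch. Every figure in it is a hypothetical assumption (module design load, trigger threshold, growth forecast and build lead time); the point is only that the trigger level has to be compared against how long the next module takes to deliver.

```python
# Hypothetical capacity-trigger sketch: all figures are assumptions,
# used only to show why the trigger level must be known in advance.

MODULE_DESIGN_KW = 1000      # design IT load of one module / data hall
TRIGGER_UTILIZATION = 0.75   # kick off the next module at 75% of design load
GROWTH_KW_PER_MONTH = 40     # forecast load growth
BUILD_LEAD_MONTHS = 9        # time to deliver the next module

current_kw = 550
months_to_trigger = (MODULE_DESIGN_KW * TRIGGER_UTILIZATION - current_kw) / GROWTH_KW_PER_MONTH
print(f"Trigger level reached in about {months_to_trigger:.1f} months")

# If the trigger arrives sooner than the build lead time, the decision is already late.
if months_to_trigger < BUILD_LEAD_MONTHS:
    print("Start the next module now: the trigger arrives before the build completes.")
```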

What do you think the next big trend for networking will be?
I believe silicon photonics will change the network as well as the server world. Further out there is graphene, pure carbon in the form of a very thin, nearly transparent sheet one atom thick. It is remarkably strong for its very low weight (around 100 times stronger than steel) and it conducts heat and electricity with great efficiency. If the scientists are correct, this substance will be the next big game changer of our generation.

What do you think the biggest risk to the data center network is today?
A lack of agreed standards for SDN.

This article first appeared in FOCUS issue 36.