In recent years, many organizations, including data center operators, have made some very ambitious sustainability promises: a wide range of businesses have pledged to be ‘carbon neutral’ by 2030, or thereabouts.

That might be quite straightforward for the kind of organization whose only carbon emissions come from heating, lighting and keeping employees’ PCs running. But for an energy-intensive sector like data centers – which consume between two and three percent of the world’s power, and are expected to account for two-fifths of all ICT power consumption by 2025 – even getting close to keeping such a promise will be a daunting challenge. With data pegged as the key driver of the fourth industrial revolution, global digital transformation is only expected to push carbon emissions higher.

Take, for example, self-driving vehicles. The training data behind the development of just one model is generated at a rate of 64TB every day. “The amount of data to be stored will increase from 32 zettabytes (ZB) to 180ZB by 2025, among which unstructured data will account for more than eighty percent,” says Amy Liang, director of enterprise data storage technical sales at Huawei.

In addition to keeping on top of that, data center operators will need to tackle their increasingly complex operations and management issues. Not to mention maintaining or improving current service level agreements.

The classic response to such a plethora of challenges is to automate as much as possible and to simplify processes by consolidating on proven platforms. And one of the first places that data center operators can make a big difference is in storage and storage-area networks (SANs): first, with a shift to all-flash storage; then, by standardizing on all-Ethernet networking, even across the SAN. Such an end-to-end IP-based network enables intelligent operations and management (O&M).

In the process, organizations can take big steps towards their sustainability promises, cutting not just power consumption, but ‘carbon emission anxiety’.

All-scenario flash storage can cut storage-related cooling requirements by 70 percent – and with them, power demands and costs – while delivering the same storage capacity in half the footprint: a triple win. Over the next five years, the economics will only tilt further in flash’s favour.

But it also provides the extra performance that organizations undergoing digital transformations are demanding. “Digital transformation has driven huge numbers of offline services to go online; and innovative services are emerging, one after another,” says Liang.

She cites the requirements of ecommerce and mobile payments as just one example. These demand not just high performance, but the ability to scale up so that performance during Christmas shopping peaks, Black Friday and other shopping frenzies matches that at 3.30am on a wet Tuesday morning in March.

According to Liang, an all-flash data center encompasses “flash-based servers, non-volatile memory express (NVMe) native all-IP data center networks, all-scenario all-flash storage, and full-lifecycle intelligent O&M for multiple data workloads. For the storage layer, the upgrade has three key characteristics: all-scenario flash, full-series high-end quality, and all-scenario data protection.”

All-scenario flash, adds Liang, means that diverse types of data are stored on flash media – not just mission-critical data for enterprise apps, but also data held in edge data centers, high-performance computing systems, big data analytics and other systems. It also means that data centers are shifting to flash from primary storage right through to secondary storage – albeit, perhaps, using devices based on slower but cheaper flash technology, such as QLC NAND, for applications not deemed mission critical, such as back-up and ‘cold’ storage.
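As a sketch of what such tiering might look like in policy terms – the workload classes and media labels below are hypothetical illustrations, not Huawei product behaviour:

```python
# Illustrative only: a hypothetical policy mapping workload classes to flash
# media tiers, along the lines described above. Names are assumptions.

MEDIA_BY_WORKLOAD = {
    "mission_critical": "TLC/SLC-class NVMe flash",  # primary storage
    "analytics": "TLC-class NVMe flash",
    "backup": "QLC-class flash",        # slower but cheaper per terabyte
    "cold_archive": "QLC-class flash",
}

def pick_media(workload: str) -> str:
    """Return the flash tier for a workload class, defaulting to cheap QLC."""
    return MEDIA_BY_WORKLOAD.get(workload, "QLC-class flash")

print(pick_media("mission_critical"))
print(pick_media("cold_archive"))
```

The point of the default is the one made above: once everything is flash, the question is no longer ‘disk or flash?’ but which grade of flash a workload justifies.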

The future of the SAN – NVMe over RoCE

But all-flash storage is only one half of the equation. Currently, storage networks face two key issues: one is the use of multiple types of networks, causing complex deployment and management; the other is Fibre Channel itself.

The Fibre Channel data transfer protocol went mainstream around two decades ago, coinciding with the first dot-com boom, as network-attached storage architectures gave way to storage area networks (SANs) – the efficiency of SANs that could stretch over kilometres outweighing the cost of Fibre Channel storage.

However, the relatively slow development of the protocol in terms of performance has led to the development of storage systems and networks based on NVMe over RoCE [NVMe over remote direct memory access over converged Ethernet].

NVMe over RoCE embodies three core technologies capable of delivering microsecond access times over storage infrastructure anywhere within the data center, with Huawei’s OceanStor Dorado hardware boasting latency down to just 0.05ms. The NVMe protocol has become the standard for accessing flash storage over the PCIe bus on both PCs and servers because it provides multiple high-speed queues that can be used in parallel, delivering triple the IOPS [input/output operations per second] of conventional SCSI-based SSD interfaces.
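The parallelism argument can be made concrete with a back-of-envelope model based on Little’s law: sustained IOPS is roughly the number of I/Os kept in flight divided by per-I/O latency. The queue counts and latency below are illustrative assumptions, not measured figures for any product.

```python
# Back-of-envelope model (Little's law): sustained IOPS ~= outstanding I/Os
# divided by per-I/O latency. All numbers here are illustrative assumptions.

def iops(queue_depth: int, num_queues: int, latency_s: float) -> float:
    """Concurrent outstanding I/Os divided by per-I/O service time."""
    return (queue_depth * num_queues) / latency_s

# A legacy single-queue SCSI-style interface with 32 outstanding commands:
legacy = iops(queue_depth=32, num_queues=1, latency_s=100e-6)

# NVMe with eight per-CPU queues at the same device latency:
nvme = iops(queue_depth=32, num_queues=8, latency_s=100e-6)

print(f"legacy: {legacy:,.0f} IOPS, nvme: {nvme:,.0f} IOPS")
```

The model is crude – it ignores controller limits and contention – but it shows why multiplying independent queues, which NVMe was designed to do, lifts the IOPS ceiling.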

RDMA, meanwhile, enables server-to-server and server-to-storage transfers directly between application memory buffers, bypassing the CPU and operating system – effectively moving data straight from memory to Ethernet port in both server and storage. Originally developed for high-performance computing (HPC) environments, it provides higher throughput, lower latency and lower CPU utilization.
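RDMA itself requires RDMA-capable NICs and a verbs library, so it cannot be demonstrated in pure Python; the zero-copy principle it relies on can be, though. A minimal analogy using memoryview, which exposes a buffer without copying it, much as RDMA exposes application memory directly to the NIC:

```python
# Analogy only: memoryview shares a buffer without copying, which is the
# zero-copy idea underpinning RDMA (the real thing happens in NIC hardware).

payload = bytearray(b"block of application data" * 1000)

# Copy path: slicing a bytearray allocates and copies the bytes
# (analogous to conventional CPU-mediated I/O through kernel buffers).
copied = bytes(payload[:1024])

# Zero-copy path: a view over the same memory, no data movement.
view = memoryview(payload)[:1024]

assert copied == bytes(view)   # same bytes...
assert view.obj is payload     # ...but the view shares the original buffer
```

The saving RDMA delivers is exactly the elimination of that copy path, repeated millions of times per second, plus the CPU cycles and context switches that go with it.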

The third part is Converged Ethernet (CE), an enhanced version of Ethernet that provides Priority Flow Control to ensure zero packet loss – a perennial disadvantage of standard Ethernet. Why does this matter? Packet loss has been the Achilles’ heel of Ethernet since its inception: it causes unstable performance under load, and a packet loss rate of just 0.1 percent can halve overall network performance. CE solves this problem, while Huawei’s proprietary iLossless-DCI algorithm adds artificial intelligence, trained on millions of random network samples, to automatically optimize network traffic and boost performance by a factor of ten.
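The sensitivity to loss can be illustrated with the classic Mathis model for TCP throughput, throughput ≤ (MSS/RTT) · C/√p with C ≈ 1.22. This models TCP rather than RoCE, so treat it as an analogy for why lossless fabrics matter, not a description of CE itself:

```python
import math

# Mathis et al. model: TCP throughput is bounded by (MSS/RTT) * C / sqrt(p),
# with C ~= 1.22. Illustrative of loss sensitivity only; not a RoCE model.

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Upper bound on TCP throughput in bits per second."""
    return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(loss_rate)

low_loss = mathis_throughput_bps(1460, 0.5e-3, 1e-6)  # near-lossless fabric
bad_loss = mathis_throughput_bps(1460, 0.5e-3, 1e-3)  # 0.1 percent loss

print(f"throughput drops ~{low_loss / bad_loss:.0f}x at 0.1% loss")
```

Because throughput falls with the square root of the loss rate, even a loss rate that sounds negligible on paper is catastrophic for a storage network expecting microsecond-class latency – hence Priority Flow Control.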

However, CE requires the support of network interface cards (NICs), networking switches and all-flash arrays, while the NICs need to support RoCE.

Hence, in order to keep up, data center operators will need to prepare for another round of storage hardware and SAN upgrades. For organizations running wide-area SANs based on Fibre Channel, this will be a significant shift, but it is one that, sooner or later, will need to be made – given the incentive of total cost of ownership cuts of more than 80 percent, according to Huawei.

At the heart of Huawei’s solution is NoF+, which embodies an award-winning iteration of NVMe over RoCE. This was behind one of Huawei’s multiple successes at Interop 21 in Tokyo earlier this year, with the Commendation for Huawei’s CloudEngine+Dorado V6+DC908 Next-Generation High Performance Inter-DC Storage Network.

The solution has already seen real-world roll-outs in which clients have shifted to all-IP networks. Huawei cites the example of a major banking client that deployed Huawei’s CloudEngine hardware in an initiative that saw it move from a Fibre Channel SAN to Huawei’s NoF+. In the process, it gained an 86 percent boost in storage performance, halved latency and improved the reliability of its SAN, as well as simplifying operations and maintenance.

NoF+ forms part of Huawei’s CloudFabric Hyper-Converged Data Center Network Solution suite, which can help organizations build out centralized storage networks based on Ethernet and the RoCE protocol, with SANs capable of lossless data transmission over Ethernet at distances of up to 70km.

NoF+ provides a number of other key high-end features, such as failover within seconds, thanks to proactive fault detection and notification absent from standard Ethernet: in the event of a node failure, services are switched over automatically.

Furthermore, NoF+ can proactively predict network performance: by sampling traffic and applying predictive algorithms, it can optimize the network for anticipated service requirements before congestion occurs.
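Huawei does not document the algorithms involved, but the general idea – sample traffic, smooth it, act on the forecast – can be sketched with simple exponential smoothing (all names and thresholds below are illustrative, not NoF+ internals):

```python
# Minimal sketch of traffic forecasting by exponential smoothing. This stands
# in for the general idea only; it is not Huawei's actual algorithm.

def ewma_forecast(samples, alpha=0.5):
    """Exponentially weighted moving average; returns the one-step forecast."""
    forecast = samples[0]
    for x in samples[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

# Gbit/s samples trending upward on a link provisioned for 100 Gbit/s:
traffic = [40, 45, 52, 60, 71, 83]
predicted = ewma_forecast(traffic)

if predicted > 0.7 * 100:  # hypothetical 70 percent utilization threshold
    print(f"forecast {predicted:.1f} Gbit/s: rebalance flows ahead of congestion")
```

A production system would use far richer models, but the operational payoff is the same: acting on a forecast rather than waiting for congestion to appear.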

These are just a few of the ways in which NoF+, alongside an all-flash storage infrastructure, can improve performance and reliability on the one hand, while reducing data center operations and maintenance demands on the other. As all-flash and NoF+ technologies continue to develop, green, intelligent all-flash data centers are on their way – helping to build a greener planet.

Please visit here for more information about Huawei’s next-gen storage network NoF+.