Opinion: a digital Middle East needs next-gen data centres
Faisal Ameer Malik, CTO, enterprise business group, Huawei Middle East, looks at how AI can transform businesses in the region
Digital transformation is changing the world around us at a rapid pace. It’s no surprise that with this transformation comes an increasing amount of data, which organisations in the Middle East are being challenged to manage and monetise.
Data management has always been a function of IT but is now a business necessity as enterprises seek to build a competitive edge through artificial intelligence (AI), the Internet of Things (IoT), and cloud-based services and infrastructure.
The relationship between big data and AI is particularly noteworthy. In fact, Huawei predicts that the AI adoption rate will increase from 16% in 2015 to 86% in 2025, as referenced in the Huawei Global Industry Vision. This reflects the fact that 67% of CEOs from cross-border companies have identified digitalisation as being at the core of their business strategies, according to Gartner. At the same time, IDC confirms that two of every three enterprises have already become seekers and practitioners of digital transformation.
Nevertheless, most AI applications today are meaningless without the right data. We have seen organisations across the region still struggling to manage these copious quantities of information in a digitalised world, with the amount of new data created expected to reach 180 zettabytes (ZB) by 2025.
This is where AI comes in with its ability to mine data and accurately determine, classify, and predict future trends. Moreover, data centres have become the core of new infrastructures such as 5G and AI.
Traditional data centres will simply not be able to cope with the sheer volume of information that will be created in a more digitalised society. They have neither the capacity to sort data as needed, nor the reliability that guarantees against loss.
I am often asked what the next generation of data centres will look like. While the technology is constantly improving, there are some key characteristics that will define the future of this infrastructure.
Shifting from hard disk drives (HDD) to solid state drives (SSD) is one significant step in the evolution of data centres that is already underway. SSDs are better positioned to meet real-time data access requirements as they cut media latency by more than 100 times. Efficiency is further increased through graphics processing units (GPUs) or dedicated AI chips, which improve data processing capabilities more than 100-fold.
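To put the "more than 100 times" latency claim in perspective, a rough back-of-the-envelope comparison can be sketched as follows. The figures used here are typical order-of-magnitude values for spinning disks and NVMe flash, not vendor specifications:

```python
# Rough illustration of the media-latency gap between HDDs and SSDs.
# Latency values are typical orders of magnitude, not measured specs.
hdd_latency_ms = 5.0    # typical HDD seek + rotational latency (~5 ms)
ssd_latency_ms = 0.05   # typical NVMe SSD read latency (~50 microseconds)

speedup = hdd_latency_ms / ssd_latency_ms
print(f"SSDs cut media latency roughly {speedup:.0f}x versus HDDs")
```

At these representative values the gap is on the order of 100x, consistent with the figure cited above.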
The evolution of communication protocols not only calls for network transformation, but also requires intelligent scheduling and lossless forwarding to achieve the three pillars of what we refer to as "AI Fabric": zero packet loss, low latency, and high throughput for the data centre network.
Consider an Internet giant that began deploying autonomous driving, hoping to bring it to commercial use a year ahead of the market. Training efficiency, however, was low: it took 500 GPU servers, working continuously, a full week to complete training on data collected in a single day. Adding more GPU servers did not significantly shorten the training time. Packet loss was found to be the key constraint on reducing the training duration.
Another example is a voice recognition AI training engine. It takes 40 GPU servers four weeks to complete one training activity, yet the GPU servers sit idle for over 50% of that time, since data must be synchronised before the next iteration can begin. In other words, network communication is prolonged by over 50% due to packet loss caused by concurrent tasks.
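The cost of those synchronisation stalls can be illustrated with simple arithmetic. The numbers below follow the voice-recognition example above; the assumption that a lossless network would eliminate the stalls entirely is a simplification for illustration, not a measured result:

```python
# Back-of-the-envelope model of how network stalls inflate training time.
# Inputs follow the voice-recognition example in the article; the model
# (all idle time attributed to network waits) is a simplifying assumption.
servers = 40
total_weeks = 4.0
idle_fraction = 0.5   # GPUs idle >50% of the time waiting on data sync

compute_weeks = total_weeks * (1 - idle_fraction)  # useful GPU time
stall_weeks = total_weeks * idle_fraction          # time lost to waits

print(f"{stall_weeks:.0f} of {total_weeks:.0f} weeks are network stalls; "
      f"a lossless fabric could in principle cut training towards "
      f"~{compute_weeks:.0f} weeks")
```

Even this crude model shows why adding GPU servers alone cannot fix the problem: the bottleneck is time spent waiting on the network, not compute capacity.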
The concept of AI Fabric is based on an innovative iLossless AI algorithm. The algorithm provides a series of congestion-management and flow-control capabilities to the data centre. This application of AI capabilities in the data centre can improve data processing by reducing computing latency by 40% in high-performance computing scenarios. It can also improve Input/Output Operations Per Second (IOPS) by 25% in distributed storage scenarios. Moreover, such innovations can effectively reduce the communication duration between HPC nodes in a network by as much as 4%, greatly improving the efficiency of innovative services such as AI training.
Building on these AI-enabled capabilities, organisations can now leverage data centre switches designed specifically for the AI era. Huawei's own CloudEngine 16800, the industry's first data centre switch built for the AI era, is already helping organisations around the world to accelerate intelligent transformation and achieve pervasive use of AI. We are doing this by focusing on three defining characteristics of data centre switches in the AI era: an embedded AI chip; 48-port 400GE line cards per slot; and the capability to evolve towards an autonomous driving network, the concept of Intent-Driven Networks that use AI for intelligent O&M.
Huawei's CloudEngine 16800 switch is the world's first switch equipped with an AI brain. A high-performance AI chip is embedded in the main control board of the switch. The chip delivers floating-point computing capability of up to 8 TFLOPS and is suited to running deep learning AI algorithms; its computing capability is estimated to exceed that of 25 current mainstream dual-socket servers. This gives the iLossless algorithm an ideal operating platform.
It also supports traditional data centre functions such as VXLAN, containers, integration with multiple cloud computing environments, software-defined networking, and intelligent O&M, fitting completely into our customers' current data centre requirements while smoothing the transformation to an AI-based data centre.
Moving forward, concepts like AI Fabric and associated AI-ready infrastructure for the data centre—such as switches and chipsets—will become all the more important, as the pervasive use of AI helps organisations in the region to accelerate digital transformation.