The pace of AI adoption has accelerated during the Covid crisis, but that faster uptake is likely to place significant extra demand on computing resources and supporting infrastructure.

Appen’s recent State of AI report found that 55 percent of companies had accelerated their AI adoption in 2020 and 67 percent expected to ramp it up further this year. As AI adoption intensifies, overhead costs will become a major consideration, and there is a risk these costs could snowball if organizations do not plan ahead.

Top of mind

The trend was reinforced by IDC group vice president for AI and Automation Research Ritu Jyoti, who said the pandemic had “pushed AI to the top of the corporate agenda, empowering business resilience and relevance.”

She added: “We have now entered the domain of AI-augmented work and decision making across all the functional areas of a business. Responsible creation and use of AI solutions that can sense, predict, respond and adapt at speed is an important business imperative.”

Jyoti’s comments accompanied publication of IDC estimates that the global AI market will increase by 15.2 percent this year to a value of $341.8 billion. The market is expected to surpass $500 billion by 2024.

When it comes to infrastructure, businesses have to adapt and be flexible. This need for flexibility is making cloud, particularly hybrid cloud, the foundation of AI, especially as the need for substantial amounts of data ratchets up. Using hybrid cloud, companies can meet the technology demands of AI at the right cost level for their businesses and their workloads.

Infrastructure-as-a-Service (IaaS) gives organizations the ability to use, develop and implement AI without sacrificing performance. But there are a number of infrastructure elements that organizations need to bear in mind when evaluating potential IaaS providers.

1 Computing performance

Businesses need access to high-performance computing resources, including CPUs and GPUs, to take full advantage of the opportunities presented by AI. Machine learning algorithms require speed and performance to carry out a huge number of calculations. While a CPU-based environment can handle basic AI workloads, deep learning involves multiple large data sets and the capability to deploy scalable neural network algorithms.

CPU-based computing might not meet those objectives, and GPUs could be a better option. The greater parallelism of GPUs can significantly accelerate deep learning compared to CPUs, but that speed comes at a higher cost, and in some instances it may not be cost-effective to switch from CPU to GPU. It is important to strike the right balance for the required tasks.
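As a rough illustration of that trade-off, the sketch below (a minimal example assuming PyTorch is installed; a CUDA GPU may or may not be present) times the same large matrix multiplication on each device. Real workloads behave differently, but the gap it reports gives a first feel for when the GPU premium pays off.

```python
# Rough CPU-vs-GPU timing sketch using PyTorch (assumed available).
# Results vary by hardware; this only illustrates the trade-off.
import time
import torch

def time_matmul(device: str, size: int = 4096, repeats: int = 10) -> float:
    """Average seconds per (size x size) matrix multiplication on a device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.matmul(a, b)  # warm-up so lazy initialization does not skew timing
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU kernels to finish
    return (time.perf_counter() - start) / repeats

print(f"CPU: {time_matmul('cpu'):.4f} s per matmul")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s per matmul")
```

On typical hardware the GPU figure will be an order of magnitude lower; gathering this kind of evidence on your own workloads is a sensible step before committing to either tier.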

2 Storage capacity

The ability to scale storage as data volumes grow is fundamental for many businesses. Organizations need to ascertain what types of storage they need and there are a number of factors to consider, including the level of AI they plan to use and whether they need to make real-time decisions. For example, a FinTech company using AI systems for real-time trading decisions may need fast all-flash storage technology, while other companies could be better served by larger capacity but less rapid storage. 

Businesses also need to estimate how much data their AI applications will generate, because models generally make better decisions when they are exposed to more data. Databases grow over time, so companies need to monitor their storage capacity and plan properly for expansion.
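A back-of-the-envelope projection is often enough to start that planning. The snippet below is a minimal sketch; the 50 TB starting point and 8 percent monthly growth rate are invented figures, not benchmarks.

```python
# Back-of-the-envelope storage forecast (illustrative assumptions only).
def forecast_storage_tb(current_tb: float, monthly_growth: float, months: int) -> float:
    """Project storage needs assuming compound monthly growth."""
    return current_tb * (1 + monthly_growth) ** months

# Example: 50 TB today, growing 8% per month, planned 24 months ahead.
for m in (6, 12, 24):
    print(f"Month {m:2d}: {forecast_storage_tb(50, 0.08, m):.1f} TB")
```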

3 Networking infrastructure

Networking is another key component of AI infrastructure. Fast, reliable networks are essential to deliver results at scale. Distributed deep learning is highly dependent on inter-node communication, so networks need to keep pace with demand as AI efforts expand. Scalability is a high priority, and AI requires a high-bandwidth, low-latency network. It is also important to ensure the service wrap and technology stack are consistent across all regions.
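Latency is easy to sanity-check before committing to a provider. The probe below is a minimal sketch: storage.example.com is a placeholder hostname, and TCP connection time is only a rough proxy for true network latency.

```python
# Quick round-trip latency probe (host and port are placeholder assumptions).
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median time to open a TCP connection, as a rough latency proxy."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

print(f"RTT to candidate endpoint: {tcp_rtt_ms('storage.example.com'):.1f} ms")
```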

4 Security

AI can involve handling sensitive data such as patient records, financial information and personal data, so it is imperative the infrastructure is secured end-to-end with state-of-the-art technology. It goes without saying that a data breach would be a disaster for any organization, but with AI, any infusion of bad data could cause the system to make incorrect inferences, leading to flawed decisions.
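One inexpensive line of defence against bad data is a validation gate in front of the model. The sketch below is purely illustrative; the field names and ranges are assumptions, not a real schema.

```python
# Minimal input-sanity gate before data reaches a model (illustrative schema).
def validate_record(record: dict) -> bool:
    """Reject records with missing fields or out-of-range values."""
    required = {"patient_id", "age", "reading"}
    if not required.issubset(record):
        return False
    return 0 <= record["age"] <= 120 and record["reading"] is not None

batch = [
    {"patient_id": "p1", "age": 54, "reading": 7.2},
    {"patient_id": "p2", "age": -3, "reading": None},  # dropped by the gate
]
clean = [r for r in batch if validate_record(r)]
print(f"Kept {len(clean)} of {len(batch)} records")
```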

5 Cost-effectiveness

As AI models become more complex, they become more expensive to run, so it is critical to extract extra performance from the infrastructure to keep costs under control. As companies increase their use of AI, they will place heavier burdens on their network, server and storage infrastructure.

Businesses need to make careful choices and identify IaaS providers that can offer cost-effective dedicated servers, boosting performance and enabling continued investment in AI without an ever-growing budget.
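When comparing providers or instance types, cost per unit of work is usually more telling than cost per hour. The figures in the sketch below are invented for illustration; substitute real quotes and measured throughput from your own workloads.

```python
# Simple cost-per-throughput comparison (all figures are illustrative).
options = {
    "cpu_instance": {"cost_per_hour": 1.20, "samples_per_sec": 800},
    "gpu_instance": {"cost_per_hour": 3.10, "samples_per_sec": 6500},
}

for name, o in options.items():
    # Cost per million samples = hourly cost / samples processed per hour, x 1e6.
    cost_per_million = o["cost_per_hour"] / (o["samples_per_sec"] * 3600) * 1e6
    print(f"{name}: ${cost_per_million:.2f} per million samples processed")
```

On these made-up numbers the GPU instance costs more per hour but roughly a third as much per million samples, echoing the balance point discussed in section 1.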

What IaaS providers can do

Organizations looking to deploy AI services must ensure they have the right foundations in place to support them. Any IaaS provider needs to be able to deliver the right infrastructure for customers building their services on AI.

The onus is on IaaS providers to:

  • constantly investigate and invest in the latest CPU and GPU technology, because this is key to deploying successful AI workloads;
  • improve networks for greater speed and delivery;
  • automate a large share of service delivery to reduce time to action for clients;
  • improve performance by using automation tools to monitor their systems and choose the most efficient routes.
