As is the case with seemingly everything AI in recent years, Edge AI deployments have been growing at an exponential pace.

As the pendulum has swung from centralized to distributed deployments, AI has driven the majority of growth in Edge computing. Organizations are increasingly looking to deploy AI algorithms and models onto local Edge devices, removing the need to constantly rely on cloud infrastructure.

As a result, research from Gartner predicts that at least 50 percent of Edge deployments will incorporate machine learning by 2026, up from around five percent in 2022.

Pallavi Mahajan, corporate vice president of Intel's network and Edge group software – Intel

Edge is not the cloud

Businesses want the Edge to bring the same agility and flexibility as the cloud, says Pallavi Mahajan, corporate vice president of Intel's network and Edge group software. But, she notes, it's important to differentiate between Edge AI and cloud AI.

“Edge is not the cloud, it is very different from the cloud because it is heterogeneous,” she says. “You have different hardware, you have different servers, and you have different operating systems.”

Such devices can include anything from sensors and IoT devices to routers, integrated access devices (IADs), and wide area network (WAN) access devices.

One of the benefits of Edge AI is that storing data in an Edge environment rather than a data center, even when large data sets are involved, speeds up decision-making and data analysis, both of which are vital for AI applications designed to provide organizations with real-time insights.

Another benefit, borne out of the proliferation of generative AI, is that while model training takes place in a centralized data center far away from users, inferencing – where the model applies its learned knowledge – can happen in an Edge environment, reducing the time required to send data to a centralized server and receive a response.
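As a rough illustration of that split, the minimal sketch below assumes a model trained centrally in PyTorch, exported to the portable ONNX format, and then run locally on an Edge device with ONNX Runtime. The model, file name, and input shape are placeholders for the example, not anything Intel has specified.

```python
import numpy as np
import torch
import onnxruntime as ort

# In the data center: train the model (training loop omitted) and export it
# to a portable format that the Edge runtime can load.
model = torch.nn.Sequential(torch.nn.Linear(64, 2))   # stand-in for a fully trained model
dummy_input = torch.randn(1, 64)
torch.onnx.export(model, dummy_input, "edge_model.onnx")

# On the Edge device: load the exported file and run inference locally,
# so no round trip to a central server is needed for each prediction.
session = ort.InferenceSession("edge_model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

sensor_reading = np.random.rand(1, 64).astype(np.float32)   # placeholder for local data
scores = session.run(None, {input_name: sensor_reading})[0]
print("Local inference result:", scores)
```

The point of the pattern is the separation of concerns: the heavy training step stays in the data center, while only the lightweight runtime and the exported model file need to ship to the device.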

Meanwhile, talent shortages, the growing need for efficiency, and the desire to improve time to market through the delivery of new services have all caused businesses to double down on automation.

Alluding to the aforementioned benefits of Edge computing, Mahajan said there are three things driving its growth right now: businesses looking for new and different ways to automate and innovate, which will in turn improve their profit margins; the growing need for real-time insights, which means data has to stay at the Edge; and new regulations around data privacy, which means companies have to be more mindful about where customer data is being stored.

Add to that the fact that AI has now become a ubiquitous workload, and it's no surprise that organizations across all sectors are looking for ways to deploy AI at the Edge.

Almost every organization deploys smart devices to support their day-to-day business operations, be that MRI machines in hospitals, sensors in factories, or cameras in shops, all of which generate a lot of data that can deliver valuable real-time insights.

GE Healthcare is one Intel customer that uses Edge AI to support the real-time insights generated by its medical devices.

The American healthcare company wanted to use AI in advanced medical imaging to improve patient outcomes, so it partnered with Intel to develop a set of AI algorithms that can detect critical findings on a chest X-ray.

Mahajan explains that, in real time, GE's X-ray machines scan the images being taken and, using machine learning, automatically detect if there's something wrong with a scan or if there's an anomaly that needs further investigation.

While the patient is still at the hospital, the machine can also advise the physician to take more images, perhaps from different angles, to make sure nothing is being missed. The AI algorithm is embedded in the imaging device, instead of being on the cloud or a centralized server, meaning any potentially critical conditions can be identified and prioritized almost immediately.
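The application logic wrapped around such an embedded model might look something like the sketch below. This is a purely hypothetical illustration of on-device triage, not GE's actual algorithm: the model call, threshold, and advice text are all invented for the example.

```python
import numpy as np

ANOMALY_THRESHOLD = 0.85  # hypothetical cut-off for flagging a scan


def run_anomaly_model(image: np.ndarray) -> float:
    """Placeholder for the embedded model; returns an anomaly score in [0, 1]."""
    return float(np.clip(image.mean() / 255.0, 0.0, 1.0))  # dummy logic for illustration


def triage_scan(image: np.ndarray) -> dict:
    """Score a scan on the device and decide, before the patient leaves, what to do next."""
    score = run_anomaly_model(image)
    if score >= ANOMALY_THRESHOLD:
        return {
            "priority": "urgent",  # pushed to the top of the physician's worklist
            "advice": "capture additional views before the patient leaves",
            "score": score,
        }
    return {"priority": "routine", "advice": None, "score": score}


# Example with a fake 512x512 X-ray image, since no real data leaves the device
scan = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
print(triage_scan(scan))
```

Because the scoring runs on the imaging device itself, the decision to escalate a scan never has to wait on a round trip to the cloud or a centralized server.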

“Experiences are changing,” Mahajan says. “How quickly you can consume the data and how quickly you can use the data to get real-time insights, that’s what Edge AI is all about.”

Intel brings AI to the Edge

Mahajan joined Intel in 2022, having previously held software engineering roles at Juniper Networks and HPE. She explains she was hired specifically to help build Intel’s new Edge AI platform.

Unveiled at Mobile World Congress (MWC) in February 2024, the platform is an evolution of the solution codenamed Project Strata that Intel first announced at its Intel Innovation event last year.

“[Intel] has been working at the Edge for many, many years… and we felt there was a need for a platform for the Edge,” she explains. Intel says it has over 90,000 Edge deployments across 200 million processors sold in the last ten years.

Traditionally, businesses looking to deploy automation have had to do so in a very siloed way. In contrast, Mahajan explains that Intel’s new platform will enable customers to have one server that can host multiple solutions simultaneously.

The company has described its Edge AI offering as a “modular and open software platform that enables enterprises to build, deploy, run, manage and scale Edge and AI solutions on standard hardware.” The platform has been designed to help customers take advantage of Edge AI opportunities: it will support heterogeneous components, offer a lower total cost of ownership, and provide zero-touch, policy-based management of infrastructure, applications, and AI across a fleet of Edge nodes with a single pane of glass.

The platform consists of three key layers: the infrastructure layer, the AI application layer, and an industry solutions layer sitting on top. Intel provides the software, the infrastructure, and its silicon, and Intel's customers then deploy their solutions directly on top of it.

“The infrastructure layer enables you to go out and securely onboard all of your devices,” Mahajan says. “It enables you to remotely manage these devices and abstracts the heterogeneity of the hardware that exists at the Edge. Then, on top of it, we have the AI application layer.”

This layer consists of a number of capabilities and tools, including application orchestration, low-code and high-code AI model and application development, and horizontal and industry-specific Edge services such as data thinning and annotation.

The final layer consists of the industry solutions and, to demonstrate the wide range of use cases the platform can support, it has been launched alongside an ecosystem of partners, including Amazon Web Services, Capgemini, Lenovo, L&T Technology Services, Red Hat, SAP, Vericast, Verizon Business, and Wipro.

Mahajan also lists some of the specific solutions Intel’s customers have already deployed on the platform, citing one manufacturer that is automatically detecting welding defects by training its AI tool on photos of good and bad welding jobs.
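A setup like the one that manufacturer describes is essentially binary image classification. The sketch below shows one conventional way to approach it, fine-tuning a pretrained ResNet on folders of "good" and "defect" weld photos with PyTorch; the folder layout, model choice, and hyperparameters are assumptions made for illustration, not details Intel or the customer has disclosed.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Weld photos assumed to be organized as weld_photos/good/*.jpg and weld_photos/defect/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("weld_photos", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and retrain only the final layer for good/defect
model = models.resnet18(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "weld_classifier.pt")  # weights ready to deploy to the Edge device
```

The trained weights could then be exported and pushed out to the inspection devices, following the same train-centrally, infer-locally pattern described earlier.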

“What this platform enables you to do is build and deploy these Edge native applications which have AI in them, and then you can go out and manage, operate, and scale all these Edge devices in a very secure manner,” Mahajan says.

At the time of writing, a release date had not been confirmed for Intel’s Edge AI platform. However, during MWC, the company said it would be “later this quarter.”

AI 'everywhere'

Although Gartner predicted in 2023 that Edge AI had two years before it hit its plateau, Intel is confident this is not the case, and has made the Edge AI platform a central part of its ‘AI Everywhere’ vision.

Alongside its Edge AI platform, Intel also previewed its Granite Rapids-D processor at MWC. Designed for Edge solutions, it has built-in AI acceleration and will feature the latest generation of Performance-cores (P-cores).

Writing on X, the social media platform previously known as Twitter, in October 2023, Intel’s CEO Pat Gelsinger said: “Our focus at Intel is to bring AI everywhere – making it more accessible to all, and easier to integrate at scale across the continuum of workloads, from client and Edge to the network and cloud.”

As demonstrated by the recent slew of announcements, Intel clearly believes that Edge AI has just reached its peak, with Mahajan stating that all industries go through what she describes as “the S Curve of maturity.” Within this curve, the bottom of the ‘S’ represents those tentative first forays into exploring a new technology, where organizations run pilot programs and proofs of concept, while the top of the curve is the point at which the market has fully matured.

“This is where I think we are now,” she says, adding that she believes Intel was “the first to read the need for [an Edge AI] platform.” She continues: “This is the feedback that we got after the launch at MWC, that everybody was saying, ‘Yes, this market needs a platform.’

“I’m sure there will be more platforms to come but I'm glad that Intel has been a leader here.”