There is a wealth of AI chips coming from a multitude of huge companies, small startups and government projects. If you have AI workloads, which should you pay attention to? Data center goliath Intel suggests there is a simple answer. Ignore them all, and use what you have.

Sure, specialized accelerators are often better at the AI workload at hand, but - Intel contends - is that advantage large enough to justify the extra expenditure?

"If you already have Xeon CPUs, do you want to get additional GPUs or not?” Wei Li, VP and GM at Intel, asked, questioning the need for specialized hardware for most workloads.

“When people say we are 100x slower [than GPUs], there was some truth a few years ago,” Li said. “The Xeon CPU was not at its best when you ran deep learning workloads. And the main reason was actually software - I’m a software guy, so I’m deeply sorry to tell the rest of the world [that our] software was not very well optimized.

“We stepped in, we made huge progress in the past few years - by the time we got to July 2017 we had managed to improve performance,” Li said. “Going forward, we no longer have the excuse that software was not optimized.”
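
Li does not name specific libraries here, but the kind of CPU-side optimization he describes typically reaches users through frameworks that bundle Intel's kernel libraries. As a minimal sketch - assuming a PyTorch build that ships the oneDNN (formerly MKL-DNN) backend, which the article itself does not specify - this is how to check whether those optimized CPU kernels are actually in play:

```python
# Minimal sketch, not Intel's own stack: assumes a PyTorch build
# compiled with the oneDNN (MKL-DNN) CPU backend.
import torch

print(torch.backends.mkldnn.is_available())  # True if oneDNN kernels are compiled in
print(torch.get_num_threads())               # intra-op CPU threads used for inference
```

Whether a CPU gets anywhere near a GPU on deep learning depends heavily on this layer being present and tuned, which is Li's point about software being the earlier bottleneck.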

In a presentation for DCD, Li showed Intel’s CPU matching or outperforming an Nvidia T4 on various inference workloads.

There was a catch, of course - the T4 is one of Nvidia’s cheaper chips, while the Intel processor chosen was the Xeon Platinum 9282, the company’s highest-end, premium product, which has yet to be released or externally tested.

The other catch is power - the T4 is rated at 70W, compared to the 9282's 400W TDP, nearly six times as much. “The T4 cannot stand alone, right? It has to be attached to the CPU anyway,” Li said by way of defense.

The big flaw in the argument is the use of the highest-end model, and one that does not yet exist. If the message is to stick with what you already have, a fairer test would compare GPUs against the mid-range Xeons people actually have in their racks.

"I don't have performance data on the lower SKUs," Li said, but insisted they could still compete. "Sometimes we're ahead, sometimes we're behind."

He cited customers who had come to the same conclusion, including Taboola, the recommendation company that serves up not-very-useful content at the bottom of articles.

“They looked at both CPU and GPU, and they found with GPU you had to move data around,” Li said. “So it's more than just compute, more than how much you can run. In order to get the best performance, you want to make sure your data is there, and quite often moving data around is very expensive.”
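
As a rough illustration of that trade-off - a minimal sketch assuming PyTorch and a CUDA-capable GPU, neither of which Li or the article names - the copy to the accelerator can be timed separately from the compute it enables:

```python
# Minimal sketch: time the host-to-device copy separately from the compute,
# to show the "moving data around" cost Taboola describes.
import time
import torch

batch = torch.randn(4096, 4096)      # data resident in host (CPU) memory
weights = torch.randn(4096, 4096)

t0 = time.perf_counter()
cpu_out = batch @ weights            # CPU path: compute where the data already lives
cpu_time = time.perf_counter() - t0

if torch.cuda.is_available():
    weights_gpu = weights.cuda()
    torch.cuda.synchronize()

    t0 = time.perf_counter()
    batch_gpu = batch.cuda()         # the data-movement step, over PCIe
    torch.cuda.synchronize()
    copy_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    gpu_out = batch_gpu @ weights_gpu
    torch.cuda.synchronize()
    compute_time = time.perf_counter() - t0

    print(f"CPU matmul:       {cpu_time:.4f}s")
    print(f"GPU copy/compute: {copy_time:.4f}s / {compute_time:.4f}s")
else:
    print(f"CPU matmul: {cpu_time:.4f}s (no GPU available)")
```

How much the copy matters depends on batch size and model, but it is a real cost that the CPU-only path never pays.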

Another customer, China's JD.com, “were using GPU, but eventually they used CPU because we can get much better performance,” he claimed.

Li’s case is one that makes sense from the position of the dominant server CPU manufacturer.

It can also make sense for some customers, depending on their workloads - most inference is already run on CPUs. But it makes less sense for those planning to run a lot of AI tasks, where the extra investment in a GPU, FPGA or ASIC is worth it.

It’s also an argument that may well change when Intel rolls out its own GPU line, Intel Xe, in 2021.