AMD has announced the availability of its MI300 accelerators and processors to help power advancements in generative AI.

The company says the AMD Instinct MI300X and AMD Instinct MI300A offer greater memory capacity and better energy efficiency than their predecessors.

The MI300 series has been designed to train and run large language models (LLMs), with AMD claiming the chips are the world's highest-performance accelerators for generative AI.


During her keynote speech at the company’s Advancing AI event in California, AMD CEO Dr. Lisa Su said: “The [previous] year has shown us that AI isn’t just a cool new thing, it’s the future of computing. At AMD when we think about it, we actually view AI as the single most transformational technology over the last 50 years.”

The AMD Instinct MI300A accelerated processing unit (APU) combines a CDNA 3 GPU with the latest AMD Zen 4 x86-based CPU and 128GB of HBM3 memory. AMD said the CDNA 3 data center architecture has been optimized for performance and power efficiency, delivering more than three times higher performance for key AI data types.

The company said the MI300A delivers approximately 1.9× the performance-per-watt on FP32 HPC and AI workloads and a 30× energy efficiency improvement over its predecessor. Compared to Nvidia’s H100, AMD said the MI300A has 1.6× the memory capacity, at 128GB. It has a thermal design power (TDP) of 760W, above the 700W of the H100 SXM.
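Those headline multiples are simple ratios, and they can be checked in a few lines. A rough sketch follows; note the H100’s 80GB capacity is inferred from AMD’s 1.6× claim (128GB / 1.6) rather than stated in the announcement itself:

    # Back-of-the-envelope check of the MI300A vs H100 comparison above.
    # The H100's 80GB capacity is inferred from AMD's 1.6x memory claim
    # (128GB / 1.6), not taken from the announcement itself.
    mi300a_memory_gb, h100_memory_gb = 128, 80
    mi300a_tdp_w, h100_sxm_tdp_w = 760, 700

    print(f"Memory capacity: {mi300a_memory_gb / h100_memory_gb:.1f}x")  # 1.6x
    print(f"TDP: {mi300a_tdp_w / h100_sxm_tdp_w:.2f}x")                  # ~1.09x, an 8.6 percent higher power envelope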

The MI300X GPU is also built on the CDNA 3 architecture and has 1.5× the memory capacity (192GB) and 1.7× the peak theoretical memory bandwidth (5.3TBps) of the previous MI250X, along with nearly 40 percent more compute units. AMD also claims the new MI300X exceeds the speed of Nvidia's H100, offering 1.3 petaflops of FP16 and 2.6 petaflops of FP8 performance.

It has a TDP of 750W.
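The MI300X multiples can be reproduced the same way. A minimal sketch, assuming the MI250X baseline figures (128GB of HBM2e, roughly 3.2TBps of peak bandwidth, 220 compute units) and the MI300X’s 304 compute units, which come from AMD’s published spec sheets rather than this announcement:

    # Reproducing AMD's MI300X vs MI250X multiples. The MI250X baseline
    # (128GB, ~3.2TBps, 220 compute units) and the MI300X's 304 compute
    # units are assumptions from AMD's spec sheets, not from this piece.
    mi300x = {"memory_gb": 192, "bandwidth_tbps": 5.3, "compute_units": 304}
    mi250x = {"memory_gb": 128, "bandwidth_tbps": 3.2, "compute_units": 220}

    for spec, new_value in mi300x.items():
        ratio = new_value / mi250x[spec]
        print(f"{spec}: {ratio:.2f}x")
    # memory_gb: 1.50x; bandwidth_tbps: 1.66x (rounded to 1.7x above);
    # compute_units: 1.38x ("nearly 40 percent more")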

“It’s truly the most advanced product we’ve ever built and it is the most advanced AI accelerator in the industry,” Su said.

Su added that AMD has already had to revise its predictions from last year about growth in the data center accelerator market, with the company now expecting the sector to grow by more than 70 percent annually over the next four years to reach $400bn in 2027.

“The availability and capability of GPU compute is the single most important driver of AI adoption,” she said.

Nvidia, meanwhile, plans to launch the H200 next year, with 141GB of HBM3e and 4.8TBps of memory bandwidth.