At its annual GTC event, Nvidia announced that Amazon Web Services will offer Tesla T4 GPUs in new EC2 instances.

Available in the coming weeks, Amazon's new EC2 G4 instances are designed with artificial intelligence inference workloads in mind.
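For readers who want to try the instances once they ship, here is a minimal boto3 sketch for launching one. Note that the "g4dn.xlarge" instance type name and the AMI ID are assumptions for illustration; AWS had not published final details at the time of the announcement.

```python
# Hypothetical sketch: launching a T4-backed G4 instance with boto3.
# The instance type name and AMI ID below are assumptions, not
# confirmed details from the announcement.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: e.g. a Deep Learning AMI
    InstanceType="g4dn.xlarge",        # assumed name for a T4-backed G4 instance
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```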

Nvidia's T4 GPU (Image: Nvidia)

Google launched beta support for the T4 on its cloud platform a few months ago, and the GPU is also available in servers from Cisco, Dell EMC, Hewlett Packard Enterprise and others.

The T4 is built on Nvidia's Turing architecture and features 2,560 CUDA cores and 320 Tensor cores, with the company claiming the 75-watt card delivers 65 teraflops of peak FP16 performance, 130 TOPS for INT8 and 260 TOPS for INT4.
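To make those precision modes concrete, below is a minimal sketch of the kind of FP16 inference path the T4's Tensor cores accelerate. PyTorch is used purely as an illustrative framework; the article does not name one, and the toy model is invented for the example.

```python
# Minimal sketch of FP16 (half-precision) inference, the workload the
# T4's Tensor cores are built to accelerate. The "model" here is a toy
# single layer, used only for illustration.
import torch

model = torch.nn.Linear(1024, 1024).cuda().half().eval()  # FP16 weights on GPU
x = torch.randn(64, 1024, device="cuda", dtype=torch.float16)  # FP16 input batch

with torch.no_grad():   # inference only: no gradients needed
    y = model(x)        # FP16 matmul, eligible for Tensor core execution
print(y.dtype)          # torch.float16
```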

“Nvidia and AWS have worked together for a long time to help customers run compute-intensive AI workloads in the cloud and create incredible new AI solutions,” said Matt Garman, VP of compute services at AWS.

“With our new T4-based G4 instances, we’re making it even easier and more cost-effective for customers to accelerate their machine learning inference and graphics-intensive applications.”

Nvidia also announced CUDA-X AI, an end-to-end platform for developers who want to use its GPUs for deep learning, machine learning and data analytics. The platform reorganizes more than 40 of Nvidia's GPU acceleration libraries under one umbrella, and has been adopted by all the major cloud providers, including AWS, Google Cloud Platform and Microsoft Azure.
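As a rough illustration of the data-analytics side of that stack, the sketch below uses RAPIDS cuDF, one of the GPU-accelerated libraries gathered under CUDA-X AI. The column names and data are invented for the example.

```python
# Hedged sketch: a pandas-like aggregation that runs on the GPU via
# RAPIDS cuDF, one of the libraries under the CUDA-X AI umbrella.
# The data below is illustrative only.
import cudf

df = cudf.DataFrame({"device": ["t4", "t4", "v100"],
                     "latency_ms": [1.2, 1.4, 0.9]})
print(df.groupby("device")["latency_ms"].mean())  # aggregation executes on the GPU
```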