Taiwanese consumer hardware manufacturer Asus is looking to build out its enterprise technology and cloud business, with plans to start offering customers full-stack systems to run AI.

Company SVP Jackie Hsu told The Register that the high-performance computing (HPC) and server market has become a “big growth area” for Asus.

Asus has been involved in a number of AI projects, including the development of its own 176-billion-parameter large language model (LLM). Dubbed the Formosa Foundation Model, it has been trained on local language data sets to generate text with traditional Chinese semantics.

The company also helped build the nine-petaflops Taiwania 2 supercomputer and, in 2022, partnered with Nvidia to build a supercomputer in Taiwan for use by medical researchers. Hsu said it has also built a data center to house the incoming Taiwania 4 supercomputer, achieving a power usage effectiveness (PUE) rating of 1.17, meaning the facility draws just 17 percent more power than its IT equipment alone consumes.

As a result of its work in this space, Hsu said the company has begun engaging with customers to design and build AI systems, with Asus providing most of the hardware and software stack.

At the annual Computex event in Taiwan earlier this month, Asus unveiled its new RS700-E12 and RS720-E12 server series. The company said the servers, powered by Intel Xeon 6 processors, have been designed specifically to handle the demands of high-performance workloads.

Additionally, to support the availability of critical applications like databases, virtualization, and media editing, Asus launched its VS320D series SAN storage solution at the event.

The company already offers a wide range of servers to its customers, including products that support high-density workloads in addition to rack, GPU, and edge servers.

Asus was also among the companies Nvidia CEO Jensen Huang said the GPU giant would partner with to deliver what he called AI factories.

During his Computex keynote speech, Huang explained that these so-called factories, powered by Nvidia’s Blackwell architecture, consist of AI systems for cloud, on-premises, embedded, and edge applications, with offerings ranging from single to multiple GPUs, x86- to Grace-based processors, and air to liquid cooling.