Two former Google engineers have launched a new AI chip startup.

Named MatX, the company will build processors specifically designed to support Large Language Models (LLMs). The company has already raised $25 million, with recent funding coming from AI investors Nat Friedman and Daniel Gross.


In an interview with Bloomberg, co-founders Mike Gunter and Reiner Pope said that while Google had made progress in making LLMs run faster, the company's aims were too diffuse. The pair felt they had no choice but to go it alone so they could focus solely on designing chips for the data processing needed to power LLMs.

At Google, Pope wrote AI software while Gunter designed hardware, including chips, for the software to run on.

According to Bloomberg, the pair are now betting that the processors designed by MatX will be at least ten times better than Nvidia's GPUs at training LLMs and delivering results. The founders say this will be achieved by stripping out what they described as "extra real estate" that is placed on GPUs to allow them to handle a wide variety of computing jobs.

MatX will instead focus on designing single-purpose chips with one large processing core. The company has already hired “dozens of employees” and expects to have the first version of its product finalized by 2025.

“Nvidia is a really strong product and clearly the right product for most companies… but we think we can do a lot better,” Pope told Bloomberg.