Swarm computing company SwarmOne has launched its instance-less platform for artificial intelligence (AI) training.

The platform has so far completed more than 340,000 hours of AI training with an initial set of customers.

Harel Boren – SwarmOne

SwarmOne aggregates an AI model, its training hyperparameters, and data, then distributes the processing across multiple GPUs in its global network of data centers.

According to the company, this means that users are "free of the cloud" and do not have to set up or optimize instances, while still getting access to large amounts of compute power.

SwarmOne says that data scientists can train from their own development environment with any AI training framework, and that the platform quotes the cost of a training job before users submit it, avoiding unexpected bills.

The first customers are reportedly seeing an 84 percent reduction in the time before training begins, a 91 percent improvement in delivery time, a 97 percent increase in AI quality and performance, and a 67 percent reduction in compute costs.

“SwarmOne boosted personnel efficiency by about 90 percent, significantly reduced training costs, and enhanced delivery,” said Michael Erlihson, PhD, AI tech lead at Salt Security, an early customer. “This made us far more competitive in our market.”

SwarmOne's executive team includes CEO Harel Boren and CTO Ben Boren, a veteran of an Israel Defense Forces (IDF) technology unit.

“We built this platform because, like most data scientists, I spent way too much time sourcing, provisioning, and managing GPU instances for AI training,” said Ben Boren. “For AI to thrive, it must break free from cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. The barriers that exist in AI training are clear. But, it doesn’t have to be this way. Swarm computing can become the new standard for AI training.”

SwarmOne has not shared details about the number or type of GPUs it has on offer, nor the number of data center points of presence it operates globally. AWS, for example, currently offers Nvidia H100 GPUs in its "EC2 UltraClusters," which scale to 20,000 GPUs. DCD has contacted the company for more information.

Earlier this year, Foundry Technologies launched its "orchestration platform," which it says makes accessing AI compute "as easy as flipping a light switch." The launch followed an $80m funding round completed in March 2024.