The National Institute of Standards and Technology (NIST) is looking to expand its high-performance computing (HPC) footprint.

NIST said that it "maintains multiple HPC systems of varying capacities on its campuses, [but] the capacity of those on-premise systems is insufficient to satisfy all computational requirements."

NIST currently uses the Texas Advanced Computing Center (TACC) – Sebastian Moss

In a special notice, NIST said that it was seeking sources capable of meeting (or exceeding) its minimum computational requirements: nine Linpack petaflops for a CPU cluster with 4GB of memory per core, and seven petaflops for a GPU cluster with 80GB of memory per GPU card.

It also asks for node interconnect speeds of 200Gbps, and temporary filespace allocation of 1TB per user with a 28-day retention period.

As an optional aspect of the seeking sources request, NIST asks whether providers can co-locate NIST-owned HPC assets at their facility and "support a 1MW load."
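The minimums quoted in the notice can be captured in a quick sanity-check sketch. This is purely illustrative: the field names and the sample provider response are hypothetical, with only the threshold figures taken from the notice.

```python
# Minimums quoted in NIST's special notice (figures from the notice;
# key names are illustrative, not from any official schema).
REQUIREMENTS = {
    "cpu_linpack_pflops": 9,       # CPU cluster, Linpack petaflops
    "cpu_mem_gb_per_core": 4,      # memory per CPU core
    "gpu_pflops": 7,               # GPU cluster petaflops
    "gpu_mem_gb_per_card": 80,     # memory per GPU card
    "interconnect_gbps": 200,      # node interconnect speed
    "scratch_tb_per_user": 1,      # temporary filespace per user
    "scratch_retention_days": 28,  # retention period
}

def unmet_minimums(offer: dict) -> list:
    """Return the requirement keys that the offer fails to meet."""
    return [k for k, minimum in REQUIREMENTS.items()
            if offer.get(k, 0) < minimum]

# Hypothetical provider response
offer = {
    "cpu_linpack_pflops": 10,
    "cpu_mem_gb_per_core": 4,
    "gpu_pflops": 7.5,
    "gpu_mem_gb_per_card": 80,
    "interconnect_gbps": 400,
    "scratch_tb_per_user": 2,
    "scratch_retention_days": 28,
}

print(unmet_minimums(offer))  # an empty list means all minimums are met
```

Any missing field is treated as zero, so an incomplete response shows up as a failed requirement rather than passing silently.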

This early step toward expanding its HPC capabilities comes a year after the Department of Energy's Office of Scientific and Technical Information analyzed the limitations of NIST's current compute capabilities.

"NIST does not have a coherent, organization-wide HPC strategy that provides a low-barrier-to-entry resource for researchers," the analysis states.

"In reality, many research projects across NIST could benefit from the use of HPC resources if there was a lower barrier-to-entry for using computing resources at NIST."