Intel has officially revealed the exact specifications of the Aurora supercomputer developed for the United States' Argonne National Laboratory. The Intel Aurora supercomputer has been long delayed, but it is finally taking shape. The system, which is powered by Intel's Xeon CPU Max and Data Centre GPU Max series, has been expanded to a two-Exaflop machine from its initial objective of one Exaflop. This will put it on par with the AMD-powered Frontier supercomputer, currently the world's fastest.
According to the most recent information, the Aurora supercomputer will have a total of 10,624 nodes, housing a massive 21,248 Xeon CPUs based on the Sapphire Rapids-SP family and 63,744 GPUs based on the Ponte Vecchio architecture. This system will be a beast, with a fabric interconnect capable of peak injection bandwidths of 2.12 PB/s and peak bisection bandwidths of 0.69 PB/s.
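The node math above is easy to sanity-check: 2 CPUs and 6 GPUs per node are the configuration implied by the totals. A minimal sketch (the per-node counts are derived from the article's aggregate figures, not separately confirmed here):

```python
# Sanity-check Aurora's published node totals.
# Per-node counts below are inferred from the aggregates in the article.
NODES = 10_624
CPUS_PER_NODE = 2   # Xeon CPU Max (Sapphire Rapids-SP) sockets per node
GPUS_PER_NODE = 6   # Ponte Vecchio GPUs per node

total_cpus = NODES * CPUS_PER_NODE
total_gpus = NODES * GPUS_PER_NODE
print(total_cpus, total_gpus)  # 21248 63744
```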
The Aurora supercomputer has 10.9 PB of DDR5 system DRAM, 1.36 PB of HBM capacity via the CPUs, and 8.16 PB of HBM capacity via the GPUs.
The system DRAM has a maximum bandwidth of 5.95 PB/s, the CPU HBM has a maximum bandwidth of 30.5 PB/s, and the GPU HBM has a maximum bandwidth of 208.9 PB/s. The system has a 230 PB DAOS capacity with a peak bandwidth of 31 TB/s, spread across 1,024 dedicated storage nodes.
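Dividing those aggregate figures by the device counts gives plausible per-device numbers. A back-of-the-envelope sketch (the per-device results are derived estimates, assuming PB means 10^15 bytes, not official specifications):

```python
# Derive approximate per-device memory figures from the article's
# aggregate capacities and bandwidths (PB taken as 10^15 bytes).
PB = 1e15
NODES, CPUS, GPUS = 10_624, 21_248, 63_744

ddr5_per_node = 10.9 * PB / NODES   # ~1.03e12 B  -> roughly 1 TB DDR5 per node
cpu_hbm_each  = 1.36 * PB / CPUS    # ~6.40e10 B  -> roughly 64 GB HBM per CPU
gpu_hbm_each  = 8.16 * PB / GPUS    # ~1.28e11 B  -> roughly 128 GB HBM per GPU
gpu_bw_each   = 208.9 * PB / GPUS   # ~3.28e12 B/s -> roughly 3.3 TB/s per GPU
```

The derived 64 GB per Xeon CPU Max socket and 128 GB per Ponte Vecchio GPU line up with the capacities Intel markets for those parts, which suggests the aggregate figures are internally consistent.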
Aurora, powered by the latest Intel Data Centre GPU Max Series 1550, outperforms the NVIDIA A100 and AMD Instinct MI250X accelerators in SimpleFOMP performance. Intel also claims strong relative performance against those accelerators in fusion reactor predictions, Monte Carlo methods, and QMCPACK.
The Aurora supercomputer is expected to come online later this year, with peak performance reaching 2 Exaflops. The supercomputer will also run the latest Aurora genAI model, which has a total of 1 trillion parameters and is aimed at scientific research. In addition to the Aurora supercomputer, Intel has unveiled the Data Centre GPU Max Subsystem, which comes in an x8 UBB configuration with 8 Ponte Vecchio GPUs.