Granite Rapids and Sierra Forest are the two next-generation Xeon data centre CPUs we have been eagerly waiting for, and Intel has recently shared additional information about them. Intel opened its Hot Chips 2023 presentation by describing how the demands on modern data centres are growing and becoming increasingly workload-centric.
There are many different types of HPC, AI, compute-intensive, high-density, and general-purpose workloads, and a single kind of core is not necessarily the ideal fit for all of them. As a result, Intel has already committed to its own answer to AMD's Zen 4 and Zen 4C approach: Xeon CPUs optimised around either P-Cores or E-Cores.
The E-Core Xeon CPUs are branded under the Sierra Forest family and are optimised for efficiency in high-density and scale-out workloads, while the P-Core Xeon CPUs are branded under the Granite Rapids family and are optimised for compute-intensive and AI applications. The two CPU families are compatible with the same systems, as they share the same platform architecture and software stack.
Intel describes how its next-generation Xeon CPUs, particularly Granite Rapids, will contain distinct compute and I/O silicon chiplets as part of the modularity of its next-generation SoC design.
These chiplets will be connected through Intel's EMIB fabric within the same package, which offers high-bandwidth, low-latency links.
The Sierra Forest chips will scale from 1S up to 2S solutions, whilst the Intel Granite Rapids-SP Xeon CPUs will scale from 1S up to 8S platforms. Both CPUs will be available in a variety of SKUs with different core counts and thermal targets. Additionally, the next-generation Xeon CPU platform will support up to 136 lanes of PCIe Gen 5 / CXL 2.0, 6 UPI links, and up to 12 channels of DDR5/MCR memory (1-2 DPC).
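To give a rough sense of what a 12-channel memory subsystem implies, here is a minimal back-of-the-envelope sketch. The MCR transfer rate used below is an assumption for illustration only and is not stated in the figures above; actual shipping speeds may differ.

```python
# Back-of-the-envelope peak memory bandwidth for a 12-channel configuration.
# ASSUMPTION: an 8800 MT/s MCR DIMM transfer rate is used purely for
# illustration; it is not confirmed by the platform details above.
CHANNELS = 12
TRANSFER_RATE_MT_S = 8800      # mega-transfers per second (assumed)
BYTES_PER_TRANSFER = 8         # 64-bit data bus per channel

peak_gb_s = CHANNELS * TRANSFER_RATE_MT_S * 1e6 * BYTES_PER_TRANSFER / 1e9
print(f"Theoretical peak bandwidth: {peak_gb_s:.1f} GB/s")  # ~844.8 GB/s
```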
The compute die on the Intel Xeon Granite Rapids chips will be built on the latest Intel 3 process node, delivering improved performance and efficiency while also enabling a customisable row/column configuration. The Core Tile itself is made up of Redwood Cove architecture-based CPU cores with their L2 cache, an LLC+SF+CHA slice, and a mesh fabric interface.
The Core Tile design can be used for both P-Cores and E-Cores. The same die also carries the advanced memory subsystem, with a shared memory controller/IO and full support for CXL-attached memory.
Sierra Forest's CPU core tile for the Intel E-Core Xeon family will feature 2-4 cores per module, with each module sharing an L2 cache, a frequency/voltage domain, and a mesh fabric interface. Since each core is single-threaded, the top compute-tile SKU, built from 36 such core modules, offers 144 cores and 144 threads. The LLC slices provide a high-bandwidth path and are shared by all cores in a socket. That works out to 108 MB of LLC and up to 144 MB of L2 cache.
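The core and cache totals above follow directly from the per-module layout. The short sketch below simply re-derives them; the per-module cache sizes are back-calculated from the stated totals and are an inference, not Intel-quoted figures.

```python
# Re-deriving Sierra Forest's top-SKU totals from the per-module layout.
MODULES = 36                 # core modules on the top compute tile
CORES_PER_MODULE = 4         # 2-4 cores per module; the top SKU uses 4
THREADS_PER_CORE = 1         # E-Cores are single-threaded

cores = MODULES * CORES_PER_MODULE        # 144 cores
threads = cores * THREADS_PER_CORE        # 144 threads

# Per-module cache sizes inferred from the stated 144 MB L2 / 108 MB LLC totals.
l2_per_module_mb = 144 / MODULES          # 4 MB shared L2 per module (inferred)
llc_per_module_mb = 108 / MODULES         # 3 MB LLC slice per module (inferred)

print(f"{cores} cores / {threads} threads")
print(f"L2 total: {l2_per_module_mb * MODULES:.0f} MB, "
      f"LLC total: {llc_per_module_mb * MODULES:.0f} MB")
```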