NVIDIA to share more details about its Hopper GPUs and Grace CPUs at the upcoming Hot Chips 34 event


Next week, NVIDIA will present brand-new information about its Hopper GPU and Grace CPU at the upcoming Hot Chips 34 event. Senior engineers from the company will give talks focused on the Grace CPU, the Hopper GPU, the NVLink Switch, and the Jetson Orin module, describing breakthroughs in accelerated computing for contemporary data centres and edge networking systems.

Hot Chips is an annual conference that brings together system and processor architects and gives companies the chance to discuss details such as the technical specifications and current performance of their products. This year's talks will cover the new Hopper GPU, NVIDIA's first server CPU, the NVSwitch interconnect chip, and the company's Jetson Orin system on module, or SoM.

The four presentations, spread across the two-day conference, will give attendees an inside look at how the company's platform delivers improved performance, efficiency, scale, and security.

Image credit: wccftech

NVIDIA aspires to "show a design philosophy where GPUs, CPUs, and DPUs act as peer processors throughout the full stack of devices, systems, and software." The company has so far developed a platform that handles high-performance computing, AI, and data analytics tasks for cloud service providers, supercomputing facilities, corporate data centres, and autonomous AI systems.

To deliver the energy-efficient speed that today's applications demand, data centres need flexible clusters of CPUs, GPUs, and other accelerators sharing huge pools of memory.

NVIDIA NVLink-C2C will be discussed by distinguished engineer and 15-year NVIDIA veteran Jonathon Evans. It connects CPUs and GPUs at 900 GB/s, and with data transfers consuming just 1.3 picojoules per bit, it is five times as energy-efficient as the current PCIe Gen 5 standard.
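The quoted figures make a quick sanity check possible: multiplying the link bandwidth by the energy per bit gives the power drawn by data movement at full rate. A minimal sketch, using only the numbers stated above:

```python
# Back-of-envelope power estimate for an NVLink-C2C link,
# using the figures quoted above (900 GB/s, 1.3 pJ/bit).

BYTES_PER_SEC = 900e9      # 900 GB/s link bandwidth
PJ_PER_BIT = 1.3           # energy cost of moving one bit

bits_per_sec = BYTES_PER_SEC * 8
power_watts = bits_per_sec * PJ_PER_BIT * 1e-12  # pJ -> J

print(f"Link power at full rate: {power_watts:.2f} W")  # → 9.36 W
```

Under ten watts to saturate a 900 GB/s chip-to-chip link is what makes the "five times PCIe Gen 5 efficiency" claim meaningful at data-centre scale.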

Image credit: wccftech

The NVIDIA Grace CPU, which packs 144 Arm Neoverse cores, is created by joining two processor dies with NVLink-C2C. It is a CPU designed to address the most pressing computing challenges facing the world today.

The Grace CPU uses LPDDR5X memory to maximise performance, delivering a terabyte per second of memory bandwidth while keeping the total power consumption of the complex at 500 watts.

The NVIDIA NVSwitch combines several servers into a single AI supercomputer using NVLink, an interconnect that runs at 900 gigabytes per second, more than seven times the bandwidth of PCIe 5.0.

Using NVSwitch, users can connect 32 NVIDIA DGX H100 systems into an AI supercomputer that delivers an exaflop of peak AI performance.
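The exaflop figure can be unpacked with simple arithmetic. A rough sketch, assuming the standard eight GPUs per DGX H100 system (a count not stated in the article itself):

```python
# Rough arithmetic behind the exaflop claim for 32 DGX H100 systems.
# Assumption: 8 Hopper GPUs per DGX H100 (standard configuration,
# not stated in the article).

SYSTEMS = 32
GPUS_PER_DGX = 8
TOTAL_AI_FLOPS = 1e18      # one exaflop of peak AI performance

gpus = SYSTEMS * GPUS_PER_DGX
per_gpu_pflops = TOTAL_AI_FLOPS / gpus / 1e15

print(gpus)                       # → 256
print(f"{per_gpu_pflops:.2f}")    # → 3.91 (PFLOPS per GPU)
```

That works out to 256 GPUs at roughly 4 petaflops of peak AI throughput each, which matches the 256-GPU system scale the NVSwitch talk describes below.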

Two of NVIDIA's seasoned engineers, Alexander Ishii and Ryan Wells, will describe how the switch lets customers build systems with up to 256 GPUs to handle demanding workloads such as training AI models with more than 1 trillion parameters.

