
NVIDIA announces brand new DGX Station 320G, featuring Quad Ampere A100 GPUs with 320 GB Memory

NVIDIA has just announced the DGX Station 320G AI server, based on its Ampere A100 Tensor Core GPUs. Designed to be the fastest server-in-a-box dedicated to AI research, the DGX Station 320G features updated NVIDIA A100 Tensor Core GPUs that bring double the memory and multiple petaflops of AI horsepower.

NVIDIA Unveils DGX Station 320G AI Server With AMD EPYC 64-Core CPU, 320 GB Memory, and Quad NVIDIA Ampere A100 GPUs

The NVIDIA DGX Station 320G is aimed at the artificial intelligence market, accelerating machine learning and data science performance for research facilities, corporate offices, labs, or home offices everywhere.

AI leaders around the world that have adopted the DGX Station to power AI and data science across industries include:

  • BMW Group Production uses NVIDIA DGX Stations to explore insights faster while developing and deploying AI models that improve operations.
  • The German AI research center DFKI is using the NVIDIA DGX Station to build models that tackle critical challenges for society and industry.
  • Lockheed Martin uses the NVIDIA DGX Station to develop AI models that draw on sensor data and service logs to predict maintenance needs, increasing worker safety, improving manufacturing uptime, and reducing operational costs.
  • NTT Docomo, Japan’s leading mobile operator with over 79 million subscribers, uses the NVIDIA DGX Station to develop innovative AI-driven services such as its image recognition solution.
Image: NVIDIA DGX Station A100 (credit: WCCFTECH)

NVIDIA DGX Station 320G System Specifications

Coming to the specifications, the NVIDIA DGX Station 320G is powered by a total of four updated A100 Tensor Core GPUs, each carrying twice the memory of the original.

In the DGX Station A100, each NVIDIA A100 Tensor Core GPU comes packed with twice the memory of the original A100, at 80 GB of HBM2e. According to a report by WCCFTECH, this means the DGX Station “has a total of 320 GB of total available capacity while fully supporting MIG (Multi-Instance GPU protocol) and 3rd Gen NVLink support, offering 200 GB/s of bidirectional bandwidth between any GPU pair & 3 times faster interconnect speeds than PCIe Gen 4.”
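
As a rough illustration of that 4 x 80 GB = 320 GB figure, the per-GPU memory can be queried through NVIDIA's management library. The snippet below is a minimal sketch, assuming the nvidia-ml-py bindings (import name `pynvml`) are installed and the station's four A100s are visible to the driver; it is not taken from NVIDIA's own tooling.

```python
# Hedged sketch: summing HBM2e capacity across the GPUs with pynvml.
# Assumes the nvidia-ml-py package (import name: pynvml) is installed.
import pynvml

pynvml.nvmlInit()
try:
    total_bytes = 0
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        total_bytes += mem.total
        print(f"GPU {i}: {mem.total / 1e9:.0f} GB")
    # On a DGX Station 320G this should report roughly 4 x 80 GB = 320 GB.
    print(f"Total: {total_bytes / 1e9:.0f} GB")
finally:
    pynvml.nvmlShutdown()
```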


“The system itself houses an AMD EPYC Rome 7742 64 Core CPU with full PCIe Gen 4 support, up to 512 GB of dedicated system memory, 1.92 TB NVME M.2 SSD storage for OS, and up to 7.68 TB NVME U.2 SSD storage for data cache.” For connectivity, the system carries two 10 GbE LAN controllers plus a single 1 GbE LAN port for remote management. A discrete DGX Display Adapter card provides the display output with up to 4K resolution support; this add-in card features its own active cooling solution.


The A100 GPUs are placed on the rear side of the chassis. All four GPUs and the CPU are cooled by a whisper-quiet, maintenance-free refrigerant cooling system. The cooler operates at a silent 37 dB, and the whole system is powered by a 1500 W PSU.


NVIDIA DGX Station A100 System Performance

As for performance, the DGX Station A100 delivers 5 petaOPS of INT8 inferencing horsepower and 2.5 petaFLOPS of AI training power. The DGX Station A100 is also the only workstation of its kind to support MIG, letting users slice up individual GPUs so that simultaneous workloads run faster and more efficiently, as sketched below.
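
For illustration, here is a minimal sketch of how one of the station's A100s could be partitioned with MIG using the standard `nvidia-smi` tool. This is an assumption-laden example, not NVIDIA's documented workflow for the DGX Station: it presumes root privileges, a recent driver, and that the `1g.10gb` profile name applies to the 80 GB A100; the available profiles should be verified with `nvidia-smi mig -lgip` on the actual system.

```python
# Hedged sketch: partitioning one A100 into MIG instances via nvidia-smi.
# Assumes root privileges, a recent NVIDIA driver, and that "1g.10gb" is a
# valid GPU instance profile on the 80 GB A100 -- verify with `mig -lgip`.
import subprocess

def run(cmd: str) -> str:
    """Run a shell command and return its stdout, raising on failure."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

# Enable MIG mode on GPU 0 (the GPU may need to be idle or reset first).
run("nvidia-smi -i 0 -mig 1")

# List the GPU instance profiles the installed driver actually offers.
print(run("nvidia-smi mig -i 0 -lgip"))

# Create two small GPU instances plus their default compute instances.
run("nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb -C")

# Show the resulting MIG devices; each can run an isolated workload.
print(run("nvidia-smi -L"))
```

Each MIG device then appears as its own CUDA device, selectable via `CUDA_VISIBLE_DEVICES`, which is what allows several smaller jobs to share one physical A100 without interfering with each other.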


The new version offers a “3.17x increase in Training performance, 4.35x increase in Inference performance, and 1.85x increase in HPC oriented workloads” over the original DGX Station. 

Advancing AI with DGX SuperPOD

DGX SuperPODs are AI supercomputers featuring 20 or more NVIDIA DGX A100 systems and NVIDIA InfiniBand HDR networking. Among the latest to deploy DGX SuperPODs to power new AI solutions and services are Sony Group Corporation; NAVER, a leading internet technology company in Korea and Japan; Recursion, a digital-biology company; and MTS, Russia’s largest telecommunications company.

NVIDIA DGX Station 320G System Availability

The NVIDIA DGX Station 320G will be available later this year for a subscription of $9,000 per month or at a purchase price of $149,000. Cloud-native, multi-tenant NVIDIA DGX SuperPODs will be available in Q2 through NVIDIA’s global partners, which can provide pricing to qualified customers upon request. NVIDIA Base Command will also be available starting in Q2.

