NVIDIA’s GTC 2020 ‘Get Amped’ keynote is closing in, and details of a new DGX system powered by the next-generation Ampere GPU have been spotted. NVIDIA appears to have registered a trademark for its new DGX system over at Justia, which was spotted by Komachi (via Videocardz). For an HPC product line, it makes a lot of sense for NVIDIA to file a trademark for a new DGX system that will go on to house its next-generation graphics processing units.
NVIDIA’s DGX A100 Trademark Spotted – Powered by The Next-Generation Ampere GA100 Flagship GPU
The specific name for the DGX system is DGX A100, which itself reveals quite a bit. The DGX line is designed specifically for the deep learning and HPC community, offering supercomputing capabilities in a compact, ready-to-deploy form factor. NVIDIA has released DGX solutions based on its Pascal and Volta GPUs, but with the release of the Ampere GPU imminent, a new DGX solution has to be designed.
The Volta line of DGX systems was expanded to offer more options to HPC users. We saw several variants, ranging from the DGX Station with a total of four Tesla V100 GPUs all the way to the 16 Tesla V100-housing DGX-2 monster, which NVIDIA had billed as the “World’s Largest GPU”.
With the Ampere GPU, NVIDIA would be releasing its latest DGX A100 system. The name makes it clear that the system would be based on the GA100 GPU. The GA100 would be the biggest chip in the Ampere lineup and is expected to feature the flagship 128 SM configuration. NVIDIA may start off its Ampere line of DGX systems in a more traditional manner, offering 8-GPU configurations at first and moving on to larger and denser parts later on as yields improve for the new Ampere chips.
While the Ampere GPU would remain the key component of the DGX system, it will be interesting to see where NVIDIA goes with the rest of the configuration of its DGX A100 systems. NVIDIA’s current DGX-2 system makes use of Intel’s Xeon Platinum processors based on the 14nm Skylake architecture. It also features 1.5 TB of system memory and a set of NVSwitches that act as the interconnect fabric between GPUs. NVIDIA would further expand its proprietary NVLINK & NVSwitch interconnect technologies in Ampere-based systems, offering higher bandwidth and tighter links for faster GPU-to-GPU communication than existing products.
It will also be interesting to see whether NVIDIA offers users a choice between an Intel Xeon or an AMD EPYC powered DGX A100 system. AMD’s EPYC CPUs have been winning over several top-tier HPC customers and are being deployed in some of the world’s fastest supercomputers, which will become operational in the coming years.
This would be a great opportunity for NVIDIA to take the lead as the first GPU vendor offering DNN/DL solutions featuring both Intel & AMD HPC chips. We will have to wait and see if this happens, but even if NVIDIA sticks with Intel again, it would make sense from an optimization perspective, as NVIDIA’s previous DGX systems carried a lot of optimizations for Intel’s Xeon CPUs.
NVIDIA Tesla Graphics Cards Comparison
Tesla Graphics Card Name | NVIDIA Tesla M2090 | NVIDIA Tesla K40 | NVIDIA Tesla K80 | NVIDIA Tesla P100 | NVIDIA Tesla V100 | NVIDIA Tesla Next-Gen #1 | NVIDIA Tesla Next-Gen #2 | NVIDIA Tesla Next-Gen #3 |
---|---|---|---|---|---|---|---|---|
GPU Architecture | Fermi | Kepler | Kepler | Pascal | Volta | Ampere? | Ampere? | Ampere? |
GPU Process | 40nm | 28nm | 28nm | 16nm | 12nm | 7nm? | 7nm? | 7nm? |
GPU Name | GF110 | GK110 | GK210 x 2 | GP100 | GV100 | GA100? | GA100? | GA100? |
Die Size | 520mm2 | 561mm2 | 561mm2 | 610mm2 | 815mm2 | TBD | TBD | TBD |
Transistor Count | 3.00 Billion | 7.08 Billion | 7.08 Billion | 15 Billion | 21.1 Billion | TBD | TBD | TBD |
CUDA Cores | 512 CCs (16 CUs) | 2880 CCs (15 CUs) | 2496 CCs (13 CUs) x 2 | 3840 CCs | 5120 CCs | 6912 CCs | 7552 CCs | 7936 CCs |
Core Clock | Up To 650 MHz | Up To 875 MHz | Up To 875 MHz | Up To 1480 MHz | Up To 1455 MHz | 1.08 GHz (Preliminary) | 1.11 GHz (Preliminary) | 1.11 GHz (Preliminary) |
FP32 Compute | 1.33 TFLOPs | 4.29 TFLOPs | 8.74 TFLOPs | 10.6 TFLOPs | 15.0 TFLOPs | ~15 TFLOPs (Preliminary) | ~17 TFLOPs (Preliminary) | ~18 TFLOPs (Preliminary) |
FP64 Compute | 0.66 TFLOPs | 1.43 TFLOPs | 2.91 TFLOPs | 5.30 TFLOPs | 7.50 TFLOPs | TBD | TBD | TBD |
VRAM Size | 6 GB | 12 GB | 12 GB x 2 | 16 GB | 16 GB | 48 GB | 24 GB | 32 GB |
VRAM Type | GDDR5 | GDDR5 | GDDR5 | HBM2 | HBM2 | HBM2e | HBM2e | HBM2e |
VRAM Bus | 384-bit | 384-bit | 384-bit x 2 | 4096-bit | 4096-bit | 4096-bit? | 3072-bit? | 4096-bit? |
VRAM Speed | 3.7 GHz | 6 GHz | 5 GHz | 737 MHz | 878 MHz | 1200 MHz | 1200 MHz | 1200 MHz |
Memory Bandwidth | 177.6 GB/s | 288 GB/s | 240 GB/s | 720 GB/s | 900 GB/s | 1.2 TB/s? | 1.2 TB/s? | 1.2 TB/s? |
Maximum TDP | 250W | 300W | 235W | 300W | 300W | TBD | TBD | TBD |
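For context, the preliminary FP32 figures listed for the rumored next-gen parts line up with a simple peak-throughput estimate of two floating-point operations (one FMA) per CUDA core per clock. The short Python sketch below reproduces that arithmetic; the core counts and clocks are the leaked, unconfirmed values from the table above, and the helper name is purely illustrative.

```python
# Rough peak FP32 estimate: each CUDA core retires one FMA (2 FLOPs) per clock.
def peak_fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * clock_ghz * 2 / 1000  # GFLOPs -> TFLOPs

# Leaked preliminary Ampere configurations from the table above (unconfirmed).
candidates = {
    "Next-Gen #1": (6912, 1.08),
    "Next-Gen #2": (7552, 1.11),
    "Next-Gen #3": (7936, 1.11),
}

for name, (cores, clock) in candidates.items():
    print(f"{name}: ~{peak_fp32_tflops(cores, clock):.1f} TFLOPs FP32")
```

Running this gives roughly 14.9, 16.8 and 17.6 TFLOPs, which is where the ~15, ~17 and ~18 TFLOPs preliminary figures in the table come from.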
NVIDIA’s Ampere GPUs are definitely going to shake things up in the HPC market, with several variants already leaked and performance rated at around 30 TFLOPs (FP32). We will keep you updated as more information surfaces ahead of the 14th of May, when NVIDIA will present its next-gen GPU lineup.