NVIDIA A100 TENSOR CORE GPU

  • GPU Memory: 40 GB
  • Peak FP16 Tensor Core: 312 TF
  • System Interface: 4/8 SXM on NVIDIA HGX A100

About

The Most Powerful End-to-End AI and HPC Data Center Platform

A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.

Specifications

  • Peak FP64: 9.7 TF
  • Peak FP64 Tensor Core: 19.5 TF
  • Peak FP32: 19.5 TF
  • Peak TF32 Tensor Core: 156 TF | 312 TF*
  • Peak BFLOAT16 Tensor Core: 312 TF | 624 TF*
  • Peak FP16 Tensor Core: 312 TF | 624 TF*
  • Peak INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
  • Peak INT4 Tensor Core: 1,248 TOPS | 2,496 TOPS*
  • GPU Memory: 40 GB
  • GPU Memory Bandwidth: 1,555 GB/s
  • Interconnect: NVIDIA NVLink 600 GB/s; PCIe Gen4 64 GB/s
  • Multi-Instance GPU: Various instance sizes, with up to 7 MIGs @ 5 GB each
  • Form Factor: 4/8 SXM on NVIDIA HGX™ A100
  • Max TDP Power: 400 W

  * With sparsity
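
As a quick sanity check of the figures above, the short CUDA sketch below (not part of the original datasheet) queries the installed GPU with cudaGetDeviceProperties and derives peak memory bandwidth from the reported memory clock and bus width; on an A100 SXM this works out to roughly the 1,555 GB/s listed.

// Hedged sketch: report basic device properties and estimated peak memory bandwidth.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }

    // totalGlobalMem is in bytes; an A100 40 GB board reports roughly 40 GB here.
    printf("Device:              %s\n", prop.name);
    printf("GPU memory:          %.1f GB\n", prop.totalGlobalMem / 1e9);

    // Peak bandwidth = 2 (DDR) x memory clock x bus width.
    // memoryClockRate is in kHz, memoryBusWidth in bits.
    double peak_bw_gbs = 2.0 * prop.memoryClockRate * 1e3
                       * (prop.memoryBusWidth / 8.0) / 1e9;
    printf("Peak mem bandwidth:  %.0f GB/s\n", peak_bw_gbs);   // ~1,555 GB/s on A100 SXM

    printf("SM count:            %d\n", prop.multiProcessorCount);
    printf("Compute capability:  %d.%d\n", prop.major, prop.minor);
    return 0;
}

Compile with nvcc and run on the target node to confirm the board matches the datasheet values.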
