A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.
NVIDIA A100 TENSOR CORE GPU
- GPU Memory: 40 GB
- Peak FP16 Tensor Core: 312 TF
- System Interface: 4/8 SXM on NVIDIA HGX A100
About
The Most Powerful End-to-End AI and HPC Data Center Platform
Specifications
- Peak FP64: 9.7 TF
- Peak FP64 Tensor Core: 19.5 TF
- Peak FP32: 19.5 TF
- Peak TF32 Tensor Core: 156 TF | 312 TF*
- Peak BFLOAT16 Tensor Core: 312 TF | 624 TF*
- Peak FP16 Tensor Core: 312 TF | 624 TF*
- Peak INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
- Peak INT4 Tensor Core: 1,248 TOPS | 2,496 TOPS*
- GPU Memory: 40 GB
- GPU Memory Bandwidth: 1,555 GB/s
- Interconnect: NVIDIA NVLink 600 GB/s; PCIe Gen4 64 GB/s
- Multi-Instance GPU (MIG): Various instance sizes with up to 7 MIGs @ 5 GB
- Form Factor: 4/8 SXM on NVIDIA HGX™ A100
- Max TDP Power: 400 W

* With structured sparsity
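The starred figures above come from the A100's 2:4 structured-sparsity feature, which doubles peak Tensor Core throughput over the dense figures. A minimal sketch of that relationship (the dense values are taken from the table; the 2x factor is the datasheet's sparsity convention):

```python
# Dense peak Tensor Core throughput for the A100, from the table above.
# Values are TFLOPS (TF) or TOPS as noted.
DENSE_PEAKS = {
    "TF32 Tensor Core (TF)": 156,
    "BFLOAT16 Tensor Core (TF)": 312,
    "FP16 Tensor Core (TF)": 312,
    "INT8 Tensor Core (TOPS)": 624,
    "INT4 Tensor Core (TOPS)": 1248,
}

# 2:4 structured sparsity (2 non-zero values in every group of 4)
# doubles effective Tensor Core throughput.
SPARSITY_SPEEDUP = 2

def with_sparsity(dense_peak: float) -> float:
    """Peak throughput when the structured-sparsity path is used."""
    return dense_peak * SPARSITY_SPEEDUP

for name, dense in DENSE_PEAKS.items():
    print(f"{name}: {dense} dense | {with_sparsity(dense)} with sparsity")
```

Running this reproduces the starred column of the table, e.g. 312 TF dense FP16 becomes 624 TF with sparsity.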
Related Products
NVIDIA TESLA P100
- SKU: N/A
- GPU Memory: 16 GB CoWoS HBM2
- CUDA Cores: 3,584
- Single-Precision Performance: 9.3 TeraFLOPS
- System Interface: x16 PCIe Gen3
NVIDIA JETSON™ TX2
- SKU: N/A
- GPU Memory: 8 GB 128-bit LPDDR4
- GPU: 256-core NVIDIA Pascal™ GPU
- CPU: Dual-core NVIDIA Denver 2 64-bit CPU + quad-core ARM® Cortex®-A57 MPCore
NVIDIA RTX A5500
- SKU: 900-5G132-2570-000
- GPU Memory: 24 GB GDDR6 with error correction code (ECC)
- Display Connectors: 4x DisplayPort 1.4*
- System Interface: PCIe Gen 4 x16