NVIDIA H200 NVL
About
The GPU for Generative AI and HPC
The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200’s larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for HPC workloads.
Specification
H200 NVL
FP64
34 TFLOPS
FP64 Tensor Core
67 TFLOPS
FP32
67 TFLOPS
TF32 Tensor Core
989 TFLOPS²
BFLOAT16 Tensor Core
1,979 TFLOPS²
FP16 Tensor Core
1,979 TFLOPS²
FP8 Tensor Core
3,958 TFLOPS²
INT8 Tensor Core
3,958 TFLOPS²
GPU memory
141GB
GPU memory bandwidth
4.8TB/s
Decoders
7 NVDEC
7 JPEG
Confidential Computing
Supported
Max thermal design power (TDP)
Up to 600W (configurable)
Multi-Instance GPUs
Up to 7 MIGs @16.5GB each
Form factor
PCIe
Interconnect
2- or 4-way NVIDIA NVLink bridge: 900GB/s
PCIe Gen5: 128GB/s
Server options
NVIDIA MGX™ H200 NVL partner and NVIDIA-Certified Systems with up to 8 GPUs
NVIDIA AI Enterprise
Included
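The Multi-Instance GPU entry above means a single H200 NVL can be partitioned into up to seven isolated GPU instances of about 16.5GB each. A minimal sketch of that partitioning with `nvidia-smi` (run as root on a host with the card installed; the exact MIG profile names depend on the installed driver, so list them first rather than assuming one — the profile name below is a placeholder):

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this driver exposes for the card
nvidia-smi mig -i 0 -lgip

# Create seven single-slice GPU instances plus their compute instances (-C),
# substituting the 1g profile name reported by the listing above for <profile>
nvidia-smi mig -i 0 -cgi <profile>,<profile>,<profile>,<profile>,<profile>,<profile>,<profile> -C
```

Once created, each MIG instance appears as its own device to CUDA applications and can be assigned independently, which is how the "up to 7 MIGs" figure is realized in practice.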
You May Also Like
Related products
NVIDIA JETSON™ AGX XAVIER DEVELOPER KIT
SKU: N/A
- GPU Memory: 32GB
- CUDA Cores: 512-core Volta GPU with Tensor Cores
- CPU: 8-core ARM v8.2 64-bit
NVIDIA RTX A4500
SKU: 900-5G132-2550-000
- 20GB GDDR6 with error-correcting code (ECC)
- 4x DisplayPort 1.4*
- PCI Express Gen 4 x16
NVIDIA H100
SKU: 900-21010-0000-000
Take an order-of-magnitude leap in accelerated computing. The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability, and security for every workload. With the NVIDIA® NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads, while the dedicated Transformer Engine supports trillion-parameter language models. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading ...
Our Partners