NVIDIA HGX B200
- HGX B200 8-GPU
- 8x NVIDIA B200 SXM
- NVIDIA NVLink (Fifth generation)
- NVIDIA NVSwitch™ (Fourth generation)
About
Purpose-Built for AI and HPC
AI, complex simulations, and massive datasets require multiple GPUs with extremely fast interconnections and a fully accelerated software stack. The NVIDIA HGX™ AI supercomputing platform brings together the full power of NVIDIA GPUs, NVLink®, NVIDIA networking, and fully optimized AI and high-performance computing (HPC) software stacks to provide the highest application performance and drive the fastest time to insights.
Specification
HGX B200
- GPUs: HGX B200 8-GPU
- Form factor: 8x NVIDIA B200 SXM
- FP4 Tensor Core: 144 PFLOPS
- FP8/FP6 Tensor Core: 72 PFLOPS
- INT8 Tensor Core: 72 POPS
- FP16/BF16 Tensor Core: 36 PFLOPS
- TF32 Tensor Core: 18 PFLOPS
- FP32: 640 TFLOPS
- FP64: 320 TFLOPS
- FP64 Tensor Core: 320 TFLOPS
- Memory: Up to 1.5TB
- NVIDIA NVLink: Fifth generation
- NVIDIA NVSwitch™: Fourth generation
- GPU-to-GPU bandwidth: 1.8TB/s
- Total aggregate bandwidth: 14.4TB/s
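As a quick sanity check on the bandwidth figures above, the 14.4TB/s aggregate is simply the per-GPU NVLink bandwidth multiplied across the eight GPUs on the baseboard. A minimal Python sketch, using only the figures from the specification table:

```python
# NVLink bandwidth sanity check for the HGX B200 8-GPU baseboard.
# Figures are taken from the specification table above.
NUM_GPUS = 8
GPU_TO_GPU_TBPS = 1.8  # fifth-generation NVLink, per GPU

# Each GPU contributes its full NVLink bandwidth to the NVSwitch fabric,
# so the total aggregate bandwidth of the baseboard is:
aggregate_tbps = NUM_GPUS * GPU_TO_GPU_TBPS
print(aggregate_tbps)  # 14.4
```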
You May Also Like
- NVIDIA JETSON™ AGX XAVIER DEVELOPER KIT
  - SKU: N/A
  - GPU Memory: 32GB
  - GPU: 512-core Volta GPU with Tensor Cores
  - CPU: 8-core ARM v8.2 64-bit
- NVIDIA TESLA V100-32GB
  - SKU: 900-2G500-0010-000
  - GPU Memory: 32GB HBM2
  - CUDA Cores: 5120
  - NVIDIA Tensor Cores: 640
  - Single-Precision Performance: 14 TeraFLOPS
- NVIDIA H100
  - SKU: 900-21010-0000-000
  Take an order-of-magnitude leap in accelerated computing. The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability, and security for every workload. With the NVIDIA® NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads, while the dedicated Transformer Engine supports trillion-parameter language models. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading ...