NVIDIA GPU AI Servers
-
NVIDIA DGX B200
SKU: 900-2G133-0010-000-1
- 8x NVIDIA Blackwell GPUs
- 1,440GB total GPU memory
- 72 petaFLOPS training and 144 petaFLOPS inference
- 2 Intel® Xeon® Platinum 8570 Processors
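As a quick arithmetic check of the totals above: a sketch assuming 180GB of HBM3e per Blackwell GPU, which is the per-GPU figure implied by the quoted 1,440GB system total (the per-GPU capacity is an inference, not quoted in the entry).

```python
# Sanity-check the DGX B200 memory total quoted above.
# Assumes 180 GB of HBM3e per Blackwell GPU, as implied by the 1,440 GB total.
gpus = 8
hbm_per_gpu_gb = 180
total_gb = gpus * hbm_per_gpu_gb
print(total_gb)  # prints 1440, matching the quoted system total
```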
-
NVIDIA DGX H200
SKU: 900-2G133-0010-000-1-1
- 8x NVIDIA H200 GPUs with 1,128GB of total GPU memory
- 4x NVIDIA NVSwitches™
- 10x NVIDIA ConnectX®-7 400Gb/s Network Interface
- Dual Intel Xeon Platinum 8480C processors
- 30TB NVMe SSD
-
NVIDIA DGX GH200
SKU: DGX GH200-1
- 32x NVIDIA Grace Hopper Superchips, interconnected with NVIDIA NVLink
- Massive, shared GPU memory space of 19.5TB
- 900 gigabytes per second (GB/s) GPU-to-GPU bandwidth
- 128 petaFLOPS of FP8 AI performance
-
NVIDIA DGX H100
SKU: DGX H100
- 8x NVIDIA H100 GPUs with 640GB of total GPU memory
- 4x NVIDIA NVSwitches
- 8x NVIDIA ConnectX-7 and 2x NVIDIA BlueField DPU 400Gb/s network interfaces
- Dual x86 CPUs and 2TB of system memory
- 30TB NVMe SSD
-
NVIDIA DGX A100
SKU: DGXA-2530A+P2CMI00
- 8x NVIDIA A100 GPUs with 320GB total GPU memory
- 6x NVIDIA NVSwitches
- 9x Mellanox ConnectX-6 200Gb/s network interfaces
- Dual 64-core AMD CPUs and 1TB system memory
- 15TB Gen4 NVMe SSD
-
NVIDIA DGX STATION A100 320GB/160GB
SKU: DGXS-2080C+P2CMI00
- 2.5 petaFLOPS of performance
- World-class AI platform, with no complicated installation or IT help needed
- Server-grade, plug-and-go, and doesn’t require data center power and cooling
- 4 fully interconnected NVIDIA A100 Tensor Core GPUs and up to 320 gigabytes (GB) of GPU memory
-
NVIDIA HGX A100 (8-GPU)
SKU: N/A
- 8x NVIDIA A100 GPUs with 320GB total GPU memory
- 6x NVIDIA NVSwitches
- 320GB memory
- 4.8 TB/s total aggregate bandwidth
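The 4.8 TB/s aggregate figure is consistent with per-GPU NVLink bandwidth: each A100 exposes 600 GB/s of total NVLink bandwidth (12 third-generation links at 50 GB/s each). A rough check, with the per-link numbers taken from the A100 spec rather than this listing:

```python
# Rough check of the aggregate bandwidth quoted for the 8-GPU HGX A100 baseboard.
# Each A100 has 12 NVLink 3.0 links at 50 GB/s each (600 GB/s per GPU).
gpus = 8
links_per_gpu = 12
gb_s_per_link = 50
per_gpu_gb_s = links_per_gpu * gb_s_per_link   # 600 GB/s per GPU
aggregate_tb_s = gpus * per_gpu_gb_s / 1000    # convert GB/s to TB/s
print(aggregate_tb_s)  # prints 4.8, matching the quoted aggregate
```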
-
10 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMX-R4051
- GPU: 10x 2-slot NVIDIA H100, A100, L40, A40, or RTX6000 GPUs
- CPU: 2x 4th Gen Intel Xeon Scalable processors
- System Memory: 8 TB (32 DIMM)
- Storage: NVMe
-
8 H100 GPU 2 EPYC AI SYSTEM
SKU: SMX-R4041
- Powered by 8x NVIDIA H100, A100, L40, A40, RTX6000, or RTXA5000 GPUs
- 2x AMD EPYC 9004 processors with up to 128 Zen 4c cores each
-
8 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-GS4845
- GPU: 8x NVIDIA A100, V100, RTXA6000, RTX8000, or A40
- NVLink: 4 NVLink bridges
- CPU: 128 cores (2x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- Type A: 12x 3.5" SATA/NVMe U.2 hot-swap bays
- Type B: 24x 2.5" SATA/SAS/NVMe U.2 hot-swap bays
-
10 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXB7119FT83
- GPU: 10x NVIDIA A100, A40, A30, V100, RTXA6000, or RTXA5000
- NVLink: 4 NVLink bridges
- CPU: 80 cores (2x Intel Xeon Scalable), single/dual root
- System Memory: 8 TB (32 DIMM)
- Storage: 12x 3.5" SATA SSD/HDD or NVMe PCIe U.2
-
2 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8252T75
- GPU: 2x NVIDIA RTXA6000, A40, RTX8000, or T4
- CPU: 128 cores (2x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 4 TB (32 DIMM)
- 26x 2.5" SATA/NVMe U.2 SSD hot-swap bays, 2x NVMe M.2 SSD
-
4 GPU 1 EPYC DEEP LEARNING AI SERVER
SKU: SMXB8021G88
- GPU: 4x NVIDIA A100, V100, RTXA6000, A40, RTX8000, or T4
- CPU: 64 cores (1x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 2x 2.5" SATA SSD hot-swap bays, 2x NVMe M.2 SSD
- 1U rackmount
-
4 GPU 1 XEON DEEP LEARNING AI SERVER
SKU: SMXB5631G88
- GPU: 4x NVIDIA A100, V100, RTXA6000, A40, RTX8000, or T4
- CPU: 28 cores (1x Intel Xeon Scalable)
- System Memory: 1.5 TB (12 DIMM)
- Storage: 2x 2.5" SSD, 2x NVMe M.2 SSD
- 1U rackmount
-
4 GPU 2 EPYC DEEP LEARNING AI SERVER
SKU: SMX-B8251
- GPU: 4x NVIDIA A100, V100, RTXA6000, A40, RTX8000, or T4
- NVLink: 2 to 6 NVLink bridges
- CPU: 128 cores (2x AMD EPYC Rome)
- PCIe Gen 4.0 support
- System Memory: 2 TB (16 DIMM)
- 8x 3.5" SATA/NVMe U.2 hot-swap bays
-
4 GPU 2 XEON DEEP LEARNING AI SERVER
SKU: SMXESC4000G4
- GPU: 4x NVIDIA A100, V100, RTXA6000, A40, RTX8000, or T4
- NVLink: 2 NVLink bridges
- CPU: 56 cores (2x Intel Xeon Scalable), single/dual root
- System Memory: 2 TB (16 DIMM)
- Storage: 8x 3.5" SATA SSD/HDD or NVMe U.2
-
4 RTX6000 AI WORKSTATION
SKU: SMX-DX4
- GPU: 4x NVIDIA RTX6000 Ada
- Quiet operation for office/lab environments
-
SMX STATION A100
SKU: SMX STATION A100
- GPUs: 4x NVIDIA A100 80GB GPUs
- GPU Memory: 320 GB total
- NVLink: 6 NVLink bridges, up to 600 GB/s
- System Power Usage: 1.5 kW at 100–240 Vac
- CPU: Single AMD EPYC 7742, 64 cores, 2.25 GHz (base) to 3.4 GHz (max boost)
- System Memory: 512 GB DDR4
- Networking: dual-port 10GBASE-T Ethernet LAN, dual-port 1GBASE-T Ethernet, BMC management port
- Storage: OS: 1x 1.92 TB NVMe drive; internal storage: 7.68 TB U.2 NVMe drive
- Software: Ubuntu Linux OS, NGC package
-
2 GPU 2 EPYC DEEP LEARNING AI WORKSTATION
SKU: SMX-T0046
- GPU: 2x NVIDIA RTX6000 Ada / RTXA6000 / L40 / A40
- CPU: 2x AMD EPYC 9004 (Genoa), up to 256 cores total
- 1 NVLink bridge (optional)
- System Memory: 3 TB (24 DIMM)
- NVMe/SATA SSD
-
4 GPU 1 EPYC AI WORKSTATION
SKU: SMX-SE4
- GPU: 4x dual-slot, actively cooled GPUs
- CPU: AMD EPYC 7003, 64 cores
- System Memory: 1 TB (8 DIMM)
- NVMe SSD
-
4 GPU 1 THREADRIPPER DEEP LEARNING AI WORKSTATION
SKU: SMX-ST4
- GPU: 4x RTX6000 Ada
- CPU: AMD Threadripper Pro, 64 cores
- RAM: 1 TB (8x 128GB)
- NVMe SSD
-
4 GPU 1 XEON DEEP LEARNING AI WORKSTATION
SKU: SMX-SX4
- GPU: 4x actively cooled GPUs
- CPU: Intel Xeon W, 24 cores
- System Memory: 1 TB (8 DIMM)
- NVMe SSD
-
4 NODES in 2U EPYC Server
SKU: N/A
- 2U chassis with 4 nodes, supporting 16x 2.5" HDDs; 1600W redundant (1+1) PSU
- Single AMD EPYC™ 7002 family processor per node
- 8 DIMM slots; supports eight-channel DDR4 3200/2933 RDIMM (modules up to 64GB) and LRDIMM (modules up to 256GB)
- Supports 4x 2.5" HDD/SSD per node (all SATA, or 2x NVMe + 2x SATA)
- Supports 2x PCIe 4.0 x16 and 2x M.2 slots per node
- Integrated IPMI 2.0 and KVM with dedicated LAN
- Supports OCP 3.0 PCIe 4.0 x16 mezzanine card
-
4 Nodes in 2U Xeon Servers
SKU: N/A
- 2U chassis with 4 nodes, supporting 16x 2.5" HDDs; 1600W redundant (1+1) PSU
- Dual-socket 1st or 2nd Gen Intel Xeon Scalable processors
- Supports six-channel DDR4 2666/2400 RDIMM/LRDIMM, 16x DIMM slots
- Supports 4x 2.5" HDD/SSD per node (all SATA, or 2x NVMe + 2x SATA)
- Supports 2x PCIe 3.0 x16 per node; supports OCP 3.0 PCIe 3.0 x16 mezzanine card
-
NVIDIA DGX-2
SKU: N/A
- World's first 2-petaFLOPS system
- Enterprise-grade AI infrastructure
- 12 total NVSwitches + 8x EDR InfiniBand/100GbE Ethernet
- 2x Intel Platinum CPUs + 1.5 TB system memory + dual 10/25GbE Ethernet
- 30 TB NVMe SSD internal storage
-
NVIDIA DGX-1
SKU: N/A
- Effortless productivity
- NVIDIA Tesla V100 + next-generation NVIDIA NVLink
- Two Intel Xeon CPUs + quad EDR IB
- Three-rack-unit enclosure
-
NVIDIA DGX Station
SKU: DGXS-2511C+P2CMI00
- Four NVIDIA Tesla V100 GPUs
- Next-generation NVIDIA NVLink
- Water cooling
- 1/20 the power consumption
- Pre-installed standard Ubuntu 14.04 with Caffe, Torch, Theano, BIDMach, cuDNN v2, and CUDA 8.0