NVIDIA JETSON™ TX2
- Memory: 8GB 128-bit LPDDR4
- GPU: NVIDIA Pascal™ architecture GPU with 256 NVIDIA CUDA® cores
- CPU: Dual-Core NVIDIA Denver 2 64-Bit CPU, Quad-Core ARM® Cortex®-A57 MPCore
About
Jetson TX2 is the fastest, most power-efficient embedded AI computing device. This 7.5-watt supercomputer on a module brings true AI computing to the edge. It’s built around an NVIDIA Pascal™-family GPU and loaded with 8GB of memory and 59.7GB/s of memory bandwidth. It features a variety of standard hardware interfaces that make it easy to integrate into a wide range of products and form factors.
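The quoted 59.7GB/s figure follows directly from the 128-bit memory interface listed above. A minimal sketch of that arithmetic, assuming an LPDDR4 transfer rate of 3733 MT/s (a commonly cited figure for this module, not stated on this page):

```python
# Theoretical peak memory bandwidth = bus width (bytes) x transfer rate.
# The 128-bit bus width comes from the spec above; the 3733 MT/s LPDDR4
# transfer rate is an assumption for illustration.
bus_width_bytes = 128 // 8          # 128-bit interface -> 16 bytes per transfer
transfers_per_sec = 3733e6          # assumed LPDDR4-3733
bandwidth_gb_s = bus_width_bytes * transfers_per_sec / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s") # ~59.7 GB/s, matching the quoted figure
```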
Specifications
GPU Architecture
256 core NVIDIA Pascal
CPU
Dual-core NVIDIA Denver 2 64-bit CPU + quad-core ARM® Cortex®-A57 complex
System Memory
8 GB 128-bit LPDDR4 Memory
Storage
32 GB eMMC 5.1 Flash Storage
Network
10/100/1000BASE-T Ethernet
Connectivity to 802.11ac Wi-Fi and Bluetooth-Enabled Devices
Camera Module
5 MP Fixed Focus MIPI CSI Camera
Power Input
5.5V – 19.6V
Applications
Intelligent video analytics, drones, robotics, industrial automation, gaming, and more.
You May Also Like
- NVIDIA RTX 4000 SFF Ada Generation
  SKU: 900-5G133-2550-000-1
  Built on the NVIDIA Ada Lovelace architecture, the RTX 4000 SFF combines 48 third-generation RT Cores, 192 fourth-generation Tensor Cores, and 6,144 CUDA® cores with 20GB of error correction code (ECC) graphics memory. The RTX 4000 SFF delivers incredible acceleration for rendering, AI, graphics, and compute workloads.
- NVIDIA RTX A6000
  SKU: 900-5G133-2500-000
  - GPU Memory: 48 GB GDDR6 with error-correcting code (ECC)
  - CUDA Cores: 10,752
  - PCI Express Gen 4
- NVIDIA H200 NVL
  SKU: 900-21010-0040-000
  The GPU for generative AI and HPC. The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200’s larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for ...