About
CUDA is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on its own GPUs (graphics processing units). CUDA enables developers to speed up compute-intensive applications by harnessing the power of GPUs for the parallelizable parts of the computation.
This post is a super simple introduction to CUDA, the popular parallel computing platform and programming model from NVIDIA. I wrote a previous “Easy Introduction” to CUDA in 2013 that has been very popular over the years. But CUDA programming has gotten easier, and GPUs have gotten much faster, so it’s time for an updated (and even easier) introduction.
CUDA C++ is just one of the ways you can create massively parallel applications with CUDA. It lets you use the powerful C++ programming language to develop high performance algorithms accelerated by thousands of parallel threads running on GPUs. Many developers have accelerated their computation- and bandwidth-hungry applications this way, including the libraries and frameworks that underpin the ongoing revolution in artificial intelligence known as Deep Learning.
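To give a concrete flavor of what this looks like, below is a minimal CUDA C++ sketch in the spirit of that introduction: a kernel that adds the elements of two arrays using many parallel GPU threads. The kernel name, array size, and launch configuration are illustrative choices, not details taken from this page.

#include <iostream>
#include <cmath>

// Kernel: runs on the GPU; each thread processes a strided subset of elements.
__global__ void add(int n, float *x, float *y) {
  int index = blockIdx.x * blockDim.x + threadIdx.x;
  int stride = blockDim.x * gridDim.x;
  for (int i = index; i < n; i += stride)
    y[i] = x[i] + y[i];
}

int main() {
  int n = 1 << 20;  // 1M elements (illustrative size)

  // Unified Memory: accessible from both CPU and GPU.
  float *x, *y;
  cudaMallocManaged(&x, n * sizeof(float));
  cudaMallocManaged(&y, n * sizeof(float));

  // Initialize the arrays on the host.
  for (int i = 0; i < n; i++) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }

  // Launch enough blocks of 256 threads to cover all n elements.
  int blockSize = 256;
  int numBlocks = (n + blockSize - 1) / blockSize;
  add<<<numBlocks, blockSize>>>(n, x, y);

  // Wait for the GPU to finish before accessing results on the host.
  cudaDeviceSynchronize();

  // Every element should now be 3.0f.
  float maxError = 0.0f;
  for (int i = 0; i < n; i++)
    maxError = fmax(maxError, fabs(y[i] - 3.0f));
  std::cout << "Max error: " << maxError << std::endl;

  cudaFree(x);
  cudaFree(y);
  return 0;
}

Saved as add.cu and compiled with nvcc (for example, nvcc add.cu -o add), this runs on any CUDA-capable GPU and prints a maximum error of 0.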
Related products

Caffe
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license. Its expressive architecture encourages application and innovation: models and optimization are defined by configuration without hard-coding.

NVIDIA DGX-2
- World's first 2 petaflops system
- Enterprise-grade AI infrastructure
- 12 total NVSwitches + 8 EDR InfiniBand/100 GbE Ethernet
- 2 Intel Platinum CPUs + 1.5 TB system memory + dual 10/25 GbE Ethernet
- 30 TB NVMe SSD internal storage

cuDNN
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. Deep learning researchers and framework developers worldwide rely on cuDNN for high-performance GPU acceleration, allowing them to focus on training neural networks and developing software applications rather than on low-level GPU performance tuning.
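As a rough sketch of how cuDNN is called from CUDA C++, the following applies a ReLU activation forward pass to a small tensor. The tensor shape and the choice of ReLU are illustrative assumptions, not details from this page, and error checking is omitted for brevity.

#include <cudnn.h>
#include <cuda_runtime.h>
#include <iostream>

int main() {
  // Create a cuDNN context handle.
  cudnnHandle_t handle;
  cudnnCreate(&handle);

  // Describe a small NCHW float tensor (illustrative shape: 1x1x4x4).
  const int n = 1, c = 1, h = 4, w = 4;
  cudnnTensorDescriptor_t desc;
  cudnnCreateTensorDescriptor(&desc);
  cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, n, c, h, w);

  // Describe the activation: ReLU, no NaN propagation.
  cudnnActivationDescriptor_t act;
  cudnnCreateActivationDescriptor(&act);
  cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU, CUDNN_NOT_PROPAGATE_NAN, 0.0);

  // Allocate device memory and fill the input with alternating signs.
  const int count = n * c * h * w;
  float host[count];
  for (int i = 0; i < count; i++) host[i] = (i % 2 == 0) ? 1.0f : -1.0f;
  float *x, *y;
  cudaMalloc(&x, count * sizeof(float));
  cudaMalloc(&y, count * sizeof(float));
  cudaMemcpy(x, host, count * sizeof(float), cudaMemcpyHostToDevice);

  // y = ReLU(x): negative inputs become zero.
  const float alpha = 1.0f, beta = 0.0f;
  cudnnActivationForward(handle, act, &alpha, desc, x, &beta, desc, y);

  cudaMemcpy(host, y, count * sizeof(float), cudaMemcpyDeviceToHost);
  std::cout << "y[0] = " << host[0] << ", y[1] = " << host[1] << std::endl;  // expect 1 and 0

  // Release resources.
  cudaFree(x); cudaFree(y);
  cudnnDestroyActivationDescriptor(act);
  cudnnDestroyTensorDescriptor(desc);
  cudnnDestroy(handle);
  return 0;
}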