Graphics Processing Units, or GPUs, were traditionally used to render graphics on screen. Optimized for throughput, they can render millions of pixels at once by performing the same computation on millions of individual data elements in parallel. This immense processing power is now being harnessed for general-purpose applications as well. Parallel programming is becoming increasingly important in a world where CPUs are getting harder to optimize for speed and energy, and single-core performance is approaching its limits.
For a beginner looking to get started with parallel programming, NVIDIA's parallel programming framework, known as CUDA, may be a good place to start. NVIDIA GPUs, found in a large share of PCs today, can be used for general-purpose parallel programming by writing CUDA applications in languages such as C, C++, and Fortran.
This post aims to help those wanting to begin CUDA C/C++ programming on their own Linux machines get off to a smooth start.
The following NVIDIA resources will come in handy:
- NVIDIA CUDA Getting Started Guide for Linux
- List of CUDA-capable NVIDIA GPUs
- CUDA Toolkit Downloads page
Drawing on the resources linked above, the fundamental steps are as follows:
- Verifying the hardware details of your machine – which GPU it has, and whether that GPU appears on the list of CUDA-capable NVIDIA GPUs linked above.
- Verifying that your OS can support CUDA programming – the Getting Started Guide for the latest CUDA version will tell you whether your OS is supported or not.
- Verifying that gcc is installed on your system.
- Download CUDA from the Toolkit Downloads page.
- Follow the installation instructions from the NVIDIA Getting Started Guide.
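The verification steps above can be sketched as a few shell commands. This is only a sketch: the exact output varies by machine, and lspci (from the pciutils package) may need to be installed first.

```shell
# Check for an NVIDIA GPU (lspci lists PCI devices).
lspci | grep -i nvidia || echo "no NVIDIA GPU detected"

# Check the machine architecture; CUDA needs a 64-bit OS (x86_64).
uname -m

# Check that gcc is installed and report its version.
command -v gcc >/dev/null && gcc --version | head -n 1 || echo "gcc not found"
```

If the first command prints nothing NVIDIA-related, stop here: CUDA will not work on that machine.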
A word of warning!
This is where you have to be extremely careful. Downloading CUDA may be smooth enough, but installing it is quite another matter. Depending on the installation method you choose, a particular version of the NVIDIA GPU driver may or may not be downloaded and installed automatically. If that driver is incompatible with your system, you can be left with a black screen and an unbootable machine. Fixing it can mean many hours of Googling for solutions from some other device, and you might even end up reformatting the machine.

It is highly advisable to create a separate disk partition, install a CUDA-capable OS on that partition, and do all your CUDA-related work only there. This way you protect your current OS from corruption. I speak from experience.
Testing CUDA Post-Installation
Once installed, writing a Hello World program is easy enough. If you have installed CUDA correctly (kudos to you; I spent several days and crashed my system multiple times), the command nvcc --version should print the CUDA compiler version number.
To write a Hello World program to test your newly CUDA-enabled machine,
- Create a file with .cu extension, say helloWorld.cu
- Copy and paste the Hello World CUDA code from this link: The Real Hello World for CUDA
- From the terminal, run nvcc -o helloWorld helloWorld.cu
- Run ./helloWorld
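In case the link above is unavailable, a minimal CUDA Hello World along the following lines will serve the same purpose. This is a sketch of my own, not the exact code behind the link; it prints from the GPU itself, which requires a device of compute capability 2.0 or higher.

```cuda
#include <cstdio>

// Kernel: runs on the GPU. Each thread prints its own thread index.
__global__ void helloKernel() {
    printf("Hello World from thread %d\n", threadIdx.x);
}

int main() {
    // Launch the kernel with 1 block of 4 threads.
    helloKernel<<<1, 4>>>();
    // Wait for the GPU to finish before the program exits,
    // otherwise the process may end before anything is printed.
    cudaDeviceSynchronize();
    return 0;
}
```

Compile and run it exactly as in the steps above: nvcc -o helloWorld helloWorld.cu, then ./helloWorld.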
If you get “Hello World” as the output, you are all set to begin CUDA programming and can start writing applications of your choice. The set of NVIDIA CUDA Samples is a great repository of example CUDA code, and working through them will help you get started. They should already be on your system after the CUDA installation.
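To find the samples, a reasonable first guess is the default toolkit prefix. Note that /usr/local/cuda is an assumption: your install path may differ, and recent toolkit versions distribute the samples separately on GitHub instead.

```shell
# List the samples shipped with the toolkit, if present at the
# default prefix (an assumption; adjust the path to your install).
ls /usr/local/cuda/samples 2>/dev/null || echo "samples not found here"
```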
Hopefully this post will help people like me who had no idea where to start. Happy CUDA programming!