
GPU computing basics

General-purpose graphics processing units (GPGPUs) are now common in many computational workflows. It is important to understand how GPGPUs work from a software perspective and how to improve the performance of your GPU-accelerated codes or software packages. To help you get started, this hands-on workshop introduces GPU computing basics by looking at several NVIDIA GPU architectures available on the RCC clusters. You will also learn GPU programming basics, write simple GPU codes with the NVIDIA CUDA Toolkit, and measure their performance. Finally, you will learn how to submit GPU jobs on the RCC machines.

We will cover topics such as:

  • Basic concepts: 1) how GPUs are designed for parallel processing, 2) how GPUs work with CPUs in a compute node, and 3) the CUDA programming model.

  • Hands-on examples: porting simple CPU codes (vectorAdd and matrix transpose) to the GPU with the NVIDIA CUDA Toolkit and measuring the performance of the GPU code (a minimal vectorAdd sketch follows this list). The code used for this part is available at: https://github.com/rcc-uchicago/GPU-computing

  • GPU usage on Midway: how to compile a GPU code with the NVIDIA CUDA Toolkit on Midway3, and how to submit a Slurm batch job that uses a GPU.
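
For orientation before the session, here is a minimal sketch of the kind of vectorAdd port the hands-on part works toward. It is not the workshop's own code (that lives in the linked repository); the array size and launch parameters are illustrative assumptions.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;            // 1M elements (illustrative size)
        size_t bytes = n * sizeof(float);

        // Host (CPU) arrays.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device (GPU) arrays; copy inputs from host to device.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

        // Copy the result back and spot-check one element.
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expected 3.0)\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

The example also illustrates the CUDA programming model from the first bullet: the host allocates device memory, copies data to the GPU, and launches a grid of thread blocks, with each thread handling one element.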

Objectives:

After the workshop, learners will be able to:

  • understand the basics of GPU computing and programming 

  • measure the performance of GPU-accelerated codes

  • request GPUs for their calculations on the Midway clusters (a sample batch script follows this list)
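
As an illustration of the last objective, here is a sketch of a Slurm batch script that compiles and runs a CUDA code on a Midway3 GPU node. The account, partition, and module names below are placeholders and assumptions; check the RCC user guide for the exact values on Midway3.

    #!/bin/bash
    #SBATCH --job-name=vectorAdd
    #SBATCH --account=pi-yourgroup      # placeholder: your RCC allocation account
    #SBATCH --partition=gpu             # assumed name of the GPU partition
    #SBATCH --gres=gpu:1                # request one GPU
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00

    module load cuda                    # module name/version may differ on Midway3

    nvcc -O2 -o vectorAdd vectorAdd.cu  # compile with the NVIDIA CUDA compiler
    ./vectorAdd

The script is submitted with sbatch, and the GPU requested via --gres=gpu:1 is available to the executable at run time.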

Please bring your laptop. An RCC account and familiarity with the HPC environment at the RCC (compute nodes, login nodes, storage, software modules) are required, as is some programming experience.

Level: Intermediate

Duration: < 2 hours

Date: Thursday, October 10, 2024, 14:00 to 16:00