Course on CUDA Programming
Course on CUDA Programming on NVIDIA GPUs, July 20-24, 2026
The course will be taught by Prof. Mike Giles and Prof. Wes Armour. They have both used CUDA in their research for many years, and set up JADE, the first national GPU HPC facility for Machine Learning.

Online registration should be set up by the end of March 2026, with a link from this webpage.
This is a one-week hands-on course for students, postdocs, academics and others who want to learn how to develop applications to run on NVIDIA GPUs using the CUDA programming environment. All that will be assumed is some proficiency with C and basic C++ programming; no prior experience with parallel computing is assumed.

The course consists of approximately 3 hours of lectures and 4 hours of practicals each day. The aim is that by the end of the course you will be able to write relatively simple programs, and will be confident and able to continue learning by studying the CUDA code samples provided by NVIDIA on GitHub.

All attendees should bring a laptop to access the GPU servers which will be used for the practicals.

The costs for the course are:
- free for everyone in Oxford (due to central funding)
- £250 for those from other UK universities
- £500 for those from UK government labs, UK not-for-profit organisations, and foreign universities
- £2500 for those from industry and foreign government labs
Anyone whose status does not fit one of the categories above, including those outside the UK who are not from a university, company or government lab, should contact me (mike.giles@maths.ox.ac.uk) to discuss the appropriate fee category.

The intention is that these costs should not deter anyone from attending the course. The higher fees for certain participants reflect the fact that they will already be paying more for their travel and accommodation, and/or that their organisations will be paying more for their time spent attending the course; they also reflect the UK funding for the facilities being used.
Venue
The lectures and practicals will all take place in Lecture Theatre L1 downstairs in the Mathematical Institute. Attendees should bring laptops for accessing the remote Linux servers used in the practicals. Please bring your laptop fully charged; we will provide charging points as far as possible.
Travel to Oxford
For those coming to Oxford, especially from abroad, there is travel advice here.
Accommodation and food
Those attending the course must arrange their own accommodation. The options below are all within a few minutes' walk (or a short bus ride), and are listed roughly in order of increasing cost:
- University Rooms (St. Anne's, Somerville and Keble colleges are the closest)
- Premier Inn -- Westgate (15-20 minute walk)
- Travelodge -- Peartree (15 minutes by bus)
- easyHotel -- Oxford (10 minutes by bus)
- Cotswold Lodge Hotel (10 minute walk)
- Old Parsonage Hotel (5 minute walk)
Alternatively, you might consider using Airbnb.

For coffee, breakfast and lunch, there is a good cafe in the basement of the Mathematical Institute. Little Clarendon Street, which is nearby, has several restaurants for dinner (and an excellent ice cream shop), and there are two sandwich shops for lunch on either side of its junction with Woodstock Road (A4144 on Google Maps).
Timetable
For the first three days we will follow this timetable:
- 09:00 - 10:30 lecture
- 10:30 - 11:00 break
- 11:00 - 12:30 practical
- 12:30 - 13:30 lunch break
- 13:30 - 15:00 lecture
- 15:00 - 15:30 break
- 15:30 - 17:00 practical
On the last two days we will switch to having both lectures in the morning, and then have practicals all afternoon. This provides more time for longer practicals, and will also allow those coming to Oxford from far away to leave when they wish on Friday afternoon.
Preliminary Reading
Please read chapters 1 and 2 of the NVIDIA CUDA C Programming Guide, which is available both as a PDF and as online HTML.

CUDA is an extension of C/C++, so if you are a little rusty with C/C++ you should refresh your memory of it. Here are links to a couple of introductory lectures on C, a larger online resource and an even larger online resource. This reddit critique particularly recommends that last one, and mentions various others in addition.
Additional References
- CUDA Runtime API
- cuBLAS library
- cuFFT library
- cuRAND library
- cuSOLVER library
- cuSPARSE library
- cuDSS library
- NCCL multi-GPU communications library
- PTX ISA (low-level instructions)
- Nsight Visual Studio Code
- Nsight Eclipse
- Nsight Kernel Profiling Guide
- Nsight Compute Command Line Interface
- Nsight Compute User Interface
- Compute Sanitizer (including memchk and racecheck tools)
- CUDA code samples on GitHub
- helper_math.h header file defining operator-overloading operations for CUDA intrinsic vector datatypes such as float4
- dbldbl.h header file defining double-double arithmetic for quad precision (originally developed by NVIDIA, but not supported)
- NVIDIA webpage listing the Compute Capability of all GPUs
- Wikipedia pages on NVIDIA HPC cards, and GeForce 50 graphics cards
- Jetson Thor for embedded systems
- Jetson Thor FAQs
- RedHawk real-time OS for Jetson systems
- NVIDIA slides on Performance and Debugging Tools (2025)
- GTC slides on "Dissecting the Ampere GPU Architecture through Microbenchmarking"
- arXiv paper on "Dissecting the NVIDIA Hopper Architecture through Microbenchmarking and Multiple Level Analysis"
- NVIDIA T4 datasheet for those doing practicals on Google Colab
Lectures
- lecture 1: An introduction to CUDA
- lecture 2: Different memory and variable types
- lecture 3: Control flow and synchronisation
- lecture 4: Warp shuffles, and reduction / scan operations
- lecture 5: Libraries and tools
- lecture 6: Multiple GPUs, and odds and ends
- lecture 7: Tackling a new CUDA application
- lecture 8: Future Directions
- lecture 9: choice of different research talks in L1 and L2:
  a) AstroAccelerate
  b) FlashAttention -- an interesting CUDA application
  c) Use of GPUs for Explicit and Implicit Finite Difference Methods
- lecture 10: NVIDIA guest lecture by Ira Shoker (40MB)
- extra research talks (not presented):
  - Automated CUDA code generation
  - Sparse matrix-vector multiplication
  - OP2 "Library" for Unstructured Grids
Practicals
Attendees will be provided with accounts on the ARC/HTC system, which has a number of NVIDIA GPU nodes. Before starting the practicals, please read these ARC notes; they include links with information for Windows users who may be unfamiliar with the Linux systems we will be working on. Some details on the Slurm batch queueing system are available here.

The practicals all use these header files (helper_cuda.h, helper_string.h), which came originally from the CUDA SDK. They provide routines for error-checking and initialisation.
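To show what those headers provide, here is a minimal sketch of their use, assuming helper_cuda.h is on the include path (the array size and the rest of the code are illustrative, not from the practicals):

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include "helper_cuda.h"

int main(int argc, char **argv) {
  // findCudaDevice picks (and reports) a suitable GPU
  findCudaDevice(argc, (const char **) argv);

  // checkCudaErrors aborts with a file/line message if the call fails
  float *d_x;
  checkCudaErrors( cudaMalloc((void **) &d_x, 64*sizeof(float)) );
  checkCudaErrors( cudaFree(d_x) );

  printf("allocation and free both passed the error check\n");
  return 0;
}
```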
Tar file for all practicals
Practical 1
Application: a trivial "hello world" example

CUDA aspects: launching a kernel, copying data to/from the graphics card, error checking, and printing from kernel code
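For a flavour of what the practical covers, a minimal kernel-launch sketch (not the actual prac1 code) might look like this:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void hello(float *x) {
  int tid = threadIdx.x + blockDim.x*blockIdx.x;
  x[tid] = (float) tid;
  printf("Hello from thread %d\n", tid);   // printing from kernel code
}

int main() {
  const int nblocks = 2, nthreads = 4, n = nblocks*nthreads;
  float *h_x = (float *) malloc(n*sizeof(float));
  float *d_x;
  cudaMalloc(&d_x, n*sizeof(float));

  hello<<<nblocks, nthreads>>>(d_x);       // launch the kernel
  cudaError_t err = cudaGetLastError();    // basic error checking
  if (err != cudaSuccess) printf("launch failed: %s\n", cudaGetErrorString(err));

  // copy data back from the graphics card
  cudaMemcpy(h_x, d_x, n*sizeof(float), cudaMemcpyDeviceToHost);
  for (int i = 0; i < n; i++) printf("x[%d] = %g\n", i, h_x[i]);

  cudaFree(d_x);  free(h_x);
  return 0;
}
```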
- instructions (PDF)
- prac1a.cu
- prac1b.cu
- prac1c.cu
- Makefile
- notes on Makefiles (PDF)
Note: the instructions above explain how a tar file of all the files can be copied from this webpage, so there is no need to download individual files from here.
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
Practical 2
Application: Monte Carlo simulation using NVIDIA's cuRAND library for random number generation

CUDA aspects: constant memory, random number generation, kernel timing, optimising device memory bandwidth
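A sketch of the host-API cuRAND pattern the practical uses, combined with constant memory; the toy payoff kernel and coefficient `a` are illustrative, not from prac2.cu (compile with -lcurand):

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <curand.h>

__constant__ float a;                      // constant-memory coefficient

__global__ void payoff(const float *z, float *v, int n) {
  int tid = threadIdx.x + blockDim.x*blockIdx.x;
  if (tid < n) v[tid] = a*z[tid]*z[tid];   // toy payoff using the random draw
}

int main() {
  const int n = 1 << 20;
  float *d_z, *d_v, h_a = 2.0f;
  cudaMalloc(&d_z, n*sizeof(float));
  cudaMalloc(&d_v, n*sizeof(float));
  cudaMemcpyToSymbol(a, &h_a, sizeof(float));   // set the constant from the host

  curandGenerator_t gen;                         // host-API generator
  curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
  curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
  curandGenerateNormal(gen, d_z, n, 0.0f, 1.0f); // N(0,1) draws, on the device

  payoff<<<(n+255)/256, 256>>>(d_z, d_v, n);
  cudaDeviceSynchronize();
  printf("generated %d normals and evaluated the payoff\n", n);

  curandDestroyGenerator(gen);
  cudaFree(d_z);  cudaFree(d_v);
  return 0;
}
```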
- instructions (PDF)
- some mathematical notes (PDF)
- prac2.cu
- prac2_device.cu
- Makefile
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
- Google Colab notebook with model solution
Practical 3
Application: 3D Laplace finite difference solver

CUDA aspects: thread block size optimisation, multi-dimensional memory layout, performance profiling
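The heart of the practical is a Jacobi-style kernel of roughly this shape, sketched here with a 2D thread block marching through the grid in k (the block shape, grid size and trivial driver are illustrative only):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void laplace3d(int NX, int NY, int NZ, const float *u1, float *u2) {
  int i = threadIdx.x + blockIdx.x*blockDim.x;
  int j = threadIdx.y + blockIdx.y*blockDim.y;
  if (i >= NX || j >= NY) return;

  for (int k = 0; k < NZ; k++) {                 // each thread does a whole k-line
    long ind = i + j*(long)NX + k*(long)NX*NY;   // i fastest => coalesced accesses
    if (i==0 || i==NX-1 || j==0 || j==NY-1 || k==0 || k==NZ-1)
      u2[ind] = u1[ind];                         // boundary values held fixed
    else
      u2[ind] = (u1[ind-1] + u1[ind+1]
               + u1[ind-NX] + u1[ind+NX]
               + u1[ind-(long)NX*NY] + u1[ind+(long)NX*NY]) / 6.0f;
  }
}

int main() {
  const int NX = 64, NY = 64, NZ = 64;
  size_t bytes = (size_t)NX*NY*NZ*sizeof(float);
  float *d_u1, *d_u2;
  cudaMalloc(&d_u1, bytes);  cudaMalloc(&d_u2, bytes);
  cudaMemset(d_u1, 0, bytes);

  dim3 threads(32, 4);       // the block shape is exactly what the practical tunes
  dim3 blocks((NX+31)/32, (NY+3)/4);
  laplace3d<<<blocks, threads>>>(NX, NY, NZ, d_u1, d_u2);
  cudaDeviceSynchronize();
  printf("one Jacobi sweep completed\n");

  cudaFree(d_u1);  cudaFree(d_u2);
  return 0;
}
```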
- instructions (PDF)
- some mathematical notes (PDF)
- notes on Nsight Systems profiling (PDF)
- laplace3d.cu
- laplace3d_new.cu
- laplace3d_gold.cpp
- Makefile
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
Practical 4
Application: reduction

CUDA aspects: dynamic shared memory, thread synchronisation, shuffles, atomics
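A sketch combining those ingredients -- dynamic shared memory, __syncthreads, warp shuffles and a final atomicAdd; reduction.cu develops these step by step, so treat this only as a preview:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void reduce_sum(const float *x, float *sum, int n) {
  extern __shared__ float temp[];                 // dynamic shared memory
  int tid = threadIdx.x;
  int gid = tid + blockIdx.x*blockDim.x;

  temp[tid] = (gid < n) ? x[gid] : 0.0f;
  __syncthreads();

  for (int d = blockDim.x/2; d >= 32; d /= 2) {   // tree in shared memory
    if (tid < d) temp[tid] += temp[tid+d];
    __syncthreads();
  }
  if (tid < 32) {                                 // last 32 partials: warp shuffles
    float v = temp[tid];
    for (int offset = 16; offset > 0; offset /= 2)
      v += __shfl_down_sync(0xffffffff, v, offset);
    if (tid == 0) atomicAdd(sum, v);              // combine blocks atomically
  }
}

int main() {
  const int n = 1 << 16, threads = 256, blocks = n/threads;
  float *d_x, *d_sum, h_sum = 0.0f;
  cudaMalloc(&d_x, n*sizeof(float));
  cudaMalloc(&d_sum, sizeof(float));
  cudaMemset(d_sum, 0, sizeof(float));

  float *h_x = (float *) malloc(n*sizeof(float));
  for (int i = 0; i < n; i++) h_x[i] = 1.0f;
  cudaMemcpy(d_x, h_x, n*sizeof(float), cudaMemcpyHostToDevice);

  reduce_sum<<<blocks, threads, threads*sizeof(float)>>>(d_x, d_sum, n);
  cudaMemcpy(&h_sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
  printf("sum = %g (expect %d)\n", h_sum, n);

  free(h_x);  cudaFree(d_x);  cudaFree(d_sum);
  return 0;
}
```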
- instructions (PDF)
- reduction.cu
- Makefile
- round_up_test.c code to round an integer up to the nearest power of 2
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
- Google Colab notebook with model solution
Practical 5
Application: using Tensor Cores and cuBLAS and other libraries
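A minimal cuBLAS SGEMM sketch (link with -lcublas); the math-mode call is a hint allowing cuBLAS to use TF32 Tensor Cores on GPUs that have them. The matrix size and zero-filled inputs are illustrative, and tensorCUBLAS.cu will differ in detail:

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
  const int N = 256;
  float *d_A, *d_B, *d_C;
  cudaMalloc(&d_A, N*N*sizeof(float));
  cudaMalloc(&d_B, N*N*sizeof(float));
  cudaMalloc(&d_C, N*N*sizeof(float));
  cudaMemset(d_A, 0, N*N*sizeof(float));   // toy inputs; real code would fill these
  cudaMemset(d_B, 0, N*N*sizeof(float));

  cublasHandle_t handle;
  cublasCreate(&handle);
  cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);  // allow Tensor Core use

  float alpha = 1.0f, beta = 0.0f;
  // C = alpha*A*B + beta*C, all column-major with leading dimension N
  cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
              &alpha, d_A, N, d_B, N, &beta, d_C, N);
  cudaDeviceSynchronize();
  printf("SGEMM completed\n");

  cublasDestroy(handle);
  cudaFree(d_A);  cudaFree(d_B);  cudaFree(d_C);
  return 0;
}
```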
- instructions (PDF)
- tensorCUBLAS.cu
- simpleTensorCoreGEMM.cu
- Makefile
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
Practical 6
Application: revisiting the simple "hello world" example

CUDA aspects: using g++ for the main code, building libraries, using templates
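The pattern the practical builds towards can be sketched in one file: a templated kernel behind a plain C++ wrapper that a g++-compiled main.cpp could link against. In the practical itself the main lives in main.cpp; all names here are illustrative:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void scale(T *x, T a, int n) {
  int tid = threadIdx.x + blockIdx.x*blockDim.x;
  if (tid < n) x[tid] *= a;
}

// non-templated wrapper: a g++-compiled main.cpp can call this symbol
// without knowing anything about CUDA
void scale_float(float *d_x, float a, int n) {
  scale<float><<<(n+127)/128, 128>>>(d_x, a, n);
  cudaDeviceSynchronize();
}

int main() {   // stands in for the separate main.cpp of the practical
  const int n = 128;
  float *d_x;
  cudaMalloc(&d_x, n*sizeof(float));
  cudaMemset(d_x, 0, n*sizeof(float));
  scale_float(d_x, 2.0f, n);
  printf("templated kernel instantiated and launched\n");
  cudaFree(d_x);
  return 0;
}
```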
- instructions (PDF)
- main.cpp
- prac6.cu
- prac6b.cu
- prac6c.cu
- Makefile
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
Practical 7
Application: tri-diagonal equations -- see Lecture 7, and also this research talk
- instructions (PDF)
- trid.cu
- trid_gold.cpp
- Makefile
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
Practical 8
Application: scan operation and recurrence equations
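A single-block Hillis-Steele inclusive scan sketches the core idea; scan.cu extends this to long vectors and to recurrence equations, and the driver below is illustrative only:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scan_block(float *x, int n) {
  extern __shared__ float temp[];
  int tid = threadIdx.x;
  temp[tid] = (tid < n) ? x[tid] : 0.0f;
  __syncthreads();

  for (int d = 1; d < blockDim.x; d *= 2) {
    float t = (tid >= d) ? temp[tid-d] : 0.0f;  // read before anyone overwrites
    __syncthreads();
    temp[tid] += t;
    __syncthreads();
  }
  if (tid < n) x[tid] = temp[tid];              // inclusive prefix sums
}

int main() {
  const int n = 8;
  float h_x[n], out[n];
  for (int i = 0; i < n; i++) h_x[i] = 1.0f;

  float *d_x;
  cudaMalloc(&d_x, n*sizeof(float));
  cudaMemcpy(d_x, h_x, n*sizeof(float), cudaMemcpyHostToDevice);
  scan_block<<<1, n, n*sizeof(float)>>>(d_x, n);
  cudaMemcpy(out, d_x, n*sizeof(float), cudaMemcpyDeviceToHost);

  for (int i = 0; i < n; i++) printf("%g ", out[i]);  // 1 2 3 ... 8 for all-ones input
  printf("\n");
  cudaFree(d_x);
  return 0;
}
```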
- instructions (PDF)
- scan.cu
- Makefile
- instructions (PDF) for those doing the practicals within Google Colab
- Google Colab notebook
Practical 9
Application: pattern matching
Practical 10
Application: auto-tuning
- instructions (PDF)
- README file
- Flamingo auto-tuning software
Practical 11
Application: streams and OpenMP multithreading
Practical 12
Application: more on streams and overlapping computation and communication
Acknowledgements
Many thanks to:
- Yassamine Mather, Yishun Lu and Jay Zhang for their help with the practicals
- the Mathematical Institute for hosting the lectures and practicals
- Oxford's Advanced Research Computing for the GPU servers used in the practicals
- Google for the Google Colab system