Massively Parallel Computing INFO4173 (Master)



Graphics processors contain thousands of parallel processing elements and thus let us explore massively parallel computing today. This high core count poses a great challenge for software design, which must expose massive parallelism to benefit from the hardware. The main purpose of the lecture is to teach practical algorithm design for such parallel hardware.

Taking Part

Apart from having a WSI account, there are no specific prerequisites for taking part in the course.
However, since CUDA builds upon C/C++, basic C/C++ knowledge is recommended.
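To give a flavor of how CUDA extends C/C++, here is a minimal sketch of the kernel-launch and host/device memory-transfer pattern that the first assignment revolves around. It is not part of the course materials; the kernel name and sizes are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA kernel: each thread increments one array element.
// (Illustrative example, not from the course materials.)
__global__ void addOne(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 8;
    int host[n] = {0};
    int *dev;
    cudaMalloc(&dev, n * sizeof(int));                           // allocate device memory
    cudaMemcpy(dev, host, n * sizeof(int), cudaMemcpyHostToDevice);
    addOne<<<1, n>>>(dev, n);                                    // launch: 1 block, n threads
    cudaMemcpy(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    for (int i = 0; i < n; ++i) printf("%d ", host[i]);
    return 0;
}
```

Aside from the `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax, this is ordinary C++, which is why basic C/C++ knowledge carries most of the way.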


Exercises during the block course

Oral exam


Planned Schedule

Thu., 07.03.2024
  Lectures: 08:30 Intro; 13:30 Memory
  Exercises: Assignment 1 (Kernel Calls, Memory Transfer); Assignment 2 (Cross Correlation, Reverse Arrays)

Fri., 08.03.2024
  Lectures: 08:30 Control Flow; 13:30 Sorting
  Exercises: Assignment 3 (Reduction, Compaction); Assignment 4 (Bucket Sort, Cell Coverage)

Weekend
  Exercises: Assignment 4 (Bucket Sort, Cell Coverage, continued)

Mon., 11.03.2024
  Lectures: 08:30 Data Structures, Profiling; 13:30 Machine Learning
  Exercises: Assignment 5 (Matrix Multiplication, Machine Learning)

Tue., 12.03.2024
  Lectures: 08:30 Searching, N-Body; 13:30 Systems
  Exercises: Assignment 6 (Particle Systems)

Wed., 13.03.2024
  Lectures: 08:30 PDEs; 13:30 Numerics
  Exercises: Assignment 6 (Particle Systems, continued)