Study Plan

The 2024/2025 edition of the 'Parallel Computing' course belongs to the cycles of studies of the Faculty of Sciences of the University of Porto.

Goals and Learning Outcomes

This course introduces students to advanced concepts in the theory and practice of computational models for parallel and distributed memory architectures. It provides hands-on experience in programming distributed memory architectures with MPI and in programming shared memory architectures using processes, threads and OpenMP.

On completing this course, students should be able to: identify the main models, paradigms, environments and tools for parallel programming; understand and assess the concepts related to the structure, operation and performance of parallel programs; and formulate solutions in the main parallel programming paradigms, namely MPI, Pthreads and OpenMP.

Syllabus

Foundations
Parallel programming, concurrency and parallelism. Flynn's taxonomy. Foster's programming methodology. Major parallel programming models and paradigms. Speedup, efficiency, redundancy, utilisation and quality of a parallel application. Amdahl's law. The Gustafson-Barsis law. The Karp-Flatt metric.
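
For quick reference only (not part of the official syllabus), the performance measures listed above can be summarised with the notation used in Quinn's textbook: p processors, inherently serial fraction f, speedup \psi, and experimentally determined serial fraction e.

    % Speedup and efficiency on p processors
    \psi(p) = \frac{T_{\mathrm{serial}}}{T_{\mathrm{parallel}}(p)}, \qquad
    \varepsilon(p) = \frac{\psi(p)}{p}

    % Amdahl's law: f is the inherently serial fraction of the computation
    \psi(p) \le \frac{1}{f + (1 - f)/p}

    % Gustafson-Barsis law: s is the serial fraction of the parallel execution time
    \psi(p) \le p + (1 - p)\,s

    % Karp-Flatt metric: serial fraction estimated from a measured speedup
    e = \frac{1/\psi - 1/p}{1 - 1/p}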

Programming for Distributed Memory Architectures
MPI specification, explicit message passing, communication protocols, derived types and data packing, collective communications, communicators, topologies.
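
As an illustrative sketch of the message-passing topics (not part of the official syllabus), the following minimal C program uses a collective communication, MPI_Reduce, to add the ranks of all processes onto process 0:

    /* sum_ranks.c -- minimal MPI example: reduce the ranks of all processes
       onto process 0. Compile with: mpicc sum_ranks.c -o sum_ranks
       Run with:                     mpirun -np 4 ./sum_ranks               */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size, sum;

        MPI_Init(&argc, &argv);                   /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */

        /* collective communication: sum every process's rank on process 0 */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, sum);

        MPI_Finalize();                           /* shut down the MPI runtime */
        return 0;
    }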

Programming for Shared Memory Architectures
Processes, shared memory segments, shared memory through file mapping, spinlocks, semaphores. Multithreading with Pthreads: mutexes, condition variables, thread-specific data keys, implementations of Pthreads. OpenMP specification: compilation directives, work-sharing constructs, basic constructs, synchronisation constructs, basic functions, locking functions, environment variables, removing data dependencies, performance, combining OpenMP with MPI.
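
Again purely as an illustrative sketch (not part of the official syllabus), a small Pthreads program in C in which several threads increment a shared counter, serialising access with a mutex:

    /* counter.c -- Pthreads example: 4 threads increment a shared counter,
       protecting the critical section with a mutex.
       Compile with: cc counter.c -o counter -lpthread                      */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NITERS   100000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < NITERS; i++) {
            pthread_mutex_lock(&lock);     /* enter critical section */
            counter++;
            pthread_mutex_unlock(&lock);   /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);

        printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITERS);
        return 0;
    }

The same critical section could be expressed in OpenMP with a critical or atomic directive instead of an explicit mutex.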

Parallel Algorithms
Scheduling and load balancing strategies. Parallel algorithms for sorting, Monte Carlo simulation, and matrix multiplication.
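
As an illustrative sketch of the Monte Carlo topic (not part of the official syllabus), a C program that estimates pi by counting random points inside the unit quarter circle, parallelised with an OpenMP work-sharing construct and a reduction:

    /* pi_mc.c -- OpenMP Monte Carlo estimate of pi.
       Compile with: cc -fopenmp pi_mc.c -o pi_mc                           */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const long n = 10000000;   /* number of random samples              */
        long hits = 0;             /* samples falling inside the circle     */

        #pragma omp parallel reduction(+:hits)
        {
            /* per-thread seed so the rand_r() streams are independent */
            unsigned int seed = 1234u + (unsigned int)omp_get_thread_num();

            #pragma omp for
            for (long i = 0; i < n; i++) {
                double x = (double)rand_r(&seed) / RAND_MAX;
                double y = (double)rand_r(&seed) / RAND_MAX;
                if (x * x + y * y <= 1.0)
                    hits++;
            }
        }

        printf("pi ~= %f\n", 4.0 * (double)hits / (double)n);
        return 0;
    }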

Assessment Components

Each project assignment is worth 3 points out of 20 and the written exam is worth 14 points out of 20.
The minimum mark required in the written exam is 40%.

Main Bibliography

Parallel Programming with MPI
P. Pacheco. Morgan Kaufmann.

Parallel Programming in C with MPI and OpenMP
Michael J. Quinn. McGraw-Hill.

Advanced Linux Programming
M. Mitchell, J. Oldham and A. Samuel. New Riders.

Pthreads Programming
B. Nichols, D. Buttlar and J.P. Farrell. O'Reilly.

Parallel Programming in OpenMP
R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald and R. Menon. Morgan Kaufmann.

Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers
B. Wilkinson, M. Allen. Prentice Hall.

Complementary Bibliography

An Introduction to Parallel Programming

The Free Lunch is Over

MPI Forum

A User's Guide to MPI

Writing Message Passing Parallel Programs with MPI

POSIX Threads Tutorial

OpenMP API

An Introduction Into OpenMP

A Hands-On Introduction to OpenMP

Advanced OpenMP Tutorial

Exercises to Support Learning OpenMP

Intel Guide for Developing Multithreaded Applications