Aims, Skills and Learning Outcomes

Introduce students to advanced concepts in the theory and practice of computational models for parallel and distributed memory architectures. Provide hands-on programming experience in distributed memory architectures using MPI, and in shared memory architectures using processes, Pthreads and OpenMP.

To complete this course, the students must be able to:

Syllabus

Introduction and foundations:
Parallel programming, concurrency and parallelism, and Flynn's taxonomy. Foster's programming methodology. Major parallel programming models and paradigms.

Programming for distributed memory architectures using MPI:
MPI specification, explicit message passing, communication protocols, derived types and data packing, collective communication, communicators and topologies.

Programming for shared memory architectures with processes:
Processes, shared memory segments, shared memory through file mapping, spinlocks and semaphores.

Programming for shared memory architectures with threads:
Pthreads specification, multithreaded programming, mutexes, condition variables and thread-specific data keys.

Programming for shared memory architectures with OpenMP:
OpenMP specification, compilation directives, work-sharing constructs, basic constructs, synchronization constructs, basic functions, locking functions, environment variables, removing data dependencies, performance, and combining OpenMP with MPI.

Hybrid Programming with MPI, OpenMP and Pthreads:
Combining MPI with OpenMP and using MPI in a multithreaded environment.

Memory Cache and Multi-Processor Architectures:
The importance of caches in multi-processor architectures: spatial and temporal locality, and synchronization with memory barriers and with mutual exclusion.

Performance metrics:
Speedup, efficiency, redundancy, utilization and quality of a parallel application. Amdahl's law. The Gustafson-Barsis law. The Karp-Flatt metric.

Parallel algorithms:
Scheduling and load-balancing strategies. Parallel algorithms for sorting, searching, Monte Carlo simulation and matrix multiplication.

Main Bibliography

Parallel Programming with MPI
P. Pacheco. Morgan Kaufmann.

Parallel Programming in C with MPI and OpenMP
Michael J. Quinn. McGraw-Hill.

Parallel Programming for Multicore and Cluster Systems
Thomas Rauber and Gudula Rünger. Springer.

Advanced Linux Programming
M. Mitchell, J. Oldham and A. Samuel. New Riders.

Pthreads Programming
B. Nichols, D. Buttlar and J.P. Farrell. O'Reilly.

Parallel Programming in OpenMP
R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald and R. Menon. Morgan Kaufmann.

Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers
B. Wilkinson, M. Allen. Prentice Hall.

Complementary Bibliography

An Introduction to Parallel Programming

The Free Lunch is Over

MPI Forum

A User's Guide to MPI

Writing Message Passing Parallel Programs with MPI

POSIX Threads Programming

POSIX Threads Tutorial

OpenMP API

An Introduction Into OpenMP

A Hands-On Introduction to OpenMP

Advanced OpenMP Tutorial

Exercises to Support Learning OpenMP

Intel Guide for Developing Multithreaded Applications