
Introduction to MPI

MPI (Message Passing Interface) is a popular method for writing parallel (multi-processor) codes for anything from desktops to the largest supercomputers. It relies on the programmer explicitly passing messages (data) between processors.

These slides and notes cover the very basics of MPI, from a first code up to some simple, but already very useful, test programs. To follow these, you should be able to write simple programs (including basic arrays) in either C or Fortran, and have access to a computer with a suitable compiler and MPI installed. We recommend MPICH (http://www.mpich.org/) or OpenMPI (https://www.open-mpi.org/). Alternatively, if you have access to the Warwick SCRTP cluster machines (Tinis or Orac), you can run code there. If you're not sure how to run MPI code, see our HPC at Warwick and Beyond notes.

Slides

Deck 1 - Concepts of Parallel Programming

Deck 2 - Starting MPI

Deck 3 - Basic Communications - Collectives

Deck 4 - Point to Point Communications

Deck 5 - Domain Decomposition

Deck 6 - Worker Controller

Notes

Notes (PDF) 

Example Code

Download or clone from GitHub at https://github.com/WarwickRSE/IntroMPI