
Intermediate MPI

MPI (Message Passing Interface) is a popular method for writing parallel (multi-processor) codes for anything from desktops to the largest supercomputers. It relies on the programmer explicitly passing messages (data) between processors.

These slides and notes cover MPI features beyond the basics that are still very generally useful. To get the most from them, you should be comfortable writing basic MPI programs using Send, Recv and collectives, and be interested in learning more.

We cover two topics - custom MPI types, and non-blocking and persistent comms. Types are extremely useful for a lot of problems, especially when writing in C, because they allow you to pass around structs and array subsections as easily as single numbers. Non-blocking comms become useful for more interesting communication arrangements, where one doesn't know precisely in what order Sends and Recvs will be posted on different processors.

We use two case studies to demonstrate these methods - a simple domain decomposition, and a simple prime checker using a worker-controller model.

Slides

Section 1 - a brief recap

Section 2 - description of our case studies/examples

Section 3 - Custom MPI Types

Section 4 - Non-blocking MPI communication and persistent comms handles

Notes

A PDF of the notes, which covers everything on the slides in a more readable form, is available here. As usual, if you'd like a larger font or other adjustments, contact us at rse{at}warwick.ac.uk.

Example Code

See here on our GitHub page.