Assignment IV: MPI Programming
The purpose of this assignment is to familiarize yourself with MPI programming.
0. Setting Up the Environment
0.1 Creating a working directory on NFS
If you have not yet created a working directory of your own (named with your student ID), please follow the commands below:
$ cd /nfs
$ mkdir xxxxxxx # your student ID
$ chmod 700 xxxxxxx # make it safe
0.2 Downloading the Source Code
$ cd /nfs/<student_id>
$ wget https://nctu-sslab.github.io/PP-s21/HW4/HW4.zip
$ unzip HW4.zip -d HW4
$ chmod -R 700 HW4
$ cd HW4
0.3 Configuring SSH
To make MPI work properly, you need to be able to execute jobs on remote nodes without typing a password. You will need to generate an SSH key yourself. You may refer to this article for instructions or follow the steps below.
You will also need to configure your ~/.ssh/config file to define host names. Use HW4/config as a template for the configuration. Notice that in the configuration template, the string following each User should be your student ID.
ATTENTION: We will test your code based on this config.
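For reference, an entry in ~/.ssh/config generally looks like the sketch below; the host name comes from the HW4/config template, while the address and student ID shown here are only placeholders:

Host pp1
    HostName <address of pp1 from the HW4/config template>
    User xxxxxxx    # your student ID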
# <login to one of the PP nodes>
$ mkdir -p ~/.ssh
$ ssh-keygen -t rsa # Leave all empty
# <setup the config>
$ cat ~/.ssh/id_rsa.pub | ssh ppN 'mkdir -p ~/.ssh; cat >> .ssh/authorized_keys' # ppN should be pp1, pp2, ..., pp9
Then, log in to all machines at least once to make sure the configuration works. This will save the SSH fingerprints of the machines in ~/.ssh/known_hosts.
# Log in without entering password
$ ssh pp1
$ ssh pp2
$ ssh pp3
$ ssh pp4
$ ...
$ ssh pp9
0.4 About Network Connections
There are nine compute nodes in total, split across two private networks.
pp1-pp5 are in one private network, which we will use for the grading process.
pp6-pp9 are in the other private network, which provides an alternative environment for testing your MPI programs if the nodes in the first private network are busy.
In this assignment, your MPI programs are expected to run within a single private network; that is, issues of interconnection between the private networks are not, and should not be, considered in this assignment.
1. Getting Familiar with MPI Programming
1.1 MPI Hello World, Getting Familiar with the MPI Environment
Here we’re going to implement our first MPI program.
Expected knowledge includes basic understanding of the MPI environment, how to compile an MPI program, how to set the number of MPI processes, and retrieve the rank of the process and number of MPI processes at runtime.
Let’s learn how to launch an MPI code on NCTU-PP workstations.
There is a starter code for you. Look at hello.cc and finish the TODOs with the MPI_Comm_size and MPI_Comm_rank functions.
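A minimal sketch of what the completed hello.cc might look like is shown below; the actual starter code may structure things slightly differently.

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);   // total number of MPI processes
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);   // rank of this process

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    MPI_Finalize();
    return 0;
}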
We will work on NFS, so we don’t need to copy and compile the code again and again on different nodes.
$ cd /nfs/<student_id>
Let’s first compile hello.cc on NFS:
$ cd ./HW4
$ mpicxx ./hello.cc -o mpi_hello
Then create a file hosts, which is a text file with one host specified per line. We will use this file to specify on which hosts to launch MPI processes when running mpirun.
For example, we put the following two lines into the file to specify that we will launch MPI processes on nodes pp1 and pp2.
Notice that you should use the same hostnames as in ~/.ssh/config.
hosts:
pp1 # 1st node hostname
pp2 # 2nd node hostname
Notice that the nodes should be in the same private network (using either pp1-pp5 or pp6-pp9).
Make sure mpi_hello is located in the same directory hierarchy on the two nodes (this should not be a problem under NFS) before running mpirun to start the MPI processes:
$ mpirun -np 8 --hostfile hosts mpi_hello
This command will launch eight processes to run mpi_hello on the two nodes, and you should get output similar to:
Hello world from processor ec037-076, rank 0 out of 8 processors
Hello world from processor ec037-076, rank 1 out of 8 processors
Hello world from processor ec037-076, rank 3 out of 8 processors
Hello world from processor ec037-076, rank 2 out of 8 processors
Hello world from processor ec037-077, rank 4 out of 8 processors
Hello world from processor ec037-077, rank 6 out of 8 processors
Hello world from processor ec037-077, rank 5 out of 8 processors
Hello world from processor ec037-077, rank 7 out of 8 processors
Code Implementation:
- Complete hello.cc: the Hello World code in C. mpirun -np 8 --hostfile hosts mpi_hello should run correctly.
- The argument of -np is expected to be any positive integer less than or equal to 16.
Q1
- How do you control the number of MPI processes on each node?
- Which functions do you use to retrieve the rank of an MPI process and the total number of processes?
- We use Open MPI for this assignment. What other MPI implementations are commonly used? What are the differences between them?
1.2 Calculating PI with MPI
In this exercise, we are going to parallelize the calculation of Pi with a Monte Carlo method, using different communication functions, and measure their performance.
Expected knowledge includes MPI blocking and non-blocking communication, collective operations, and one-sided communication.
Instructions: Write different versions of MPI code to calculate Pi. The technique we are implementing relies on random sampling to obtain numerical results. You have already implemented this in the previous assignments.
1.2.1 MPI Blocking Communication & Linear Reduction Algorithm
In our parallel implementation, we split the iterations of the main loop among all the processes (i.e., NUM_ITER / num_ranks iterations each). Each process will calculate an intermediate count, which is then used to compute the final value of Pi.
To calculate Pi, you need to send all the intermediate counts of each process to rank 0. This communication algorithm for the reduction operation is called linear because the communication cost scales linearly with the number of processes.
An example of linear reduction with eight processes is as follows:
Rank 0 is going to receive the intermediate count calculated by each process and accumulate it into its own count value. (Rank 0 also needs to calculate a count of its own; this also holds for the following sections.) Finally, after rank 0 has received and accumulated all the intermediate counts, it will calculate Pi and show the result, as in the original code.
Implement this code using blocking communication and test its performance.
Hint 1. Change the main loop to use the number of iterations per process, not NUM_ITER (which is the total number of iterations).
Hint 2. Do not forget to multiply the seed of srand() by the rank of each process (e.g., “rank * SEED”) to ensure the RNGs of different processes generate different random numbers.
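A minimal sketch of the linear reduction phase is shown below, assuming each rank has already computed its local hit count in a variable named count (the names here are illustrative and not necessarily those used in the starter code):

#include <mpi.h>

// Linear reduction: every non-zero rank sends its count to rank 0,
// and rank 0 receives them one by one and accumulates them.
long long linear_reduce(long long count, int rank, int size) {
    if (rank == 0) {
        long long total = count;                 // rank 0 contributes its own count
        for (int src = 1; src < size; src++) {
            long long remote = 0;
            MPI_Recv(&remote, 1, MPI_LONG_LONG, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            total += remote;
        }
        return total;                            // rank 0 then computes Pi from total
    }
    MPI_Send(&count, 1, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD);
    return 0;                                    // result is only meaningful on rank 0
}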
There is a starter code pi_block_linear.cc for you.
$ mpicxx pi_block_linear.cc -o pi_block_linear; mpirun -np 4 --hostfile hosts pi_block_linear 1000000000
Code Implementation:
- Complete pi_block_linear.cc: the MPI blocking communication for Pi.
- Implement the code using a linear algorithm with MPI_Send/MPI_Recv.
- Do not modify the output messages.
- $ mpicxx pi_block_linear.cc -o pi_block_linear; mpirun -np 4 --hostfile hosts pi_block_linear 1000000000 should run correctly.
- The argument of -np is expected to be any positive integer less than or equal to 16.
- mpirun -np 4 --hostfile hosts pi_block_linear 1000000000 should run almost as fast as the TA’s reference version (refer to /nfs/ref). The difference (in execution time) should be within 15%.
- You may use up to five nodes (and up to four processors on each node), so you may want to try different configurations in the hostfile.
Q2
- Why are MPI_Send and MPI_Recv called “blocking” communication?
- Measure the performance (execution time) of the code for 2, 4, 8, 12, and 16 MPI processes and plot it.
1.2.2 MPI Blocking Communication & Binary Tree Reduction Communication Algorithm
Implement the binary tree communication algorithm for performing the reduction on rank 0 using blocking communication (e.g., MPI_Send/MPI_Recv).
The communication pattern for a reduction with a binary tree with eight processes is as follows:
In your implementation, you can assume that we use a power-of-two number of processes.
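A minimal sketch of the tree reduction phase, under the power-of-two assumption and with illustrative variable names, might look like this:

#include <mpi.h>

// Binary tree reduction of each rank's local count into rank 0.
// In each round, ranks that are multiples of twice the current step receive
// a partial sum from the rank `step` positions above; senders then drop out.
long long tree_reduce(long long count, int rank, int size) {
    long long total = count;
    for (int step = 1; step < size; step *= 2) {
        if (rank % (2 * step) == 0) {
            long long partial = 0;
            MPI_Recv(&partial, 1, MPI_LONG_LONG, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            total += partial;
        } else if (rank % (2 * step) == step) {
            MPI_Send(&total, 1, MPI_LONG_LONG, rank - step, 0, MPI_COMM_WORLD);
            break;                               // this rank has passed its sum up the tree
        }
    }
    return total;                                // only meaningful on rank 0
}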
There is a starter code pi_block_tree.cc for you.
$ mpicxx pi_block_tree.cc -o pi_block_tree; mpirun -np 4 --hostfile hosts pi_block_tree 1000000000
- Complete pi_block_tree.cc: the MPI blocking communication for Pi.
- Implement the code using a binary tree reduction algorithm with MPI_Send/MPI_Recv.
- Do not modify the output messages.
- $ mpicxx pi_block_tree.cc -o pi_block_tree; mpirun -np 4 --hostfile hosts pi_block_tree 1000000000 should run correctly.
- The argument of -np is expected to be any power-of-two number less than or equal to 16.
- mpirun -np 4 --hostfile hosts pi_block_tree 1000000000 should run almost as fast as the TA’s reference version. The difference (in execution time) should be within 15%.
Q3
- Measure the performance (execution time) of the code for 2, 4, 8, 16 MPI processes and plot it.
- How does the performance of binary tree reduction compare to the performance of linear reduction?
- As the number of processes increases, which approach (linear/tree) is going to perform better? Why? Think about the number of messages and their costs.
1.2.3 MPI Non-Blocking Communication & Linear Reduction Algorithm
Use non-blocking communication for the linear reduction operation (in Section 1.2.1).
Hint: Use a non-blocking MPI_Irecv() (MPI receive with immediate return). The basic idea is that rank 0 is going to issue all the receive operations and then wait for them to finish. You can either use MPI_Wait() individually to wait for each request to finish, or use MPI_Waitall(). Regardless of your choice, keep in mind that we want you to perform the receive operations in parallel. Thus, do not call MPI_Irecv() and then immediately MPI_Wait()! In addition, we recommend you allocate an array of MPI_Request and also an array of counts (i.e., one for each receive needed).
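Under those hints, the receive side on rank 0 might look roughly like the sketch below (names are illustrative):

#include <mpi.h>
#include <vector>

// Non-blocking linear reduction: rank 0 posts all receives first,
// then waits for all of them with MPI_Waitall before accumulating.
long long nonblocking_linear_reduce(long long count, int rank, int size) {
    if (rank != 0) {
        MPI_Send(&count, 1, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD);
        return 0;
    }
    std::vector<long long> counts(size);            // one slot per sender
    std::vector<MPI_Request> requests(size - 1);
    for (int src = 1; src < size; src++)            // post every receive up front
        MPI_Irecv(&counts[src], 1, MPI_LONG_LONG, src, 0,
                  MPI_COMM_WORLD, &requests[src - 1]);
    MPI_Waitall(size - 1, requests.data(), MPI_STATUSES_IGNORE);

    long long total = count;                        // rank 0's own contribution
    for (int src = 1; src < size; src++)
        total += counts[src];
    return total;
}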
There is a starter code pi_nonblock_linear.cc for you.
$ mpicxx pi_nonblock_linear.cc -o pi_nonblock_linear; mpirun -np 4 --hostfile hosts pi_nonblock_linear 1000000000
Code Implementation:
- Complete pi_nonblock_linear.cc: the MPI non-blocking communication for Pi.
- Implement the code using a non-blocking algorithm with MPI_Send/MPI_Irecv/MPI_Wait/MPI_Waitall.
- Do not modify the output messages.
- $ mpicxx pi_nonblock_linear.cc -o pi_nonblock_linear; mpirun -np 4 --hostfile hosts pi_nonblock_linear 1000000000 should run correctly.
- The argument of -np is expected to be any positive integer less than or equal to 16.
- mpirun -np 4 --hostfile hosts pi_nonblock_linear 1000000000 should run almost as fast as the TA’s reference version. The difference (in execution time) should be within 15%.
Q4
- Measure the performance (execution time) of the code for 2, 4, 8, 12, 16 MPI processes and plot it.
- What are the MPI functions for non-blocking communication?
- How does the performance of non-blocking communication compare to the performance of blocking communication?
1.2.4 MPI Collective: MPI_Gather
Use the collective MPI_Gather() operation instead of point-to-point communication.
Hint: You can keep rank 0 as the root of the communication and still have this process aggregate the intermediate counts manually. Remember that the goal of MPI_Gather() is to provide the root with an array of all the intermediate values. Reuse the array of counts as the output of the gather operation.
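A minimal sketch of the gather-based version, with illustrative names, might look like this:

#include <mpi.h>
#include <vector>

// Every rank (including rank 0) contributes its count; MPI_Gather places
// all of them in an array on rank 0, which then sums them manually.
long long gather_reduce(long long count, int rank, int size) {
    std::vector<long long> counts(rank == 0 ? size : 0);  // only the root needs the buffer
    MPI_Gather(&count, 1, MPI_LONG_LONG,
               counts.data(), 1, MPI_LONG_LONG, 0, MPI_COMM_WORLD);

    long long total = 0;
    if (rank == 0)
        for (int i = 0; i < size; i++)
            total += counts[i];                            // includes rank 0's own count
    return total;
}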
There is a starter code pi_gather.cc for you.
$ mpicxx pi_gather.cc -o pi_gather; mpirun -np 4 --hostfile hosts pi_gather 1000000000
Code Implementation:
- Complete pi_gather.cc: the MPI gather communication for Pi.
- Implement the code using MPI_Gather.
- Do not modify the output messages.
- $ mpicxx pi_gather.cc -o pi_gather; mpirun -np 4 --hostfile hosts pi_gather 1000000000 should run correctly.
- The argument of -np is expected to be any positive integer less than or equal to 16.
- mpirun -np 4 --hostfile hosts pi_gather 1000000000 should run almost as fast as the TA’s reference version. The difference (in execution time) should be within 15%.
Q5
- Measure the performance (execution time) of the code for 2, 4, 8, 12, 16 MPI processes and plot it.
1.2.5 MPI Collective: MPI_Reduce
Use the collective MPI_Reduce() operation.
Hint 1: Remember that the goal of MPI_Reduce() is to perform a collective computation. Use the MPI_SUM operator to aggregate all the intermediate count values into rank 0. But watch out: rank 0 has to provide its own count as well, alongside those from the other processes.
Hint 2: The send buffer of MPI_Reduce() must not match the receive buffer. In other words, use a different variable on rank 0 to store the result.
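With those hints, the reduction becomes a single call, as in the sketch below (names illustrative):

#include <mpi.h>

// MPI_Reduce with MPI_SUM adds every rank's count (including rank 0's)
// into a separate variable on rank 0; note the distinct send/receive buffers.
long long reduce_sum(long long count) {
    long long total = 0;                        // only filled in on rank 0
    MPI_Reduce(&count, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    return total;
}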
There is a starter code pi_reduce.cc for you.
$ mpicxx pi_reduce.cc -o pi_reduce; mpirun -np 4 --hostfile hosts pi_reduce 1000000000
Code Implementation:
- Complete pi_reduce.cc: the MPI reduce communication for Pi.
- Implement the code using MPI_Reduce.
- Do not modify the output messages.
- $ mpicxx pi_reduce.cc -o pi_reduce; mpirun -np 4 --hostfile hosts pi_reduce 1000000000 should run correctly.
- The argument of -np is expected to be any positive integer less than or equal to 16.
- mpirun -np 4 --hostfile hosts pi_reduce 1000000000 should run almost as fast as the TA’s reference version. The difference (in execution time) should be within 15%.
Q6
- Measure the performance (execution time) of the code for 2, 4, 8, 12, 16 MPI processes and plot it.
1.2.6 MPI Windows and One-Sided Communication & Linear Reduction Algorithm
Use MPI windows and MPI one-sided communication, which we did not cover in class. You can choose the functions you’d like to use, but remember that many processes perform the reduction on the same MPI window! You may refer to one_side_example.c to get familiar with it.
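One possible approach, sketched below with illustrative names, is to let rank 0 expose a window with one slot per rank, have every rank MPI_Put its count into its own slot between two fences, and let rank 0 sum the slots afterwards (other synchronization schemes, or MPI_Accumulate, would also work):

#include <mpi.h>

long long one_sided_reduce(long long count, int rank, int size) {
    long long *slots = nullptr;
    MPI_Win win;
    // Rank 0 owns the window memory (one slot per rank); other ranks expose nothing.
    MPI_Win_allocate((rank == 0 ? size : 0) * sizeof(long long),
                     sizeof(long long), MPI_INFO_NULL, MPI_COMM_WORLD,
                     &slots, &win);
    if (rank == 0)
        for (int i = 0; i < size; i++) slots[i] = 0;

    MPI_Win_fence(0, win);
    // Every rank (including rank 0) writes its count into its own slot on rank 0.
    MPI_Put(&count, 1, MPI_LONG_LONG, 0 /* target rank */,
            rank /* displacement */, 1, MPI_LONG_LONG, win);
    MPI_Win_fence(0, win);

    long long total = 0;
    if (rank == 0)
        for (int i = 0; i < size; i++) total += slots[i];
    MPI_Win_free(&win);
    return total;                                // only meaningful on rank 0
}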
There is a starter code pi_one_side.cc for you.
$ mpicxx pi_one_side.cc -o pi_one_side; mpirun -np 4 --hostfile hosts pi_one_side 1000000000
Code Implementation:
- Complete pi_one_side.cc: the MPI one-sided communication for Pi.
- Implement the code using the MPI_Win_* functions (together with MPI_Put, MPI_Get, or MPI_Accumulate). You only need to implement the first one; the remaining ones are optional.
- Do not modify the output messages.
- $ mpicxx pi_one_side.cc -o pi_one_side; mpirun -np 4 --hostfile hosts pi_one_side 1000000000 should run correctly.
- The argument of -np is expected to be any positive integer less than or equal to 16.
- mpirun -np 4 --hostfile hosts pi_one_side 1000000000 should run almost as fast as the TA’s reference version. The difference (in execution time) should be within 15%.
Q7
- Measure the performance (execution time) of the code for 2, 4, 8, 12, 16 MPI processes and plot it.
- Which approach gives the best performance among the 1.2.1-1.2.6 cases? What is the reason for that?
- Which algorithm or algorithms do MPI implementations use for reduction operations? You can research this on the web, focusing on one MPI implementation.
1.3 Measuring Bandwidth and Latency on NCTU-PP workstations with Ping-Pong
Expected knowledge is concepts of bandwidth and latency and performance model for parallel communication.
The ping-pong is a benchmark used to measure the bandwidth and latency of a supercomputer. In this benchmark, two MPI processes use MPI_Send and MPI_Recv to continually bounce messages off of each other up to a final limit.
The ping-pong benchmark provides as output the average ping-pong time for different message sizes.
If we plot the results of the ping-pong with size on the x-axis and ping-pong time on the y-axis, we will roughly obtain points distributed along a line. If we do a linear best fit of the obtained points, we will get the intercept and the slope of the line. The inverse of the line slope is the bandwidth while the intercept is the latency of the system.
We will use ping_pong.c to measure the ping-pong time for different message sizes.
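The core of such a benchmark, for one message size, typically looks like the sketch below (the function and variable names, as well as the iteration count, are illustrative and may differ from the provided ping_pong.c):

#include <mpi.h>
#include <vector>

// One ping-pong measurement: rank 0 sends a message of msg_size bytes to
// rank 1, rank 1 echoes it back, and the average round-trip time is returned.
double pingpong_time(int msg_size, int rank, int iters) {
    std::vector<char> buf(msg_size);
    MPI_Barrier(MPI_COMM_WORLD);                  // start both ranks together
    double start = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf.data(), msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    return (MPI_Wtime() - start) / iters;         // average time per ping-pong
}

Plotting this time against the message size and doing a linear fit then gives the latency (intercept) and the bandwidth (inverse of the slope), as described above.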
Instructions. For completing this exercise, you may refer to this video: Performance Modeling & Ping-Pong
Run the ping-pong code and calculate the bandwidth and latency for:
- case 1: intra-node communication (two processes on the same node), and
- case 2: inter-node communication (two processes on different nodes). You should try several different pairs of nodes in this case.
Q8
- Plot the ping-pong time as a function of the message size for cases 1 and 2, respectively.
- Calculate the bandwidth and latency for cases 1 and 2, respectively.
- For case 2, how do the obtained values of bandwidth and latency compare to the nominal network bandwidth and latency of the NCTU-PP workstations? What are the differences, and what could explain them, if any?
2. Requirements
Finish the code implementation parts in Sections 1.1 and 1.2.1-1.2.6. The corresponding files are as follows.
hello.cc
pi_block_linear.cc
pi_block_tree.cc
pi_gather.cc
pi_nonblock_linear.cc
pi_one_side.cc
pi_reduce.cc
Questions marked as Q1-Q8 are take-home questions. You don’t need to submit the answers; we leave them for you to think about.
3. Grading Policies
NO CHEATING!! You will receive no credit if you are found cheating.
- Total of 100%: Code implementation, including correctness and performance.
  - hello.cc (10%)
  - pi_block_linear.cc (15%)
  - pi_block_tree.cc (15%)
  - pi_gather.cc (15%)
  - pi_nonblock_linear.cc (15%)
  - pi_one_side.cc (15%)
  - pi_reduce.cc (15%)
4. Evaluation Platform
Your program should be able to run on UNIX-like OS platforms. We will evaluate your programs on the workstations dedicated to this course. You can access these workstations by ssh with the following information.
The workstations are based on Ubuntu 18.04 with Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz processors. g++-10, clang++-11, and Open MPI 4 have been installed.
IP | Port | User Name | Password
---|---|---|---
140.113.215.195 | 37076 ~ 37080, 37091 ~ 37094 | {student_id} | {Provided by TA}
ATTENTION: Never touch 37095. It is for NIS and NFS.
Login example:
$ ssh <student_id>@140.113.215.195 -p <port>
You can use the testing script test_hw4 to check your answer, for reference only. Run test_hw4 in a directory that contains your HW4_XXXXXXX.zip file on the workstation.
test_hw4 does the following:
- Check if the zip file is correct.
- Check for the correctness, not for the performance. (The system might be busy while you test your programs.)
You should manually run your programs to compare with the TA’s reference programs (located at /nfs/ref/). Notice that your programs will be graded on machines that only the TA can access, in order to eliminate possible resource contention.
5. Submission
All your files should be organized in the following hierarchy and zipped into a .zip file named HW4_xxxxxxx.zip, where xxxxxxx is your student ID.
Directory structure inside the zipped file:
HW4_xxxxxxx.zip (root)
- hello.cc
- pi_block_linear.cc
- pi_block_tree.cc
- pi_gather.cc
- pi_nonblock_linear.cc
- pi_one_side.cc
- pi_reduce.cc
Zip the file:
$ zip HW4_xxxxxxx.zip <file1> <file2> ...
Be sure to upload your zipped file to the new E3 e-Campus system by the due date.
You will get NO POINT if your ZIP’s name is wrong or the ZIP hierarchy is incorrect.
Due Date: 2021/5/08 00:00