Assignment 6
Due Wednesday by 11:59pm
Points 80
Submitting a file upload
Available Oct 28 at 12pm - Dec 31 at 11:59pm
Assignment 6 (80 Points)
Due November 6 at 11:59 PM
In this assignment, you will create distributed solutions (i.e., with multiple processes) for the two
programs you developed in Assignment 2 (Curve Area Calculation and Heat Transfer) using Message
Passing Interface (MPI). MPI is different from the shared memory model we have used so far, and
requires using APIs to communicate between processes.
Before starting this assignment, you should have completed the Slurm Tutorial
(https://canvas.sfu.ca/courses/84236/pages/slurm-tutorial) , which walks you through how to use our
servers for your code development. Additionally, you should also have completed the MPI tutorial
(https://canvas.sfu.ca/courses/84236/pages/slurm-mpi-tutorial) , which gives an overview of MPI and how
to correctly run MPI programs using slurm.
General Instructions:
1. You are given the serial implementations here (https://canvas.sfu.ca/courses/84236/files/24448350?wrap=1) (https://canvas.sfu.ca/courses/84236/files/24448350/download?download_frd=1).
2. MPI permits various communication strategies to pass data between processes. This assignment
uses the point-to-point communication strategy.
3. For simplicity, we only use one thread per process in this assignment. Make sure you call MPI_Finalize (https://www.open-mpi.org/doc/current/man3/MPI_Finalize.3.php) before exiting the main() function (see the minimal skeleton after this list).
4. MPI uses the distributed model where each process is completely independent and has its own
separate memory space. Remember to set the --mem option appropriately in your script.
5. While testing your solutions, make sure that --cpus-per-task is set to 1 in your slurm job script, and that --ntasks and --nodes are set based on the number of MPI processes and nodes you want. For example:
#!/bin/bash
#
#SBATCH --cpus-per-task=1
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --partition=slow
#SBATCH --mem=10G
srun ./curve_area_parallel
6. You will be asked to print the time spent by different processes on specific code regions. The time
spent by any code region can be computed as follows:
timer t1;
t1.start();
/* ---- Code region whose time is to be measured --- */
double time_taken = t1.stop();
If you need to time a sub-section inside a loop, you can do that as follows:
double time_taken = 0.0;
timer t1;
while (true) {
    /* ---- Code region whose time should not be measured --- */
    t1.start();
    /* ---- Code region whose time is to be measured --- */
    time_taken += t1.stop();
    /* ---- Code region whose time should not be measured --- */
}
std::cout << "Time spent on required code region : " << time_taken << "\n";
7. The output of the two programs can be tested by comparing the serial output to the parallel program
output. You can also modify the scripts provided with assignment 2 to test your code.
8. Since each MPI process is independent, use these rules to print your outputs:
Use printf() to avoid garbled logs. You can also concatenate the information as a string and use std::cout to print a single line of output. To add a new line, use "\n" as part of the concatenated string instead of std::endl.
You can check the rank of the process before printing as shown below:
if (world_rank == 0)
    printf("Time taken (in seconds): %g\n", time_taken);
The root process should print most of the output logs.
Non-root processes should only print the process statistics in a single line.
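To tie instructions 3 and 8 together, here is a minimal sketch of the overall structure of an MPI program for this assignment. It is only a sketch: the output string is illustrative (the required format is defined by the sample output files), and everything other than the MPI calls is a placeholder.
#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);                      // must be called before any other MPI call

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);  // total number of processes
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);  // this process's rank: 0 .. world_size-1

    if (world_rank == 0)
        printf("Number of processes : %d\n", world_size);   // illustrative output only

    // ... per-process work and point-to-point communication go here ...

    MPI_Finalize();                              // called before exiting main(), as required above
    return 0;
}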
1. Monte Carlo Curve Area Estimation using MPI
Similar to Assignment 2 (https://canvas.sfu.ca/courses/84236/assignments/1006849), you will develop a parallel solution for curve area estimation using MPI. Here, the work is distributed among P processes. The total number of points should be divided evenly among processes. Use the following pseudocode to determine the subset of points handled by each process:
// Dividing up n points among P processes.
// Total number of processes is world_size. This process's rank is world_rank.
min_points_per_process = n / world_size
excess_points = n % world_size
if (world_rank < excess_points)
    points_to_be_generated = min_points_per_process + 1
else
    points_to_be_generated = min_points_per_process
// Each process will work on points_to_be_generated and estimate curve_points.
Each process will compute the number of curve points from the total points allocated to it. Process 0 (henceforth referred to as the root process) aggregates (i.e., sums up) the local counts from the other processes (henceforth referred to as non-root processes) and computes the final curve area.
The pseudocode for question 1 is given below:
for each process P in parallel {
    local_curve_count = 0;
    for each point allocated to P {
        x_coord = (2.0 * get_random_coordinate(&random_seed)) - 1.0;
        y_coord = (2.0 * get_random_coordinate(&random_seed)) - 1.0;
        if ((a * (x_coord^2)) + (b * (y_coord^4)) <= 1.0)
            local_curve_count++;
    }
    // --- synchronization phase start ---
    if (P is root process) {
        global_curve_count = sum of local_curve_count of all the processes;
    }
    else {
        // Use the appropriate API to send local_curve_count to the root process
    }
    // --- synchronization phase end ---
    if (P is root process) {
        area = 4.0 * (double)global_curve_count / (double)n;
        // print process statistics and other results
    }
    else {
        // print process statistics
    }
}
You should use point-to-point communication, i.e., MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv() (https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php), and do the following:
Non-root processes will send their local_curve_count to the root process.
The root process receives the counts from the other processes and aggregates them to get the final area.
Note that the MPI function calls we use in this assignment are synchronous calls. So make sure that
the MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv()
(https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php) are called in the correct order in every
process.
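To make that ordering concrete, a minimal sketch of the synchronization phase is given below. It assumes the local count is kept in an unsigned long and that the root receives from ranks 1 through world_size-1 in increasing order; the function and variable names are illustrative and should be adapted to your own code.
#include <mpi.h>

// Aggregate local counts on the root using point-to-point communication (sketch).
// Returns the global count on the root; non-root processes return their local count.
unsigned long aggregate_counts(unsigned long local_curve_count,
                               int world_rank, int world_size) {
    if (world_rank == 0) {
        unsigned long global_curve_count = local_curve_count;
        for (int src = 1; src < world_size; src++) {
            unsigned long remote_count = 0;
            // The root posts one blocking receive per non-root process, in rank order.
            MPI_Recv(&remote_count, 1, MPI_UNSIGNED_LONG, src, /*tag=*/0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            global_curve_count += remote_count;
        }
        return global_curve_count;   // the root later computes area = 4.0 * global / n
    } else {
        // Non-root processes send their local count to the root (rank 0).
        MPI_Send(&local_curve_count, 1, MPI_UNSIGNED_LONG, /*dest=*/0, /*tag=*/0,
                 MPI_COMM_WORLD);
        return local_curve_count;
    }
}
Because MPI_Recv blocks until the matching send arrives, this ordering is safe: every non-root send has exactly one matching receive on the root.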
Output Format for Question 1:
1. Your solution should be named curve_area_parallel.cpp and your Makefile should produce
curve_area_parallel binary. Command line parameters to be supported:
nPoints: Total number of points used for estimating the area. This number should be divided equally among processes (with the remainder r = nPoints % world_size going to processes 0, ..., r-1).
coeffA: Value of coefficient a.
coeffB: Value of coefficient b.
rSeed: Seed of the random number generator that you use in the program.
2. Your parallel solution must output the following information:
World size (i.e., number of processes) (only root process).
For each process: the number of random points generated, the number of points inside the curve, and the time taken to generate and process these points (processes are numbered 0 to world_size-1).
The total number of points generated.
The total number of points within the curve.
The total time taken for the entire execution. This should include the communication time and
decomposition time (only root process).
Please note that the output format should strictly match the expected format (including "spaces" and "commas"). You can test your code using the provided test script. You can run the Python script only with slurm; remember to invoke the script without srun. A sample output file is provided under sample_outputs/curve_area_parallel.txt.
2. Heat Transfer using MPI
You will implement Heat Transfer (from Assignment 2) with MPI. Here, the work is distributed
among P processes. For simplicity, every process will create the whole grid in its local memory, but will
only compute for a vertical slice of the grid, similar to Assignment 2. The following pseudocode can be
used to compute the start and end column for each process:
min_columns = size / world_size;
excess_columns = size % world_size;
if (world_rank < excess_columns) {
    startx = world_rank * (min_columns + 1);
    endx = startx + min_columns;
}
else {
    startx = (excess_columns * (min_columns + 1)) + ((world_rank - excess_columns) * min_columns);
    endx = startx + min_columns - 1;
}
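If you want to sanity-check this decomposition, it helps to trace it on a small case. The snippet below uses an assumed size of 10 and 4 processes (illustrative values only, not assignment parameters) and prints the column range owned by each rank:
#include <cstdio>

int main() {
    const int size = 10, world_size = 4;      // assumed example values
    int min_columns = size / world_size;      // 2
    int excess_columns = size % world_size;   // 2
    for (int world_rank = 0; world_rank < world_size; world_rank++) {
        int startx, endx;
        if (world_rank < excess_columns) {
            startx = world_rank * (min_columns + 1);
            endx = startx + min_columns;
        } else {
            startx = (excess_columns * (min_columns + 1)) + ((world_rank - excess_columns) * min_columns);
            endx = startx + min_columns - 1;
        }
        printf("rank %d -> columns [%d, %d]\n", world_rank, startx, endx);
        // prints: rank 0 -> [0, 2], rank 1 -> [3, 5], rank 2 -> [6, 7], rank 3 -> [8, 9]
    }
    return 0;
}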
The heat transfer pseudocode is given below:
for each process P in parallel {
    for (local_stepcount = 1; local_stepcount <= tSteps; local_stepcount++) {
        Compute the Temperature Array values Curr[][] in the slice allocated to this process from Prev[][]
        // --- synchronization: Send and Receive boundary columns from neighbors
        // Even processes communicate with the right process first
        // Odd processes communicate with the left process first
        if (world_rank % 2 == 0) { // even rank
            if (world_rank < world_size - 1) { // not last process
                Send my column "end" to the right process world_rank+1
                Receive column "end+1" from the right process world_rank+1, populate local Curr Array
            }
            if (world_rank > 0) { // not first process
                Receive column "start-1" from the left process world_rank-1, populate local Curr Array
                Send my column "start" to the left process world_rank-1
            }
        } // even rank
        else { // odd rank
            if (world_rank > 0) { // not first process
                Receive column "start-1" from the left process world_rank-1, populate local Curr Array
                Send my column "start" to the left process world_rank-1
            }
            if (world_rank < world_size - 1) { // not last process
                Send my column "end" to the right process world_rank+1
                Receive column "end+1" from the right process world_rank+1, populate local Curr Array
            }
        } // odd rank
        // --- synchronization end -----
    } // end for local_stepcount
    if (P is root process) {
        // print process statistics and other results
    }
    else {
        // print process statistics and relevant point temperatures
    }
}
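One way to turn that pseudocode into MPI calls is sketched below. This is a sketch under stated assumptions, not the required implementation: it assumes the grid has size rows and is accessible as Curr[row][col] (a double**), that this process owns columns startx..endx, and that each boundary column is packed into a contiguous buffer before sending. The column index is used as the message tag here, which is the strategy suggested in note 5 of the list that follows.
#include <mpi.h>
#include <vector>

// Exchange boundary columns with neighbouring processes for one time step (sketch).
void exchange_boundaries(double **Curr, int size, int startx, int endx,
                         int world_rank, int world_size) {
    std::vector<double> send_buf(size), recv_buf(size);

    auto send_column = [&](int col, int dest) {
        // Pack the (strided) column into a contiguous buffer, then send it.
        for (int row = 0; row < size; row++) send_buf[row] = Curr[row][col];
        MPI_Send(send_buf.data(), size, MPI_DOUBLE, dest, /*tag=*/col, MPI_COMM_WORLD);
    };
    auto recv_column = [&](int col, int src) {
        // Receive a column into a contiguous buffer, then unpack it into Curr.
        MPI_Recv(recv_buf.data(), size, MPI_DOUBLE, src, /*tag=*/col, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        for (int row = 0; row < size; row++) Curr[row][col] = recv_buf[row];
    };

    if (world_rank % 2 == 0) {                 // even rank: right neighbour first
        if (world_rank < world_size - 1) {     // not last process
            send_column(endx, world_rank + 1);
            recv_column(endx + 1, world_rank + 1);
        }
        if (world_rank > 0) {                  // not first process
            recv_column(startx - 1, world_rank - 1);
            send_column(startx, world_rank - 1);
        }
    } else {                                   // odd rank: left neighbour first
        if (world_rank > 0) {
            recv_column(startx - 1, world_rank - 1);
            send_column(startx, world_rank - 1);
        }
        if (world_rank < world_size - 1) {
            send_column(endx, world_rank + 1);
            recv_column(endx + 1, world_rank + 1);
        }
    }
}
Since adjacent ranks always have opposite parity, every send in this sketch is matched by a receive on the neighbour in the same order, which is what prevents deadlock.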
Key things to note:
1. A key difference between Heat Transfer and Curve Area is that you will need to continuously communicate boundary values in each loop iteration (instead of just communicating the local curve_points at the end of each process). This continuous synchronization/communication makes it a much harder problem to reason about and debug.
2. You should use point-to-point communication, i.e., MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv() (https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php), to communicate between processes.
Note that the MPI function calls we use in this assignment are synchronous calls. So make sure that
the MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv()
(https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php) are called in the correct order in every
process.
Please observe the order in which processes send and receive data. This order has to be correct, with every MPI_Send having a corresponding MPI_Recv in the matching order; otherwise the program will stop making progress (deadlock).
3. The above pseudocode has opposite and complementary orders for even and odd processes. Please examine the order of send/receive operations carefully and reason about why this order is correct.
Please note that this order is not the only correct order. If you wish, you can use a different
communication order provided that it produces correct results.
4. Please read the syntax for MPI_Send and MPI_Recv in the MPI tutorial. You need to figure out the parameter values and how to match each send to its corresponding receive, and vice versa. You need to use the correct size and data type for the message on both the sender and the receiver.
5. A key parameter for both MPI_Send and MPI_Recv is the message tag. The tags of the sender and receiver have to match. If two different messages are sent with the same tag, a send/receive mismatch could occur. A possible strategy is to use the column number being sent or received as the message tag, so that the sender and receiver sync up on the same tag.
6. Only the time spent on synchronization phase 1 is used for calculating communication time.
7. Printing the program output has to be synchronized as well. The exact strategy is up to you. For example, process 0 can print its output first and then send a message to process 1, which is blocked on MPI_Recv; process 1 then prints its output and sends a message to process 2, and so on (a sketch of this token-passing approach is given after this list).
8. While it is better to have the order of output statements match the sample output exactly, your
grade will not be affected if the output lines are printed correctly but out of order.
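For note 7, a minimal sketch of the token-passing idea is shown below. It assumes each process has already formatted its own statistics into a std::string called stats_line (an illustrative name); the token value itself is arbitrary.
#include <mpi.h>
#include <cstdio>
#include <string>

// Print one line per process in rank order by passing a token over MPI (sketch).
void print_in_rank_order(const std::string &stats_line, int world_rank, int world_size) {
    int token = 0;
    if (world_rank > 0) {
        // Wait until the previous rank has finished printing.
        MPI_Recv(&token, 1, MPI_INT, world_rank - 1, /*tag=*/0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    printf("%s\n", stats_line.c_str());
    fflush(stdout);                       // push the line out before handing over the token
    if (world_rank < world_size - 1) {
        MPI_Send(&token, 1, MPI_INT, world_rank + 1, /*tag=*/0, MPI_COMM_WORLD);
    }
}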
Output Format for Question 2:
1. Your solution should be named heat_transfer_parallel.cpp and your Makefile should
produce heat_transfer_parallel binary. Command line parameters to be supported:
gSize: Grid size. The size of the temperature array is gSize x gSize.
mTemp: Temperature values in the middle of the array, from [gSize/3, gSize/3] to [2*gSize/3, 2*gSize/3].
iCX: Coefficient of horizontal heat transfer.
iCY: Coefficient of vertical heat transfer.
tSteps: Time steps of the simulation
2. Your parallel solution must output the following information:
World size (i.e., number of processes) (only root process).
Grid size.
Values of iCX, iCY, mTemp and tSteps
For each process: process id, start column, end column, time taken.
Temperatures at end of simulation for points at [0,0], [gSize/6, gSize/6], [gSize/3, gSize/3],
[gSize/2, gSize/2], [2*gSize/3, 2*gSize/3], [5*gSize/6, 5*gSize/6].
Temperatures at the right boundary of all processes: [endx[0], endx[0]], [endx[1], endx[1]], ..., [endx[world_size-1], endx[world_size-1]].
The total time taken for the entire execution. This should include the communication time (only
root process).
Please note that the output format should strictly match the expected format (including "spaces" and
"commas"). The sample console output can be found in sample_outputs/heat_transfer_parallel.txt .
3. Assignment Report
In addition to your parallel code, you need to submit a report (in pdf format) that answers the following
questions:
Q1. Run your curve_area_parallel program from part 1 with 1, 2, 4, and 8 processes and the following parameters: coeffA=1.2, coeffB=0.8, rSeed=129, nPoints=4,000,000,000 (i.e., 4 billion). Each of your parallel programs should run 3 times. [Total number of runs is 4 (different process counts) x 3 (number of runs for each process count) = 12 runs]
Plot a graph with average execution time on the y-axis, process count on the x-axis.
Q2. From the plot in Q1, what is the parallel speedup for 2, 4, and 8 processes (compared to 1 process)?
Is this problem embarrassingly parallel?
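If you are unsure how to compute the speedup, the usual definition (assumed here to be what the question expects) is the ratio of average execution times:
speedup(p) = average execution time with 1 process / average execution time with p processes
The same definition applies to Q4.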
Q3. Run your heat_transfer_parallel program from part 2 with 1, 2, 4, and 8 processes and the following parameters: gSize=4000, mTemp=600, iCX=0.15, iCY=0.1, tSteps=1000 (if you already used 2000, that is also acceptable). Each of your parallel programs should run 3 times. [Total number of runs is 4 (different process counts) x 3 (number of runs for each process count) = 12 runs]
Plot a graph with average execution time on the y-axis, process count on the x-axis.
Q4. From the plot in Q3, what is the parallel speedup for 2, 4, and 8 processes (compared to 1 process)?
Is this problem embarrassingly parallel?
Submission Guidelines
Make sure that your solutions folder has the following files and sub-folders. Let's say your solutions folder is called my_assignment6_solutions. It should contain:
core/ -- The folder containing all core files. It is already available in the assignment 6 package.
Do not modify it or remove any files.
Makefile -- Makefile for the project. This is the same Makefile provided in the serial package. Do
not modify it.
curve_area_parallel.cpp
heat_transfer_parallel.cpp
report.pdf -- A pdf file that includes answers to the questions in the previous section.
To create the submission file, follow the steps below:
1. Enter your solutions folder and remove all the object/temporary files.
$ cd my_assignment6_solutions/
$ make clean
2. Create the tar.gz file.
$ tar cvzf assignment6.tar.gz *
which creates a compressed tar ball that contains the contents of the folder.
3. Validate the tar ball using the submission_validator.py script.
$ python scripts/submission_validator.py --tarPath=/assignment6.tar.gz
Submit via Canvas by the deadline.
