Monday November 24, 2014
This problem challenges you to create a hybrid MPI+OpenMP model of a rumor spreading through a population.
The model in this problem is based on Shodor's Rumor Mill system dynamics model (http://shodor.org/talks/ncsi/vensim/RumorMill.html). You may find it helpful to reference that model as you solve the problem.
This challenge problem is also similar to six previous "Hybrid Parallel" challenge problems, and you may find it helpful to reference those as you solve the problem:
http://hpcuniversity.org/students/weeklyChallenge/81/
http://hpcuniversity.org/students/weeklyChallenge/82/
http://hpcuniversity.org/students/weeklyChallenge/83/
http://hpcuniversity.org/students/weeklyChallenge/85/
http://hpcuniversity.org/students/weeklyChallenge/86/
http://hpcuniversity.org/students/weeklyChallenge/87/
Your task is to implement a hybrid MPI+OpenMP parallel program, wherein each MPI process spawns OpenMP threads. The threads are each responsible for running a simulation with different parameters. The parameters are determined by the thread's MPI rank and OpenMP thread number, as follows:
if mpi_size = 1, spreading_probability = 0. Otherwise, spreading_probability = 0.005 * mpi_rank / (mpi_size - 1).
if num_threads = 1, rationality_rate = 0. Otherwise, rationality_rate = thread_num / (num_threads - 1).
The other parameters in the model are constant for all threads:
INITIAL_GULLIBLES = 999
INITIAL_RUMOR_MONGERERS = 1
INITIAL_LOYALISTS = 0
START_TIME = 0
END_TIME = 50
TIME_STEP = 0.015625
At each current_time_step from START_TIME through END_TIME, incrementing by TIME_STEP, each OpenMP thread performs the following calculations:
gullibles{new} = gullibles{old} - TIME_STEP * mongerization{old}
rumor_mongerers{new} = rumor_mongerers{old} + TIME_STEP * (mongerization{old} - coming_to_senses{old})
loyalists{new} = loyalists{old} + TIME_STEP * coming_to_senses{old}
mongerization{new} = spreading_probability * gullibles{new} * rumor_mongerers{new}
coming_to_senses{new} = rationality_rate * rumor_mongerers{new}
At the end of the simulation, the thread stores the final amounts of gullibles, rumor mongerers, and loyalists in 3 arrays, one for each quantity. The arrays are shared by all threads and indexed by each thread's OpenMP thread number. Thus, each MPI process will have 3 arrays that contain all of its threads' final amounts.
Each MPI process is responsible for sending its final data to MPI process rank 0, which is responsible for printing the results in the following format:
X1 Y1 A1 B1 C1
X2 Y2 A2 B2 C2
...
where X is spreading_probability, Y is rationality_rate, A is the final amount of gullibles, B is the final amount of rumor_mongerers, and C is the final amount of loyalists. This list of results should be ordered first by MPI rank and then by OpenMP thread number.
Assume that the populations of gullibles, rumor_mongerers, and loyalists can contain fractional values.
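One way to sketch the collection step: because MPI_Gather concatenates each process's send buffer in rank order, gathering the per-thread arrays (which are already indexed by OpenMP thread number) yields exactly the required ordering, first by MPI rank and then by thread number. The array names below are illustrative, not prescribed by the problem.

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Gather each process's num_threads-long result arrays onto rank 0
   and print one "X Y A B C" line per simulation, ordered by rank
   then by thread number. */
void gather_and_print(int mpi_rank, int mpi_size, int num_threads,
                      const double *gullibles_final,
                      const double *mongerers_final,
                      const double *loyalists_final) {
    double *all_g = NULL, *all_r = NULL, *all_l = NULL;
    if (mpi_rank == 0) {
        all_g = malloc(mpi_size * num_threads * sizeof(double));
        all_r = malloc(mpi_size * num_threads * sizeof(double));
        all_l = malloc(mpi_size * num_threads * sizeof(double));
    }
    /* Receive buffers are only significant at the root. */
    MPI_Gather(gullibles_final, num_threads, MPI_DOUBLE,
               all_g, num_threads, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Gather(mongerers_final, num_threads, MPI_DOUBLE,
               all_r, num_threads, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Gather(loyalists_final, num_threads, MPI_DOUBLE,
               all_l, num_threads, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (mpi_rank == 0) {
        for (int rank = 0; rank < mpi_size; rank++) {
            for (int t = 0; t < num_threads; t++) {
                int i = rank * num_threads + t;
                /* Recompute each simulation's parameters for printing. */
                double x = (mpi_size == 1) ? 0.0
                         : 0.005 * rank / (mpi_size - 1);
                double y = (num_threads == 1) ? 0.0
                         : (double)t / (num_threads - 1);
                printf("%f %f %f %f %f\n",
                       x, y, all_g[i], all_r[i], all_l[i]);
            }
        }
        free(all_g);
        free(all_r);
        free(all_l);
    }
}
```

Recomputing the parameters on rank 0 (rather than gathering them) works because they are a deterministic function of rank and thread number; alternatively, each process could gather five arrays instead of three.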
A sample output file for a working program running with 6 MPI processes and 5 OpenMP threads is provided in the "Hybrid Parallel Rumor Mill sample output" file below.
Challenge Resources:
Hybrid Parallel Rumor Mill solution: Solution to the Hybrid Parallel Rumor Mill challenge problem in C.
Hybrid Parallel Rumor Mill sample output: Sample output file for the "Hybrid Parallel Rumor Mill" challenge problem from a working program running with 6 MPI processes and 5 OpenMP threads.