Sunday October 10, 2010
The National Science Foundation uses six well-known scientific applications to benchmark its high-performance machines: WRF, OOCORE, HOMME, MILC, PARATEC, and GAMESS, along with the famed LINPACK. The performance of these benchmarks is indicative of the expected performance of various types of codes. The benchmarks are also a central factor in the grant application process, which gives them very tangible value.

Background


You can find a description of these applications here (look for the section entitled "3.0 - Application Benchmarks"). The LINPACK benchmark goes by the name "HPL" (High Performance Linpack) on that same page, in the section entitled "2.0 - System Architecture Benchmarks." Links to download all of the sources are provided on that page.

Problem


You should consider the following machines:

- Lonestar
- Steele
- Queenbee
- Longhorn
- Purdue Condor Pool

To run a benchmark on any machine, you must first SSH to the "login node" of that resource, build your executable, and then submit a batch job to the local queuing system (batch scheduler).
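The workflow above can be sketched as a terminal session. This is only an illustration, not site documentation: the hostname, module names, parallel-environment settings, queue name, and the `ibrun` launcher are all assumptions that vary from machine to machine, so check each resource's user guide.

```shell
# Hypothetical session -- hostnames, modules, and scheduler syntax are
# assumptions; consult the machine's user guide for the real values.

# 1. SSH to the login node of the resource (example hostname only)
ssh username@lonestar.tacc.utexas.edu

# 2. Load a compiler and MPI stack, then build the benchmark
module load intel mvapich2     # module names differ per machine
make                           # build per the benchmark's own instructions

# 3. Write a small batch script and submit it to the local scheduler
cat > milc.job <<'EOF'
#!/bin/bash
#$ -N milc_benchmark       # job name (SGE-style directives; syntax varies)
#$ -pe 16way 64            # 64 cores, 16 per node
#$ -q normal               # queue name
#$ -l h_rt=01:00:00        # one-hour wall-clock limit
ibrun ./su3_rmd < input.params   # ibrun: one site's MPI launcher; often mpirun
EOF
qsub milc.job              # submit; the scheduler runs it on compute nodes
```

Never run the benchmark itself on the login node; it is a shared machine, and the batch scheduler exists to give your job dedicated compute nodes.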
Benchmark at least one of these machines with OOCORE, MILC, GAMESS, and LINPACK as specified by the NSF document. Try to improve the performance of the benchmarks by optimizing the compile-time and run-time environments. Do not change the source code. You may want to look at compilers, libraries, and/or flags. Benchmark every machine you have access to with MILC.
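Since the source code must stay fixed, the tuning knobs are the compiler, the math libraries, and the optimization flags. The sketch below shows the kind of compile-time experiment you might run for HPL; the module and architecture names are assumptions (run `module avail` on each machine to see what actually exists there).

```shell
# Sketch of compile-time experiments for HPL. All module names and
# Make.<arch> names below are assumptions, not guaranteed to exist.

# Baseline: GNU compilers with a reference BLAS
module load gcc openmpi
make arch=Linux_GNU

# Experiment: vendor compiler plus a tuned BLAS, often a large win for HPL
module swap gcc intel
module load mkl
# In Make.Linux_Intel, point LAlib at the tuned BLAS and raise the
# optimization level, for example:
#   LAlib   = -mkl
#   CCFLAGS = -O3 -xHost
make arch=Linux_Intel clean_arch_all
make arch=Linux_Intel
```

Record the wall-clock time (or GFLOPS figure the benchmark reports) for each combination so you can say concretely which environment maximized performance.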

Discuss the performance of the benchmarks and which compile-time and run-time environments maximized performance. In particular, what types of algorithms are the various machines good at performing? What types of machines run MILC the most efficiently? Why?

Consider the types of demands imposed by the application (compute, network/interconnect, file I/O, etc.) and the strengths or weaknesses of each machine and its subsystems. Be aware that most HPC machines have multiple file systems, such as $HOME (your login directory) and $SCRATCH (a parallel file system optimized for performance).
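For I/O-heavy benchmarks, where your job reads and writes files can change the measured performance. A common pattern, sketched below, is to stage the run into $SCRATCH and copy only the small result files back to $HOME. The variable names are conventional but site-specific; some machines use $WORK or another name for the fast file system.

```shell
# Sketch: stage an I/O-heavy run onto the parallel scratch file system.
# $SCRATCH is assumed to be set by the site's login environment.
RUN_DIR="$SCRATCH/milc_run"
mkdir -p "$RUN_DIR"
cp "$HOME/input.params" "$RUN_DIR/"   # stage the input out of $HOME
cd "$RUN_DIR"

# ... launch the benchmark here so all heavy I/O lands on $SCRATCH ...

cp timing.out "$HOME/results/"        # keep only small results in $HOME
```

Running out of $HOME instead can serialize I/O through a slower NFS server, so the same binary may post noticeably different numbers depending on the working directory.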

Recommended Readings
Benchmarking Background

HPL Benchmark