High Performance Computing and Simulations

 

I am currently a third-year PhD student in the Department of Earth Sciences. My research so far has focused on numerical modeling of dynamic earthquake ruptures using the finite difference and finite element methods. I am also very interested in numerical modeling of mantle convection and am doing some work in that area as well. My goal in taking this course is to learn more about general high performance computing techniques as well as some specific methods such as the multigrid method and the kinetic Monte Carlo method.
 
A high performance computing application: the Regional Ocean Modeling System
a. Problem description

Studying the ocean circulation system is essential for understanding the current climate and for predicting future climate change. However, oceanographic instruments are sparsely deployed, and satellite data cover only the ocean surface. Three-dimensional ocean modeling can provide the below-surface data. The Regional Ocean Modeling System (ROMS), a regional ocean general circulation modeling system that solves the free-surface, hydrostatic, primitive equations over varying topography, is used to study both complex coastal ocean problems and basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems; this case study examines its parallelization with MPI.
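For orientation, a standard form of the free-surface, hydrostatic, primitive equations (the exact ROMS formulation in terrain-following coordinates is given by Shchepetkin and McWilliams, 2004) combines horizontal momentum, hydrostatic balance, and continuity:

\[
\frac{Du}{Dt} - fv = -\frac{1}{\rho_0}\frac{\partial p}{\partial x} + \mathcal{F}_u,
\qquad
\frac{Dv}{Dt} + fu = -\frac{1}{\rho_0}\frac{\partial p}{\partial y} + \mathcal{F}_v,
\]
\[
\frac{\partial p}{\partial z} = -\rho g,
\qquad
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0,
\]

where f is the Coriolis parameter, \rho_0 a reference density, and \mathcal{F}_u, \mathcal{F}_v stand for forcing and mixing terms.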

b. Shared-memory ROMS

The shared-memory ROMS (Shchepetkin and McWilliams, 2004; Song and Haidvogel, 1994) solves the 3D, free-surface, primitive equations separately for an external mode, which represents the vertically averaged flow, and an internal mode, which represents deviations from the vertical average. The two modes are coupled through the non-linear and pressure-gradient terms. The model has been shown to handle irregular coastal geometry, continental shelf/slope topography, and strong atmospheric forcing.
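Concretely, the splitting writes each horizontal velocity as its vertical average plus a deviation. With \zeta the free surface, h the resting depth, and D = h + \zeta the total water column, a standard form (written here for illustration, not copied from the ROMS papers) is

\[
\bar{u}(x,y,t) = \frac{1}{D}\int_{-h}^{\zeta} u \, dz,
\qquad
u = \bar{u} + u',
\]

so the external (barotropic) mode advances \bar{u} and \zeta with a short time step dictated by fast surface gravity waves, while the internal (baroclinic) mode advances the deviation u' with a much longer step.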

c. Parallelization of ROMS and performance tests on supercomputers

MPI is used to parallelize the shared-memory ROMS in order to obtain high performance and good portability. The horizontal 2D computing domain was chosen for decomposition because the vertical length scale is much smaller than the horizontal scales. The parallel version of ROMS was tested on both the SGI Origin 2000 at JPL in Pasadena, CA and the NASA Columbia supercomputer, an SGI Altix (which currently ranks 4th on the 26th TOP500 list of the world's fastest computers). One unique capability of MPI ROMS is that it can simulate both the large-scale ocean over the whole globe at lower resolution and the small-scale circulation over a selected area of interest.
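To make the decomposition concrete, the sketch below sets up a 2D Cartesian process grid and exchanges one halo column between east-west neighbors, which is the communication pattern a horizontal domain decomposition needs at every time step. It is a minimal illustration in C, assuming one-cell halos and illustrative subdomain sizes; it is not taken from the ROMS source.

/* Minimal 2D horizontal domain decomposition sketch (not ROMS code). */
#include <mpi.h>
#include <stdlib.h>

#define NX_LOCAL 64   /* local subdomain width  (illustrative) */
#define NY_LOCAL 64   /* local subdomain height (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Let MPI choose a balanced 2D process grid. */
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Dims_create(nprocs, 2, dims);

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    /* Neighbors for the halo exchange; edges get MPI_PROC_NULL. */
    int west, east, south, north;
    MPI_Cart_shift(cart, 0, 1, &west, &east);
    MPI_Cart_shift(cart, 1, 1, &south, &north);

    /* One field on the local subdomain with a one-cell halo,
     * stored row-major as field[i * (NY_LOCAL + 2) + j]. */
    double *field = calloc((NX_LOCAL + 2) * (NY_LOCAL + 2), sizeof *field);

    /* Send the westernmost interior column to the west neighbor and
     * receive the east halo column from the east neighbor; the mirror
     * exchange (send east, receive west) would complete the step. */
    MPI_Sendrecv(&field[1 * (NY_LOCAL + 2) + 1], NY_LOCAL, MPI_DOUBLE, west, 0,
                 &field[(NX_LOCAL + 1) * (NY_LOCAL + 2) + 1], NY_LOCAL,
                 MPI_DOUBLE, east, 0, cart, MPI_STATUS_IGNORE);

    free(field);
    MPI_Finalize();
    return 0;
}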

The following two figures show the wallclock time of the MPI code on the SGI Origin 2000 (Fig. 1) and the SGI Altix (Fig. 2) when integrating models of different grid sizes for a fixed total simulation time using different numbers of processors.
Fig. 1
Fig. 2
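As a sketch of how wallclock numbers like those in Figs. 1 and 2 can be collected, the program below times a fixed number of steps on every rank and reports the maximum, since the slowest rank determines the observed wallclock time. The step() routine is a hypothetical stand-in for one ROMS time step.

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for one model time step. */
static void step(void) { usleep(1000); }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int nsteps = 1000;                 /* fixed total simulation length */
    double t0 = MPI_Wtime();
    for (int n = 0; n < nsteps; n++)
        step();
    double local = MPI_Wtime() - t0;

    /* The slowest rank sets the observed wallclock time. */
    double wallclock;
    MPI_Reduce(&local, &wallclock, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d processors: %.2f s\n", nprocs, wallclock);

    MPI_Finalize();
    return 0;
}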
The following two figures show the speedup of the parallel MPI ROMS for several problem sizes on both the SGI Origin 2000 (Fig. 3) and the SGI Altix (Fig. 4). Both systems give excellent speedup as the number of processors grows. Superlinear scalability is achieved at 20 processors for a problem with a grid size of 256 x 256 x 20 on the SGI Origin 2000, and at 200 processors for a problem with a grid size of 1520 x 1088 x 30 on the SGI Altix.
Fig. 3
Fig. 4
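For reference, the curves in Figs. 3 and 4 follow the standard definitions of speedup and parallel efficiency, with T(p) the wallclock time on p processors:

\[
S(p) = \frac{T(1)}{T(p)},
\qquad
E(p) = \frac{S(p)}{p},
\]

and superlinear scalability means S(p) > p, i.e. E(p) > 1.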
The slight non-linear response of the speedup curves at larger processor counts is due to the increase in communication work for a fixed-size problem: once the problem size becomes smaller than the number of CPUs multiplied by the cache size, the communication overhead causes performance degradation.
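A rough estimate makes the cache effect plausible (the cache sizes here are assumptions, not figures from the paper). One double-precision 3D field on the smaller grid occupies

\[
256 \times 256 \times 20 \times 8\,\mathrm{B} \approx 10\,\mathrm{MB},
\]

so on 20 processors each subdomain holds roughly 0.5 MB per field, small enough that several fields fit in the multi-megabyte per-CPU caches typical of these systems. Once the working set is served mostly from cache, per-processor throughput rises and the measured speedup can exceed the processor count.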
d. Visualization of simulation results
Ocean model data is always very challenging to deal with because of its sheer volume: higher resolution is required to better represent boundary currents and eddies, and longer integrations are required for climate studies. Fig. 5 shows a snapshot of simulated sea surface temperature from the MPI ROMS North Pacific model, with a domain extending in latitude from 45 degrees South to 65 degrees North and in longitude from 100 degrees East to 70 degrees West. Modern hardware allows interactive animation. Currently the most common visualization tools in ocean modeling are Matlab (Mathworks Inc.) and Ferret (NOAA/PMEL/TMAP); the open source package OpenDX, however, is gaining more and more interest. Fig. 6 shows an image created with OpenDX, using streamlines to show upwelling in a 3D simulation of a confined vortex in a circular well.
Fig. 5
Fig. 6
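ROMS writes its output as NetCDF files, which tools such as Ferret read directly. As a minimal sketch of how a single field such as the SST snapshot in Fig. 5 could be dumped for visualization (the file name, variable names, and grid sizes below are illustrative, and error checking is omitted), using the NetCDF C API:

#include <netcdf.h>
#include <stdlib.h>

#define NLAT 110   /* illustrative grid sizes, not the model's */
#define NLON 190

int main(void)
{
    /* sst[lat][lon]: one snapshot of sea surface temperature. */
    double *sst = calloc(NLAT * NLON, sizeof *sst);

    int ncid, dimids[2], varid;
    nc_create("sst_snapshot.nc", NC_CLOBBER, &ncid);  /* hypothetical name */
    nc_def_dim(ncid, "lat", NLAT, &dimids[0]);
    nc_def_dim(ncid, "lon", NLON, &dimids[1]);
    nc_def_var(ncid, "sst", NC_DOUBLE, 2, dimids, &varid);
    nc_put_att_text(ncid, varid, "units", 7, "Celsius");
    nc_enddef(ncid);

    nc_put_var_double(ncid, varid, sst);  /* write the whole field */
    nc_close(ncid);

    free(sst);
    return 0;
}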
e. References
1. Wang, P., Song, Y.T., Chao, Y. and Zhang, H., 2005. Parallel computation of the regional ocean modeling system. The International Journal of High Performance Computing Applications, 19:375-384.
2. Shchepetkin, A.F. and McWilliams, J.C., 2004. The regional oceanic modeling system: a split-explicit, free-surface, topography-following-coordinate ocean model. Ocean Modelling, 9:347-404.
3. Song, Y.T. and Haidvogel, D., 1994. A semi-implicit ocean circulation model using a generalized topography-following coordinate system. Journal of Computational Physics, 115:228-244.
4. Ferret software reference website: http://ferret.pmel.noaa.gov/Ferret/
5. OpenDX software reference website: http://www.opendx.org
6. The OpenDX visualization in Fig. 6 was produced by Duncan Galloway, Patrick Collins, Eric Wolanski, Brian King and Peter Doherty of the Australian Institute of Marine Science.