Performance analysis of a parallel CFD solver in cloud computing clusters

Project Information

Discipline
Mechanical and Related Engineering (109) 
Subdiscipline
14.19 Mechanical Engineering 
Orientation
Research 
Abstract

In recent years, high performance computing (HPC) has revolutionized the world of computer simulations. With the aid of HPC tools, it is now possible to run simulations on hundreds of thousands of processors concurrently. Computational fluid dynamics (CFD) is an ideal candidate for HPC, because an enormous amount of computing power is needed to resolve all the time and length scales of fluid flows. This is due to the unsteady, non-linear, multiscale and chaotic nature of the Navier-Stokes equations that govern fluid flow phenomena. The objective of this project is to analyze the performance of a parallel solver for the unsteady, incompressible Navier-Stokes equations on cloud computing clusters. A few benchmark problems, e.g. lid-driven flow in a square cavity, Rayleigh-Benard convection in a rectangular domain, and flow over a backward-facing step, will be studied to determine the parallel performance of the developed solver. Strong and weak scaling behavior will be explored, and the communication and computation times among processors will be compared for different grid sizes. By running the parallel solver on different HPC architectures, it will be possible to identify what type of infrastructure is more suitable for next-generation CFD solvers. This project is also an exploration of cloud-based HPC performance, associated with the 'HPC Experiment (Uber Cloud Experiment)'.
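
For reference, the governing equations take the standard incompressible form (the Boussinesq buoyancy term required for Rayleigh-Benard convection is omitted here):

    \nabla \cdot \mathbf{u} = 0, \qquad
    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
        = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u},

where \mathbf{u} is the velocity, p the pressure, \rho the density and \nu the kinematic viscosity. The scaling studies follow the usual definitions: strong scaling speedup S(p) = T(1)/T(p) and efficiency S(p)/p at fixed total problem size, and weak scaling efficiency T(1)/T(p) with the problem size per processor held fixed.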

Intellectual Merit

In this research project, a parallel CFD solver will be optimized on state-of-the-art cloud computing clusters. The research results will help us better understand the bottlenecks (e.g. bandwidth or latency) of the parallel algorithm on distributed-memory systems, how memory access affects parallel performance for domains with large grid sizes, and how input/output is managed in cloud computing clusters.
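
As a minimal sketch of how such bandwidth and latency bottlenecks are typically probed (illustrative only; this is not the project's solver code, and the file name pingpong.c is hypothetical), a two-rank MPI ping-pong microbenchmark times message exchanges over a range of sizes: small messages expose interconnect latency, large messages expose sustained bandwidth.

    /* pingpong.c: minimal MPI ping-pong microbenchmark (sketch).
     * Build: mpicc -O2 pingpong.c -o pingpong
     * Run:   mpirun -np 2 ./pingpong
     * Small messages are latency-bound; large ones are bandwidth-bound. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int reps = 100;
        for (int n = 8; n <= (1 << 20); n *= 2) {   /* 8 B .. 1 MiB */
            char *buf = malloc(n);
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (int i = 0; i < reps; i++) {
                if (rank == 0) {        /* send, then wait for the echo */
                    MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else if (rank == 1) { /* echo every message back */
                    MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double dt = MPI_Wtime() - t0;
            if (rank == 0)
                printf("%8d bytes: %9.3f us one-way, %8.2f MB/s\n",
                       n, dt / (2.0 * reps) * 1e6,
                       (2.0 * reps * n) / dt / 1e6);
            free(buf);
        }
        MPI_Finalize();
        return 0;
    }

Comparing the resulting latency and bandwidth curves across the selected systems would indicate which interconnect characteristic limits the solver's communication phase.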

Broader Impacts

The project will help in developing algorithms for solving fluid flow problems on next-generation petascale supercomputers. The results will be made publicly available to the HPC community, so that anyone interested in developing HPC software for CFD can get a preliminary idea of the main performance bottlenecks.

Project Contact

Project Lead
Pratanu Roy (pratanu) 
Project Manager
Lyn Gerner (leg) 

Resource Requirements

Hardware Systems
  • alamo (Dell PowerEdge cluster at TACC)
  • sierra (IBM iDataPlex at SDSC)
  • xray (Cray XT5m at IU)
 
Use of FutureGrid

We intend to use FutureGrid as a testbed for performance analysis of a parallel CFD solver.

Scale of Use

We want to run a set of comparisons on high-performance clusters and will need 10,000 CPU hours on each of the selected systems (about 30,000 CPU hours in total across the three machines).

Project Timeline

Submitted
09/04/2012 - 12:09