Quantifying User Effort to Migrate MPI Applications

Project Information

Discipline
Computer Science (401) 
Orientation
Research 
Abstract

Today’s scientific computing infrastructures provide scientists with easy access to a wide variety of computing resources. However, migrating applications to new computing sites can be tedious and time-consuming. We propose to quantify this effort in terms of time. We will conduct a study that measures how long it takes scientists to get their applications running at various FutureGrid sites. Our study will focus on MPI applications and will measure how long it takes to migrate both binaries and source code. We will measure the effort required for manual migration and contrast it with the effort of using FEAM, our Framework for Efficient Application Migration.

Intellectual Merit

We will identify the common tasks involved in getting MPI applications running at new computing sites. We will also determine how much time different groups of participants take to carry out these tasks. We will group participants based on categories such as self-reported experience, experience as assessed by our tests, migration success rate, and migration time per site. Our research will provide experimental data for quantifying the user effort required to migrate MPI applications to new resources.

Broader Impacts

By identifying the time-consuming tasks for users who are getting started at new computing sites, we will provide the community with information that can be used to enhance the usability of existing cyberinfrastructures, especially for non-expert users. We will publish our findings to disseminate this knowledge to the community. Our research will also provide non-expert individuals with first-hand experience with national HPC resources.

Project Contact

Project Lead
Karolina Sarnowska-Upton (sarnowsk) 
Project Manager
Karolina Sarnowska-Upton (sarnowsk) 

Resource Requirements

Hardware Systems
  • alamo (Dell PowerEdge at TACC)
  • hotel (IBM iDataPlex at U Chicago)
  • india (IBM iDataPlex at IU)
  • sierra (IBM iDataPlex at SDSC)
  • xray (Cray XT5m at IU)
 
Use of FutureGrid

We would like to use FutureGrid resources to run MPI applications. Head nodes would be used for compilation and debugging, while compute nodes would be used to run the MPI applications.
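
To illustrate the kind of work involved, a minimal MPI test program of the sort exercised in our migration runs is sketched below. The mpicc wrapper and mpirun launcher named in the comments are typical but site-dependent, and the file name hello_mpi.c is purely illustrative rather than part of the FEAM framework.

    /* hello_mpi.c - minimal MPI test program.
     * Typical head-node build (wrapper name varies by site):
     *   mpicc -o hello_mpi hello_mpi.c
     * Typical compute-node launch via the site's batch system, e.g.:
     *   mpirun -np 8 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                   /* start the MPI runtime   */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total process count     */
        MPI_Get_processor_name(name, &name_len);  /* compute node hostname   */

        printf("Rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();                           /* shut down the runtime   */
        return 0;
    }

Repeating this compile-and-launch cycle at each site, with whatever site-specific adjustments it requires, is the effort our study is designed to measure.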

Scale of Use

We would like access for several months to a variety of resources where MPI applications can be run. These resources will not be heavily utilized; we would submit at most on the order of tens of jobs per week to carry out our migration tests.

Project Timeline

Submitted
04/11/2012 - 15:09