Elastic Site - Using Clouds to Elastically Extend Site Resources

Project Information

Discipline
Computer Science (401) 
Subdiscipline
11.04 Information Sciences and Systems 
Orientation
Research 
Abstract

Infrastructure-as-a-Service (IaaS) cloud computing offers new possibilities to scientific communities. One of the most significant is the ability to elastically provision and relinquish new resources in response to changes in demand. In our work, we develop a model of an “elastic site” that efficiently adapts services provided within a site, such as batch schedulers, storage archives, or Web services, to take advantage of elastically provisioned resources. We describe the system architecture along with the issues involved in elastic provisioning, such as security, privacy, and various logistical considerations. To avoid over- or under-provisioning resources, we propose three different policies for efficiently scheduling resource deployment based on demand. We have implemented a resource manager, built on the Nimbus toolkit, that dynamically and securely extends existing physical clusters into the cloud. Our elastic site manager interfaces directly with local resource managers, such as Torque. We develop and evaluate policies for resource provisioning on a Nimbus-based cloud, and demonstrate a dynamic and responsive elastic cluster capable of responding effectively to a variety of job submission patterns. We also demonstrate that we can process jobs 10 times faster by expanding our cluster up to 150 EC2 nodes.
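
As a concrete illustration of how such an elastic site manager can make scaling decisions, the sketch below shows a minimal on-demand policy loop in Python. It is only one plausible reading of the on-demand approach, not the project's implementation: the qstat parsing is simplified, and launch_vm()/terminate_all_vms() are hypothetical placeholders standing in for the actual Nimbus provisioning calls.

    import subprocess
    import time

    MAX_VMS = 150  # cap matching the 150-node EC2 experiment above

    def launch_vm():
        # Placeholder: the real manager would boot one worker VM through
        # Nimbus (or EC2) and register it with the Torque head node.
        pass

    def terminate_all_vms():
        # Placeholder: tear down all cloud workers once the queue drains.
        pass

    def queued_jobs():
        # Count jobs waiting in the Torque queue (state 'Q'); the first
        # two lines of qstat output are headers.
        out = subprocess.check_output(["qstat"], text=True)
        rows = (line.split() for line in out.splitlines()[2:])
        return sum(1 for cols in rows if len(cols) >= 5 and cols[4] == "Q")

    def on_demand_target(running, queued):
        # On-demand policy: one worker per queued job, capped at MAX_VMS;
        # release everything once no jobs are waiting.
        return 0 if queued == 0 else min(running + queued, MAX_VMS)

    running = 0
    while True:
        target = on_demand_target(running, queued_jobs())
        if target > running:
            for _ in range(target - running):
                launch_vm()
        elif target == 0 and running > 0:
            terminate_all_vms()
        running = target
        time.sleep(30)  # re-evaluate demand every 30 seconds

The other two policies would differ only in how the target is computed, e.g. keeping a steady stream of workers alive between jobs rather than releasing them the moment the queue drains.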

Intellectual Merit

Discover best practices for highly scalable, embarrassingly parallel applications using backfill VM techniques.

Broader Impacts

Through this project, we connect to the existing, sharable cloud community, schedule our jobs onto cloud resources, and complete the work easily with good performance.

Project Contact

Project Lead
Ravi Teja Mahavrathayajula (ravi-it09) 
Project Manager
Ravi Teja Mahavrathayajula (ravi-it09) 

Resource Requirements

Hardware Systems
  • alamo (Dell PowerEdge at TACC)
  • foxtrot (IBM iDataPlex at UF)
  • hotel (IBM iDataPlex at U Chicago)
  • sierra (IBM iDataPlex at SDSC)
 
Use of FutureGrid

I intend to use the FutureGrid Nimbus clouds.
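
Because Nimbus exposes an EC2-compatible query interface, workers on these clouds can also be provisioned programmatically. The snippet below is a minimal sketch using boto against the hotel cloud; the endpoint, port, path, image name, and credential strings are illustrative assumptions (the real values come from the FutureGrid credentials and documentation), not verified settings.

    from boto.ec2.connection import EC2Connection
    from boto.ec2.regioninfo import RegionInfo

    # Assumed connection details for the Nimbus EC2-compatible interface
    # on hotel; substitute the host, port, and path from the FutureGrid
    # documentation and your own cloud credentials.
    region = RegionInfo(name="nimbus", endpoint="hotel.futuregrid.org")
    conn = EC2Connection("ACCESS_KEY", "SECRET_KEY",
                         region=region, port=8444,
                         path="/api/2.0", is_secure=True)

    # Boot one worker VM from a hypothetical worker-node image.
    reservation = conn.run_instances("elastic-worker",  # image name: assumption
                                     min_count=1, max_count=1,
                                     instance_type="m1.small")
    print(reservation.instances[0].id)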

Scale of Use

8 GB RAM, 100 GB hard disk, 5 virtual machines for a period of 45 days

Project Timeline

Submitted
05/10/2013 - 10:47