e-Science 2008: 4th IEEE International Conference on e-Science

Main Conference Sessions

SWARM: Scheduling Large-scale Jobs over the Loosely-Coupled HPC Clusters

Authors

  • Sangmi Pallickara, Indiana University
  • Marlon Pierce, Indiana University

Abstract

Compute-intensive scientific applications are heavily reliant on the available quantity of computing resources. The Grid paradigm provides a large-scale computing environment for scientific users. However, conventional Grid job submission tools do not provide a high-level job scheduling environment for these users across multiple institutions. For extremely large numbers of jobs, a more scalable job scheduling framework that can leverage highly distributed clusters and supercomputers is required. In this paper, we propose a high-level job scheduling Web service framework, Swarm. Swarm is developed for scientific applications that must submit massive numbers of high-throughput jobs or workflows to highly distributed computing clusters. The Swarm service itself is designed to be extensible, lightweight, and easily installable on a desktop or small server. As a Web service, derivative services based on Swarm can be straightforwardly integrated with Web portals and science gateways. This paper provides the motivation for this research, the architecture of the Swarm framework, and a performance evaluation of the system prototype.
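
As a rough illustration only, the sketch below shows how a client might batch-submit high-throughput jobs to a Swarm-style scheduling Web service from a script or portal backend. The endpoint URL, the job-description fields, and the returned ticket identifiers are hypothetical placeholders, not the actual Swarm interface described in the paper.

    # Hypothetical client sketch for a Swarm-style job scheduling Web service.
    # The endpoint, payload fields, and response format are illustrative only.
    import json
    import urllib.request

    SWARM_URL = "http://localhost:8080/swarm/jobs"  # placeholder endpoint

    def submit_jobs(executable, argument_sets):
        """Submit one job per argument set; return the server-issued ticket IDs."""
        tickets = []
        for args in argument_sets:
            payload = json.dumps({"executable": executable,
                                  "arguments": args}).encode("utf-8")
            request = urllib.request.Request(
                SWARM_URL,
                data=payload,
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(request) as response:
                # Assume the service answers with a JSON body containing a ticket ID.
                tickets.append(json.loads(response.read())["ticket"])
        return tickets

    if __name__ == "__main__":
        # Example: a parameter sweep of 1,000 runs of the same executable.
        ticket_ids = submit_jobs("/usr/local/bin/analysis",
                                 [[f"input_{i}.dat"] for i in range(1000)])
        print(f"Submitted {len(ticket_ids)} jobs")

The point of the sketch is that the client only describes jobs and collects tickets; matching jobs to distributed clusters and supercomputers is left to the scheduling service, as in the Swarm design.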

Date and Time

Friday, December 12, 2 p.m. to 2:30 p.m.

Room Number

208
