Introducing FutureGrid, Gordon and Keeneland (90 mins)

In 2009, the NSF made awards to fund the development, deployment, and support of three innovative new TeraGrid high performance computing resources: FutureGrid, Gordon, and Keeneland. They address, respectively, prototyping for HPC grids and clouds, data-intensive computing with large memory, and heterogeneous supercomputing based on graphics processors. The workshop will describe these systems, typical applications and programming/usage models, and the opportunities to use or collaborate with them.

Each of these systems has unique features that make it particularly well suited to certain classes of problems.

FutureGrid is a distributed system that allows scientists to collaboratively develop and test innovative approaches to parallel, grid, and cloud computing. The testbed is composed of a high-speed network connecting clusters of high-performance computers with about 4,000 cores distributed across Indiana University, University of Chicago, University of Florida, San Diego Supercomputer Center, Texas Advanced Computing Center, and Purdue University. FutureGrid employs virtualization technology and supports cloud-provisioning middleware, allowing the testbed to host a wide range of customized images. The FutureGrid system helps researchers identify the cyberinfrastructure that best suits their scientific needs, and enables seamless provisioning of virtual appliance clusters for educational and training activities.

The Gordon system and its smaller prototype, Dash, were specifically designed to handle large-memory and data-intensive problems. Dash is already in production, and Gordon is expected to be available at SDSC in late 2011. Gordon will be composed of 32 supernodes, each consisting of 32 compute nodes and two I/O nodes.
Each compute node will contain two Intel Sandy Bridge processors and 64 GB of memory, and the peak performance of Gordon is expected to exceed 200 TFlops. The 64 I/O nodes are each populated with 4 TB of Intel enterprise flash memory and provide access to a 4 PB parallel file system at 100 GB/s. The supernodes will support vSMP software, developed by ScaleMP, which aggregates memory from multiple compute nodes into a large, logical shared memory space.

The NSF Track 2D Experimental System of Innovative Design project, named Keeneland, has the goal of deploying a large-scale heterogeneous supercomputer based on graphics processors for the NSF scientific community. To this end, Keeneland has scheduled two system deployments during its lifetime. In Phase 1, scheduled for 2010, the Keeneland Initial Delivery (KID) system is an HP SL390 cluster with 240 Intel Westmere host processors, accelerated by 360 NVIDIA Fermi M2070 graphics processors and interconnected by InfiniBand QDR. KID provides an initial platform for software development, application porting, training, and limited application access. In Phase 2, scheduled for 2012, the Keeneland Final Delivery (KFD) system will be substantially larger and will use next-generation technologies for all components. The Keeneland team is composed of the Georgia Institute of Technology, University of Tennessee-Knoxville, National Institute for Computational Sciences, and Oak Ridge National Laboratory.

The session should be of interest to users in computer and computational science areas, as well as those responsible for large-scale innovative systems. There will be a tutorial on Monday. This session will not have hands-on material.

Agenda:
1. Introduction by NSF Office of Cyberinfrastructure Program Manager for Track IID
2. FutureGrid Overview
3. Gordon Overview
4. Keeneland Overview
5. Related Projects such as Grid5000
6. Discussion

Location: Salon E