I've assembled a lot of material from your book chapter and from the Gateway site into this, but you need to look it over closely because it's more than I really understand, of course. And this is too long, so we'll have to do some cutting - most probably in all the general argument material I had from the start. I'll be working on that, but I wanted you to get on the CS material without waiting. Note carefully these sections while I work on the others:

    PROPOSED EFFORT
    APPROACH
    RELATED EFFORT
    Gateway (Fox)

****************************************************************************

INTRODUCTION
------------

Over the years, there has been the irony of continual calls from industry for reduction in the person-time required for geometry/mesh (grid) generation, but little initiative among the government funding agencies to support a major effort to achieve that end. That mesh generation is a major pacing item (THE pacing item, were it not for turbulence) in the industrial use of computational fluid dynamics (CFD), as well as of other field simulations in engineering analysis and design, now has, embarrassingly, the status of a cliche. This is true in spite of the fact that geometry/mesh generation is an essential infrastructure element of major current initiatives in DoD and DoE that are based fundamentally on the use of computational simulation to replace or reduce the need for experimental testing, and in spite of the fact that computational science has emerged as essential to scientific investigation and industrial design. But mesh generation has never had a home or an advocate in the structures of NSF, NASA, DoD, DoE, or DARPA. Small projects in the area have been funded, but more to advance the mathematical issues involved than to develop systems for general and effective use.
The concerted development needed to address these calls for progress might properly be considered the province of commercial software companies, but there the need for such mesh systems has been lost behind the emphasis on CAD system development. So, in large measure, the Federal agencies have concentrated on funding solution development, and the software companies have concentrated on solid modeling, while geometry/mesh generation - the essential link - has not gotten sufficient attention from either. The computational technology of geometry/mesh generation is now well advanced, though much remains to be done in the area of adaptive coupling with solutions, but there are major computer science issues in developing a configurable system of general application as enabling technology for computational science. This proposal thus seeks the opportunity to bring major computer science expertise to bear, in collaboration with major expertise in geometry/mesh computational technology, to address this pressing software development need.

ENABLING TECHNOLOGY
-------------------

Geometric representation and mesh generation are enabling technology that cuts across all areas of computational simulation, and thus across specific mission-agency applications, and that is essential to computational science in both scientific investigation and engineering design. This enabling technology has repeatedly been cited by industry and Federal labs as a pacing item holding back the effective application of computational simulation in prediction, analysis, and design. This pacing effect manifests itself both in lack of sufficient resolution for accuracy and in prohibitive person-time required for solution pre- and post-processing.
For example, still in 1998, there appeared the following comment from SIAM President John Guckenheimer of Cornell, in an article "Numerical Computation in the Information Age" in the March 1998 issue of Computing Research News and also in the June 1998 issue of SIAM News:

    Ironically, as numerical analysis is applied to larger and more complex problems, non-numerical issues play a larger role. Mesh generation is an excellent example of this phenomenon. Solving current problems in structural mechanics or fluid dynamics with finite difference or finite element methods depends upon constructing high-quality meshes of surfaces and volumes. Geometric design and constructing these meshes are typically much more time-consuming than the simulations that are performed with them. Thus, there is a continuing need for "precompetitive" research in this area.

And Tim Gatze of Boeing, writing on industrial applications in the "Handbook of Grid Generation", published in 1999 by CRC Press, had this to say:

    Increased problem size, computer resources, and user experience base lead to the bottom line for CFD applications. There are more new customers for CFD applications every day, and for most of those applications, grid generation turn-around time is a limiting factor. Whoever can solve that problem will provide a great service to the CFD engineer.

This is echoed now in the DoE Scientific Simulation Initiative, in reference to necessary enabling technology:

    Mesh generation is often the most time-consuming and labor-intensive part of the simulation process.

In 1997, a series of DoD White Papers assessed critical needs of the users of the Major Shared Resource Centers (MSRCs) of the DoD High Performance Computing Modernization Program (HPCMP). Comments related to geometry/mesh generation in those White Papers follow. First, as to pervading need across the CTAs:

    It would be an excellent opportunity to develop a common grid generation procedure for all pertaining CTAs.
    The products for structured and unstructured grid generation have the potential to become the standards of the grid-based community.

Then, as to progress:

    In spite of thirty years of continuous effort on structured grid generation, the pre-process of a grid-based computational technology is still far from efficient.

DESIDERATA
----------

The major driving factors in comprehensive mesh codes must be first automation and then graphical interaction. Since design is the paramount application, the efficacy of a mesh code is measured primarily by the person-time it takes to generate a series of geometrically related meshes for complex configurations. And the coupling with CAD systems on the front end, and with solution systems on the back end, must be smooth and effective.

The ideal is not to make it easy for a person to generate a mesh, but rather to remove the person from the process - not to make it interactive, but to make it automatic. Present mesh codes enable and rely on extensive graphical user interaction rather than automation, and therefore require considerable user experience and effort. The goal of an automated mesh generation system that will produce a suitable mesh with little user interaction and effort has not yet been achieved in any current code, commercial or freeware. And mesh generation tools must be designed to be applied by design engineers rather than by mesh generation specialists.

There is also the problem that the more powerful of these mesh codes require considerable training and experience for effective use. This factor sometimes causes users, in the press of time constraints to get solutions done, to continue to use tools that are less powerful but familiar, rather than moving to newer and more effective tools.

Mesh generation systems must be capable of handling the very large scale variations that occur in high Reynolds number flow, and this precludes any approach that does not encompass large aspect ratio cells with good numerical properties.
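As a concrete illustration of the last point, the sketch below (illustrative only; the function names are not from any existing mesh code) computes the simplest of the cell quality measures a mesh system must report - the aspect ratio of a quadrilateral cell - which flags the highly stretched cells used to resolve boundary layers:

```python
# Minimal sketch: an aspect-ratio quality measure for a quadrilateral cell.
import math

def edge_length(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def quad_aspect_ratio(corners):
    """Ratio of longest to shortest edge of a quadrilateral.

    corners: four (x, y) tuples in order around the cell.
    Returns 1.0 for a square; large values flag highly stretched cells.
    """
    edges = [edge_length(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    return max(edges) / min(edges)

# A boundary-layer cell stretched 100:1:
cell = [(0.0, 0.0), (100.0, 0.0), (100.0, 1.0), (0.0, 1.0)]
print(quad_aspect_ratio(cell))  # 100.0
```

A mesh system accepting such cells must pair measures like this with numerical schemes that remain accurate on them, which is the "good numerical properties" requirement above.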
Mesh generation systems must interface with CAD or other geometry input systems on the front end, and with solution and visualization systems on the back end. And ultimately there must be dynamic coupling between the mesh and solution systems, so that the mesh can adapt to resolve developing solution gradients.

There is a clear need for interaction with commercial CAD vendors. CAD codes were developed before the advance of mesh generation technology and its widespread application. In order to become truly effective in multidisciplinary design optimization, CAD tools must be redesigned to target computational analysis as well as tooling and material formation.

And there is the fact that comprehensive mesh codes are very large software systems, but the real market is not yet large enough to encourage development to the extent that has been attained by commercial CAD systems. The development of an entirely new mesh code is a multi-year, multi-million dollar effort.

All of this argues for the creation of a toolbox or library for geometry/mesh generation: a set of reliable and readily usable interfacing components which can be assembled to effectively and efficiently address the demands of different applications and different users of computational simulation for engineering analysis and design in DoD, DoE, and industry. This geometry/mesh toolkit/library should have the following characteristics:

* Object-oriented for modularity.
* Java-based for portability. [WHY?]
* Scalable parallel operation.
* Incorporation of existing useful components.
* Extendable to incorporate emerging technology.
* Automated operation, with optional user intervention. [KEY]
* User-configurable for compatibility with applications.
* Built-in web-based training facility and documentation.

And it should incorporate the following features:

* Interface with CAD systems, solution systems, and visualization systems.
* Internal CAD capability for geometry generation, repair, and modification.
* Block-structured meshes, including overset and hybrid.
* Unstructured meshes, both tetrahedral and hexahedral.
* Surface and volume mesh systems.
* Quality assessment, display, and control.
* Dynamic adaptive coupling with solution systems.
* Macros, editing, and script-based operation capability.

The development of this geometry/mesh generation toolkit/library system should proceed as follows:

(1) Establishment of a collaborative distributed object framework.
(2) Definition of all needed capability - with users.
(3) Encapsulation of all capability into components (objects/operations).
(4) Identification of existing components.
(5) Identification of components to be developed.
(6) Design of library infrastructure and data structure.
(7) Design of documentation and training structure.
(8) Implementation.

PROPOSED EFFORT
---------------

A networked collaboration among computer scientists and researchers in geometry/mesh generation is proposed, to develop an open-source geometry/mesh generation system for application and continual enhancement in the PACI community and then beyond. This open-source system will be developed with components technology, incorporating useful components from existing systems and developing new components as needed. The system will be structured in accordance with sound software engineering principles, and extensions resulting from the open-source nature of the effort will be managed by this networked collaboration. The system will operate on PACI servers for use by clients throughout the PACI community, in the mode of an application service provider. The networked collaboration will continually provide training, guidance, and assistance to the PACI community in the use of the system.
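The encapsulation step - wrapping each mesh capability behind a uniform component interface so that existing and new codes plug into the same library - can be sketched as follows. All names here are hypothetical, invented for illustration; the actual toolkit would be defined with the user community:

```python
# Illustrative sketch of encapsulating capability into components with a
# uniform interface, so that existing components and newly developed ones
# register into one toolkit library the same way.
from abc import ABC, abstractmethod

class MeshComponent(ABC):
    """Uniform wrapper every toolkit capability implements."""

    name: str = "unnamed"

    @abstractmethod
    def run(self, inputs: dict) -> dict:
        """Consume named inputs, produce named outputs."""

REGISTRY: dict = {}

def register(cls):
    """Class decorator adding a component to the toolkit registry."""
    REGISTRY[cls.name] = cls
    return cls

@register
class SmoothingComponent(MeshComponent):
    name = "laplacian-smooth"

    def run(self, inputs: dict) -> dict:
        # Placeholder: a real component would smooth node positions here.
        return {"mesh": inputs["mesh"], "smoothed": True}

out = REGISTRY["laplacian-smooth"]().run({"mesh": "wing-surface"})
print(out["smoothed"])  # True
```

With such a registry, user configuration of the toolkit amounts to selecting and chaining registered components, rather than modifying a monolithic code.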
The development of a truly effective geometry/mesh generation system of this scope is beyond the skill set of the engineers and mathematicians who have advanced the general computational technology. This development can only be accomplished through a concerted collaborative effort with computer scientists to incorporate the following essential elements:

* Object-oriented extendable modularity.
* Scalability and portability on parallel architectures.
* Adaptable and configurable user interface.
* Web-based development, enhancement, maintenance, guidance, and operation.
* Software engineering of the modular system.
* Networked computing model and portal, with geometry/mesh generation as a network service.
* Collaborative computing to link geometry/mesh experts with remote users.

A networked collaboration bringing together leading expertise in geometry/mesh generation at Mississippi State and Texas with relevant expertise in computer science at Florida State and Indiana, to develop an open-source geometry/mesh generation system for application and continual enhancement in the PACI community, is an effective and logical way to address this national problem.

APPROACH
--------

Design Criteria

The purpose of this effort is to develop an effective and efficient geometry/mesh generation system for general accessibility and use across the research and application communities. The fundamental design criteria for this system are the following:

This system is to be user-configurable from interacting components to meet the differing demands of various applications, rather than being a single monolithic system. The system will therefore be based on components technology incorporating both libraries and distributed objects.

This system will provide for continual enhancement and extension by the user community, and therefore will be open source.
This system will operate as a computing portal, and therefore will be based on a web-based three-tier client/broker/service architecture utilizing a browser interface with the user.

This system will utilize existing geometry/mesh computational technology, incorporated in a framework developed with frontier computer science.

Object Web Architecture

This project will address the development of a computing portal for geometry/mesh generation, allowing user customizability from a suite of objects and supporting services. The geometry/mesh generation computational elements of this system will be open source, gathered from - and continually enhanced and extended by - the geometry/mesh user community. The framework of this system will be constructed in terms of components built according to emerging distributed object and Web standards technologies.

This system will utilize an "Object Web" approach to building distributed systems: a three-tier architecture that generalizes the traditional client-server model to a client-broker-service model. In this model, the middle tier acts as an intermediary or broker that allows diverse clients to share and choose among many different resources. The middle tier interfaces with the user through a "request for service" interface, and with service objects through a "resources" interface, with expression at these interfaces accomplished through XML technology. This architecture builds on distributed object technology; the Object Web signifies the merger of distributed object and web technologies. The Pragmatic Object Web implies that there is no outright winner in the distributed object field, and that one should allow for mixing and matching of approaches as needed.
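The client-broker-service pattern with XML at both interfaces can be sketched as below. This is a minimal illustration, not the proposed system's API: the service name, parameter names, and XML element names are all invented for the example.

```python
# Minimal sketch of the three-tier client/broker/service model: the broker
# parses an XML "request for service", dispatches to a registered service
# object, and wraps the result back into XML for the client.
import xml.etree.ElementTree as ET

# Registered service objects (here, a stand-in for a mesh smoothing service).
SERVICES = {
    "elliptic-smooth": lambda params: {"status": "ok",
                                       "iterations": params.get("iterations", "10")},
}

def broker(request_xml: str) -> str:
    """Middle tier: route an XML request to a service, return XML result."""
    req = ET.fromstring(request_xml)
    service = SERVICES[req.get("service")]
    params = {p.get("name"): p.get("value") for p in req.findall("param")}
    result = service(params)
    resp = ET.Element("response", {"service": req.get("service")})
    for key, value in result.items():
        ET.SubElement(resp, "result", {"name": key, "value": str(value)})
    return ET.tostring(resp, encoding="unicode")

print(broker('<request service="elliptic-smooth">'
             '<param name="iterations" value="50"/></request>'))
```

Because only the XML at the two interfaces is fixed, the service behind the broker could equally be a CORBA, COM, or Java object, which is the point of the Pragmatic Object Web strategy described above.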
In this Object Web architecture for geometry/mesh generation, everything is a distributed object - whether an elliptic mesh generation element, an unstructured front advancement element, a NURBS surface, or a mesh quality measure element - with XML definitions for the definition and operation objects themselves and for the descriptions and results they produce. A single surface geometry or a volume mesh, for example, is composed of many elements, all of which will be objects.

Using the Pragmatic Object Web strategy, all relevant properties of computing elements are defined in XML. These properties are used to generate, either statically or dynamically, the needed object wrappers. This approach requires users to specify only what they know - the properties of their program element - while the filter copes with the obscure syntax of each object model. In this way, all object models - CORBA, COM, Java - can be supported by changing the filter.

Relation to Libraries

This approach to components technology differs somewhat from the usual library approach in that possibly distributed modules are linked together in a dynamic fashion. This is more flexible than the library approach, and supports distributed components. The component approach can admittedly be less efficient than the library approach, as component linkage is obtained through explicit exchange of messages rather than through the efficient parameter passing of library methods. Since this inefficiency is particularly serious for small components, because of latency overhead, a hybrid approach will be followed: the traditional library mechanism will be used within a set of agreed interfaces and design frameworks to promote easier interchange of modules, while large objects (roughly, geometric macros) will use a component approach within a distributed object framework defined in XML.

Portal Strategy

Specifically, the strategy of this proposed approach is as follows:

1. Define the coarse grain object structure of the system - this is the distributed object structure.
2. As needed, define the fine grain object structure - this is where classic software engineering enters.
3. Define ALL object interfaces in XML - whether the entities are in Java, C++, or Fortran, and whether they are program or data (if data, use XML whether the data itself is XML, HTML, or a private syntax).
4. Decide what exists and how it is to be broken up / put together to agree with #1-3.
5. Decide what is truly backend, what must be client-side or embedded in the server (and so a candidate for Java), and what could be either but may today be backend and tomorrow Java.
6. Define the classic backend services needed (security, server-side visualization, file access, etc.).
7. Wrap backend applications.
8. Decide on user tools (e.g., a rapid mesh tweaker).
9. Decide on specialized collaborative tools to enable remote consultation, e.g., a shared mesh.
10. Implement, test, and evaluate iteratively.

RELATED EFFORT
--------------

This effort will build on the existing Gateway architecture development ( http://www.osc.edu/~kenf/theGateway/ ) led by Fox at Syracuse (now relocating to Florida State) as a part of the Programming Environment & Training (PET) program of the DoD High Performance Computing Modernization Program (HPCMP), and on the extensive geometry/mesh generation computational technology effort of Thompson and co-workers at Mississippi State. And this effort will build on the collaborative working structure established between Fox and Thompson in the DoD HPCMP effort over the past four years.

Gateway (Fox)
-------------

The Gateway system creates a Web-based environment for DoD scientists and engineers that enables secure and seamless access to high-performance resources. It comprises a multi-tier (currently three-tier) architecture.
The first tier is a web browser-based graphical user interface, which assists the researcher in the selection of suitable applications, the generation of input data sets, the specification of resources, and the post-processing of computational results. The distributed, object-oriented middle tier maps the user task specification onto back-end resources, which form the third tier. In this way the underlying complexities of a heterogeneous computational environment are hidden, replaced with a graphical interface through which a user can understand, define, and analyze scientific problems. The Gateway is thus a seamless interface for scientists and engineers to HPC resources, including computational engines, software, and visualization.

The fundamental components of the Gateway are:

1. The Workstation Client - which makes requests via the Middleware. The Workstation Client is a Web browser thin client. The most important element here is the user interface, which is based on Web browser technology and is therefore platform independent and easily extensible. This of course does not preclude users from having specific software outside the Web browser to prepare data, analyze results, and enhance the default functionality of the Web browser with third-party presentation tools.

2. The Middleware - one or more servers that receive such requests and determine how they should be met via an Object Request Broker (ORB) whose components are:
   1. access to back-end schedulers, such as the Portable Batch System (PBS)
   2. an interface with the Kerberos/SecurID environment
   3. display of job status
   4. techniques for display of available system resources, plus an interface to the file management system
   5. an object-oriented distributed computing environment component

3. The HPC Environment - multiple back-end platforms (MPPs, NT clusters, networks, servers, etc.) that perform application-specific tasks.
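One middleware duty listed above - brokering user requests to a back-end scheduler such as PBS - can be illustrated with a small sketch. The job-specification field names here are hypothetical, not the actual Gateway API; only the `#PBS` directive syntax is standard PBS.

```python
# Illustrative sketch: the middle tier turning a user's job specification
# into a batch script for a back-end PBS scheduler.
def pbs_script(job: dict) -> str:
    """Render a minimal PBS batch script from a job specification dict."""
    return "\n".join([
        "#!/bin/sh",
        f"#PBS -N {job['name']}",            # job name
        f"#PBS -l nodes={job['nodes']}",     # node count
        f"#PBS -l walltime={job['walltime']}",  # time limit
        job["command"],                      # the application to run
    ])

print(pbs_script({"name": "mesh-gen", "nodes": 8,
                  "walltime": "01:00:00", "command": "./meshgen wing.igs"}))
```

In the real system the middleware would submit such a script (e.g. via `qsub`) on the user's behalf, after authenticating through the Kerberos/SecurID environment, so the browser client never touches the back end directly.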
The Gateway Infrastructure

Front End

The front end is designed as a thin client, implemented as a combination of dynamic HTML documents and Java applets. This implies that most of the business code is moved to the server side, with the middle tier implementing proxies to the back-end services. The front end is extensible, as it is constructed from components which are typically defined in XML. The components are organized as toolboxes, and they are accessible through a hierarchical navigation bar. Some of the components are application independent - for example, file access and job monitoring - and come as part of the Gateway infrastructure. Others come as plug-in, application-oriented toolboxes, such as one for computational chemistry, astrophysics, or visualization. These toolboxes can be interpreted as pure client-side actions, but typically they involve the other tiers. Note that this distinction is hidden from the user and is expressed through the handler invoked by the XML parser. The interactions between the front-end part of a toolbox and the middle tier are defined by the Gateway APIs.

Middle-Tier

The Gateway middle tier contains all the "business logic", or control code, necessary to handle requests from a client or direct a request to an appropriate back-end resource. The middle tier can also act as a broker, choosing from several possible back-end systems to service the user request. The business logic is set in an API specification. The middle tier (known also as the WebFlow system) is given by a set of CORBA objects implemented in Java. There are two kinds of WebFlow objects: containers (also referred to as contexts) and modules. The WebFlow API defines how to create a hierarchy of contexts, place modules there, and define interactions between objects, including communication with the front end. The Gateway's gatekeeper server (a WebFlow context) is the root of the hierarchy. It contains user contexts.
Each user context serves as a container for that user's application contexts, and each application is made of modules that implement the application logic, or of module contexts in the case of a module built hierarchically from subcomponents.

Back End

The Gateway does not specify requirements for the back-end components. On the contrary, the design objective is to provide the easiest possible way of incorporating legacy codes into the Gateway infrastructure. In particular, we avoid requiring any modifications to the original code.

Grid Technology (Thompson)
--------------------------

The ERC

Initially funded by NSF in 1990 as an NSF Engineering Research Center (ERC), the ERC for Computational Field Simulation at Mississippi State is a multi-disciplinary academic research center - now funded at approximately $15M annually by NSF, DoD, NASA, DoE, and industry - conducting a coordinated research program according to a strategic plan to advance US capability in the use of computational simulation in engineering analysis and design, as well as in scientific research in general. This Center focuses on all elements involved in the computational simulation of physical field phenomena: physical processes occurring over space and time, i.e. governed by partial differential equations - computationally intense simulations requiring access to, and efficient utilization of, HPC facilities at the highest level. The Center necessarily incorporates engineers, physicists, computer scientists, and mathematicians in cross-disciplinary research in geometric representation, numerical solutions, and scientific visualization - together with the underlying parallel computing environments and mathematical foundations. Although the Center's historical concentration has been in computational fluid dynamics, its strategic research efforts in building computational problem solving environments encompass all areas of field physics.
This Center has built and expanded on an established, nationally recognized research effort in mesh generation at Mississippi State (recognized by the 1992 AIAA Aerodynamics Award to Thompson), and has now made major advances in unstructured mesh generation, as well as in its traditional area of block-structured mesh generation. The Center produced the comprehensive "Handbook of Grid Generation", published by CRC Press in 1999. The National Grid Project at the Center served to develop major components of a mesh generation system, and this prototype effort has been carried forward by the Center to develop a system of general applicability which is now in production operation.

The PET Effort

The NSF ERC at Mississippi State took the leadership role in setting up a university team to join with Nichols Research of Huntsville and E-Systems of Dallas in responding to the DoD competitive solicitation for support of the four DoD HPC Major Shared Resource Centers (MSRCs) in the DoD High Performance Computing Modernization Program (HPCMP). This university team has the responsibility for the Programming Environment and Training (PET) element of this support, amounting to some $4M a year for each of the four MSRCs. The university PET team, now in operation for four years, is composed of the following institutions:

* Center for Computational Field Simulation (NSF Engineering Research Center at Mississippi State)
* National Center for Supercomputing Applications - NCSA (NSF Supercomputer Center at Illinois)
* Center for Research on Parallel Computation - CRPC (NSF Science & Technology Center headquartered at Rice, including Syracuse and Tennessee)
* Ohio Supercomputer Center (at Ohio State)
* Texas Institute for Computational & Applied Mathematics - TICAM (at Texas)
* HBCUs: Jackson State, Clark-Atlanta, Central State

and has the responsibility for Programming Environment and Training (PET) at three of the four MSRCs:

* ERDC - Army Engineer Research and Development Center at Vicksburg, MS
* ASC - Air Force Aeronautical Systems Center at Dayton, OH
* ARL - Army Research Laboratory at Aberdeen, MD

Mississippi State (the ERC) leads this university team at the ERDC MSRC, with Ohio State (OSC) leading at ASC and Illinois (NCSA) leading at ARL. (Winning this DoD competition represents something of an NSF success story, since the nucleus of the team was an NSF ERC, an NSF S&TC, and an NSF SCC.)

ISGG

The ERC at Mississippi State is also the headquarters of the newly established International Society for Grid Generation (ISGG) - with Thompson as the initial president - which organizes the series of biennial international conferences on mesh generation. Mississippi State is thus very well positioned to assemble and coordinate the computational technology for this open-source geometry/mesh generation system.