Programming languages for massively parallel processors (MPPs) must go beyond conventional languages in expressivity. In addition to the normal requirements, useful languages for MPPs must incorporate semantic constructs for delineating program parallelism, identifying locality, and specifying data set partitioning. The broad range of HPCC applications may embody a rich diversity of program structures, and it is anticipated that no one parallel programming model will satisfy all needs. Data parallel, task parallel, and object parallel forms have all found applicability among end users and software developers and are expected to continue to do so. A parallel programming language may accommodate these forms either by incorporating them into a single schema or by providing means for a single application to use different languages or different parallelism models for separate parts of a user problem. In either case, languages and compilers will of necessity support interoperability among independently derived program modules, perhaps even running on separate computers in a distributed heterogeneous computing environment. Adequate return from the investment in MPP software development depends on the ability to combine independently developed software modules.

The objective of computing environments is to identify the needs for system software in managing the total system infrastructure as it is brought to bear on Grand Challenge (GC) applications at the TeraFLOPS level of performance. These questions are examined within the framework of a computing environment hierarchy that is expected to encompass all systems within the foreseeable future. Parallel computers are ensembles of equivalent computing elements and include vector supercomputers, SIMD processors, shared memory MIMD, and massively parallel distributed memory MIMD systems. The metacomputer layer is composed of a heterogeneous collection of disparate computer systems and data storage systems integrated by high bandwidth LANs, all at one site. The metacenter concept is the top layer and consists of multiple computer centers connected by a very high bandwidth WAN. Associated with each level are the resource management and programming support software necessary to apply the computing facilities effectively to end-user Grand Challenge scale problems.

Porting and rewriting of applications programs require a support environment that encourages code reuse, portability among different platforms, and scalability across similar systems of different sizes. Necessary to achieving this is standardization of key programming elements, including parallel I/O and message-passing primitives, as well as parallel programming support tools (a minimal message-passing sketch follows this passage).

The motivation for migrating to heterogeneous environments is the potential to apply the most suitable computing systems to the different parts of a given program, better matching the resources to computing needs. Realizing this possibility will require the development of a uniform programming support environment and a software resource management infrastructure. Both are made more difficult than their homogeneous system counterparts by the complex trade-offs between data locality and granularity across component systems. Analysis tools will have to be provided to assist in determining the mapping of activities and data to computing elements. The optimal balance point is a function of many factors, including communication bandwidth and latency, the size and power of the constituent processors, and software overhead for coordination within the heterogeneous environment.
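To make the standardization point concrete, the following is a minimal sketch of portable message passing using MPI, which emerged as a standard interface for such primitives; the specific program and values are illustrative assumptions, not drawn from the original text. Each process sends a value to its right-hand neighbor in a ring.

```c
/* Minimal sketch of a standardized message-passing primitive,
 * assuming MPI. Each rank sends its rank number to its right
 * neighbor and receives from its left neighbor. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;        /* neighbor to send to */
    int left  = (rank - 1 + size) % size; /* neighbor to receive from */
    int sendval = rank, recvval = -1;

    /* Combined send/receive avoids deadlock on the ring. */
    MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, recvval, left);
    MPI_Finalize();
    return 0;
}
```

Because the same calls can be implemented on vector supercomputers, MPPs, and workstation clusters alike, a module written against such a standard can be combined with independently developed modules across the platforms of a heterogeneous environment.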
The computing environment topics include data storage, communications, and programming support environments. Applications are inherently limited by the capabilities and access that the computing environment provides. One way to analyze the application requirements for high performance computing is to examine the associated requirements for the computing environment software infrastructure. The hardware computing environment provided by vendors consists of heterogeneous computing platforms with associated data storage devices, linked by communication channels. The computing environment needed by application developers is support for rapid porting or prototyping of software codes that can make efficient use of the vendor-supplied hardware. The software infrastructure that normally provides the interface between application developers and the computer hardware is seriously lagging behind hardware advances. Difficulties also exist in providing system software for heterogeneous platforms, since historically system software is developed by vendors who optimize the software only for their own platform. This has been further exacerbated by bottlenecks in the development of application software caused by the paradigm shifts now occurring: (1) the development of a range of parallel computers with different architectures and power, and (2) the integration of either homogeneous or heterogeneous computing platforms into metacomputers. The metacomputer is a uniform interface to disparate computing platforms, providing the application programmer the opportunity to decompose the application problem across multiple machines.

One approach to understanding the HPCC technology requirements is to use the GC applications as case studies rather than as end goals. This assures that the required systems software will be available to support particular GC applications, while allowing development of a uniform systems software computing environment. Standardization of the programming environment is complicated by the competing factors of data granularity versus data locality. Data granularity is a decomposition metric; the issue is whether fine-grained decompositions should be restricted to local environments (i.e., within one machine) or to very tightly coupled machines in the same machine room (e.g., where the communication path between an MPP and its front end is as fast as the inter-node rates within the MPP). Data locality is a data distribution metric; it addresses the issue of whether a particular application will work well on distributed memory architectures. Since application algorithms may require a particular data locality/data granularity pattern for efficient execution (as sketched after this paragraph), multiple programming environments may need to be supported. Providing users with systems that make generation of efficient parallel codes less painful is a harder problem and may not be completed within five years. Possible approaches include object-oriented programming or identification of an easily parallelized subset of Fortran. Other examples arise in graphical dataflow systems such as AVS and Khoros.
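As an illustration of the granularity/locality trade-off, the hypothetical sketch below (again assuming MPI; the problem size and decomposition are invented for the example) performs a 1-D block decomposition with a one-cell halo exchange. Each process references only local memory except for two boundary cells, and shrinking the local block (finer granularity) raises the ratio of communication to computation.

```c
/* Hypothetical 1-D block decomposition with halo exchange,
 * illustrating granularity versus locality. */
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const int N = 1 << 20;   /* assumed global problem size */
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = N / size;    /* points owned by this rank (assumes size divides N) */
    /* local points plus one halo cell on each side */
    double *u = calloc(local + 2, sizeof(double));

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Exchange boundary values with neighbors; all other
     * references stay within this rank's own memory. */
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[local + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[local], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* One relaxation sweep over the local block: O(local) work
     * per step against O(1) communication per step. */
    for (int i = 1; i <= local; i++)
        u[i] = 0.5 * (u[i - 1] + u[i + 1]);

    free(u);
    MPI_Finalize();
    return 0;
}
```

On a tightly coupled MPP the exchange cost is negligible; across a LAN-connected metacomputer the same decomposition may pay off only at much coarser granularity, which is precisely why multiple programming environments may need to be supported.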