Overview Foils, September 1995
Some Remarks on the Future of HPF -- Metacomputing and the World Wide Web
ICASE HPF Workshop, August 23, 1995
Geoffrey Fox
NPAC, Syracuse University
111 College Place, Syracuse NY 13244-4100

Abstract of HPF Futures Presentation
This talk discusses the impact of the World Wide Web and large scale distributed (meta)computing on the future of HPF. It builds on:
concepts in the Web Vision Presentation, with an overall pervasive WebWindows environment
the Boston-NPAC Proposal for WebWork, which notes the critical importance of the excellent software engineering environment that can be built on top of WebWindows
a discussion of Java, which allows proper client-server implementations on the Web, and the trade-off between interpreted and compiled environments
implications of this for parallel C++ and Fortran
domain specific Problem Solving Environments and their relation to WebScript
VRML as an example of a universal 3D data structure

Guidelines for HPF Futures
We need to do more than make HPF available on PC/workstation networks and shared memory multiprocessors.
We need to make HPF (and similar HPCC technologies) the languages that typical (many) PC/WS users will adopt, because they support the distributed computing opportunities of the World Wide Web.
This implies that we need to build HPF on top of common Web or distributed computing standards.
HPF brings to the Web world the accumulated experience of synchronization and parallel algorithms from HPCC. This includes collective communication (multicast) etc.

Interpreters versus Compilers -- I?
We need to use compilers on tightly coupled systems such as MPPs (shared and distributed memory).
But for metacomputing, the hardware intrinsically has latencies that suggest the increased flexibility of interpreters is more appropriate.
This implies a hybrid compiler-interpreter environment.
Maybe frontends should be built with interpreters, such as object-oriented PERL5, so they are easier to link with the Web.
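The hybrid compiler-interpreter idea above can be sketched in a few lines. This is a hypothetical illustration, not part of the original proposal: an interpreted frontend parses a flexible command string at run time and dispatches each step to a precompiled numeric kernel (the kernel names "scale2" and "square" are made up for the example).

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the hybrid model: an interpreted "shell"
// dispatches commands to precompiled numeric kernels.
public class HybridShell {
    // Compiled side: fast, fixed operations (names are illustrative).
    static final Map<String, Function<double[], double[]>> KERNELS = Map.of(
        "scale2", a -> Arrays.stream(a).map(x -> 2 * x).toArray(),
        "square", a -> Arrays.stream(a).map(x -> x * x).toArray()
    );

    // Interpreted side: a flexible script string parsed at run time.
    static double[] run(String script, double[] data) {
        for (String cmd : script.split(";")) {
            data = KERNELS.get(cmd.trim()).apply(data);
        }
        return data;
    }

    public static void main(String[] args) {
        double[] out = run("scale2; square", new double[] {1, 2, 3});
        System.out.println(Arrays.toString(out)); // [4.0, 16.0, 36.0]
    }
}
```

The interpreter pays a small dispatch cost per command but can be rewired without recompilation, which is the flexibility argued for in metacomputing settings; the kernels themselves stay compiled and fast.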
Note that the interpreted environment will have the best software engineering support, and so the suggestion is -- taking the SP2 as an example:
debug on an SP2 set up with a compute-enhanced Web server on each node, with say MPI running on top of an HTTP message passing protocol
execute the debugged code on a conventional SP2 with a high performance compiled environment

Interpreters versus Compilers -- Domain Specific Environments
This implies that we should allow a hybrid model not just for task (interpreted) versus data (compiled) parallelism.
Rather, we should support full data parallelism in the interpreter. NPAC demonstrated a prototype HPF interpreter at SC93.
Current Web interpreters include Java, TCL and PERL(5), which are optimized for different application domains. For instance, PERL is optimized for documents/files and Java for browsers.
This leads to the WebScript concept of interoperable interpreters optimized for different domains.
WebHPL (High Performance Language) is then a script optimized for computing, which links compiled HPL modules on tightly coupled MPPs.
This naturally suggests that we can link domain specific systems (e.g. a partial differential equation toolkit) to the HPF future and WebHPL.

Java and HPF Futures
Java is a C++ subset which, interestingly, does not have pointers, as these are unsafe in the necessarily secure metacomputing environment. Thus Java has removed the part of C++ which is hardest to parallelize.
Java may not "survive", but if it doesn't, something better will! Thus it makes sense to study and experiment with it.
A natural first step is to use Java to build the interpreted "shell" which we called HPFCL, for HPF Coordination Language. This is a task parallel script linking HPF modules.
Java is partially compiled: you take basic Java high-level code and compile it down to a universal Java machine language.
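The HPFCL idea above, a task-parallel coordination layer linking HPF modules, can be sketched in Java. This is a hypothetical illustration under simplifying assumptions: the "HPF module" here is a stand-in method that sums one block of an array, whereas in the envisioned system each task would invoke a compiled data-parallel code on an MPP.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of HPFCL-style coordination: launch independent
// "HPF module" tasks in parallel and combine their partial results.
public class Hpfcl {
    // Stand-in for a compiled HPF module: sums one block of data.
    static double hpfModule(double[] block) {
        double s = 0;
        for (double x : block) s += x;
        return s;
    }

    // Task-parallel coordination: one task per block, then a gather.
    static double coordinate(List<double[]> blocks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(blocks.size());
        try {
            List<Future<Double>> parts = pool.invokeAll(
                blocks.stream()
                      .map(b -> (Callable<Double>) () -> hpfModule(b))
                      .toList());
            double total = 0;
            for (Future<Double> f : parts) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        double total = coordinate(List.of(
            new double[] {1, 2}, new double[] {3, 4}));
        System.out.println(total); // 10.0
    }
}
```

The coordination layer carries only task-level parallelism and leaves the data parallelism inside each module, matching the split between interpreted script and compiled HPF code described above.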
This is very similar to concepts in ANDF (Architecture Neutral Distribution Format), but with a different goal.
Java's model supports universal heterogeneous clients linked together in metacomputing.

VRML and HPF Futures
VRML -- Virtual Reality Modeling Language -- is an object oriented database built as a subset of the SGI Inventor system.
VRML can be considered as another script optimized for graphics, but not many interesting processing (compute) capabilities are in the current standard.
VRML can also be considered as an example of a universal data structure allowing exchange of 3D objects over the Web. These objects could be either tanks in a videogame or parts of an aircraft used in a large scale simulation.
Thus it is useful to consider data parallel VRML and to build CC++ or HPF (Fortran90) modules to support VRML.
The HPCC community should join with the Web community to ensure that standards such as VRML can be implemented efficiently, either in parallel (maybe a niche) or in a distributed network (similar issues, where HPCC can contribute and which are clearly very important).
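The data parallel VRML idea above can be illustrated with a small sketch. This is a hypothetical example, not part of any VRML standard: a set of 3D vertices is held in a flat coordinate array (one possible data-parallel layout) and a uniform scaling is applied to all coordinates in parallel.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

// Hypothetical sketch: a VRML-like point set as a flat array of
// coordinates (x0,y0,z0, x1,y1,z1, ...) with a data-parallel transform.
public class ParallelVrml {
    // Apply a uniform scaling to every coordinate in parallel.
    static double[] scale(double[] coords, double factor) {
        double[] out = new double[coords.length];
        IntStream.range(0, coords.length)
                 .parallel()   // data parallelism over all coordinates
                 .forEach(i -> out[i] = coords[i] * factor);
        return out;
    }

    public static void main(String[] args) {
        double[] tri = {0, 0, 0,  1, 0, 0,  0, 1, 0}; // one triangle
        System.out.println(Arrays.toString(scale(tri, 2.0)));
    }
}
```

Because every vertex is transformed independently, operations like this map naturally onto data parallel languages such as HPF or CC++, which is the opportunity the slide points at.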