Tentative Title: Universal Access Technologies and Tools for Education and Careers in Computer Science

{note this has yet to incorporate DO-IT parts TBA from Sheryl}

Draft Disclaimer
This is a bunch of text imported and adapted from various places
(as of the time of Geoffrey’s trip departure on 2/8/99).
Do not expect coherence, continuity, or completeness.
It is just a working draft. Caveat emptor.
- Ed

PROJECT DESCRIPTION

Introduction

{Need Gregg to set the UA stage here: issues, problems, needs, prior work, references, …}

The principal goal of the proposed work is to enhance career opportunities for people with disabilities, particularly in the field of computer science, built on a solid grounding in science and technology. For successful careers in the 21st Century, access to and experience with information and communication technologies are of singular importance; it is therefore essential to provide universal access to these technologies. In our nation in particular, higher education is recognized as the ticket to enhanced professional careers, and modern technologies can be of great assistance to persons with disabilities who wish to advance their education for personal development and career enhancement. Accordingly, this project will be directed at improving, particularly for persons with disabilities, a) active learning opportunities and experiences and b) career opportunities in the burgeoning field of computer science.

We bring to the table innovative human-computer interface technologies developed recently at Syracuse University, based on earlier work by David Warner, M.D., and colleagues in California. The core software technology is our fourth-generation NeatTools visual-programming and runtime environment, which supports both rapid prototyping and full application development for human-computer interaction. Our hardware includes palm-sized serial interface boxes (TNG-3) and a growing collection of sensors and mounting devices. Together, these modular technologies allow interfacing customized to the needs of the individual. These core technologies for universal access, along with NPAC’s Web-based TangoInteractive collaborative technology, are described in the Background section at the end of this Project Description.

{modify the following paragraph}

The Trace Center—which is recognized for its leadership in universal design and universal access, particularly in computer and information technologies—will play a crucial role in guiding this project for optimal success. The Trace Center will specify design standards and will test preliminary hardware and software components and systems developed under this project. {Gregg, please add more here to specify your team's role and effort.} For its part, Trace will conduct research on the accessibility of visual direct-manipulation interfaces for individuals with severe visual or physical disabilities.

The long-term goal of achieving truly universal access to direct-manipulation interfaces is clearly beyond the scope of the present project; it remains a challenging area of research. In the proposed work, we will identify key issues and, accordingly, conduct exploratory research using our representative core technologies (Tango, NeatTools, and interface devices) as practical models.

Objectives

{This needs to be reworked after we have more of the research actually planned out, preferably with a timeline; for now, I just tossed some stuff in here}

The main focus of this research project will be to investigate the accessibility of current and emerging tools and technologies, both for education and for professional careers. These include a) visual direct-manipulation tools such as LabVIEW (www.natinst.com) and NeatTools and b) synchronous and asynchronous educational and collaborative technologies such as Habanero and TangoInteractive. The proposed effort will focus on our own technologies, for which we possess (and can therefore modify) the source code, namely NeatTools and TangoInteractive.

The ability to perform in the laboratory is of enormous significance for both education and careers in science and engineering fields. Despite major advances in technology, many individuals with severe disabilities remain unable to participate actively in the laboratory, which in turn can limit their career opportunities. We will endeavor to break down such barriers, thereby enabling enhanced learning and employment potential for all. A major effort will be devoted to enabling students and professionals with physical disabilities to perform in the laboratory through robotic and other computer-based interface approaches.

Although a substantial proportion of the disabled population has cognitive and developmental disabilities, these will not be a major focus of the proposed work. We do have a related project, called SmartDesk, that provides an instrumented learning environment with event tracking that can significantly benefit such individuals. In the present work, however, we aim to provide access to professional careers that depend on graduate-level education.

In a related project, currently supported by a one-year grant from the NEC Foundation, we are establishing test sites in four metropolitan centers (Syracuse, Washington D.C., Minneapolis, and Seattle), involving universities and colleges working in conjunction with schools. These sites provide a testbed for evaluating and assessing the technologies and applications of the work proposed here.

Research Plan

We propose to research and explore, in a testbed, issues concerned with preparing all citizens for employment, physical and sensory disabilities notwithstanding. This requires universal access to education and training from K-12 through graduate and continuing education. For this proposal, we have chosen points on this path at which we already have a curriculum that is both modular and Web-based. It is not currently universally accessible, but it has been prepared in such a way that it will readily accommodate universal-access technology and methodology, without the need to develop a new curriculum. The four chosen courses are:

  1. Java Academy (introduction to programming for middle and high school students);
  2. Science for the 21st Century (a general science course suitable for a broad range of high-school and undergraduate students; the PI is co-creator of this course and remains a co-instructor, and two current NSF-funded educational grants have derived from it); {Ed will write a short blurb on this modular course and associated Java applets; one of the NSF grants [MRA] will be covered under Ed’s (& Geoffrey’s) Results from Prior NSF support}
  3. Introduction to Web Technology (taught to undergraduates at Syracuse and Jackson State using TangoInteractive); and
  4. Large Scale Distributed Systems: Advanced Object and Web Concepts (taught to Syracuse graduate students, and being offered over the Internet using TangoInteractive in the Spring 1999 semester; a continuing education version of this course is also available).

We adopt the following model for learning. Students access material that, for the curriculum studied here, is either directly or indirectly linked to a computer. For instance, the student could be accessing a computer-controlled instrument, a CD-ROM, or a Web site. Of course, books constitute the "paper medium" version of this model.
Now, we have two modalities of studying such material:
a) Self-Paced or Asynchronous. This is always important, and is how homework and review of background material have always been implemented.
b) Interactive or Synchronous. This includes project-based teams working together in brainstorming mode (project teams can also interact asynchronously). It also includes conventional teacher or mentor interactions with students.
We assert that both models are described technically as interacting with shared, distributed educational objects. In model b, the objects and changes therein are shared in real time. In model a, one has independent access to the distributed objects. History suggests that both models are important, with their relative roles depending on the maturity of the student and the nature of the material.

In this project, we will use TangoInteractive to support interactive mode b. Tango supports shared real-time changes in objects of disparate types—formally, this is achieved by TangoInteractive’s ability to link to client- or server-side applications in any language. Further, TangoInteractive uses a flexible event-sharing model of collaboration, which allows the same event to be presented with different views to participating collaborators. Behind the scenes, TangoInteractive shares the basic change, which is then rendered differently on each client according to user preferences—here exhibited by their choice of different NeatTools interfaces.
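
To make the event-sharing model concrete, the following plain-Java sketch shows one shared event being rendered differently by each participating client. The class and method names are hypothetical illustrations of the idea, not TangoInteractive’s actual API.

    import java.util.ArrayList;
    import java.util.List;

    /** A shared event: only the semantic change travels, not pixels. */
    record SharedEvent(String objectId, String property, String value) {}

    /** Each client supplies its own renderer: visual, auditory, tactile, etc. */
    interface EventRenderer {
        void render(SharedEvent e);
    }

    /** Stand-in for a shared session: the same event goes to every client,
     *  which then presents it according to local user preferences. */
    class SharedSession {
        private final List<EventRenderer> clients = new ArrayList<>();

        void join(EventRenderer client) { clients.add(client); }

        void publish(SharedEvent e) {
            for (EventRenderer c : clients) c.render(e);   // same event, different views
        }
    }

    class SessionDemo {
        public static void main(String[] args) {
            SharedSession session = new SharedSession();
            session.join(e -> System.out.println("[visual] highlight " + e.objectId()));
            session.join(e -> System.out.println("[speech] " + e.property() + " of "
                    + e.objectId() + " is now " + e.value()));
            session.publish(new SharedEvent("slider-3", "position", "0.75"));
        }
    }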

{the following section by Gregg is excerpted from KDI/KN/98; this could be the beginning of Research Plan, or perhaps get relocated a bit further down; note: I put in Sheryl & DO-IT, below, in place of Cori from last year’s}

Universal Access Perspectives and Plans

Access to visual direct-manipulation-based tools.

Increasingly, direct-manipulation and collaborative direct-manipulation tools are being used to advance the constructivist learning model in educational programs, particularly in the sciences. A system such as NeatTools allows students to manipulate, interconnect, and create. It allows them to experiment, hypothesize, test, play, and invent in a fashion that is difficult or expensive to achieve with real apparatus, wires, meters, signal generators, etc. (although NeatTools can easily be interfaced to such apparatus, when available). Students can combine logic, analog circuitry, and transcendental functions at will, starting at a very basic level and advancing at their own rates. Together with the collaborative mechanisms of Tango, NeatTools also allows them to interact with others’ work on group projects and to receive remote tutoring.

But only if they can see! And only if they can manipulate the elements. If they cannot, then, as things now stand, they will be unable to participate in educational environments using such tools. A different, special set of tools might be created for them, but a) it would always have only a subset of the functionality, b) it would always come out later, and c) they would be unable to participate side-by-side with their colleagues, since their version would be a different, "nonvisual" version that may make little or no sense to their peers using the standard visual version.

This leads to the following hypotheses that we will test as indicated below:

Hypothesis 1. Such visual direct-manipulation tools can have their interfaces enhanced so that they can be operated by individuals who are blind and by individuals with no capacity for direct manipulation.

Hypothesis 2. The enhanced version of the tool will allow collaboration between individuals who can see and individuals who are blind.

Hypothesis 3. The tools with the enhanced interface will be more usable by individuals who can see and manipulate than the version of the tools without the enhanced interface.

Approach: Using techniques developed at the Trace Center (Wisconsin) as part of its seamless human-interface protocol work (Vanderheiden, 1994) and its touchscreen-access work (Vanderheiden, 1997) as a basis, the investigators will develop strategies to allow individuals who are blind to successfully navigate the GUI, including the various toolboxes, to select, place, and interconnect elements as part of the NeatTools process. Using the nested navigation strategies developed through the seamless protocol, individuals will be able to move through the different contexts, elements, and sub-elements (e.g., connection points), exploring, positioning, and connecting, or exploring interconnections. By allowing complete keyboard input, access can be provided not only for individuals who are blind but also for individuals with any type of physical disability, including individuals who are completely paralyzed and who might use a sip-and-puff or even eye-blink interface to control their computer. By embedding appropriate text information in each object and allowing complete navigation to all aspects of each element, it should be possible to allow an individual who is blind not only to position and interconnect elements but also to explore complex constructions.

For example, an individual who is blind could navigate through the elements on a NeatTools desktop; stopping on each item would provide information about the item as well as the various inputs and outputs available. Walking around the inputs and outputs would allow the user to begin or terminate the wire being "carried." If a wire were already present, the system could indicate its terminations and allow the user to jump to any of those terminations as they were announced. Thus, an individual could trace the circuit in much the same way an individual with normal vision might trace the visible lines on a display screen.
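
A minimal sketch of this navigation strategy appears below: a keyboard-driven cursor moves among modules and their connection points, announcing each one and the terminations of any attached wires. All names here are hypothetical; NeatTools itself is written in C++, and this Java fragment only illustrates the strategy, not its implementation.

    import java.util.ArrayList;
    import java.util.List;

    class Terminal {
        final String label;                          // e.g., "x input"
        final List<Terminal> wiredTo = new ArrayList<>();
        Terminal(String label) { this.label = label; }
    }

    class Module {
        final String name;                           // e.g., "Threshold"
        final List<Terminal> terminals = new ArrayList<>();
        Module(String name) { this.name = name; }
    }

    class NonVisualNavigator {
        private final List<Module> desktop;
        private int moduleIx = 0, terminalIx = -1;   // -1 = module level

        NonVisualNavigator(List<Module> desktop) { this.desktop = desktop; }

        /** Speak or braille-display a message (stubbed as console output). */
        private void announce(String msg) { System.out.println("[speech] " + msg); }

        void nextModule() {
            moduleIx = (moduleIx + 1) % desktop.size();
            terminalIx = -1;
            Module m = desktop.get(moduleIx);
            announce(m.name + ", " + m.terminals.size() + " connection point(s)");
        }

        void enterTerminals() {
            terminalIx = 0;
            describeTerminal();
        }

        /** Announce the current terminal and where its wires lead, so a blind
         *  user can trace the circuit as a sighted user follows drawn lines. */
        void describeTerminal() {
            Terminal t = desktop.get(moduleIx).terminals.get(terminalIx);
            announce(t.label + (t.wiredTo.isEmpty() ? ", unconnected"
                    : ", wired to " + t.wiredTo.size() + " termination(s)"));
            for (Terminal far : t.wiredTo) announce("  reaches: " + far.label);
        }
    }

    class NavDemo {
        public static void main(String[] args) {
            Module joystick = new Module("Joystick");
            Module mouse = new Module("MouseMove");
            Terminal out = new Terminal("x-axis output");
            Terminal in = new Terminal("x input");
            out.wiredTo.add(in);                     // record the wire on both ends
            in.wiredTo.add(out);
            joystick.terminals.add(out);
            mouse.terminals.add(in);
            NonVisualNavigator nav = new NonVisualNavigator(List.of(joystick, mouse));
            nav.nextModule();                        // announces the MouseMove module
            nav.enterTerminals();                    // announces its terminal and wiring
        }
    }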

The NeatTools system is complex enough that many problems need to be solved. Among them is finding effective auditory techniques (e.g., auditory greeking) for providing the more global overviews that visual layout on the screen otherwise affords. We believe, however, that it will be feasible to create strategies that allow individuals who are completely blind to use the NeatTools visual programming environment effectively and efficiently. In the process, we expect to identify a number of key techniques and strategies for addressing the issues faced by individuals who are blind in working with complex graphical user interfaces, such as those found on most modern computer systems, as well as the even tougher challenge of access to vision-based direct-manipulation interfaces.

Four tests are planned to evaluate the effectiveness of the strategies developed.

Test 1: Individuals who are blind will be asked to construct complex dataflow networks using NeatTools with its built-in enhanced interface, which will become part of the standard program. Children who are blind and who show advanced aptitude in science and logic will be provided with NeatTools packages with the enhanced interface and asked to create both simple designs and free-form "inventions" with the tools. The experiment will involve children in the 5th, 8th, and 12th grades, as well as undergraduate and graduate students. Individuals with severe physical disabilities that prevent their use of a mouse will also be asked to carry out the same exercise.

Test 2: Individuals who are blind will be asked to help mentor and troubleshoot circuits constructed by younger children who can see. The ability to collaborate will be tested using older individuals who are blind acting as mentors to less experienced (perhaps some younger and some the same age) individuals who are sighted. The goal is to determine whether the individuals who are blind can truly work collaboratively on a NeatTools desktop, analyzing and providing meaningful and constructive advice to the sighted individual using this visual programming tool.

Test 3: Individuals who are sighted will be shown the tool with the enhanced features disabled. After they have used it for a while, the enhanced features that allow individuals with disabilities to more easily access the product will be enabled. After the individuals have used the tool for a while longer, they will be observed to see whether they turn the features on or leave them off during subsequent exploratory or play sessions.

Test 4: A controlled study will include a large number of individuals, arranged in classes in local schools in Syracuse (where the PI and his department have strong ties with science teachers in the area) as well as in the Seattle area (coordinated by co-PI Burgstahler). Some students, chosen at random, will use NeatTools with the enhancements turned on, and the others with them turned off. The goal will be to see which group finds it easier to use the tools, as measured either by speed in constructing a network or troubleshooting an imperfect network, or by the ability to correctly construct a system.

Access to Networks with TangoInteractive

{… heading used to be Knowledge Networks from KDI98}

With the progressive increase of computers and networks in schools and colleges, we are in the early stages of a technological revolution in education delivery and in modes of learning. Interactive collaborative systems such as Tango will play a significant role in education, notably in science education. Because of the multimedia nature of these systems, however, individuals with disabilities, such as hearing impairment or deafness and visual impairment or blindness, are in danger of being excluded. Systems that require fine motor control and direct manipulation may also exclude individuals with physical disabilities. Since it is highly unlikely that the nation will build a second science education system for these different disabilities, it is important to figure out how to create systems that can be used by individuals with physical and sensory disabilities while still maximizing the use of the senses and manipulative abilities of those who possess them.

It is interesting to note that individuals with disabilities are at no more of a disadvantage in these new virtual interaction spaces than they would be in a regular education space. Individuals who are deaf would have the same difficulties interacting easily in classrooms where the teachers and others are not able to sign and where real-time captioning is not provided. Individuals who are blind would have the same difficulties interacting in classrooms with printed books and teachers writing on the blackboard. As interactions on the Web are increasingly made to imitate interactions in daily life, the problems faced by individuals with disabilities on the Web increasingly mirror the problems they face in the real world—with one exception.

That exception is that, in the electronic world, all the information is mediated electronically. As a result, it is easier to inject modifications into the data streams, and even translations of them. For example, in an environment where workers in their offices are talking back and forth using telecommunication, it is much easier to isolate clear signals from the individual participants and run them through voice recognition software to generate a visual representation of what they are saying. Even if two people are talking simultaneously, it is possible to carry out voice recognition in this environment, since it is usually possible to separate their individual auditory data streams. As the usage and quality of recognition software steadily increase, the possibility opens for practical use of speech recognition in this limited environment, perhaps even before it becomes possible in real-life environments. Moreover, real-life environments would require instrumenting the various speakers, whereas in this environment they already are instrumented.

Similarly, information that is presented visually, papers that are exchanged, and information presented in slides or other visual media can all be much more easily processed through OCR and, perhaps some day, image description processes that could instantly render printed text into auditory or Braille form for individuals who are blind. Small text can be enlarged for individuals with low vision. Even individuals who have reading disabilities, from dyslexia or for other reasons, can access printed information by having it rendered vocally. All of the visual presentations of text, which would otherwise be inaccessible in real life situations, can be made accessible in a transparent fashion.

Some types of information will remain inherently inaccessible because they are designed for the primary sense that a user may lack. For example, if the Mona Lisa or Guernica were included on a slide in someone's presentation, an individual who is blind would not have any easy mechanism for rendering it in a form that they could perceive. However, this would be no more of a problem in this interactive collaborative environment than it would be in real life. They may, however, be able to print Guernica on swell paper, which would give them a raised-image representation of it and, in a matter of seconds, a much better idea of what was being discussed than they would otherwise have.

Of course, this is all much easier to talk about in theory than to put into practice. Signal levels are poor, voice recognition is not yet good enough, and the architectures do not necessarily support easy separation of data streams for analysis or translation. Gestures and pointing are often used, and effective mechanisms have yet to be demonstrated (although they can be envisioned) to allow individuals who are blind to quickly ascertain what objects or words are being pointed to, what gestures are being made, etc.

The purpose of this portion of the project will be to examine the problems faced by individuals with different disabilities in collaborative environments and to draw up a series of requirements for the infrastructure to better support cross-disability accessibility and translation. Specifically, individuals who are blind and individuals who are deaf, as well as individuals with physical disabilities, will participate in collaborative work sessions along with individuals who have no disabilities. Each individual will have an assistant and an observer. The assistant will continually provide information to the individual with a disability to compensate for the information that they are unable to perceive (or, for the individual with physical disabilities, the activity that they are unable to perform at all, or quickly enough). The observer will note the type and character of the assistance needed (the sessions will also be videotaped). Care will be taken to separate information the individual truly needed for the interaction from excess information that might be provided by the assistant.

The various types of information or physical functions that require assistance will be noted, as will the source of the information (software vs. a person), its characteristics (text, handwriting, voice, gesture), and the way it is transmitted. The project teams will then hypothesize strategies that could be used today or in the future to make this information accessible to individuals with disabilities. For example, text displayed on the screen might be run through OCR. When someone points to text, a gesture recognition engine might recognize an elongated object appearing over the printed text and automatically alert the individual to the block of text at the end of that object (the end that does not run off screen). The team will then try to identify changes that would need to be made in the architecture to support this capability. For example, if voice recognition were used to translate voice into a visual presentation, then the architecture should be able to request and receive a higher quality audio stream. It should also keep the audio streams of individual speakers separate so that each could be run through voice-recognition algorithms. The system also needs to handle the simultaneous display of the speech and the recognized text as a standard part of the display functionality, so that users who are hard of hearing could select which to attend to.
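
The sketch below illustrates the architectural requirement just described: each participant's audio stream is kept separate so it can be run through its own recognizer, with captions displayed alongside the audio. All class and interface names are assumptions for illustration only; the recognizer stub could even be a human standing in, Wizard-of-Oz fashion.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    interface SpeechRecognizer {
        String transcribe(byte[] audioChunk);   // real ASR, or a human stand-in
    }

    class CaptionedConference {
        private final Map<String, SpeechRecognizer> perSpeaker = new HashMap<>();
        private final Consumer<String> captionDisplay;

        CaptionedConference(Consumer<String> captionDisplay) {
            this.captionDisplay = captionDisplay;
        }

        /** One recognizer per participant, fed by that participant's clean stream. */
        void addParticipant(String name, SpeechRecognizer recognizer) {
            perSpeaker.put(name, recognizer);
        }

        /** Audio arrives already separated by participant, so recognition can
         *  proceed even when two people speak at once. The mixed audio is still
         *  played for hearing users; each client chooses sound, captions, or both. */
        void onAudio(String speaker, byte[] chunk) {
            String text = perSpeaker.get(speaker).transcribe(chunk);
            captionDisplay.accept(speaker + ": " + text);
        }
    }

    class ConferenceDemo {
        public static void main(String[] args) {
            CaptionedConference conf = new CaptionedConference(System.out::println);
            conf.addParticipant("Alice", chunk -> "(recognized " + chunk.length + " bytes)");
            conf.onAudio("Alice", new byte[] {1, 2, 3});
        }
    }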

Where possible, the actual hypothesized mechanisms will be tested; that is, changes will be made to the architecture, and filters or translators will be introduced to test the ideas. Wizard of Oz simulations (i.e., a human "behind the scenes" in effect simulating an automated system) may be used to stand in for speech recognition or OCR capabilities that are on the horizon but not yet ready for this difficult an application.

Results of this phase of the project will be threefold: a) a report delineating the strategies and the required infrastructure features needed to support this type of accessibility, b) a report delineating the success of individuals with disabilities participating in interactive environments with real or simulated (Wizard of Oz) filters and translators in place, and c) changes to the Tango infrastructure to better support accessibility now and in the future.

Multimodal Display

{The following is edited lightly from Geoffrey’s 2/3/99 e-mail; please modify according to feedback from Al}

We will adopt an approach suggested by the Trace Center as a framework for cross-modal display optimized to each client (user profile). We make the reasonable assumption—at least for future education and training—that all the information is Web-based and described by the emerging W3C Document Object Model (DOM; www.w3.org/DOM/). We will use "active pages," obtained today by capturing all events and properties from a Web page in version 4 browsers. As dynamic HTML gains acceptance and browsers improve, we expect this support to improve; however, it is already good enough today for the limited support needed by common authoring tools for educational material. Note that this model is already the one used by TangoInteractive to support shared dynamic HTML, in the form of messages passed between clients sharing displays and expressing content in the W3C DOM. Although one usually considers TangoInteractive as linking different machines, this mechanism can be used internally to a given browser to allow dynamic interpretation of documents. The events received by Tango on each client will be passed through a software agent (SoftBot) that uses a user profile to map modalities optimally for the user. Here resides the NeatTools custom support for users who are blind or motor-impaired.

This approach can be applied in three useful ways. First, one can map modalities internally to a single client. Second, the falling cost of PCs makes it quite practical for a given user to access multiple PCs and their displays concurrently; TangoInteractive can then allow users to view simultaneously different modality mappings of a given page and choose the most appropriate one(s). This mode also elegantly supports comparative evaluation of particular display strategies. Finally, one can naturally use TangoInteractive with multiple PCs and multiple users, so that a set of collaborating users (teachers and students in our education example) can choose very different displays—each optimized by the SoftBot to a different profile. TangoInteractive’s WebWisdom subsystem (a multimedia database) already does this in a simple case when it allows each client to separately choose the resolution of a shared image, so as to make best use of network bandwidth.
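
A minimal sketch of such a modality-mapping agent follows: a DOM-level change event arriving at the client is rendered according to the modalities in that user's profile. The event and profile types are assumptions invented for the sketch, not the W3C DOM API or Tango's actual message format.

    import java.util.EnumSet;

    enum Modality { VISUAL, AUDITORY, TACTILE }

    /** One shared document change, in the spirit of a DOM mutation event. */
    record DomChange(String nodeId, String newText) {}

    class UserProfile {
        final EnumSet<Modality> available;      // e.g., a blind user lacks VISUAL
        UserProfile(EnumSet<Modality> available) { this.available = available; }
    }

    class ModalityAgent {
        private final UserProfile profile;
        ModalityAgent(UserProfile profile) { this.profile = profile; }

        /** Map one shared change onto whatever modalities this user has. */
        void handle(DomChange e) {
            if (profile.available.contains(Modality.VISUAL))
                System.out.println("[screen]  update node " + e.nodeId());
            if (profile.available.contains(Modality.AUDITORY))
                System.out.println("[speech]  " + e.newText());
            if (profile.available.contains(Modality.TACTILE))
                System.out.println("[braille] " + e.newText());
        }
    }

    class AgentDemo {
        public static void main(String[] args) {
            ModalityAgent blindUser = new ModalityAgent(
                    new UserProfile(EnumSet.of(Modality.AUDITORY, Modality.TACTILE)));
            blindUser.handle(new DomChange("para-7", "Assignment due Friday."));
        }
    }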

Interface Technology Development and Research

{Note: adapt the following text to become more generic; Gregg advises downplaying our own ‘toys’ in such a proposal, and rather showing how we are doing research with broad implications w.r.t. visual direct-manipulation interfaces and interactive collaborative environments; we ‘just happen to have’ Tango, NeatTools, and interface devices as models to conduct such research and exploration.}

We will expand the scope of NeatTools so that people with disabilities can exploit it to access computers and thereby participate actively in the ongoing revolution in information and communication technology. Our recent work has focused on students with quadriplegia, whom we have enabled to control computers with the help of custom interfaces. We will extend NeatTools so that the visual programming interface is accessible to users and developers who are blind (see above). With audible and tactile cues, such users will be able to develop applications in NeatTools to enhance their own learning experiences, lifestyles, and professional effectiveness, as well as to develop applications that will assist others in need. We will also enhance NeatTools so that users and developers who cannot hear can nevertheless experience multimedia functionality (color will be the main quantity substituted for sound).

Beyond this objective of extending NeatTools so that users who are blind can do visual programming, we will make this direct-manipulation visual interface generally cross-disability accessible, both for programming and for end-user control of graphical user interfaces, to enhance education and occupations in computer-science fields.

We will continue to expand our repertoire of sensors, transducers, and associated mounting hardware components. As new TNG interface devices are developed by our affiliated company, MindTel, these will be incorporated into the project. Currently, TNG-3 accommodates eight analog and eight digital inputs. In 1999, MindTel will produce the TNG-4 interface, which will have eight analog inputs and 22 bidirectional digital lines; this already exists as a fully working prototype, and the remaining effort will involve design of the enclosure(s) and connectivity options. Whereas TNG-3 operates from power derived from the serial port handshaking lines, TNG-4 will require separate power (perhaps obtained from a PS/2-port Y-adapter). Plans are underway to develop a TNG-5 interface that will operate from the Universal Serial Bus (USB), which is becoming common on new computers (PCs and Macs).

Curriculum Implementation and Project Experiments

{TBA — specifics about the four courses and what we propose to do about UA-ing them}

Formative Evaluation and Assessment

The research on this project, which will be both exploratory and developmental, will be evaluated as follows: a) quantitative performance assessment, including database recording of user events and other particulars (these capabilities are already built into NeatTools and have been programmed and demonstrated in applications); b) qualitative assessment by project participants and professional staff (subjective reporting, questionnaires, etc.); and c) formative evaluation throughout the project for continual mid-course corrections (records of these evaluations will be maintained for subsequent research analysis). Hypotheses will be formulated in which we compare our technology to existing technologies already in use by the participants with disabilities. Time and efficiency variables will be defined and monitored.
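
As a simple illustration of item a), the sketch below appends timestamped user events to a file for later database analysis. NeatTools has its own built-in recording facilities; this Java fragment is only a hypothetical stand-in for the general technique.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.time.Instant;

    /** Appends one timestamped row per user event, e.g.
     *  "1999-02-08T15:04:05Z,participant-1,wire-connected". */
    class EventRecorder implements AutoCloseable {
        private final PrintWriter out;

        EventRecorder(String path) throws IOException {
            out = new PrintWriter(new FileWriter(path, true));   // append mode
        }

        void recordEvent(String participant, String action) {
            out.printf("%s,%s,%s%n", Instant.now(), participant, action);
            out.flush();   // keep the log current even if the session crashes
        }

        @Override public void close() { out.close(); }
    }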

As has been our practice in every instance to date when beginning work with a new participant with disabilities, we will formulate initial proof-of-concept configurations of our interface software and hardware, based on careful assessment of what the individual is capable of doing. In other words, we map our technology to the person, develop a proof of concept, and then work toward a functional system. In our work with Eyal and Brooke, we have made the respective families independent in the use of our technologies, and we remain in contact for maintenance or refinement as needed.

With respect to extending our core technologies, we will aim for higher-level proof-of-concept demonstrations. For example, we will enhance NeatTools so that a person who is blind will nevertheless be able to perform "visual" programming efficiently and effectively, using audible and tactile feedback cues. Preliminary demonstration of success will constitute a proof of concept, on the basis of which we will provide a full implementation. In keeping with the general principles of universal design, this extension will provide enhanced usability for all users of NeatTools (to the extent they choose to avail themselves of the auditory and tactile feedback functionalities).

In the full proposal, more will need to be said about actual curricular implementation for education and careers.

 

Background

TangoInteractive

TangoInteractive (or Tango; http://www.npac.syr.edu/tango) is an advanced, powerful, and extensible Web collaboratory, and is perhaps the most flexible system of its type. It is aimed not at exploring research issues in collaborative system design, but rather at exploring applications such as those proposed here. In this regard, great effort has been put into making the base infrastructure robust enough to be used outside a tolerant research environment.

Tango is written in Java but supports collaborative applications in any language. Further, Tango is fully integrated with Web browsers, which provides the basis for convenient, familiar interfaces. To run Tango, one starts the system from a browser and connects to a Tango server. Both the client and server code for Tango are freely available on CD-ROM or from our Web site, which also contains well-documented APIs for C++, Java, JavaBeans, and JavaScript. Once in the system, the user can select from over 25 collaboratory applications to work on projects with partners, play a game of Bridge or Chess, take a class at a virtual university, create and use a public or private chat room, conduct a videoconference, view a movie, or surf with friends using the powerful shared browser. It is possible to do all this at the same time, in any combination, and multiple copies of applications such as chat rooms can be launched. Further, Tango can provide shared sessions for either client- or server-side applications. The latter include both shared (Web-linked) databases (as in the Oracle-based WebWisdom curriculum management system) and shared CGI scripts (as in our integration of NCSA’s Biology Workbench with Tango). We believe that no other collaboratory system, public domain or commercial, offers so many applications under such consistent and simple session and floor control.

Besides running Java applets under Tango, one can run JavaScript-based client-side Web applications. Moreover, in Tango the user can take an arbitrary HTML page and automatically turn it into a shared entity. To build a 3D VRML world, populate it with avatars, and let them interact, Tango provides support via two integration modes: VRML JavaScript nodes and the External Authoring Interface. Applications written in C or C++ (e.g., PowerPoint) can also be readily adapted to run collaboratively under the Tango API. In this proposal, we will use the C++ interface of Tango to link the specialized NeatTools interfaces. Note that the shared collaboration model of Tango allows each client to have a different view of the same shared application, which is essential for universal access; shared-display systems such as Microsoft’s NetMeeting are less flexible.

NeatTools

We have been developing NeatTools, a visual-programming and runtime environment for interfacing humans and computers (www.pulsar.org; http://www.pulsar.org/ed/manuscripts/mmvr7/MMVR99_paper_5.htm). It enables users to input information to a computer through various kinds of sensors and devices and, among other things, displays the information in various multimedia formats. One constructs a dataflow network (visual program) in this environment by dragging and dropping objects (modules) from an on-screen toolbox to the desktop workspace and then connecting their input, output, and parametric control points with lines. Editing and execution of programs occur simultaneously, so no compilation is necessary.

NeatTools is written in C++ on top of a Java-like cross-platform application programming interface (API) so that it can run on multiple platforms, including Windows 95/98/NT, Unix, Irix, and Linux. Macintosh will be supported once its multithreaded operating system is released. NeatTools is simple, object-oriented, network-ready, robust, secure, architecture-neutral, portable, high-performance, multithreaded, extensible, and dynamic. It can interface with serial, parallel, and joystick devices (see below). Other significant features include Internet connectivity; display of time signals; mathematical and logic functions; character generation; multimedia; Musical Instrument Digital Interface (MIDI) controls; and a visual relational database with multimedia functions. A developer’s kit, for writing new external modules, is also available online for those proficient in object-oriented programming in C++.
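
The toy Java fragment below illustrates the dataflow idea: modules push values to connected modules as soon as an input changes, so editing and execution coexist with no compile step. NeatTools itself is written in C++, and these classes are hypothetical stand-ins rather than its actual module interface.

    import java.util.ArrayList;
    import java.util.List;

    abstract class DataflowModule {
        private final List<DataflowModule> outputs = new ArrayList<>();

        void connectTo(DataflowModule downstream) { outputs.add(downstream); }

        /** Push a new value downstream; each module transforms and forwards it. */
        void receive(double value) {
            double result = compute(value);
            for (DataflowModule m : outputs) m.receive(result);
        }

        abstract double compute(double input);
    }

    class Threshold extends DataflowModule {        // digital output: 0 or 1
        final double level;
        Threshold(double level) { this.level = level; }
        double compute(double in) { return in >= level ? 1.0 : 0.0; }
    }

    class ConsoleMeter extends DataflowModule {     // a simple display module
        double compute(double in) { System.out.println("meter: " + in); return in; }
    }

    class FlowDemo {
        public static void main(String[] args) {
            Threshold t = new Threshold(0.5);
            t.connectTo(new ConsoleMeter());
            t.receive(0.8);                         // e.g., a sensor sample; prints 1.0
        }
    }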

Devices

In conjunction with development of the NeatTools environment, we have also produced the palm-sized TNG-3 hardware interface box, which detects signals from sensors and switches. Both TNG-3 and the latest version, TNG-3B, have 8 analog and 8 digital input channels and stream data to the serial port of a personal computer at 19200 baud. We also have a working bench prototype of TNG-4, which has more capacity and versatility, with 8 analog and 22 digital lines that are dynamically bidirectional. In other words, each digital line can serve as an input or an output, and this can be dynamically reconfigured at any time within NeatTools by manual or automatic control. We have used NeatTools to interface various types of hardware devices to TNG-3, including displacement potentiometers, photocells, magnetic sensors (Hall Effect transducers), pressure transducers, and bend sensors. The customizable and extensible features of these modular hardware and software systems are important for the project goal of extending such technology to accommodate users with a broad range of disabilities.
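
To illustrate the host side of such a device, the sketch below decodes a TNG-3-style data stream. The actual TNG-3 frame layout is not specified in this document, so the format assumed here (a sync byte, eight 8-bit analog values, then one byte holding the eight digital bits) is purely illustrative, and reading from a real serial port would require a serial I/O library.

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    class Tng3Reader {
        private static final int SYNC = 0x55;       // assumed frame-marker byte

        private final InputStream in;               // e.g., a serial-port stream
        final int[] analog = new int[8];            // 8 analog channels, 0-255
        final boolean[] digital = new boolean[8];   // 8 digital inputs

        Tng3Reader(InputStream in) { this.in = in; }

        /** Block until one complete frame has been decoded. */
        void readFrame() throws IOException {
            int b;
            while ((b = in.read()) != SYNC) {       // resynchronize on the marker
                if (b < 0) throw new EOFException("stream closed");
            }
            for (int i = 0; i < 8; i++) analog[i] = in.read();
            int bits = in.read();
            for (int i = 0; i < 8; i++) digital[i] = ((bits >> i) & 1) != 0;
        }
    }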

Exemplary Projects to Date Using Interface Technologies (Syracuse)

In 1996 we began working with Eyal Sherman, a teenager with brain-stem quadriplegia enrolled at Nottingham High School in Syracuse (http://www.pulsar.org/eyal/). Eyal became quadriplegic at age 5 as the result of a brainstem stroke following surgery to remove a large brainstem tumor. He depends upon a ventilator for breathing and is unable to speak or move his head, so he clearly represents extreme circumstances. Notwithstanding Eyal’s disability, he has normal intelligence; indeed, he is an honors student in his senior year. He has applied for admission to Syracuse University in the fall of 1999; his strong academic record {not to mention a strong letter of recommendation by the PI} will assure his acceptance. Eyal has agreed to continue working with us as an undergraduate research participant.

Our work with Eyal has allowed him to participate actively in school, where he is enrolled in New York State Regents classes, a program for college-bound students. In addition, Eyal now frequently communicates with others by using e-mail, listserv mailing lists, and Usenet newsgroups, and routinely navigates the World Wide Web using search engines and other resources. For example, Eyal now corresponds by e-mail with his sisters in remote cities and even carries on real-time conversations with them using AOL Instant Messenger.

The current system that Eyal uses employs a chin joystick (extracted from a $20 Nintendo-compatible game controller) for mouse cursor control, along with facial switches mounted on his eyeglasses. The PI, who has led the project with Eyal, wrote the JoyMouse Network, a NeatTools application that enables Eyal to control the Windows graphical user interface by his facial gestures (for images, downloads, and documentation, follow the NeatTools icon link on our home page, www.pulsar.org, and proceed to the NeatTools download page, where the JoyMouse can be found). He can type by using a low-cost commercial on-screen keyboard (Fitaly One-Finger Keyboard from TextWare Solutions; www.twsolutions.com) and can generate speech by using software that is included with SoundBlaster sound systems.
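
The core of the JoyMouse idea can be sketched in a few lines: joystick deflection becomes relative cursor motion. The actual JoyMouse is a NeatTools dataflow application; the Java fragment below only illustrates the mapping, using the standard java.awt.Robot class, and assumes axis values already normalized to the range -1..1 (e.g., derived from TNG-3 samples). The dead-zone threshold and gain are illustrative parameters, not values from the real system.

    import java.awt.AWTException;
    import java.awt.MouseInfo;
    import java.awt.Point;
    import java.awt.Robot;

    class JoyMouseSketch {
        private final Robot robot;
        private final int gain;                     // pixels per sample at full deflection

        JoyMouseSketch(int gain) throws AWTException {
            this.robot = new Robot();
            this.gain = gain;
        }

        /** Called once per sensor sample; x and y are joystick axes in -1..1. */
        void onSample(double x, double y) {
            if (Math.abs(x) < 0.1 && Math.abs(y) < 0.1) return;   // dead zone
            Point p = MouseInfo.getPointerInfo().getLocation();
            robot.mouseMove(p.x + (int) (x * gain), p.y + (int) (y * gain));
        }
    }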

Thanks to our work, Eyal and his family are now largely independent of our team. His mother is able to set up the custom interface hardware and software in a matter of minutes and then she can leave Eyal literally to his own devices. In the coming years, Eyal will continue to be an active member of our research team (see Project Personnel, below) while he is enrolled at Syracuse University. Given his recent experience in successfully using and guiding the development of our technologies, he will be a key member of the present project, although his time may be limited by the demands of his undergraduate program. In the words of his father, Rabbi Charles S. Sherman, Eyal is now "guarantee(d) ... a productive future" and "has become ... a symbol of the possibility and hope for others, not just within the upstate New York area, but literally throughout the world." (letter on file).

The modular interface hardware and software that we developed for Eyal and others constitute a highly extensible system of interchangeable components. Thus, we can easily modify the system for the unique needs of students with different types of disabilities. We have also worked extensively with Brooke Kendrick, a seven-year-old child with cerebral palsy and spastic quadriplegia (http://www.pulsar.org/brooke). We have prototyped a set of interface devices for her, including ultra-low-cost conductive-plastic-film pressure sensors and a custom joystick fabricated by students in the Industrial Design program at Syracuse University. These devices have worked remarkably well, and Brooke has responded enthusiastically to the customized configurations prepared with our NeatTools software. Brooke can now play educational computer games and use other computer programs designed to aid her cognitive development. In the words of Brooke's mother, Suzanne Henderson-Kendrick (Coordinator, Graduate Medical Education Services, SUNY Health Science Center), "she will be able to use this software package (NeatTools) to type her own thoughts on a computer instead of using her pre-programmed communication device" (letter on file).

Our work in Syracuse was featured nationally via the Associated Press wire service during and after the July 4, 1998 weekend. During the weekend of August 1, 1998, our work, particularly with Eyal, was shown in a two-minute segment on CNN Headline News (see the "media" link on the www.pulsar.org home page).

 

Results from Prior NSF support

Edward Lipson (PI) and Geoffrey Fox (Co-PI)

NSF award number: ASC-9523481; Dates: 11/1/95–10/31/99; Amount: $927,935 (total costs for three years, plus current no-cost extension; not including two supplements discussed below); "Integration of Information Age Networking and Parallel Supercomputing Simulations into University General Science and K–12 Curricula"

This Metacenter Regional Alliances grant is concerned with developing Web-based educational modules based on four supercomputing simulation projects: a) membrane fluctuations, b) fluid dynamics, c) crackling noise and associated hysteresis, and d) crack propagation in societal structures, such as dams. The former two projects are conducted at Syracuse University, in the physics department and in aeronautics, respectively. The latter two take place at Cornell under a subcontract. Because of space limitations, this report will focus on the physics department activity; additional information on all four modules is available via our grant project Web site (www.simscience.org). We have created Java applet versions of both our fluid- and crystalline-membrane simulations (which arise from representations of "string" theories in particle physics and cosmology). We have also written several other Java applets to illustrate other ideas in physics and principles behind the main simulations. For example, we have written an applet that simulates a simple spring—how the force and stored energy change with extension—to illustrate how the springs used in our crystalline-membrane applet work. In addition to Java applets and digital video, we have used the Virtual Reality Modeling Language (VRML) to visualize the output of off-line membrane simulations. We use examples from everyday life, and biology in particular, to motivate explanation of the concepts underlying membrane physics. We have also demonstrated collaborative versions of some applets using NPAC's innovative Tango collaboratory system (www.npac.syr.edu/tango/). The project is carried out with the participation of one postdoctoral research associate at Syracuse and several graduate and undergraduate (REU) students at both institutions. We were awarded a $25,000 REU supplement in the summer of 1997. In addition, we have been awarded a $350,000 supplement for integration of this project with vBNS/Internet II; however, these funds will be spent only after connectivity is achieved (first quarter 1999).

Publications

Sheryl Burgstahler (DO-IT)

NSF Award #HRD-9550003, $1,539,282, 10/1/95-9/30/99; PPD: DO-IT Extension

The goal of this project is to increase the representation of people with disabilities in the academic and career fields of science, engineering, mathematics, and technology. Computers, adaptive technology, and the Internet are used as empowering tools for project participants. Data available from this project include survey data collected from participants, mentors, and parents; summaries of electronic mail discussions; and program planning steps. Publications from the DO-IT project include:

{This list can be reduced/contracted if we need the space}

Burgstahler, S. E. (1998). Universal design: Making your web pages accessible to all. Puget Sound Computer User, 13 (11), 68.

Burgstahler, S. E., & Comden, D. (1998). Creating a level playing field for the World Wide Web. Ability, 98 (2), 56-59.

Burgstahler, S. E. (1998). Making Web pages universally accessible. Computer-Mediated Communications Magazine, 5 (1).

Burgstahler, S. E., Baker, L. M., & Cronheim, D. (1997). Peer-to-peer relationships on the Internet: Advancing the academic goals of students with disabilities. SIGCUE Outlook: Special Interest Group for Computer Users in Education, 25 (3), 12-22.

Burgstahler, S. E. (1997). Peer support: What role can the Internet play? Journal of Information Technology and Disability, 4 (4).

Burgstahler, S. E., Comden, D., & Fraser, B. (1997). Universal design for universal access: Making the Internet accessible for people with disabilities. ALKI, 13 (3), 8-9.

Burgstahler, S. E. (1997). Students with disabilities and the online classroom. In Z. L. Berge and M. P. Collins (Eds.), Wired Together: The Online Classroom in K-12, Volume I. Cresskill, NJ: Hampton Press, Inc.

Burgstahler, S. E. (1997). Teaching on the Net: What's the difference? T.H.E. Journal, 24 (9), 61-4.

Burgstahler, S. E., & Comden, D. (1997). World wide access: Focus on libraries. Journal of Information Technology and Disability, 4 (1-2).

Burgstahler, S. E., Comden, D., & Fraser, B. (1997). Universal Access: Electronic resources in libraries - Presentation materials. Seattle, WA: University of Washington.

Burgstahler, S. E., Baker, L. M., & Cronheim, D. (1997). Peer-to-peer relationships on the Internet: Advancing the academic goals of students with disabilities. In National Educational Computing '97 Conference Proceedings. Washington, D. C.: T.H.E. Journal and NECA, Inc.

Gregg Vanderheiden (TBA)