A Cross-Disability-Accessible Knowledge Network for Education and Collaboration in Science and Technology
Project Description

1. Goals and Guiding Principles

The basic goal of the proposed work is to build a Cross-Disability-Accessible Knowledge Network (CDAKN) and then to evaluate and advance its effectiveness both for distance education in science and technology curricula and for scientific collaboration. This goal and the project are based on the following principles:

  1. People need to be integrated into society and its activities irrespective of physical disabilities.
  2. Web technologies and pervasive communication infrastructure provide a universal backbone on which one can build more effective cross-disability access (CDA) with specialized perception and expression capabilities optimized for individuals.
  3. The ‘anyplace’ characteristic of the Internet is particularly attractive for individuals with disabilities, whose geographical mobility may be limited. Internet collaboration is thus especially important for building knowledge networks involving individuals with disabilities.
  4. Best practice in information system standards, especially the work of the W3C including their document object model (DOM, http://www.w3.org/DOM/), provides an organizing framework on which to build towards cross disability access.
  5. The Trace Research and Development Center (http://trace.wisc.edu) in Wisconsin has pioneered the principles of universal design for computer interfaces and brings a broad national knowledge network. Its contacts with the Web Consortium (W3C) allow us both to influence and to be influenced by key national standards.
  6. Syracuse University has developed a state-of-the-art collaboration (TangoInteractive http://www.npac.syr.edu/tango) system with an architecture supporting customized cross disability delivery.
  7. Science, mathematics, engineering and technology (SMET) education is a national priority, for which cross disability universal participation is highly desirable.
  8. DO-IT (Disabilities, Opportunities, Internetworking, and Technology at the University of Washington, http://weber.u.washington.edu/~doit/) and CAST (Center for Applied Special Technology, http://www.cast.org) are recognized for their pioneering work in applying and evaluating technology to help those with disabilities in both educational and job-training areas. Their contacts will give the team appropriate testbeds for our CDAKNs.
  9. The best practice interface technology for sensory and physically disabled individuals is available within the team through Syracuse's low-cost technology and deployment projects (http://www.pulsar.org/, TNG/NeatTools), allowing us to extend the KN to those with severe muscular disabilities.
  10. Distance education, including both teachers and students with disabilities, exemplifies the general goal of implementing societal functions in a way that allows universal participation.
  11. There is natural synergy with telemedicine applications including education as part of rehabilitation and here the Rehabilitation Engineering Research Center at Catholic University (http://www.hctr.be.cua.edu/RERC/) brings innovative interfaces and broader testbed activities.
  12. Distance education provides an attractive early testbed for new technology, because it has more structure than spontaneous collaboration and so puts less stress on base hardware and software technologies. We have shown this in Syracuse's successful distance education experiments including those between Syracuse and Jackson State (historically black college in Jackson, Mississippi) using TangoInteractive.
  13. Scientific research collaborations increasingly depend on electronic communication. A CDAKN can advance science by inclusion of team members regardless of geographical location or (dis)ability.

Bringing these themes together, this project proposes to explore the proposition that multisensory interactive collaborative environments can be created that allow participation by individuals who have different types of physical and sensory limitations, whether acquired at birth, through accident or injury, or as a result of aging. Specifically, we propose to create a knowledge network both to explore this issue and to act as a test bed for the topic.

In implementing this project we will actually create two knowledge networks. One will be based around the topic of science education. This area is chosen because it represents an already existing base of knowledge, which can be used as a test bed early in the project to explore these issues. A second knowledge area and network will be established over the course of the project and will be focused on the topic of cross-disability access to collaborative environments and collaboratories. Using these two test beds, we will explore both the issues surrounding access to multimodal environments (visual, auditory, and interactive) by individuals who have visual, auditory, and manipulative limitations, and research strategies for addressing access by these groups.

Although it is a common assumption that systems cannot be designed to be simultaneously usable by individuals with multiple disabilities, the Trace Center has found that this is not necessarily the case. It has, for example, developed multimedia touchscreen kiosks that are simultaneously usable by individuals who have low vision, who are blind, who are hard of hearing, who are deaf, who have reading problems, who cannot read, and who have physical disabilities involving both weakness and severe athetosis. Moreover, these technologies have been transferred to commercial production: they are currently deployed in airports and libraries and will soon be distributed nationwide in voting booths. These production systems demonstrate that it is both practical and valuable to research systems that cover a broad range of disabilities.

The challenges posed by interactive collaborative environments are much more severe, but we have high expectations that this project will lead to both pragmatic solutions and a series of very interesting research questions and technology challenges. Moreover, we expect more people without disabilities to benefit from this research than people with disabilities, even though people with disabilities are the primary target of the research. This is a natural consequence of providing more flexible interfaces and cross-modality translation capabilities. For example, the beneficiaries will include all mobile computing users, any users wanting to interact with systems verbally, anyone using artificial agents (which are inherently deaf and blind), and anyone wishing to access information in hostile or constrained environments.

The following section describes the proposed project in detail, while sec. 3 contains technical background. In accordance with NSF regulations, separate parts of the proposal contain a discussion of the appropriateness of the proposal for KDI and the roles of project personnel, results from prior NSF support, a plan for dissemination of results and institutional commitment, performance goals, and the management plan. These follow the project description, while a separate proposal section contains references.

 

2: Cross Disability Access Knowledge Network

2.1: Project Methodology and Activities

We propose to research the issues underlying CDAKNs by building such a system and evaluating it on two distinct types of testbed. The first is the scientific collaboration testbed formed by this proposal team itself. The second, and broader, is a KN aimed at science and mathematics education with cross disability access.

The project can be divided into three phases, which correspond roughly to each year of the three-year proposal. Firstly, we build the scientific collaboration KN using existing technology and experiment with different approaches to cross disability interfaces aimed at two particular user classes -- the blind and those with severe physical disabilities. It is understood in the universal access field that it is important to target all disabilities; so, although it broadens our focus, we will where possible target the full population, as only this can verify that our approach is sound. In the second year we will use a natural extension of our existing collaborative system to deliver a set of science and mathematics courses around the country. In the third year we will deploy a more sophisticated knowledge integration framework incorporating new base infrastructure and build cross disability access for it. Throughout the project we will do extensive evaluation and iteratively feed the results of this process into our technology and testbed activities. We will follow and incorporate national standards including those from the W3C (Web Consortium), IMS (Educause), and ADL (Dept. of Defense). We will follow relevant leading-edge technology research through ongoing interaction with the Center for Innovative Learning Technologies (CILT) and the EOT (Education Outreach and Training) group of the NSF Partnerships in Advanced Computational Infrastructure. We will test the generality of our ideas by investigating relevance to related knowledge networks such as telemedicine and to emerging interfaces including virtual reality. Section 3 gives many technical details; in this section we describe the broad principles and activities.

We make one important assumption: our KN will be built by the integration of people and information. We assume that the information is all web-based and that the knowledge network is built around the Web. There are many important forms of web-based information, but we will focus on those which can be organized in terms of the W3C document object model. Roughly, this says we assume that we will use web pages built in terms of (advanced) HTML. This gives us a quantitative framework for our CDAKN in terms of the sophisticated, albeit not yet complete, W3C document object model. Note that the Web gives us a successful model for retained knowledge that is ready to be shared at a distance, and the DOM in some sense quantifies this information model. However, the Web does not yet have a consolidated, successful process model for computer-supported collaborative work or, more specifically, for teaching and learning. We intend to build on the courses currently being successfully taught at a distance via TangoInteractive as another technical building block.

The W3C DOM specifies a hierarchical dynamic organization of document fragments with an event model and a defined interface to scripting languages, enabling browsing and interpretation of user interactions. Scriptable style sheets allow one to dynamically customize the cross disability rendering of information. Systematic use of XML with domain-specific ontology is a key concept, as it allows one to give a more precise expression of knowledge and its cross disability rendering. Formally, a major deliverable of our project will be enhancements of the W3C DOM to support cross disability access, and this product will be disseminated and discussed through the W3C working groups. The Web Content Accessibility Guidelines that will be released soon by the W3C are an important contribution, and we want to build on them and address their limitations. The W3C work provides a good guide for cross-disability success in learning resources that are to be used asynchronously in self-directed learning. Building on the lessons from successful Tango courses, this project will be able to extend the scope of cross-disability-access technology to more dynamic (including synchronous) cases.
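To make the style-sheet mechanism concrete, the following minimal JavaScript sketch shows how a client could switch among alternate style sheets according to a user profile; the profile object, the style-sheet title, and the surrounding markup are illustrative assumptions made for this sketch and are not part of the W3C specifications or of any existing CDAKN component.

  // Hypothetical user profile; in practice this would come from the client's CDAKN settings.
  var profile = { largePrint: true, selfVoicing: false };

  // Assume the page head declares alternate style sheets, e.g.
  //   <link rel="alternate stylesheet" title="large-print" href="large.css">
  // Enabling one and disabling the others changes only the rendering,
  // never the document content or its DOM structure.
  function selectStyleSheet(title) {
    var links = document.getElementsByTagName("link");
    for (var i = 0; i < links.length; i++) {
      if (links[i].rel.indexOf("stylesheet") !== -1 && links[i].title) {
        links[i].disabled = (links[i].title !== title);
      }
    }
  }

  if (profile.largePrint) {
    selectStyleSheet("large-print");
  }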

Note that although the key information and knowledge representation underpinnings, the W3C DOM and XML, are still evolving, they are implemented well enough in existing version 4 and 5 browsers, with dynamic HTML and JavaScript, that we can build our testbeds. We emphasize information represented in XML and HTML as these have a unifying DOM -- our approach can be generalized to Java applets and other sophisticated web environments, but this will not be our major emphasis. We will, however, use authoring systems such as PowerPoint, which have only modest web export capabilities, although these are improving. This allows us to reuse the investment in teachers' knowledge of existing tools.

Initially, we will form the collaborative network using TangoInteractive, with the cross disability knowledge domain being the material defining this project itself. We will identify prototypes of the educational material to be used in the later testbed deployment phases of the project. As described in sec. 3.1, we have tentatively chosen course modules from computer science and physics as the basis of our CDAKN for the testbeds. We need to develop methodologies that allow web technologies to work across disabilities, particularly in an interactive environment. We will analyze our initial information resource and rendering devices from three points of view:

  1. Are the "documents" themselves flex-modal? In particular, can they be viewed visually or auditorially with all of the information presented? Further, can all of the necessary manipulations be done across disability via text commands, i.e., from the keyboard or from an assistive device used by the physically disabled? This would also make them operable by voice.
  2. Are the players that are used to present the information cross-disability accessible? Can they be controlled via text (i.e., from the keyboard)? Further, do they have a way of visually representing any captions that are built into the material to accompany an auditory presentation? Do they provide self-voicing for those who cannot see (the best option), and are they compatible with screen-reading technologies so that they can be read if viewed on a platform that has screen readers?
  3. Are the interaction channels cross-disability accessible? In distance education and collaborative environments such as TangoInteractive, there are audio and visual channels that allow direct communication and interaction between the parties. We need to make provision for all of the audio channels to be translated into visual form: translating speech to text, identifying speakers, and supporting translation of multiple people speaking at the same time by having multiple text blocks appear on the screen (as sketched after this list). Other non-speech sounds must also be translated and presented. Finally, visual information must be described and, where possible and appropriate, presented tactilely. Generalizing, this implies making the basic collaborative functions of TangoInteractive cross-disability accessible. Initial work has been done by the ATRC in Toronto on WebCT chat and whiteboard tools. Special tools such as the "raised-hands" applet of TangoInteractive would of course also need to be modified to be universally accessible. We also expect to develop tools that allow participants in the CDAKN to better support universal rendering by imposing more formal structure and by asking participants to present key material in multi-modal form.
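The "multiple text blocks" idea for simultaneous speakers can be illustrated with a minimal JavaScript sketch; the captionArea container, the addCaption entry point, and the source of the speech-to-text (a human captioner or a recognizer) are assumptions made only for this illustration.

  // Keep one caption block per speaker, so overlapping speech stays readable
  // instead of being interleaved into a single stream.
  var captionBlocks = {};

  function addCaption(speakerName, text) {
    var block = captionBlocks[speakerName];
    if (!block) {
      block = document.createElement("div");
      var label = document.createElement("strong");
      label.appendChild(document.createTextNode(speakerName + ": "));
      block.appendChild(label);
      captionBlocks[speakerName] = block;
      document.getElementById("captionArea").appendChild(block);   // hypothetical container
    }
    block.appendChild(document.createTextNode(text + " "));
  }

  // Example use: two participants speaking at once each keep their own block.
  // addCaption("Instructor", "the membrane fluctuates under thermal agitation");
  // addCaption("Student A", "is that the same as Brownian motion?");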

The essential technical idea is that TangoInteractive shares the XML (initially HTML) specification of information, and this specification is mapped, using the web scripting API of Tango, separately on each client workstation to modify the style sheets used. In the first year we will focus on the existing DOM, using the conventional JavaScript API to meet the goals described above. We will use Syracuse's NeatTools software to build appropriate device interfaces. We will of course have to evaluate and choose from existing interface devices, or modify them, so that they support the desired cross-disability rendering. Trace will lead this in the sensory-impaired area, while for the physically disabled, Catholic University will co-develop interface technology with the Syracuse team and will develop assessment ‘instruments’ for formative evaluation.
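The per-client mapping can be sketched as follows; the onSharedDocument entry point, the element id, and the class name are placeholders chosen for this illustration and are not the actual Tango scripting API.

  // Placeholder entry point: assume the collaborative layer delivers the same shared
  // document fragment to every client; only the local rendering is allowed to differ.
  function onSharedDocument(fragmentHTML, profile) {
    var target = document.getElementById("sharedView");   // hypothetical container
    target.innerHTML = fragmentHTML;                       // identical shared content everywhere
    // Rendering decisions are purely local: the shared state is the markup,
    // not the pixels, so each participant can use a different modality.
    document.body.className = profile.largePrint ? "large-print" : "";
    if (profile.selfVoicing) {
      speakFragment(target);
    }
  }

  // Stub: a real client would hand this text to a self-voicing engine,
  // or simply leave the rendered page to the user's screen reader.
  function speakFragment(element) {
    return element.textContent || element.innerText || "";
  }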

Concurrently with this technical activity, another thrust will study the organization of the educational material into knowledge domains. A focus of this project is the process by which knowledge evolves as a topic (unit knowledge domain) and flows through a life cycle from research to teaching to textbook and to a heap of recombinant courseware modules, which we view technically as forming a shareable object space described in section 3.3.6. To make them accessible to people with disabilities, course modules need a certain minimum information content (redundancy). To make them easy to share at commodity prices, we layer the object spaces as much as possible on current Web data formats and the emerging document object model interoperation norm of the W3C. A key to having recombinant modules is that the modules have a powerful summary, so that planning can be done which integrates modules with both coverage of the desired instructional domain and continuity in terms of meeting prerequisites. Each of the knowledge refinement stages produces a more tightly integrated set of information units, which are technically easier to express in XML and allow a more precise universal rendering. The articulation of what it takes to comprehend the domain becomes progressively more thorough and is expressed in more widely accessible terms as one systematically abstracts and organizes the material in hierarchical fashion.

As an example, take the case of blind students and/or teachers. Here, as described in section 3.3, we will add support to TangoInteractive for an automated "orientation view" (who's there, what's happening), comparable to the role of the "table of navigation" in the DAISY/NISO digital talking book. In that effort there was substantial participation from blind people and from people expert in serving the blind, which made the process effective. We hope our project can replicate this success and extend it to a shared cross disability abstraction of knowledge. Articulating the relationships among the discourse fragments provides a higher level of knowledge consolidation and makes the course experience more ready to re-use and re-combine.

As part of this integration thrust, we will extend the archiving capability of TangoInteractive to be cross-disability capable, so that we capture material in multiple renderings as well as in the original XML form. This capture will be non-invasive and capable of immediate review. It is well known (cf. the general accessibility of information in Usenet FAQ documents) that the questions people actually ask, before they understand a topic, are the key to making knowledge accessible.

These activities will start in the first year and will be ongoing, with continual gathering of requirements from user, content, and technology points of view. In year 2, there will be a limited deployment with education sessions organized by CAST and DO-IT using the approach described in sec. 2.2. At this stage we will start to formalize our work in terms of new design principles for such a DOM, which will provide important extensions to the current World Wide Web Consortium model, which does not have cross disability access built into it. For instance, terms such as "onclick" or "onkeydown" currently specify event handlers, but as these reference capability-specific rather than universal input devices, they are clearly not in CDA form. TangoInteractive traps all DOM events for sharing with its JavaScript interface, and we will define a more abstract syntax that defines user events structurally rather than in terms of an explicit I/O device. This will lead to a CDADOM where content and events are specified abstractly, with mappings on conventional machines leading to familiar handlers. Although we will use the concept of a CDADOM to guide the project, it would be too ambitious to fully implement such an extended DOM. Our research and testbed experience will, however, help future revisions of the current W3C document object models become truly CDA.
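The intent of this abstraction can be sketched as follows; the intent name "activate", the dispatch function, and the element id are illustrative only and do not represent a proposed CDADOM syntax.

  // Normalize device-specific DOM events into a device-independent "activate" intent.
  function bindAbstractEvents(element, handlers) {
    element.onclick = function () {            // mouse users
      handlers.activate();
    };
    element.onkeydown = function (e) {         // keyboard, switch, or voice-driven users
      e = e || window.event;
      var key = e.key || e.keyCode;
      if (key === "Enter" || key === " " || key === 13 || key === 32) {
        handlers.activate();
      }
    };
    // An assistive-device driver (for example one built with NeatTools) could call
    // handlers.activate() directly, bypassing mouse and keyboard entirely.
  }

  bindAbstractEvents(document.getElementById("raiseHand"), {   // hypothetical element id
    activate: function () { /* share a "raised hand" event with the session */ }
  });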

As always in such projects, there will be ongoing experimentation motivated by the research objectives which imply hypotheses concerning CDA, KN (functionality, effectiveness, usability) and formative evaluation with consequent refinement. For example, can users who are abled, blind, deaf, or quadriplegic access the CDAKN and keep up with one another in interactive sessions? We will identify problems and take corrective design actions in an iterative fashion. The project will include quantitative performance assessment in Tango and in NeatTools interface programs (event tracking, database recording, data analysis). In this way we can strive toward developing a CDA-multimedia-interactive KN.

We will emphasize evaluation both of our concepts and, separately, of the particular realization in terms of TangoInteractive. The primary criteria to be used in evaluating the success of the techniques are twofold. Firstly, there is the ability of individuals with functional limitations to participate side-by-side with their peers who do not have disabilities. This includes the ability of these individuals to get similar information from the experiences and to score similarly on tests of comprehension of materials or interactions. Secondly, we will examine the reported benefit of the techniques to individuals who do not have any type of functional limitation. This criterion is very important: if the techniques and strategies do not have inherent benefit for everyone, then their promulgation is likely to be slow and limited.

It should be noted that this project does not propose to fully solve these issues. It does propose to have a significant impact on defining the key issues and identifying all of the "low-hanging fruit". This, in itself, can be of tremendous benefit to the two user groups (both with and without disabilities), as the more difficult issues are addressed.

Note that our approach to making material universally accessible implies a model for information specification that allows us to deliver material at a distance. This has obvious value to the disabled and will be an ongoing theme of the testbed activity. In the final year of the work, the major initial thrusts (cross disability representation and knowledge integration) will be firmly linked, and we will deploy and evaluate the cross disability knowledge synthesis and archiving capabilities described above and in section 3.

We will, of course, continue to experiment and plan for further work that seems likely to be attractive in this important emerging area. In such a rapidly moving field, we cannot predict well even a year or so into the future, but areas in which we will experiment include more general (than W3C) object models such as those implied by Java applets and sophisticated multimedia authoring systems.

2.2: Deployment and Assessment

Essential to our project is an iterative process of deployment, formative evaluation, revision, and re-deployment. To accomplish that research cycle efficiently, our project includes two organizations whose primary purpose is educational research and development with individuals who have disabilities. Both CAST and DO-IT have access to an active, diverse community of learners to test, customize, and apply knowledge networks that are accessible to individuals with a wide range of disabilities. They also have ongoing research projects that are intimately related to this project and that will allow us to leverage their methodological expertise and their research sites. DO-IT will make a unique contribution to the proposed project because of its ongoing work with a large group of high school, college, and professional individuals with disabilities who are interested in science, engineering, technology, and mathematics. In addition, DO-IT has developed an extensive network of contacts in K-12 schools throughout Washington State. CAST has recently completed two projects that investigated the accessibility of a commercially available web-based course delivery system. Conducted at several colleges and universities in New England (University of Southern Maine, University of Southern Connecticut, and Northeastern University), that research involved subjects and methodologies essentially identical to those that will be involved in this research. Having developed good research relationships with those institutions and with individuals who have disabilities within them (as well as with their disability offices), CAST will repeat the process for this research. CAST also has two Department of Education research grants that are investigating access issues in web-based learning environments for students at the high school level who have various high-incidence learning disabilities (e.g. dyslexia, attention deficit disorder). Another related NSF project is investigating desktop captioning in science classes for students who are deaf.

The basic process will involve training CAST and DO-IT staff on the base technologies to be used in this project. They will then collaborate with the developers of the course material to develop appropriate training material. DO-IT and CAST will be responsible for identifying the participants, offering the workshops, and then providing on-line and on-site support to the participants. They will conduct evaluations described below, which will be fed back into both the technology development and curriculum development project components. This iterative process will be repeated and drive the project forward.

CAST has found that it is sometimes difficult to obtain an adequately diverse sample of disabled students in a single institution; in this case, we will fill out the sample across several institutions. In previous research they have found that close observation and follow-up with a small number of students is more informative than cursory or summative evaluation with a larger number. Therefore, through contacts at offices of academic support for students with disabilities, they will identify a minimum of 6-10 students who span all three categories of disability identified above. Because of the individualized nature of the disabilities involved and the assistive technologies that will be used by these students, testing will be conducted in the student's optimal local setting. Where additional technologies are warranted for evaluation purposes, the CAST laboratory will be used. Note that TangoInteractive can deliver to many different sites simultaneously, and this deployment strategy fits our technology design. We will also involve abled participants so that we properly test our assertion that our approach will also generate better learning environments for all participants.

Students in this sample will thus test the evolving system by engaging in realistic assignments, as determined by the course material, in a setting where the individual is comfortable, well-supported, and private. CAST and DO-IT staff will introduce the system, train the student in its use, and conduct the on-site evaluation through structured observation of the example lessons, survey questionnaires, and extended interviews (see the proposed CAST methodology in the table below). Each evaluation may consist of multiple sessions so as to obtain optimal results. In the final year, data collection for the evaluation will be embedded within actual course trials rather than conducted in isolation.

The qualitative data will be analyzed to identify system compatibility with conventional assistive technologies, to evaluate within-system features and functions, and to ascertain user satisfaction with individual components and with the aggregate system. This information will be used to inform modifications during development and to evaluate the overall system in realistic settings.

Proposed CAST evaluation methodology:

Research goal: General sample characteristics and preferred adaptive device information.
  Specific information sought: Demographics of the sample; particular assistive technologies utilized.
  Instrument: Survey questionnaire and checklist developed for this project from previous CAST and DO-IT work.
  Sample size: 6-10 individuals.

Research goal: Qualitative measure: customary use of adaptive technologies and learning supports.
  Specific information sought: Information about students' customary use of assistive technologies and software in study settings.
  Instrument: Structured observations of student work (nature of assignments, tools and system supports used, difficulties, advantages, comments).
  Sample size: 10-20 observations, 1-2 of each individual student in the investigation.

Research goal: Qualitative measure: enhanced use of the experimental CDAKN system.
  Specific information sought: Information about students' capacity to learn and use the system under design and to learn from it.
  Instrument: Structured observations of student work while using the system (including compatibility estimates with existing technologies, usage of specific features, difficulties, advantages, etc.).
  Sample size: 10-20 observations, 1-2 of each individual student in the investigation.

Research goal: Qualitative measure: usability and desirability of the experimental CDAKN system.
  Specific information sought: Information about students' perceptions of the usability and advantages/disadvantages of the system.
  Instrument: Individual interviews with students conducted by staff.
  Sample size: Interviews with all 6-10 students who participate, conducted early and late in the year.

 

2.3 Project Outcomes and Research Issues

The fundamental outcome of the proposed research will be knowledge of how easy or difficult it is to create CDAKNs, how to identify barriers, and how to overcome them. The main practical outcome will be the creation of the CDAKN itself -- the first of its kind. This will serve as a model for further research and for widespread application of CDAKNs. We intend careful evaluation of its effectiveness and continual improvement during and beyond the proposed work. We intend to sustain the project and its results for the long term and will seek continued funding from NSF and other sources, while bringing in additional partners for broader implementation and testing. For instance, as we continue to stress truly universal access, we can mention the area of accessibility for people with learning disabilities and non-readers. We expect that the practical experience of CAST will allow techniques developed in this project to be extended to these groups.

Another substantial outcome will be the research generated by the technology and knowledge integration thrusts needed to build the CDAKN. Computer science research issues addressed in this project include: a) the architecture of a CDAKN and its implications for a CDADOM, b) knowledge synthesis and its universal specification, c) linkage of collaborative systems to knowledge and information resources, and d) abstract specification of customizable interfaces and modular interface hardware.

In the companion research area of universal access, we can also identify important research issues that will be addressed in this project. These include: a) How can interactions that are heavily speech-laden be presented so that individuals who are deaf can interact on an equal footing? b) What strategies can be used to offset the inherent delays in any translation process, when such delays inherently destroy interaction patterns in active discussions? c) How can the fact that the audio tracks from individuals are available as discrete audio signals be capitalized on to provide multiple parallel conversational tracks, especially when people are speaking simultaneously? These need to be perceivable not only by individuals who are deaf or hard of hearing but also helpful for all members of the interaction. d) How can visual props and presentational materials be made accessible in real time to individuals who are blind? What are the gestural and real-time visual events that accompany typical collaborative interactions, and what can be done to prevent them from breaking down the ability of individuals who have low vision or blindness to participate in interactive collaborations or educational endeavors? e) How can pre-scripted pseudo-real-time interactions be capitalized on to enhance the accessibility of collaborative instructional materials? (E.g., instead of taking part in an actual live interaction, the student interacts with an intelligent agent which acts out scripts or responds along with pre-recorded or pre-programmed schemas.) TangoInteractive already allows instant replay of all sessions, and we need to provide an intelligent cross disability interface to this.

We will make extensive use of the Web for dissemination of project information and free software (TangoInteractive and NeatTools). Information on how to obtain low-cost modular interface hardware will also be provided. This would include computer interface boxes and sensor kits and other commercial components listed in the Trace Resource Book, as appropriate. DO-IT has a long history of developing accessible Web pages and will help assure that all Web-based project materials follow the guidelines of the Web Accessibility Initiative (WAI) of the World Wide Web Consortium. Traditional-style presentations, publications (including the DO-IT newsletter), and workshops will also disseminate project results, using the developed CDAKN methodology to make our knowledge truly universal.

A final major outcome of this CDAKN research and development project will be that users with disabilities will have far greater opportunities for SMET education (active learning in a constructivist paradigm, lab participation, lifelong learning) and SMET careers.

 

3: Technical Background

3.1: Knowledge Domains of the CDAKN

As described above, the CDAKN will be used in two distinct roles. Firstly, the team of content and technology developers (Syracuse/CUA/NRH), designers (Trace), and outreach sites (CAST, DO-IT) will use it to define the project itself and to develop initial CDA educational modules. Secondly, as described in sec 2.2, the outreach sites will use the CDAKN to deliver material of increasing sophistication in both education and literacy modes. We have chosen to use material already developed, though not universally accessible at present. Our first area is Internetics (http://www.webwisdom.org), a curriculum developed by PI Fox that combines computational science with modern information and communication technologies. This is a popular course and the easiest test case, as all the material is already prepared in XML and stored in a database with the architecture described in the following section. Another major focus is Science for 21st Century, a large-enrollment course at Syracuse developed by co-PI Lipson and others, with a modular approach to teaching science in an integrated way to non-science majors. Two current NSF grants associated with this course support the development of interactive Web-based educational modules; see http://www.simscience.org and www.phy.syr.edu/courses/CCD_NEW/. We will stress the Science for 21st Century modules, as these are broadly usable at high school, undergraduate, and general science literacy levels. It will also give us an example of a knowledge domain making extensive use of web links that were not developed internally and so require special cross disability attention.

3.2: TangoInteractive Background

TangoInteractive (or Tango; http://www.npac.syr.edu/tango) is an advanced, powerful, and extensible Web collaboratory, and is perhaps the most flexible of systems of its type. It is not aimed at exploring research issues in collaborative system design, but rather at exploring applications such as those proposed here. In this regard, great effort has been put into making the base infrastructure quite robust, so that it can be used outside a tolerant research environment.

Tango is written in Java but supports collaborative applications in any language. Further, Tango is fully integrated with Web browsers, and this provides the basis of convenient, familiar interfaces. To run Tango, one starts the system from a browser and connects to a Tango server. Both the client and server code for Tango are freely available on CD-ROM or from our Web site, which also contains the well-documented APIs for C++, Java, Java Beans, and JavaScript. Currently some 40 separate downloads of the TangoInteractive software are made each week.

Once in the system, the user can select from over 25 collaboratory applications to work on projects with partners. One can play a game of Bridge or Chess, take a class at a virtual university, create and use a public or private chat room, conduct a videoconference, view a movie, or surf with friends using the powerful shared browser. It is possible to do all this at the same time, in any combination, and multiple copies of applications such as chat rooms can be launched. Further, Tango can provide shared sessions for either client- or server-side applications. The latter include both shared (Web-linked) databases (as in the Oracle-based WebWisdom curriculum management system described below) and shared CGI scripts (as in our integration of NCSA's Biology Workbench with Tango). We believe that no other collaboratory system, public domain or commercial, gives you so many applications under such consistent and simple session and floor control.

Besides running Java applets under Tango, one can run JavaScript-based client-side Web applications. Moreover, in Tango the user can take an arbitrary HTML page and automatically turn it into a shared entity. To build a 3D VRML world, populate it with avatars, and let them interact, Tango provides support via two integration modes: VRML JavaScript nodes and External Authoring Interface. Applications written in C or C++ (e.g. PowerPoint) can also be readily adapted to run collaboratively under the Tango API. In this proposal, we will use the C++ interface of Tango to link the NeatTools specialized interfaces. Note that the shared event collaboration model of Tango allows each client to have different views of the same shared application, and this is essential for cross disability access. Shared display systems such as Microsoft’s NetMeeting are less flexible.

3.3: Systems Architecture and Software Infrastructure

3.3.1 Overall Design Principles

As discussed in sec. 2, we intend to build and deploy a CDAKN, which means that we must make particular choices in today's rich and evolving technology world. We do this in the context of a knowledge model described in sec. 3.1, with technology choices based especially on the open standards of organizations like the W3C. However, limitations in commercial systems (e.g. bugs and unimplemented features in web browsers) mean that these lofty principles must be leavened with practical and sometimes ugly implementation choices. Further, although we will articulate and test general architecture principles in this project, we must build on existing software to develop systems that are appropriately robust and functional. Thus we intend to build on two key NPAC technologies, TangoInteractive and WebWisdom, developed to support distance training but with no delivered cross disability support. We believe this is justified not only because of our familiarity with them but because they exhibit two key capabilities: WebWisdom supports the managed integration of distributed educational objects, while TangoInteractive's (unique?) collaborative JavaScript API naturally allows cross disability interfaces to Web documents. Where it is necessary to reference the resultant system, we will term it CDAWebWisdom. NPAC and the Trace Center have produced the preliminary design (http://www.npac.syr.edu/users/gcf/webwisdomrefs/) of this cross disability extension of the WebWisdom/TangoInteractive technologies and will continue their partnership in this proposal.

Our proposed software will be built around an emerging architecture for distributed systems that builds on ongoing convergence of web and distributed object technologies (from COM, CORBA, Java and W3C) to form what is usually called the object web.

Both the hardware and software infrastructure of the object web is changing with remarkable speed and so our plans are necessarily tentative especially in out years. However we believe that the activities discussed below illustrate our approach and in some sense represent a lower bound to our goals for they do not require any major new object web base technology developments. Of course, we will take advantage of any significant new relevant technologies that become available during the performance period and modify our plans accordingly.

3.3.2: Architecture of CDAWebWisdom

Fig. 1: Architecture of Cross Disability Rendering

This proposal aims to help and accelerate the development of common information structures that can both express the application in a general fashion and support cross disability interfaces well. In this fashion, our project will help the development of both cross disability access and the ongoing activities defining key object web standards. The Trace Center is already a participant in the key W3C object model discussions. Our CDAKN is built on the concept that knowledge is formed iteratively by successive organizations of base information "nuggets". These are viewed technically as "distributed educational objects" with a four-level navigation scheme described below. Cross disability access is needed both for the unit information objects and, perhaps even more importantly, for the synopses and indices describing their integration into knowledge. We support the knowledge management by using conventional databases (in our case Oracle) to store persistent information objects and their dynamic organization. Java servers using JDBC map the stored object model into the user view, which is accessed (as in modern web-linked databases) through XML templates. XML is converted into HTML either on the server or (increasingly in the future) in the browser. The XML/HTML Web documents are shared through TangoInteractive, which allows client profiles to optimize the rendering of both the information nuggets and their synopses. This pragmatic mix of conventional databases, Java servers, and XML specification of knowledge and information objects illustrates CDAWebWisdom's technical choices. JavaScript is used to capture interactive events and allow cross disability rendering of dynamic information objects that respect the web document object model (DOM). Currently this DOM is rather erratically designed and implemented by Netscape and Microsoft, but we expect the recent W3C proposals to bring more power and uniformity during the time period of this project.

TangoInteractive can share essentially any distributed object with its defined API’s to multiple languages but we stress web pages here as these are natural realization of shared information for the activities in this project. However this is more general than appears at first sight, because web pages can be the user interface to general server or client side objects – databases (as above), CORBA object brokers, CGI Scripts (as in TangoInteractive’s shared web form interface to NCSA’s Biology workbench), etc.

3.3.3: Collaborative Knowledge and Cross Disability Rendering

TangoInteractive manages the sharing of educational objects and allows each client to optimize its view of the information based on user preferences and on the capabilities of the client machine and network connection. This capability is available in any system using a shared event collaboration model, which allows separation of display from shared object specification. As a simple example, a client with a low-bandwidth network connection would request the low-resolution version of an image, and one serving a user with impaired vision would request the audio augmentation of this image. As shown in fig. 1, we encapsulate this optimized choice of document fragment rendering in terms of a user profile, which can be implemented as a knowledge agent. Collaborative systems like TangoInteractive can be used to share distributed objects between different users or between different display devices for a given user. This replication of objects between different display modalities can be implemented within a single machine or across multiple machines serving a single user.
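The role of the user profile can be illustrated with a small JavaScript sketch; the nugget fields, profile flags, and renderNugget function are assumptions made for this illustration and do not correspond to the WebWisdom schema or the Tango API.

  // Hypothetical information "nugget" carrying redundant modalities, as it might be
  // delivered from the database through an XML template; the field names are illustrative.
  var nugget = {
    title: "Membrane fluctuations",
    text: "A flexible membrane fluctuates under thermal agitation ...",
    imageHi: "membrane-hi.png",
    imageLo: "membrane-lo.png",
    longDescription: "A wire-mesh surface rippling in time, with the largest waves at long wavelengths."
  };

  // The profile plays the role of the knowledge agent in fig. 1, choosing the rendering.
  function renderNugget(nugget, profile) {
    var html = "<h3>" + nugget.title + "</h3><p>" + nugget.text + "</p>";
    if (profile.noVision) {
      // Substitute the long description for the image rather than omitting it.
      html += "<p>" + nugget.longDescription + "</p>";
    } else {
      var src = profile.lowBandwidth ? nugget.imageLo : nugget.imageHi;
      html += '<img src="' + src + '" alt="' + nugget.longDescription + '">';
    }
    return html;   // every client shares the same nugget but renders it differently
  }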

3.3.4: Integration of Asynchronous and Synchronous Learning Models

We note that our model for information includes both asynchronous and synchronous modes supported in a common fashion for cross disability access. We assume that in each case, students and teachers access curriculum material stored as web pages or more generally distributed objects on web servers, object brokers or equivalent. Asynchronous or self-paced learning occurs when each participant accesses this material in his or her own time. Synchronous learning occurs when this same material is replicated among a class and discussed interactively. This model allows a single approach to cross disability access, which is independent of learning model.

3.3.5: Two Level Navigation Model for Distributed Objects

We start with a conventional hybrid information object model and define a distributed information object by a tuple (Page_URL, Component_DOM). This approach views information as a collection of document fragments (labeled by Component_DOM) arranged in pages labeled by Page_URL. When one uses a backend database, this conventional label is mapped into a reference to a database cell, and distributed objects can be constructed at any level of granularity as a collection of the contents of multiple cells. Each cell corresponds to a document fragment specified in XML at the client side and converted in a Java servlet to a JDBC access to the database. Pages are accessed through a web address, file location, CORBA or Java naming service, or whatever hierarchical naming scheme evolves on the object web. A "Page" is, for information underlying traditional education, the basic curriculum unit. It is a "screenful" or "foil" which is discussed by the lecturer or studied by the student as a single unit, with cross-referencing between concepts that does not require tiresome browsing and reloading of the browser page. The conventional hierarchical labeling of Page_URL seems quite natural for future web education and training, with some name like university/college/department/program/course/lecture. However, the information within a given page is much less structured and consists of some often-haphazard arrangement of multimedia information nuggets. Further, fragments within the page can be repositioned dynamically using dynamic HTML as evolved in the W3C DOM.
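To make the two-level addressing concrete, a minimal JavaScript sketch follows; the tuple encoding, the reserved example URL, and the lookup function are illustrative assumptions, not a fixed format.

  // A distributed information object is addressed by the tuple (Page_URL, Component_DOM):
  // the page locates the document, the component id locates the fragment within it.
  function makeObjectRef(pageURL, componentId) {
    return { page: pageURL, component: componentId };
  }

  // Resolve a reference inside the currently loaded page; fragments carry ids so that
  // they can be restyled, repositioned, or re-rendered independently of one another.
  function resolveLocal(ref) {
    if (document.location.href.indexOf(ref.page) === 0) {
      return document.getElementById(ref.component);
    }
    return null;   // the page itself must first be fetched through its hierarchical name
  }

  var ref = makeObjectRef(
    "http://university.example/college/physics/sci21/lecture5/foil12.html",
    "energy-diagram");   // hypothetical fragment id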

This two level model will be used in our initial work in this project as it essentially represents current practice. We will support a limited view of knowledge integration at this stage with all participants allowed to browse the hierarchical page structure and to dynamically arrange pages into new information streams. The XML templates that define the interface to document fragments in the database will be extended to support customized rendering as shown in fig. 1.

3.3.6: Four Level Navigation Model Supporting Knowledge Integration

As part of this project, we will investigate a new approach to document object models, designed to support both an easier definition of the overall structure of the document and the dynamic linkage of input-output devices to components. We return to the hierarchical structure labeled by the tuple (Page_URL, Component_DOM). We wish to support the hierarchical grouping of information described in section 3.1. In this regard we consider a four-way grouping of information -- namely the Internet or World Wide Web, the SessionWeb, the Page, and the document fragment. As emphasized earlier, we will follow the marketplace in the area of resource discovery and coupling to the hierarchical URL namespace defining the World Wide Web. We will use appropriate metadata, such as those proposed by Educause's IMS project, to integrate educational objects into the topology of the resource-discovery world. Here, however, we focus on the natural organization of knowledge in a "session" such as is found in a lecture or a single self-study activity. We now discuss this limited fine-grain or local Session Wide Web, which we abbreviate to SessionWeb. This is a subset of the object web whose transactions are the natural units of learning and whose contents are persistent objects whose methods support such transactions. For instance, for a lecturer, the SessionWeb consists of all pages relevant to a particular lecture as well as all their subcomponents. This local SessionWeb is of course likely to be dynamically updated with outside links as topics come up during the lecture. We include in this concept all local navigation, both within pages and within the document space of a given learning session. In particular, this definition allows the lecturer to pick and choose between presentation material in an order that is determined in real time. This contrasts with clumsy frameset technology and the static sequential order convenient in most systems (e.g. PowerPoint) today. In a more general browsing activity, a student learner's SessionWeb would be less structured and would roughly consist of all pages and components stored in the browser cache. In this way, we can customize the display to accommodate different learning styles for each student. Technically the SessionWeb is quite small and so able to support richer linkage and access models using very fast client-side technologies such as Java and JavaScript, with the data structures stored in memory. One approach to the SessionWeb that is attractive today is based on Sun Microsystems' JavaSpaces and Jini technologies, but these are of course only illustrative of appropriate technologies, and better choices may become available.
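A minimal sketch of the SessionWeb idea follows; the object shape and function names are illustrative only, and a production version might instead sit on JavaSpaces or a successor technology as noted above.

  // The SessionWeb: the small set of pages (and their fragments) relevant to one lecture
  // or self-study session, held client-side for fast and flexible navigation.
  var sessionWeb = {
    pages: {},    // Page_URL -> { summary: "...", components: {...} }
    order: []     // presentation order, decided in real time rather than fixed in advance
  };

  function addPage(pageURL, summary) {
    if (!sessionWeb.pages[pageURL]) {
      sessionWeb.pages[pageURL] = { summary: summary, components: {} };
    }
  }

  // The lecturer (or a browsing student) can re-order material on the fly,
  // in contrast to the fixed sequential order of framesets or slide shows.
  function presentNext(pageURL) {
    sessionWeb.order.push(pageURL);
    return sessionWeb.pages[pageURL];
  }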

We will build a prototype of such a rich SessionWeb object model linked to TangoInteractive. This will be done in the last half of the project, after we have further experience from using the existing W3C DOM. We expect this SessionWeb model to give considerable insight for future designs of object models with richer navigation models supporting the knowledge structure discussed in sec. 3.1, with definition of document components and their dynamic linkage as well as their interface to input-output devices.

3.3.7: Range of Authoring Strategies

We will look at cross disability access for the following types of educational pages which show increasing sophistication in terms of authoring tools and hence internal W3C DOM structure. Each authoring method supports either synchronous or asynchronous views of curricula as described in sec. 3.3.4.

  1. Conventional and dynamic HTML Pages.
  2. PowerPoint exported to the web using Microsoft’s Internet Assistant and modest restructuring to better define object components (labeled by Component_DOM).
  3. PowerPoint accessed via COM components stored in the backend database, which allows one to properly define a base object model. Web export uses XML templates, which allows support of the multi-resolution images and cross disability access discussed earlier.
  4. We will elaborate the object structure seen in pages of the types 1) through 3) in various ways, such as through the addition of pointers, glossaries, notes and quizzes in fashions popularized by tools like WebCT.
  5. Java applets are used in some of the best interactive educational curricula and these are well supported with our existing collaborative technology -- especially if they are constructed according to the Javabean design frameworks.

3.4 Assistive Devices and Cross Disability Interfaces

NeatTools Background. We have been developing NeatTools, a visual programming and runtime environment for interfacing humans and computers. It enables users to input information to a computer through various kinds of sensors and devices and, among other things, displays the information in the form of text, graphics, audio, video, or other methods. One constructs a dataflow network (visual program) in this environment by dragging and dropping objects (modules) from an on-screen toolbox to the desktop workspace and then connecting them with lines that establish input, output, and parameter-control relationships. Editing and execution of programs occur simultaneously, so no compilation is necessary. NeatTools is written in C++ on top of a Java-like cross-platform application programming interface (API), so that it can run on multiple platforms including Windows 95/98/NT, Unix, Irix, and Linux. It can interface with serial, parallel, and joystick devices; other significant features include Internet connectivity, display of time signals, mathematical and logic functions, character generation, multimedia, Musical Instrument Digital Interface (MIDI) controls, and a visual relational database with multimedia functions. A developer's kit for writing new external modules is also available online for those proficient in object-oriented programming in C++.

Devices Background. We have also developed the palm-sized TNG-3 hardware interface box, which detects signals from sensors and switches. Both TNG-3 and the latest version, TNG-3B, have 8 analog and 8 digital input channels and stream data to the serial port of a personal computer at 19200 baud. We also have a working bench prototype of TNG-4, which has more capacity and versatility, with 8 analog and 22 digital lines that are dynamically bi-directional. In other words, each digital line can serve as an input or an output, and this can be dynamically reconfigured at any time within NeatTools by manual or automatic control. We have used NeatTools to interface various types of hardware devices to TNG-3, including displacement potentiometers, photocells, magnetic sensors (Hall Effect transducers), pressure transducers, and bend sensors. The customizable and extensible features of these modular hardware and software systems are important for the project goal of extending such technology to accommodate users with a broad range of disabilities.
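As a sketch of how such sensor input could reach the collaborative environment, the following JavaScript fragment assumes that the device layer already delivers decoded 8-bit analog channel readings; the threshold, channel number, and the activate() intent are hypothetical, and the actual TNG data framing and NeatTools dataflow modules are not reproduced here.

  // Turn a bend or pressure sensor reading (assumed already decoded to 0-255) into the
  // same abstract "activate" intent that a mouse click or key press would produce.
  var THRESHOLD = 180;       // illustrative trigger level
  var wasPressed = false;

  function onAnalogSample(channel, value, handlers) {
    if (channel !== 0) { return; }            // watch one illustrative analog channel
    var pressed = value > THRESHOLD;
    if (pressed && !wasPressed) {
      handlers.activate();                    // edge-triggered, so holding the sensor fires once
    }
    wasPressed = pressed;
  }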

 

 

A Cross-Disability-Accessible Knowledge Network for Education and Collaboration in Science and Technology

Appropriateness for KDI and Roles of Project Personnel

This project is inherently cross disciplinary involving central themes of computer science and universal access and having importance in the areas of learning environments, interface physics and telemedicine. The project directly addresses the goals of the KDI program as it researches the issues involved in building cross disability collaborative and learning networks. It will build testbeds that instantiate such knowledge networks.

The P.I. Fox will naturally be responsible for overall coordination of the project. NPAC, directed by Fox, will be responsible for the software systems needed for the prototype testbeds developed in this project, and this effort will be led by Podgorny. This work includes continual enhancement of TangoInteractive to support cross disability rendering of web documents respecting the evolving W3C document object model. The database backend, the XML/HTML views of it, and the archiving of learning sessions will be supported by NPAC.

The Trace Center will address the issues that arise in creating CDA multi-modal interactive environments and will ensure the project is integrated with the standards development at the W3C and IMS. They will actively work with NPAC on the overall design of the system, which is the responsibility of Gilman and Fox.

Lipson at Syracuse will lead the identification and development of the special assistive interfaces and their needed device drivers. Catholic University will co-develop interface technology with this Syracuse team, and will develop assessment ‘instruments’ for formative evaluation. Their work will provide alternative rendering of the knowledge network and so enable more quantitative assessment of the chosen human computer interfaces.

CAST (Boston) and DO-IT (U Washington) will be responsible for identifying appropriate users and for deploying and evaluating, with the necessary assessment infrastructure, the testbed developed by the collaboration.

Note that Vanderheiden from the Trace center and Fox are team leaders in the joint Alliance/NPACI EOT (Education, Outreach and Training) activity in areas of universal access, learning technologies and graduate education. We expect to use the EOT teams as an informal resource throughout the project.

 

A Cross-Disability-Accessible Knowledge Network for Education and Collaboration in Science and Technology

Results from Prior NSF Support

Geoffrey Fox and Edward Lipson

NSF award number: ASC-9523481 Dates: 11/1/95–10/31/99 Amount: $927,935 (total costs for three years, plus current no-cost extension; not including two supplements discussed below); "Integration of Information Age Networking and Parallel Supercomputing Simulations into University General Science and K–12 Curricula"

This Metacenter Regional Alliances grant is concerned with developing Web-based educational modules based on four supercomputing simulations projects: a) membrane fluctuations, b) fluid dynamics, c) crackling noise and associated hysteresis, and d) crack propagation in societal structures, such as dams. The former two projects are conducted at Syracuse University, respectively in the physics department and in aeronautics. The latter two take place at Cornell under a subcontract. Additional information on all four modules is available via our grant project Web site (www.simscience.org). We have created Java applet versions of both our fluid- and crystalline-membrane simulations (which arise from representations of "string" theories in particle physics and cosmology). We have also written several other Java applets to illustrate other ideas in physics and principles behind the main simulations. For example, we have written an applet that simulates a simple spring—how the force and stored energy change with extension—to illustrate how the springs used in our crystalline membrane applet work. In addition to Java applets and digital video we have used virtual reality modeling language (VRML) to visualize the output of off-line membrane simulations. We are using examples from everyday life and biology in particular to motivate explanation of the concepts underlying membrane physics. We have also demonstrated collaborative versions of some applets using NPAC's TangoInteractive collaboratory system. This project drove the design of the TangoInteractive interface to Java and Javabean modules. The cross product, spring, planetary motion and computational fluid dynamics collaborative applets have been shown in numerous talks and exhibits to demonstrate the principles behind the use of shared Java in Education. This project also drove research by Fox into new approaches to integrating physics and computer science into education as part of the concept of Internetics. These ideas were described in a book chapter cited in the publications and will be tested this fall in a new undergraduate course PHY 300 aimed at non-science majors interested in communicating ideas using the physics and computer science ideas developed in this and other grants. The MRA project is carried out with the participation of two postdoctoral research associates at Syracuse (one in physics and one at NPAC) and several graduate and undergraduate (REU) students at both institutions. We were awarded a $25,000 REU supplement in the summer of 1997. In addition, we have been awarded a $350,000 supplement for integration of this project with vBNS/Internet II.

Publications

 

Gregg Vanderheiden Ongoing NSF Support from the NSF Partnerships for Advanced Computational Infrastructure (PACI) Program

The Universal Design/Disability Access (UD/DA) for Advanced Computational Infrastructure project contributes to PACI progress by assisting the PACI partners in applying Universal Design practices and by facilitating the inclusion of people with disabilities in all aspects of the PACI program.

The program advances these two goals through five kinds of activities: Education and Awareness, Resources, Tools, Assistance, and New Techniques through Synergy. This five-fold strategy persists over the five-year life of the program and is developed in more detail on the project home page at http://trace.wisc.edu/world/paci/.

All five of these strategies have produced results in GFY '99:

Education and Awareness: UD/DA conducted research demonstrations in the EOT-PACI booth at SC'98. The Trace Center also presented PACI and other NSF-funded accessible educational technology to the assistive technology and disability access community in its CurbCuts Room at the CSUN Conference. Additional educational opportunities were found through participation in a conference on reaching senior citizens with health information via the WWW and a workshop on future Home Care Technologies sponsored by the NIDRR RERC on TeleRehabilitation, among others. Collaboration with the TeleRehabilitation researchers promises to be a productive strategy for achieving robust and educational Interaction Environments.

Resources: The UD/DA team contributed to the EOT-PACI team's development of the "Touch the Future" outreach volume for the SC'98 Conference and to an overhaul of the team Web site at that time. The Web Content Accessibility Guidelines from the W3C have gained a great deal in both comprehensibility and authority during this period; as of April 30, 1999, they have achieved the status of a Proposed Recommendation and are under review by the W3C member companies.

Tools: A module, intended for inclusion in Web editors, that prompts authors toward accessible practices, and a tool that speeds the creation of captions and related access aids for multimedia are both under development and have demonstrable prototype versions.

Assistance: The UD/DA team has helped NPACI articulate "common application needs for Interaction Environments" and approaches to the use of XML in client/server communication; this work has just begun and will be continued. Contact has been made with the Instructional Management System project of EDUCAUSE, and accessibility approaches for that Learning Technologies standards project are under discussion. It was also helpful to the Home Care Technologies workshop that we were able to inform them of the developments underway in Immersive Environments and Interaction Environments in PACI.

New Techniques through Synergy: Combining the EOT activities of the Alliance and NPACI into one team has allowed a critical mass to form around the topic of Learning Technologies. This is a positive development because the projection of Advanced Computational Infrastructure into learning and teaching situations has many of the same flexibility and scalability requirements as universal design and disability access. Through the formation of this focus area in EOT-PACI, the UD/DA team has been able to form strategic alliances such as that with this KDI team.

 

Currently there are no publications from this activity.

 

Sheryl Burgstahler Summary of Prior NSF Support

NSF Award: #HRD-9550003; Amount: $1,500,000; Period: 10/1/95–9/30/99.

Title: DO-IT Extension

Summary of Results: The project delivered summer programs and Internet activities for students with disabilities, printed publications, videotapes, a World Wide Web site, workshops, mentoring, conference presentations, and participant tracking. It also produced summaries of survey data from participants, including high school students with disabilities, mentors, parents, and instructors.

Publications from Project:

Burgstahler, S. E. (1998). Making Web pages universally accessible. Computer-Mediated Communications Magazine, 5(1).

Burgstahler, S. E., & Comden D. (1998). World wide web. Creating a level playing field. Ability, 98(2), 56-59.

Burgstahler, S. E. (1997). Teaching on the Net: What's the difference? T. H. E. Journal, 24 (9), 61-4.

Burgstahler, S. E. (1997). Peer support: What role can the Internet play? Journal of Information Technology and Disability, 4(4).

Burgstahler, S.E. (1997). New kids on the net: A tutorial for teachers, parents and students. MA: Allyn and Bacon.

Burgstahler, S. E. (1997). College: You can do it! Closing the Gap, 16, 1.

Burgstahler, S. E. (1997). Students with disabilities and the online classroom. In Z. Berg and M. Collins (Eds.), Wired Together: The Online Classroom in K-12, Volume I. Cresskill, NJ: M. Hampton Press, Inc.

Burgstahler, S. E., Baker, L. M., & Cronheim, D. (1997). Peer-to-peer relationships on the Internet: Advancing the academic goals of students with disabilities. National Educational Computing Conference Proceedings.

Burgstahler, S. E., & Comden, D. (1997). World wide access. Focus on libraries. Journal of Information Technology and Disability, 4, 1-2.

Burgstahler, S. E., Comden, D., & Fraser, B. (1997). Universal access: Designing and evaluating web sites for accessibility. Choice, 34, 19-22.

Burgstahler, S. E. (1996). Creating an electronic community on the Internet. National Educational Computing Conference Proceedings.

Burgstahler, S. E. (1996). Equal access to computer networks for students and scholars with disabilities. In T. M. Harrison and T. D. Stephen (Eds.) Computer Networking and Scholarly Communication in the Twenty-First-Century University, 233-241. Albany, NY: State University of New York Press.

Burgstahler, S. E. (1996). How to create a successful electronic community on the Internet. Proceedings: Technology and Persons with Disabilities, Eleventh Annual International Conference. Northridge, CA: California State University.

Burgstahler, S. E. (1996). Teaching science lab courses to students with disabilities. Proceedings: Technology and Persons with Disabilities, Eleventh Annual International Conference. Northridge, CA: California State University.

Burgstahler, S.E. & Olswang, S. (1996). Computing and networking services for students with disabilities: How do community colleges measure up? Community College Journal of Research and Practice, 20(4), 363-376.

Burgstahler, S.E. (1995). Distance learning and the information highway. Journal of Rehabilitation Administration, 19(4), 271-278.

Burgstahler, S.E. (1995). Faculty facilitate research for students with disabilities. Council on Undergraduate Research Quarterly, 8-11.

Burgstahler, S.E. (1995). Cooperative education and students with disabilities. Journal of Studies in Technical Careers, 15(2), 81-87.

Burgstahler, S.E. (1995). Technology eases the transition to college for students with disabilities. Learning and Leading with Technology, 23(1), 39-41.

 

Corinna E. Lathan Summary of Prior NSF Support

PI on Current NSF Small Grant for Exploratory Research

IIS-9813548, $75,000, Oct. 1, 1998–Oct. 1, 1999

 

Title: Personal Augmentation Devices (PADs): Exploratory Agents To Enable Tele-Interaction, Evaluation, And Development Of Abilities In Persons With Severe Disabilities

 

The scope of this grant is the initial exploration and development of prototype personal devices, controlled by physiological signals, for the purpose of augmenting human function. The objective is to provide children who have severe motor disabilities with a device that can navigate and manipulate the external environment under their control.

 

Number of Students Supported: 2 Graduate Students, 2 Undergraduate Students

Number of Papers Generated: 3 Conference Papers, 1 Journal Paper in Progress, 1 Patent Pending

 

PI on Previous planning grant

IRI-9712526, $18,000, Aug. 15, 1997–December 1998

Title: Quantitative assessment in complex multisensory human-interface environments

This planning grant identifies and assesses advanced input and output devices associated with complex multisensory interfaces from a human-computer interface design perspective, and explores potential methods for measuring performance in rehabilitation environments that use these interfaces.

Number of Students Supported: 1

Number of Papers Generated: 2 Conference Papers

 

 

A Cross-Disability-Accessible Knowledge Network for Education and Collaboration in Science and Technology

Dissemination of Results, and Institutional Commitment

 

We have summarized our plan for dissemination in section 2.3 of the project description. Because our proposal is built around novel Web-based approaches to cross-disability communication, we will put our results into practice by using them, where appropriate, in the dissemination process itself. We have already started to use TangoInteractive to support remote lectures as a natural extension of its application to education and training, and we will extend this as part of the dissemination plan of this project. Note that our methods are carefully designed to allow the same material to be cross-disability accessible whether used asynchronously as a conventional Web site or delivered interactively using TangoInteractive. We will also use electronic and traditional newsletters and exploit the existing outreach channels of the participants. These include the NSF Partnerships for Advanced Computational Infrastructure (PACI), where we collaborate through Trace and NPAC in the EOT (Education, Outreach and Training) program. An important activity identified in the main text is the integration of our results into the activities of relevant standards bodies. These include EDUCAUSE's IMS (Instructional Management System) project, where we are already planning to use TangoInteractive to demonstrate their messaging standards as well as working with them to integrate accessibility issues into their instructional technology specifications. Equally important is the work of the Trace Center with the W3C on cross-disability issues in its user interface (HTML, DOM) standards for Web documents.

The facilities needed by this project include the normal institutional resources, which will be available, as all partners have well-established activities and support mechanisms that we will leverage in this proposal. Operationally, our project needs significant network and computer resources, and we have substantial experience from numerous TangoInteractive events as to the requirements in this area. The good news is that high bandwidth is not needed: a single TangoInteractive client needs about 100 kilobits per second to support video and audio, and in cases where one drops the video component of digital audio-video conferencing, ordinary dial-up modem performance is sufficient. The bad news is that one does need excellent quality of service (QoS), and current technology and network deployment plans emphasize high bandwidth rather than QoS in the near term. We will use the vBNS (as installed at major project sites, and giving QoS as a byproduct of bandwidth) to enable this project, and NPAC staff will help identify network issues in our deployment testbeds managed by CAST and DO-IT. Conventional Windows 98 and NT clients with inexpensive peripherals (about $200) will be needed at deployment sites. The Java TangoInteractive servers do not require large machines; modest UNIX or NT boxes are sufficient. However, the Web and database servers will see high traffic, and here we will use major Sun resources just installed at NPAC through a hardware donation from Sun Microsystems. These will be supplemented with an appropriate network of proxy servers and, if necessary, mirror sites in the deployment testbeds.
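
As a back-of-the-envelope illustration of why raw bandwidth is not the limiting factor, the following sketch applies the roughly 100 kilobit per second per-client figure quoted above to a hypothetical class of 25 simultaneous clients; the class size is an assumption made only for illustration, not a project requirement.

    // Back-of-the-envelope sketch (not project code): aggregate load for a
    // TangoInteractive session, using the ~100 kbit/s per-client figure above.
    public class SessionBandwidthEstimate {
        public static void main(String[] args) {
            final double perClientKbps = 100.0; // audio plus video, per the estimate above
            final int clients = 25;             // hypothetical class size, for illustration only
            double aggregateMbps = perClientKbps * clients / 1000.0;
            System.out.println("Aggregate load for " + clients + " clients: "
                               + aggregateMbps + " Mbit/s");
            // Roughly 2.5 Mbit/s: small compared with vBNS capacity, so the limiting
            // factor is quality of service (latency and jitter), not raw bandwidth.
        }
    }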

For the work of co-PI Lathan, we note that the Rehabilitation Engineering Research Center (RERC) on Telerehabilitation is headquartered in Catholic University's Home Care and Telerehabilitation Technologies Center in Biomedical Engineering (http://www.hctr.be.cua.edu/). The RERC will provide support in the form of laboratory space for hardware and software integration, as well as access to clinical expertise through ties with the National Rehabilitation Hospital in Washington, DC, and the Sister Kenny Institute in Minnesota.

 

 

 

 

A Cross-Disability-Accessible Knowledge Network for Education and Collaboration in Science and Technology

Performance Goals

1. Baseline Plan (Month 3)

We will meet, summarize our current capabilities, and establish an initial strategy to meet the broad goals of the proposal, building on the evolution of existing technologies and strategies. This plan will establish the process for setting up testbeds, selecting appropriate initial curricula, and driving development of novel strategies and technologies for cross-disability information access. We will start using TangoInteractive for the project knowledge network.

2. Initial CDAWebWisdom 1.0 Release (Month 10 or earlier)

This software will be aimed at supporting distance training of visually impaired users and will drive activities at CAST, DO-IT, and Trace on establishing testbeds and setting up an evaluation plan.

3. Cross-Disability DOM Study (Month 14 or earlier)
Using initial results from CDAWebWisdom 1.0 and general experience, we will establish the requirements and approach for a version of the W3C DOM supporting the CDAKN; a brief illustration of DOM-based access to text alternatives appears after this list of goals.

4. Initial Testbeds and early Evaluation (Month 14 or earlier)

We will have conducted initial training sessions and completed informal evaluation, allowing immediate feedback to the CDAWebWisdom system.

5. Release of CDAWebWisdom 1.1 supporting a range of disabilities (Month 18 or earlier)
This will support users with hearing or physical disabilities; support for the latter will be integrated with the interface work at Catholic University and Syracuse. This system will also support cross-disability knowledge summaries as a prototype of the SessionWeb concept described in Sec. 3.3 of the project description.

6. Major Mid-Term Outside Review (Month 21 or earlier)

At this stage we will have performed initial deployment across a range of disabilities and obtained initial formal evaluations. We will discuss progress and next steps with a group of outside experts.

7. Release of CDAWebWisdom 2.0 supporting a range of disabilities and an improved architecture built around the concepts of CDADOM and SessionWeb (Month 26 or earlier)
This will enable the last round of testbed deployment and evaluation activities. Throughout the technology evolution there will be a corresponding activity to prepare curricula in CDA form; by this date, all material from both physics and computer science will be available.

8. Final Report and Public Workshop (Month 36 or earlier)

This will be a consolidated description of recommended practices for knowledge capture and access, to ensure seamless integration of people with different disabilities in collaborative work and learning/teaching activities employing both retained knowledge and real-time interaction. We will summarize our experiences and either hold a separate open workshop or bootstrap such an activity as a part of a national meeting.
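
As a brief illustration of the DOM-based approach referenced in goal 3, the sketch below uses the standard Java org.w3c.dom interfaces to walk a well-formed (XHTML) page and report the text alternatives available to a screen reader or Braille display. It is a simple example of the general technique, not the proposed CDADOM itself; the file name is a placeholder.

    // Illustration only: the same parsed document tree can be queried for visual
    // content (img elements) and for the text alternatives a non-visual client renders.
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class AltTextWalker {
        public static void main(String[] args) throws Exception {
            // "lecture.xhtml" is a placeholder for any well-formed XHTML page.
            Document doc = DocumentBuilderFactory.newInstance()
                                                 .newDocumentBuilder()
                                                 .parse("lecture.xhtml");
            NodeList images = doc.getElementsByTagName("img");
            for (int i = 0; i < images.getLength(); i++) {
                Element img = (Element) images.item(i);
                String alt = img.getAttribute("alt"); // empty string if the attribute is absent
                if (alt.length() == 0) {
                    System.out.println("No text alternative for: " + img.getAttribute("src"));
                } else {
                    System.out.println("Text alternative: " + alt);
                }
            }
        }
    }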

 

 

A Cross-Disability-Accessible Knowledge Network for Education and Collaboration in Science and Technology

Management Plan

The PI and co-PIs will together constitute an executive committee that will jointly coordinate the project. They will meet at least monthly using TangoInteractive in a multimedia videoconferencing mode, incorporating, as it is developed, the cross-disability support to be designed and built in this project; meeting agendas will be posted in advance on the Web. When appropriate, other team members at the various sites (including those with disabilities) will participate during part of these meetings to present results and raise any issues of general concern. In any case, general meetings will be conducted online at least bimonthly. Continual communication among all participants will take place by e-mail, telephone, and Web page postings (with e-mail alerts). An enlarged technical committee will also be formed and will communicate similarly.

In addition, the results and plans of the project will be maintained on cross-disability-accessible Web sites at all participating institutions so that we can compare notes and progress across our various sites. As stated in the project description, the mode of collaboration itself will constitute part of our study of cross-disability knowledge networks.

The main project will be divided into subprojects in the following areas:

As summarized in the discussion of senior personnel roles, individual members of the executive committee will each be assigned to be in charge of one or two of these areas. Overall management will be organized and tracked using a program such as Microsoft Project to establish goals, targets, and roles assigned to team members. The Trace Center will be responsible for supporting this management structure as the lead editor of a set of documents and online resources produced systematically during the project, capturing requirements, technologies, lessons, and evaluations keyed to the performance goals summarized in the previous section.

Both the Trace Center and Syracuse University are familiar with large multidisciplinary projects: both are part of the NCSA Alliance, and Fox has been part of the NSF Science and Technology Center CRPC (Center for Research on Parallel Computation) since its inception. This experience will be used in setting up and implementing the management of this project.