
Snapshots of interactive multimedia at work across the curriculum in deaf education: implications for public address training.

A review of the literature yields many intriguing applications of interactive multimedia technology that can be seen through a series of "snapshots" describing current projects and initiatives in deaf education. The five main categories chosen to represent these activities are: instructional design, communication bridges, skill development simulations, distance learning practices, and discovery learning. Throughout the discussion of these projects, the reader will be supplied with relevant information regarding bandwidth, the digital divide, feedback, digital rights management, and distribution issues. Attention will then turn to the secondary goal of connecting the lessons learned and the resources available in these arenas to the specific topic of public address training. The author concludes that a survey is necessary to identify deaf students' perceptions of technology's ability to aid in public address practice and performance.


The practice of teaching through interactive multimedia is a growing phenomenon that has the potential to radically alter the way instruction is delivered and processed. Deaf education has embraced this relatively new technology and is putting it to the test in many areas of academia and beyond. In fact, "Active Learning has long been considered a proven method in improving attention, motivation and retention of concepts taught in class. This is especially true for deaf students ...," reported Burik and Kelly (2003, p. 1). The idea that technology can be a catalyst for that active learning approach is at the heart of this article. So what, then, is "interactive multimedia"? Although there are many definitions, Azra Akhtar (2003) described it as using many different media (print, audio, video, etc.) to present more comprehensive information than any medium alone can, accommodating children with different learning styles, and employing interactivity to stimulate children to become active, motivated learners. She goes on to say, "Therefore the demand to develop interactive learning material is of paramount importance as it is intended to help enhance the learning experience of deaf students and make the curriculum more accessible" (Akhtar, p. 2). Throughout this discussion, the designation "deaf" will be used to identify individuals in a cultural minority group who have varying degrees of hearing loss.

The purpose of this article is twofold. First, a review of the literature yields many intriguing applications of interactive multimedia technology that can be seen through a series of "snapshots" describing current projects and initiatives in deaf education. The five main categories chosen to represent these activities are: instructional design, communication bridges, skill development simulations, distance learning practices, and discovery learning. Throughout the discussion of these projects, the reader will be supplied with relevant information regarding bandwidth, the digital divide, feedback, digital rights management, and distribution issues. Attention will then turn to the secondary goal of connecting the lessons learned and the resources available in these arenas to the specific topic of public address training. In the words of James Fernandes (Fernandes & Fernandes, 2002), "The eloquent words and signs of deaf orators helped shape the passage of history of the American deaf community" (p. 1). However, a limited amount of research is present in the literature on this vital topic, especially with regard to the contribution of technology. Thus, an initial framework for survey and empirical research will be proposed. As interactive multimedia becomes less cost prohibitive and more user-friendly, the potential increases for teachers, students, and parents to move this research from theory to practice to benefit a wide range of deaf children and youth.

PROJECT SNAPSHOTS

Interactive Multimedia Promotes Instructional Design

The lack of accessible interactive materials for deaf students is a widely known problem (National Center to Improve Practice [NCIP], n.d.). In fact, many deaf educators will accept even low-quality materials with enthusiasm due to the limited software available for this specialized market. However, in recent years more CD-ROMs have been developed for deaf students both by corporations and by individual teachers. A project in the late 1990s entitled Acquiring Literacy through Interactive Video Education (ALIVE) allowed instructors to create hyperlinks, charts, pictures, and other elements intertwined with video to cultivate in-depth discussions. Project coordinator Dr. Cynthia King of Gallaudet University reported that the professional-looking appearance was motivating and that students were able to join in by developing their own multimedia presentations (NCIP, n.d.).

In the past few years, the most revolutionary innovation for producing interactive multimedia for deaf children has come from Vcom3D, a private business that often collaborates with the National Science Foundation and other agencies (Roush, 2003). They have developed 3D animated signing "avatars." Avatars can be manipulated in terms of signing speed and viewing angle, thus adding another element of interaction for the user, and they require much less bandwidth than traditional video (Parton, 2002). By using SigningAvatar technology, educational websites and software that would otherwise be inaccessible become useful for deaf educators. Amber Emery (2002) explained, "Regrettably, quality science materials designed to engage students in experiences that result in mastery of standards-based learning outcomes are often inaccessible to students who are deaf or hard of hearing and whose first language is sign language." Therefore, two science curriculum units, part of the Kids Network, were among the first to be designed with avatars incorporated into the material (Emery). The interested reader is encouraged to visit the "What's the Weather" website (Signing Science, 2003). The project has many other interactive features beyond the avatars, including data analysis and sharing through web-based forms that lead to "hands-on, minds-on" experiences. An important issue to consider with the SigningAvatar technology is the distribution method. Products and websites can be encoded with the commands to create the sign translations, but individuals can only see the avatars if they have the player. The high costs have prevented most schools for the deaf, let alone public schools with mainstreamed deaf children, from obtaining it. A "catch-22" cycle persists whereby more sites would use the signing avatars if more students had the player, but more students would buy the player if more sites incorporated the avatars. As explained later in this article, many teachers are opting to use newly available tools to create their own multimedia lessons. Vcom3D does have a tool to create the animations, the Sign Smith Studio (Roush); however, it costs $3,000--again emphasizing the digital divide between those schools that can and cannot afford to invest in it. One important strength of the Studio is that it automatically grants the user the copyright to his/her animated sequences, meaning they can be captured and displayed in multiple locations.
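
A rough size comparison makes the bandwidth argument concrete. The following sketch uses invented figures (Vcom3D's actual command encoding is not described in the sources cited here):

    # Illustrative arithmetic only -- not Vcom3D's actual encoding. It shows
    # why a script of sign commands is far smaller than video of a signer.
    signs_in_passage = 120       # signs in a two-minute signed passage
    bytes_per_command = 32       # hypothetical gloss token + speed/angle data
    script_size = signs_in_passage * bytes_per_command    # 3,840 bytes

    seconds = 120
    video_bitrate_bps = 300_000  # a modest compressed video stream
    video_size = video_bitrate_bps / 8 * seconds          # 4,500,000 bytes

    print(f"avatar script: {script_size / 1024:.1f} KB")   # 3.8 KB
    print(f"video clip: {video_size / 1_048_576:.1f} MB")  # 4.3 MB
    # A gap of roughly three orders of magnitude, which is why encoded sign
    # commands suit low-bandwidth distribution -- if the client has the player.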

Switching the focus from science to math, a research-team-led project follows in Vcom3D's footsteps of using 3D animation, but with hands rather than complete avatars. Adamo-Villani and Beni (2003) wanted their project to "increase the opportunity of incidental learning of mathematical concepts via interactive media." The resulting noncommercial product was developed with Macromedia Director. Another research-team-initiated project recently occurred in the United Kingdom and focused on reducing barriers for college-age deaf students. Akhtar (2003) explained the rationale for the program:
 A questionnaire sent out to 200 UK colleges and Universities
 indicated a demand for interactive CD-ROM product development for
 deaf students studying on a number of courses, including
 Accounting ... Very little research has been conducted to assess the
 effectiveness of producing multimedia interactive learning materials
 for deaf students in secondary. (p. 5)


These materials, designed with a visual emphasis, are vital to deaf students, and the interactivity promotes active, motivated learners regardless of their learning style (Akhtar). The researchers summarized, "The participative interaction includes much more than simply navigating through the content. Learners should be able to actively engage in interacting with the multimedia content instead of receiving it passively" (Akhtar, p. 6). One can see that this and similar projects are working to implement more advanced stages of interaction with a strong sense of instructional design.

Beyond the commercial and researcher-oriented instructional design projects, perhaps those with the most practical impact are the ones created by individual instructors. As the design tools become easier to use and more affordable, teachers can craft specially designed interactive software in a reasonable amount of time (Rawlings, n.d.). For example, the new "ASL Clip and Create" allows users to combine text, sign illustrations, graphics, and other elements to create a wide range of lessons. Robust, general-purpose software packages such as Macromedia Flash or Director have graphical user interfaces and enough flexibility to benefit a multitude of tasks. On a smaller scale, "While not recognized in the category of multimedia authoring programs, Clicker4 has the capacity and the tools to support the design of interactive multimedia resources" (Rawlings). It is described as having flexible design features with onscreen displays (grids) where graphics, text, and video files all combine to create learning resources (Rawlings). The focus of Rawlings' work is on teacher-developed lessons that reinforce English literacy skills through interaction and feedback. She reported that the instructors have been able to design the materials themselves to the extent that they have access to technology such as video editing software. However, digital rights management is an issue, since the storybook-based lessons require scanning the illustrations, which are sometimes copyrighted (Rawlings). One final example illustrates the divide in interactive multimedia products between the United States and other countries. Phil Thomson, a deaf educator in New Zealand, reported:
 In the US, educationalists have been making interactive programmes
 with video clips of ASL--American Sign Language--since at least
 1991. These programmes are of no use to us here in NZ. The language
 is not the same! After much searching, an easily-learnt multimedia
 authoring programme was found which enabled me to use my experience
 as a teacher of the deaf to create an interactive learning
 environment for deaf children on CD-ROM. This was not done by
 experienced programmers but by a practicing teacher with no
 programming skills thanks to Illuminatus (Comm Unique, n.d.).


Every year more interactive material is developed by teachers, and that in turn has led to students participating in the design process themselves. At the Laurent Clerc National Deaf Education Center, for example, students have collaborated to build an interactive digital video dictionary (Stifter, 2001). ThinkQuest, a competition for student-created interactive websites, is another example that provides hands-on tasks that motivate and challenge children. Interactive multimedia was once only feasible for large corporations to produce, but now it is being designed in classrooms across the world.

Interactive Multimedia Promotes Communication Bridges

Communication between deaf and hearing persons can be difficult. Historically, a human interpreter has been required to translate between a spoken language and a sign language. Sometimes a written version of English or another language is used as well--the results varying depending on the literacy level (in English) of the deaf person (Chastel, 2003). Recent technological inventions have changed that equation. Research projects, in various stages of progress, have tackled voice-to-text, voice-to-sign, and sign-to-text transliterations. It is beyond the scope of this article to examine the translation processes; rather a few individual cases will be highlighted in terms of their interactive components.

Captioning provides written English text equivalents to the spoken or signed passages in videos, television, and movies. Dr. Paine (2002) reflected, "The variety of methods then available [1992] for captioning were tedious, expensive, and required great precision" (p. 1). However, an ongoing research project at the Rochester Institute of Technology has created instant captioning through the use of voice recognition software to benefit their distance learning courses (Paine). Vcom3D, in collaboration with the National Center for Accessible Media, is also developing a tool called SignSync, which allows "... authors to add both [text] captions and sign language animations to dynamic media" (Roush, 2003). Perhaps of more immediate relevance for students are tools that allow them to interact with digital video to create English captions for American Sign Language (ASL), thereby practicing translation skills. Margaret Chastel (2003) asserted:
 Multimedia technology allows us to bridge ASL and English through
 syncing text with digital video. Multimedia captioning (as opposed
 to traditional video captioning) allows the user to jump forward or
 back in the video sentence by sentence or use the text to jump to a
 particular sentence within a video.


In Chastel's study, the students who used interactive multimedia to make the translations experienced more success than those who translated directly from video. She concluded, "One of the great advantages of multimedia technology, as mentioned earlier, is that it allows greater scrutiny of the ASL presentation" (Chastel).
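
A minimal sketch of the sentence-level captioning Chastel describes might store each student-written sentence with its video timestamps, so the player can jump sentence by sentence or from a clicked line of text. The segments and times below are invented for illustration:

    # Sentence-aligned captions: each student-written English sentence is
    # stored with the start/end times of the matching ASL video passage.
    from dataclasses import dataclass

    @dataclass
    class Caption:
        start: float   # seconds into the video
        end: float
        text: str      # student-written English translation of the ASL

    captions = [
        Caption(0.0, 3.2, "The storm arrived last night."),
        Caption(3.2, 7.5, "School was cancelled this morning."),
        Caption(7.5, 11.0, "Everyone stayed home and watched the snow."),
    ]

    def seek_to_sentence(index: int) -> float:
        """Text-to-video jump: clicking sentence `index` returns its start time."""
        return captions[index].start

    def next_sentence(position: float) -> float:
        """Step forward to the first sentence starting after `position`."""
        for cap in captions:
            if cap.start > position:
                return cap.start
        return position  # already at the last sentence

    print(seek_to_sentence(1))   # 3.2 -- jump straight to sentence two
    print(next_sentence(1.0))    # 3.2 -- step forward sentence by sentence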

Sign language dictionaries, once consisting of hand-drawn images in books, have come alive with the power of interactive multimedia. Last year, Geoffrey Poor, a professor at the National Technical Institute for the Deaf, released the ASL Video Dictionary and Inflection Guide. It is unique in that signs are not displayed statically in a 1:1 ratio with English words; rather, the user chooses the appropriately inflected ASL phrase depending on a variety of meanings (Poor, 2001). An example would be the English word "rain"--the user would indicate on a scale the degree of rain intended, thus producing the sign with proper emphasis. Additionally, users can search for signs with similar handshapes rather than only by the English translation. Still other new dictionaries, such as Burton Vision ASL, which boasts the largest interactive multimedia ASL signing dictionary on the market, provide a traditional text and graphical locator scheme for young children. Learning sign language is easier than ever thanks to these types of projects and products.
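
As a rough illustration of the data structure such an inflection guide implies, the sketch below keys each clip by a gloss plus a chosen intensity and adds a handshape index for searching without the English word. The entries, file names, and scale are hypothetical, not Poor's actual design:

    # Inflection-aware lookup: one gloss maps to several clips, chosen by
    # the degree the user indicates on a scale (here 1 = light, 3 = heavy).
    clips = {
        ("rain", 1): "rain_light.mov",
        ("rain", 2): "rain_steady.mov",
        ("rain", 3): "rain_downpour.mov",
    }

    # Handshape index: browse signs by formation instead of English word.
    handshape_index = {
        "5-hand": ["rain", "snow", "wash"],
    }

    def lookup(gloss: str, intensity: int) -> str:
        """Return the video clip for a gloss at the requested intensity."""
        return clips[(gloss, intensity)]

    print(lookup("rain", 3))          # rain_downpour.mov -- heavy rain
    print(handshape_index["5-hand"])  # candidate signs sharing a handshape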

Perhaps one of the highest forms of interaction exists through wearable computing. In the past few years, prototypes for translating or transliterating sign to voice have started to show up in research. Typically, either virtual reality gloves and body wear or cameras are used to capture movements, and then computers use various algorithms to make sense of the movements; however, limitations in terms of vocabulary and natural language processing still exist (Parton, 2006). In August 2003, Jose Hernandez Rebollar pushed the research one step further. "He attached 13 sensors to a glove and four more to his arm. Together, they follow a signer's three-dimensional movements" (Daily Times, 2003, p. 1). Using a 200-word vocabulary, a computerized voice speaks the words of the signer. It is unclear whether the computer repeats the words in the order signed (transliterating) or performs true translation from ASL to English, but Rebollar indicates the ability of the machine to change ASL into Spanish as well (Daily Times). Although not developed specifically for educational reasons, the future possibility of deaf children in mainstream classes, for example, being able to communicate with their peers without the aid of an adult interpreter can be clearly seen.
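
Rebollar's recognition algorithm is not detailed in the source, but a heavily simplified sketch can convey the general shape of such a system: each frame of the 17 sensor readings forms a feature vector that is matched against stored templates for the fixed vocabulary, and the best match is handed to a speech synthesizer. The template values below are invented:

    import math

    # Hypothetical templates: sign name -> averaged 17-sensor reading.
    templates = {
        "hello":  [0.1] * 17,
        "thank":  [0.5] * 17,
        "school": [0.9] * 17,
    }

    def recognize(reading: list[float]) -> str:
        """Return the vocabulary sign whose template is nearest the reading."""
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(templates, key=lambda sign: distance(templates[sign], reading))

    live_reading = [0.48] * 17      # one frame of glove + arm sensor data
    print(recognize(live_reading))  # "thank" -- passed on to text-to-speech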

The other side of the communication cycle occurs when a hearing person speaks and that voice needs to be revealed through sign language. More progress has been made on developing tools to accomplish this task than the reverse. A commercial product, the iCommunicator, is already used in some schools. The teacher wears a microphone, and then her words appear on a student's laptop as text or as a string of individual Signed English video clips (VoiceAbility, n.d.). The producers report, "This very powerful software provides a multisensory, interactive communication solution for persons who are deaf or hard of hearing ..." (VoiceAbility, p. 1). A more complex project underway at DePaul University, which targets airport settings, serves as an illustration of what schools could someday do as well:
 A personal digital translator that would translate English into
 American Sign Language, would better bridge the gulf between deaf
 and hearing worlds. Such a tool would provide greater privacy
 accessibility for the Deaf.... Although it shares some vocabulary
 with English, ASL is not a word-for-word translation of English
 words and sentence structure. It presents many of the same
 challenges of any language translation process, and adds the
 complexity of changing modality from aural/oral to visual/
 gestural.... Any computer system attempting to display ASL must
 incorporate facial expressions (Lancaster et al., 2002).


The 3D model that DePaul University designed underwent changes from what was affectionately called an "ET hand" (a marionette-type model with jointed rigid components) to one that closely approximates the human hand (Lancaster et al.). Animation was chosen over video because translations can be produced "on the fly" rather than drawn from a static set of phrases. "Four modules support the system: an ASL transcriber, a database of transcribed signs, a speech recognition and translation module, and a graphics display module" (Lancaster et al.). In the test demonstration, airport security personnel were able to give deaf flyers verbal instructions that were automatically translated and displayed on a screen. Like the glove translation project, this one could benefit education in numerous ways. It is worth noting, however, that the equipment to perform these translations is still out of the economic range of most schools. Thus it will be some time before we see the fruits of this research in education.
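
The four modules named by Lancaster et al. suggest a pipeline along the following lines. This is a minimal sketch in which every function body is a stand-in; the system's actual interfaces are not published in the sources cited here:

    # Stub pipeline: speech recognition -> English-to-ASL translation ->
    # transcribed-sign database -> graphics display.
    def recognize_speech(audio: bytes) -> str:
        """Speech recognition module: audio in, English text out (stubbed)."""
        return "please place your bag on the belt"

    def translate_to_asl(english: str) -> list[str]:
        """Translation module (stubbed). Real translation reorders structure
        and adds facial-expression markers; it is not word-for-word."""
        return ["BAG", "BELT", "PUT-ON", "PLEASE"]

    # Database of transcribed signs: gloss -> stored animation data.
    sign_database = {
        "BAG": "bag.anim", "BELT": "belt.anim",
        "PUT-ON": "put_on.anim", "PLEASE": "please.anim",
    }

    def display(glosses: list[str]) -> None:
        """Graphics display module: play each sign's animation in order."""
        for gloss in glosses:
            print(f"rendering {sign_database[gloss]}")

    display(translate_to_asl(recognize_speech(b"...")))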

And finally, attention in the communication domain is turned toward tablet PCs and how they can facilitate interactive multimedia. Beil (2003) stated, "The relatively new release of the tablet PC and the new software surrounding its development, offers promise in a variety of areas in the field of deaf education." Some of those areas include: (a) improved notetaking (by hearing students) that incorporates drawings, photos, color, and text and is uploaded instantly for the deaf student through wireless technology, (b) chat capabilities among deaf students and faculty, and (c) wireless sharing of documents through cooperative work (Beil). This author sees the combination of the tablet PC and software like that being created at DePaul University as a future development that would result in more independence for students.

Interactive Multimedia Promotes Skill Development Simulations

Many skills, such as job interviewing, can perhaps be learned best through role-playing or simulations. Interactive multimedia has a large role to play in these types of activities. The Virtual Interview Exercises for Workplace Success (VIEWS) program, created by Vcom3D and being tested in the fall of 2003, allows deaf students to practice interviewing skills in a virtual office (Roush, 2003). The animated characters are part of a 3D environment and interact appropriately with the user. Another simulation tested was a fire-fighting training module.

Speech training, although not valued by many deaf individuals, is also benefiting from interactive simulation software. "Advances in interactive language technologies will eventually revolutionize learning and training.... Learning will be interactive, individualized, self-paced and infinitely variable" (Cole et al., n.d.). In this research study, the animated conversational agent (similar to an avatar) is "Baldi," and the students interact with him through the keyboard, mouse, and speech. "For example, during language [speech] training, a talking face can be made transparent to show how the tongue moves within the mouth during speech production" (Cole et al.). The feedback described in this program is especially important. For instance, after Baldi demonstrates the proper mouth motions for a sound, he can record the child trying to produce the sound and play back the video so the child can see it (Cole et al.). The article pointed out:
 Given the synergy between basic and applied research, lessons
 learned with today's conversational agents will provide a useful
 testbed to guide research and development of more sophisticated
 agents in the future. We believe the development of animated agents
 is a worthy pursuit because they have awesome potential to improve
 human computer interaction ... We have witnessed emotional bonding
 between our students with profound hearing loss and Baldi.


Additionally, the researchers designed a toolkit--Rapid Application Developer (RAD)--so that the students and teachers themselves could create interactive media systems incorporating Baldi (although not simulations per se). This project illustrates the potential in education for high-level thinking applications.
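
The demonstrate-record-playback feedback loop described above can be summarized in a few lines. This is a sketch only; the function names are placeholders, not the actual toolkit's API:

    def demonstrate(sound: str) -> None:
        print(f"agent models the mouth motion for /{sound}/, face transparent")

    def record_student() -> str:
        return "student_attempt.mov"  # stand-in for a captured video clip

    def play_back(clip: str) -> None:
        print(f"playing {clip} beside the model for comparison")

    def practice_sound(sound: str) -> None:
        """One cycle of the demonstrate -> record -> play-back loop."""
        demonstrate(sound)
        play_back(record_student())

    practice_sound("th")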

Interactive Multimedia Promotes Distance Learning Practices

Distance education, whether through web-based programs or videoconferences, is playing an important role in deaf education (Parton, 2005). This author's earlier work, "Distance Education Brings Deaf Students ...," details many specific projects in both domains. It is sufficient for this discussion to make some general comments with respect to the interactive dimension of distance education. Juhas (2001) wrote, "Like the Internet, videoconferencing offers educators opportunities for experiential learning that is interactive, authentic, and collaborative" (p. 2). One of the major considerations with videoconferencing is bandwidth, especially when deaf children are signing across the line. "Lack of visual clarity and latency or lag time can be problematic for hearing users but it is an even greater disadvantage to deaf users" (Juhas, p. 2). A rate of 512 kbps is recommended. Web-based courses have the same bandwidth concern, so it is important to consider the effect of connection speed when attempting "live signing chats" or streaming video. Many educators have found that a combination of in-class meetings and web portions of a class provides a balanced framework for deaf students, who enjoy the interactive, direct access to materials on the Web along with human guidance (Parton). An advanced science course at the Western Pennsylvania School for the Deaf has many devices connected to a wireless network, including an interactive SMARTboard, computers, and scalar microscopes. Students can analyze data and automatically collaborate and compile it with other students whether they are physically in the class or not (Burik & Kelly, 2003). Designed correctly, distance education courses tend to have much built-in interactivity that deaf students enjoy.
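
Some back-of-the-envelope arithmetic shows why 512 kbps is a demanding budget for signing video; the resolution and frame rate below are illustrative assumptions, not figures from Juhas:

    # Signing video needs enough resolution and frame rate to keep
    # handshapes and fingerspelling legible.
    width, height = 352, 288   # CIF resolution, common in 2003-era units
    fps = 15                   # low frame rates blur fast fingerspelling
    bits_per_pixel = 24

    raw_bps = width * height * bits_per_pixel * fps  # ~36.5 Mbps uncompressed
    budget_bps = 512_000

    print(f"uncompressed: {raw_bps / 1e6:.1f} Mbps")           # 36.5 Mbps
    print(f"compression needed: {raw_bps / budget_bps:.0f}:1") # 71:1
    # Any latency or packet loss on top of that compression degrades exactly
    # the visual detail deaf signers depend on.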

Interactive Multimedia Promotes Discovery Learning

Whether through formally designed discovery units or through extracurricular "entertainment" style activities, the notion of children learning through self-directed activities is a viable approach to education. An example of the former technique is the Interactive Computer Identification and Correction of Language Errors (ICICLE) project at the University of Delaware (McCoy & Masterman, 1997). "The ICICLE project uses intelligent computer-aided instruction to build a tutorial system for Deaf or HH children that analyzes their English writing and makes tailored lessons and recommendations" (Parton, 2006, p. 98). When completed, this ambitious project will provide deaf students with an interactive tool they can use independently. On the other hand, the deafplanet website uses entertainment to pull in deaf children, who then begin to learn through informal activities of their choosing. The site uses interactive games, non-fiction literature, and signed video segments on such topics as electricity and energy (Deaf Planet, n.d.). It resembles a "Sesame Street" segment with the added benefit of being a virtual meeting place for deaf children. Traditionally, deaf students miss much of the incidental, informal learning that takes place outside the classroom, but programs like this one are helping break down those old barriers.

IMPLICATIONS FOR PUBLIC ADDRESS

It is clear that interactive multimedia is playing a large role in many different aspects of deaf education, yet there has been little if any research regarding the role technology plays in public address instruction (Weller, 1998). Gallaudet University, the world's only liberal arts college for the deaf, offers a bachelor's degree in Communication Arts, which includes two to three courses on public speaking. (Public speaking is a broad term and does not imply that the students speak--most deliver their speeches via ASL.) In a recent survey of past graduates, 91% stated that the required coursework was beneficial (Weller, Harrison, & Strassler, 1999). A recent publication, Signs of Eloquence: Foundations of Deaf American Public Address, and the corresponding class at Gallaudet provide insight into the history of famous deaf persons known for their public addresses (Fernandes & Fernandes, 2002). However, the literature and course syllabus do not reveal any use of multimedia to aid in either the practice or the actual delivery of speeches for these classes; therefore, one of the outcomes of this article is to establish a survey framework with the proposed goal of using it in a future research project. It is noteworthy that technology may play a different role in a mixed classroom (hearing and deaf) versus the setting at Gallaudet due to the interpreter factor. For example, Jarrow (1998) reported that some faculty members "balk at the idea of having a deaf student who uses sign language involved in courses such as Public Speaking ... because I am grading on how well the student presents himself, not how expressive, articulate, or effective the interpreter can be" (p. 5). In either setting, the first portion of the proposed survey must seek to establish current practices. The following is a list of possible public address prompting strategies employed:

* None (Speech is memorized.)

* Note cards with English text

* Note cards with drawings (clipart) of static signs for key concepts

* Note cards with "Sign Writing" (a controversial iconic representation of signs)

* Computer prompts through PowerPoint (or similar) cues in English

* Computer prompts using video clips in ASL

* Computer prompts using animated sign clips in ASL

The second portion of the proposed survey must identify perceptions of technology's ability to aid in public address practice or performance. Some of the factors would include:

* Do students perceive a reliance on technology more for speeches given "live" or for those that are prerecorded and then shown to the target audience?

* What role does the primary language preference play in determining the type of media desired (e.g., English users may prefer teleprompter-style software such as the one available from Serious Magic, whereas an ASL user may prefer animated sequences)?

* Do the advantages of using technology during speeches change based upon the student's education level or age?

* Are there times when media would be appropriate for rehearsal but not for "show time?"

* Would interactive multimedia be more helpful for prepared addresses or spontaneous debate-style addresses?

Using the survey results and the lessons learned from interactive multimedia projects in other academic areas, empirical research could then be designed and conducted. Many of the technologies are already available and useful for public address instruction. For example, using speech recognition software, a computer could display back to the deaf student the English text of the interpreter's translation of the presentation in real time. This process would empower the deaf speaker to correct any errors due to interpretation--a frequent occurrence when interpreters are asked to perform sign-to-voice services. As another example, deaf students whose first language is ASL could have organized, signed video clip notes arranged for quick access during a presentation. Still another application of technology might be simply to use videoconferencing to connect the deaf public speaker with a more skilled interpreter than is locally available, so that the voicing of the speech truly reflects the abilities and style of the signer. There are many avenues for integrating interactive multimedia into the curriculum of public address; thus, this author recommends assessing the current situation and perceptions through a questionnaire sent initially to Gallaudet communication students and later to high school age students.

CONCLUSION

This article has presented snapshots of the role of interactive multimedia in deaf education, specifically in the areas of instructional design, communication, simulations, distance learning, and discovery learning. The overriding theme is that technology is a welcome motivator and equalizer (Roush, 2003). Recent years have also ushered in a period of affordable, user-friendly authoring programs that put interactive multimedia creation in the hands of teachers and students. While not directly discussed, the author expects there is a significant manifestation of the digital divide between the "core" deaf schools that typically participate in research studies (and thus receive funding for cutting-edge technology) and the average public school with a few deaf students. The recommendation of this article is to initiate research into technology uses for public address instruction by reflecting on the projects already in progress in other disciplines. Multimedia is here to stay, so it is time to harness its power for the important, but often overlooked, field of public address.

References

Adamo-Villani, N., & Beni, G. (2003). Teaching mathematics in sign language by 3D computer animation. Proceedings of the International Conference on Information and Communication Technologies (Vol. 2, pp. 918-921). Spain.

Akhtar, A. (2003, June). A study of interactive media for deaf learners in post 16 education. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY. Retrieved January 23, 2006, from http://www.rit.edu/~techsym/cgi-bin/sort/sessions.cgi?year=2003

Beil, D. (2003, June). The new new thing--demonstration and implications in deaf education. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY. Retrieved January 23, 2006, from http://www.rit.edu/~techsym/cgi-bin/sort/sessions.cgi?year=2003

Burik, L., & Kelly, W. (2003, March). Active learning through technology: Creating a technology-infused environment to engage deaf students in the learning process. Paper presented at the Technology and Disabled Persons Conference, Los Angeles, CA. Retrieved January 25, 2006, from http://www.csun.edu/cod/conf/proceedings_index.htm

Chastel, M. (2003, June). Student generated captions: A multimedia bridge between ASL and English. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY. Retrieved January 23, 2006, from http://www.rit.edu/~techsym/cgi-bin/sort/sessions.cgi?year=2003

Cole, R., Massaro, D., Villiers, J., Rundle, B., Shobaki, K., Wouters, J., et al. (n.d.). New tools for interactive speech and language training: Using animated conversational agents in the classrooms of profoundly deaf children. Retrieved January 23, 2006, from http://mambo.ucsc.edu/psl/dwm/dwm_files/pdf/matisse.pdf

Comm Unique. (n.d.). A little bit of NZ history is made. Retrieved November 20, 2003, from http://www.comm-unique.com.au/html/letter.html

Daily Times. (2003, August). Deaf speak using a computer, golf glove. Retrieved January 23, 2006, from http://www.deaftoday.com/news/archives/002959.html

Deaf Planet. (n.d.). Homepage. Retrieved January 23, 2006, from http://www.deafplanet.com

Emery, A. (2002, March). Adapting the kids' network online science curriculum for deaf students. Paper presented at the Technology and Disabled Persons Conference, Los Angeles, CA. Retrieved January 25, 2006, from http://www.csun.edu/cod/conf/proceedings_index.htm

Fernandes, J., & Fernandes, J. (2002, June). Overview of signs of eloquence: Foundations of deaf American public address. Retrieved January 23, 2006, from http://academic.gallaudet.edu/prof/deafleadersL.nsf/Category3/f405baa62c79e06f85256bce005422b8?OpenDocument

Jarrow, J. (1998, October). Look who's talking. NETAC Networks. Retrieved November 5, 2003, from http://www.netac.rit.edu/publication/newsletter/oct98/fifth.html

Juhas, S. (2001, June). Going the distance to meet the New York state social studies standards. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY.

Lancaster, G., Alkoby, K., Campen, J., Carter, R., Davidson, M., Ethridge, et al. (2002, March). A better model for animating American Sign Language. Paper presented at the Technology and Disabled Persons Conference, Los Angeles, CA. Retrieved January 25, 2006, from http://www.csun.edu/cod/conf/proceedings_index.htm

Lancaster, G., Alkoby, K., Campen, J., Carter, R., Davidson, M., Ethridge, et al. (2003, March). Voice activated display of American Sign Language for airport security. Paper presented at the Technology and Disabled Persons Conference, Los Angeles, CA. Retrieved January 23, 2006, from http://www.csun.edu/cod/conf/proceedings_index.htm

McCoy, K., & Masterman, L. (1997, July). A tutor for teaching English as a second language for deaf users of American Sign Language. Paper presented at ACL/EACL, Madrid, Spain.

NCIP. (n.d.). Interactive video, hypermedia & deaf students. Retrieved January 23, 2006, from http://www2.edc.org/NCIP/library/v&c/Alive.htm

Paine, R. (2002, March). Voice recognition technology for instant captioning: Part IV: New and extended applications. Paper presented at the Technology and Disabled Persons Conference, Los Angeles, CA. Retrieved January 25, 2006, from http://www.csun.edu/cod/conf/proceedings_index.htm

Parton, B. (2005). Distance education brings Deaf students, instructors, and interpreters closer together: A review of prevailing practices, projects, and perceptions. International Journal of Instructional Technology & Distance Learning, 2(1). Retrieved January 23, 2006, from http://www.itdl.org/Journal/Jan_05/article07.htm

Parton, B. (2006). Sign language recognition and translation: A multi-disciplined approach from the field of artificial intelligence. Journal of Deaf Studies and Deaf Education, 11(1), 94-101.

Poor, G. (2001, June). American Sign Language dictionary and inflection guide. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY.

Rawlings, L. (n.d.). Clicker4 and more--multimedia and multiple support. Retrieved January 23, 2006, from http://education.qld.gov.au/curriculum/learning/students/disabilities/resources/information/at/ast-22.doc

Roush, D. (2003, June). Providing sign language access to digital information using 3D animated technology: An overview. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY. Retrieved January 23, 2006, from http://www.rit.edu/~techsym/cgi-bin/sort/sessions.cgi?year=2003

Signing Science. (2003). What's the weather. Retrieved January 23, 2006, from http://signsci.terc.edu

Stifter, R. (2001, June). Integrating technology and literacy: Digital video dictionary. Paper presented at the Instructional Technology and Education of the Deaf Symposium, Rochester, NY.

VoiceAbility. (n.d.). iCommunicator 4.0 overview. Retrieved January 23, 2006, from http://voiceability.com/Icommunicator.htm

Weller, R. (1998, November). Public speaking instruction and the hearing impaired: Guidelines for classroom management, technology and resources (Abstract). Panel discussion at the Senior College and University Section of the National Communication Association Annual Conference, New York.

Weller, R., Harrison, R., & Strassler, M. (1999). A survey of Gallaudet communication arts graduates. Journal of the American Deafness and Rehabilitation Association, 31(2-3), 32-37.

BECKY SUE PARTON

University of North Texas

USA

parton@cc.admin.unt.edu
