
Iterative design of seamless collaboration media.

Computer-supported cooperative work (CSCW) emerged in the mid-1980s as an identifiable interdisciplinary research field focused on the role of computer and communication technology to support group work [1, 9-11, 13]. CSCW examines how people work together in groups and how groupware technologies can support collaboration.

"Groupware" is a label for computer-based systems explicitly designed to support groups of people working together. It is growing rapidly as a new application category in the computer industry [7, 8]. Familiar groupware examples include email, bulletin boards, group schedulers, group decision support systems, workflow systems, collaborative authoring tools, and screen sharing software.

Most current groupware systems are devoted to computational support and are designed under the constraint of limited communication bandwidth. However, the deployment of broadband digital networks opens a new future for multimedia collaboration environments that integrate real-time audio and video communication links with computer-based shared workspaces [3, 20]. We have been exploring the future of collaboration media that make good use of broadband networks through the iterative design of various CSCW systems. The progression of our collaboration media design is illustrated in Figure 1.(1)

Our CSCW research started in 1988 and was motivated by the study of a shared drawing space [22] in the Media Space environment [2].(2) The focus of our research is on the creation of new application concepts, rather than the development of technologies themselves. While most current video telephony systems have been designed for viewing "talking heads," our goal is to go beyond this model and demonstrate new uses of video communication technologies.

Our research has taken a user-centered approach instead of a technology-driven approach, following Buxton's design principle: Let's do smart things with stupid technology today, rather than wait and do stupid things with smart technology tomorrow.(3) For quick prototyping and verification of new application concepts, we have mainly used hybrid networks consisting of analog video networks and digital data networks.

This article introduces the progression of iterative media design from TeamWorkStation to ClearBoard. These systems were designed to support focused real-time collaboration by distributed group members. The key concept behind our iterative design is "seamlessness." Seamless design pursues the following two goals:

1. Seamlessness (continuity) with existing work practices:

People develop their own work practices after using a wide variety of tools and interacting with a large number of people. We believe the continuity with existing work practices and everyday skills is essential. Groupware that asks users to abandon their acquired skills and to learn a new protocol is likely to encounter strong resistance [12].

2. Seamlessness (smooth transition) between functional spaces:

Collaboration requires us to shift among a variety of functional spaces, or modes. Seamless design seeks to decrease the cognitive load on users as they move dynamically across different spaces. For example, TeamWorkStation was designed to enable smooth transitions between individual workspaces and shared workspaces by allowing users to keep using both familiar desktop tools and computer tools. ClearBoard realizes seamless integration of interpersonal space and shared workspace, allowing people to use various nonverbal cues, such as a partner's gaze direction, for smooth focus switching between these two spaces.

TeamWorkStation-1 and Seamless Shared Workspaces

People do a lot of their work without computers, or using different tools on different computer systems, and have developed their own work practices for these situations. Even in a heavily computerized individual workplace, users often work both with computers and on the physical desktop. Neither one can replace the other. For example, printed materials such as books and magazines are still an indispensable source of information. Therefore, when designing real-time shared workspaces, coworkers should be able to choose either computers or desktops, depending on the task and the medium of the information to be shared (paper or computer file), and to switch between them freely. One person's choice should be independent of the other members' choices. Group members should be able to use a variety of heterogeneous tools (computer-based and manual tools) in the shared workspace simultaneously. To realize such a seamless shared workspace, we designed TeamWorkStation-1 (TWS-1) [19].

The key design idea of TWS-1 is the "translucent overlay" of individual workspace images. TWS-1 combines two or more translucent live-video images of computer screens or physical desktop surfaces using a video synthesis technique. Translucent overlay allows users to combine individual workspaces and to point to and draw on the overlaid images simultaneously. We chose video as the basic medium of TWS because it is the most powerful medium for fusing a variety of traditionally incompatible visual media, such as papers and computer documents.
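TWS-1 realized the translucent overlay with analog video hardware, but the idea is easy to state digitally: it is per-pixel weighted blending of two live frames. A minimal sketch in Python with NumPy (the synthetic frames and the 50/50 weighting are illustrative assumptions, not the actual TWS-1 implementation):

```python
import numpy as np

def translucent_overlay(frame_a, frame_b, alpha=0.5):
    """Blend two video frames so that both remain visible.

    frame_a, frame_b: uint8 arrays of shape (height, width, 3).
    alpha: weight of frame_a; 0.5 gives an even translucent mix.
    """
    blended = (alpha * frame_a.astype(np.float32)
               + (1 - alpha) * frame_b.astype(np.float32))
    return blended.round().astype(np.uint8)

# Two synthetic "workspace" frames: a white page and a dark annotation layer.
page = np.full((4, 4, 3), 255, dtype=np.uint8)
marks = np.zeros((4, 4, 3), dtype=np.uint8)

mixed = translucent_overlay(page, marks)
print(mixed[0, 0])  # mid-gray: both layers show through
```

With alpha near 0.5, marks from both workspaces remain simultaneously visible, which is what lets coworkers point and draw over each other's documents without merging the underlying data.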

System Architecture of TWS-1

Figure 2 shows an overview of the first prototype, TWS-1. Two charge-coupled device (CCD) video cameras are provided at each workstation: one for capturing live face images of the group member, and the other for capturing desktop surface images and hand gestures. TWS-1 provides two screens. The individual screen (private workspace) is on the left and the shared screen is on the right. These two screens are contiguous in video memory, and this multiscreen architecture allows users to move any application program window between the individual and shared screens merely by dragging it with the mouse. Therefore, it is easy to bring your own data and tools from each personal computer into the shared workspace for use in remote collaboration. Hard-copy information can also be shared easily by placing it under the CCD camera (i.e., on the physical desktop). Figure 3 shows an image of a shared screen where two users are discussing the system configuration by annotating and pointing at electronic diagrams in a drawing editor by hand.

The first prototype TWS-1 was implemented on Macintosh computers to provide small work groups (2 to 4 members) with a shared workspace. The system architecture of TWS-1 is illustrated in Figure 4. In order to connect distributed workstations, an analog video network (NTSC and RGB) was developed and integrated with an existing data network (Local-Talk) and with a voice (telephone) network [19].

The video network is controlled by a video server that is based on a computer-controllable video switcher and video effecter. The video server gathers, processes and distributes the shared computer screen images, desktop images, and facial images. Overlay of video images is done by the video server. The results of overlaying are redistributed to the shared screens via the video network.

Experience of TWS-1

Through experimental use of TWS-1, we found that users liked being able to keep using their favorite individual tools, especially paper and pen, while collaborating in a desktop shared workspace. That is, there was no need to master sophisticated new groupware. The drawback of this overlay approach is that the results of collaboration cannot be shared directly. Since individual workspaces are overlaid as video images, the marks and the marked documents occupy different "layers" in the shared screens. They are actually stored separately, in different places and in different media (in computer files or on paper). We mainly used a video printer or videotape recorder to record the results and the collaboration process.

"Shared workspace" is taken by many computer scientists to mean "data sharing." However, we think it is not required that all the outcomes of the work-in-progress be directly "manipulable" by all the participants. We seldom felt the necessity to edit the other's diagrams directly. If a diagram was to be changed, usually the originator would change it according to the comments made by the other. One reason appears to stem from the respect paid to the ownership of the outcomes. This seems to be a very natural feeling, even in a close collaborative session. The overlay solution provides us with a comfortable work environment, because the overlaid layers keep the person's own layer of work intact.

Since TWS-1 was designed for laboratory experiments to verify the concept of seamless shared workspaces, we did not pay much attention to the number of cables or the communication bandwidth. As a result, the system configuration became complex and difficult to maintain. This complexity prevented us from conducting the field tests using publicly available digital networks, and motivated us to start designing a completely new system, TeamWorkStation-2 (TWS-2).

TeamWorkStation-2 for N-ISDN

TeamWorkStation-2 (TWS-2) was designed to provide a shared workspace over the narrowband ISDN (N-ISDN) Basic Rate Interface (2B+D) and Primary Rate Interface (H1/D) using the CCITT H.261 standard for moving-picture transmission [16]. We chose N-ISDN, especially the Basic Rate Interface, as the target network because of its widespread availability in Japan.(4)

We devised a new multiuser interface called ClearFace for TWS-2. ClearFace superimposes translucent, movable, and resizable face windows over a workspace image to enable more effective use of the normally limited screen space. We found users have little difficulty in selectively viewing either the facial image or the workspace image.

System Architecture of TWS-2

We radically simplified the system architecture. Figure 5 shows the system architecture of TWS-2. We targeted dyadic communication to make the centralized video server unnecessary and to eliminate complexities that would arise from multipoint connection. The two TWS-2 terminals are connected by one ISDN link.(5) Each terminal is composed of three major components: a TWS-2 box, a video CODEC, and a PC-9801 personal computer. All video processing functions (e.g., translucent overlay, picture-in-picture) are supported at each terminal. All the hardware for video processing, camera control units, audio amplifiers, and power units were encapsulated into a single TWS-2 box.

The PC-9801 computer is mainly used to control the video processing hardware in the TWS-2 box and the video CODEC. If direct sharing of information stored in the computer is required, we can use screen sharing software while overlaying desktop video images with the shared computer screen.

Figure 6 shows the appearance of a TWS-2 terminal in use. A headphone with a small microphone is provided for voice communication. Like TWS-1, TWS-2 provides two CCD cameras--one to capture the user's face image and another to capture the physical desktop image. The TWS-2 box also provides an external video input port that can be used to show recorded video clips by connecting a video player.

TWS-2 provides only one screen instead of the two screens (individual and shared screens) of TWS-1. The experimental use of the previous system, TWS-1, led us to observe that most work was done in the "desktop-overlay" mode in which only the video images of physical desktop surfaces are overlaid. It is quicker to point to part of a local or remote document with a finger than with a mouse pointer, and marking and annotating with a pen is also quicker and easier than using computer programs. Even though the documents were stored as computer files, users preferred to print and share the hard copies (papers) as desktop-overlays rather than using direct computer screen overlays. Based on these observations of TWS-1 use, we concluded that the support of desktop images is more important than the support of computer screens. We decided to make "desktop overlay" the basic service of TWS-2, and to make "computer screen overlay" an option. This decision led to the one-screen architecture of TWS-2.

Figure 7 and Figure 8 show typical screen images of TWS-2 in use. In Figure 7, users A and B are discussing the system architecture using hand drawing and gestures. In Figure 8, teacher A is teaching calligraphy to student B using red ink to correct B's strokes made in black. This calligraphy example demonstrates the important TWS feature that all of the collaborators share not only the results of drawing, but also the dynamic process of drawing and gesturing.

Experimental Use of TWS-2

We have tested TWS-2 since 1992 by connecting our offices in Tokyo, Yokosuka, and Osaka via INS-Net 64. We conducted several controlled laboratory experiments as well as tests of real work outside the laboratory [16]. Before we started the TWS-2 experiments, many people felt unsure of the ability of INS-Net 64 to support real-time activities because of their previous experience with the jerky displays of videophones. However, the subjects generally commented that they could interact smoothly with their partner and that they were absorbed in the task. Although the subjects noticed some delay and jerkiness in the remote desktop video image, these did not hinder them from concentrating on their work. However, all the subjects noted that they could not clearly see their partner's desktop image. This confirmed that the CIF (Common Intermediate Format) standard (352 pixels/line X 288 lines/picture) is definitely insufficient for reading small characters or fine drawings in remote documents.(6)
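A back-of-the-envelope calculation suggests why CIF falls short for document sharing. Assuming, purely for illustration, that a full A4 page fills the CIF frame and that a printed character is about 3 mm tall (both numbers are our assumptions, not measurements from the experiments):

```python
# How many scan lines does one printed character receive when an
# A4 page is mapped onto a CIF frame? (Illustrative numbers only.)

CIF_LINES = 288          # lines per picture in CIF
A4_HEIGHT_MM = 297.0     # height of an A4 page
CHAR_HEIGHT_MM = 3.0     # assumed height of a small printed character

lines_per_char = CIF_LINES * CHAR_HEIGHT_MM / A4_HEIGHT_MM
print(f"{lines_per_char:.1f} scan lines per character")
# Roughly 3 lines per character, well below the handful of lines
# needed to resolve letter shapes, so body text on a shared page
# is unreadable at CIF resolution.
```

The same arithmetic explains why desktop overlays of freehand sketches worked well: strokes several millimeters wide receive enough scan lines to remain legible.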

Beyond Videophone

Videophones and video conferencing are the most typical video applications that use N-ISDN, and they represent the effort at imitating "being there," which has long been the goal of telecommunication technologies [14]. Real-time video is used only to see the remote partner's facial expressions, postures and gestures in these applications.

In contrast to these "talking head" applications, TWS-2 demonstrates a new direction for the usage of real-time video: the creation of a virtual shared workspace. The main focus of TWS-2 is not the imitation of face-to-face communication but rather is the sharing of overlaid desktop images for collaboration.

The experiments to date confirm that TWS-2 has one large advantage over ordinary videophones as the preeminent N-ISDN service. The advantage stems from the bandwidth limitation and human perception. People are especially sensitive to changes in facial expressions. If facial expression is the main means of communication, even slight asynchrony between the voice and the movement of eyes and lips is immediately noticed and makes smooth conversation difficult. Since facial expression is always changing and the face and body are always moving, delay in transmitting the partner's image increases perceived discontinuities and hence increases users' negative impressions.

The main difference between the desktop and face images is that the desktop images are relatively static. Images of papers and the marks drawn on the papers do not change quickly. Only the hands move on the desktop when users gesture or draw. Thus the total amount of motion is far less than that experienced with videophone displays. This more static nature of the desktop surface increases the effective video frame rate. Although quick hand motions look jerky, TWS-2 users can be more productive than their videophone counterparts since they can visually share objects and work on them.

Seamless Integration of Interpersonal Space and Shared Workspace

One major focus of groupware development has been the creation of virtual shared workspaces in distributed computer environments. Shared workspace activities include sharing information, pointing to specific items, marking, annotating, and editing. These activities can be supported by computer-based groupware, including shared screen systems, shared window systems, and multi-user editors [8].

In face-to-face meetings, we speak, make eye contact, and observe each other's facial expressions and gestures. These verbal and nonverbal channels are important in building confidence and establishing trust [5]. The focus of telecommunication technologies such as the videophone and video conferencing has been the creation of interpersonal spaces that maintain a sense of "telepresence" or "being there" [14] through the visibility of gestures and facial expressions of distributed group members.

Both shared workspace and interpersonal space are essential for remote, real-time collaboration. Many desktop multimedia conferencing systems such as TeamWorkStation, PMTC [23], and MERMAID [24] support both spaces, but they have a major limitation: an arbitrary seam exists between the shared workspace and the face images. We realized that this problem is not just the superficial physical discontinuity of spatially separated windows. Absent are the non-verbal cues that would enable a smooth shift in attention between the shared workspace and the partner's face image. Current groupware and video conferencing technologies do not support these cues.

Lack of eye contact is another problem of TWS. Camera positioning prevents one person from knowing the direction of the other's gaze--it could be directed toward the face image, toward objects in the shared workspace window, or elsewhere. A shift in focus is not apparent until accompanied by a visible gesture or an audible remark. Awareness of gaze direction and mutual eye contact are impossible.

ClearBoard is designed to overcome these limitations by seamlessly integrating interpersonal space and shared workspace. A design goal of ClearBoard is to allow a pair of users to shift easily between interpersonal space and shared workspace using familiar everyday cues such as the partner's gestures, head movements, eye contact, and gaze direction.

ClearBoard Metaphor

The key metaphor of the ClearBoard design is "talking through and drawing on a big transparent glass board." Figure 10 shows "ClearBoard-0," a simple mockup of the ClearBoard concept for colocated pairs of users. ClearBoard-0 consists of a glass board positioned between the partners, on which they draw or post objects. ClearBoard requires less eye and head movement to switch focus between the drawing surface and the partner's face than is needed in either the whiteboard or the desktop environment. However, a real glass board has the problem that written text appears reversed to one's partner. We solved this problem by mirror-reversing the video images in ClearBoard-1 and -2, as described in the following section.

Design of ClearBoard-1

Figure 11 shows ClearBoard-1--our first prototype to support remote collaboration [17]. Two users are discussing a route by drawing a map directly on the screen surface. Both users can share a common map orientation. The partner can read all the text and graphics in their correct orientation.

To implement the remote version of ClearBoard, we devised a system architecture we call the "drafter-mirror" architecture, illustrated in Figure 12. Each terminal is equipped with a tilted screen, a video projector, and a video camera. Users can write and draw directly on the surface of the screen using color paint markers. The video camera located above the screen captures the drawings and the user's image, as reflected by the half-mirror, as a continuous video image. This image is sent to the other terminal through a video network and projected onto the partner's screen from the rear, so that both users share a common orientation of the drawing space. The partner can draw directly over this transmitted video image.(7)
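In the drafter-mirror architecture the reversal that makes text readable to the partner happens optically, via the half-mirror. Expressed digitally, the operation is simply a horizontal flip of each frame; a small illustrative sketch in Python with NumPy (not the actual analog implementation):

```python
import numpy as np

def mirror_reverse(frame):
    """Flip a video frame left-to-right, as the half-mirror does optically.

    Reversing the captured image before it is projected on the partner's
    screen makes one user's writing read correctly to the other.
    """
    return frame[:, ::-1]   # reverse the column (horizontal) axis

# A tiny 1x3 "frame" with a single red mark on the left edge.
frame = np.array([[[255, 0, 0], [0, 0, 0], [0, 0, 0]]], dtype=np.uint8)
flipped = mirror_reverse(frame)
print(flipped[0, 2])  # the mark is now on the right edge
```

Applying the flip twice restores the original frame, which is why each user sees his or her own strokes unreversed while the partner sees them correctly oriented as well.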

Experimental Use of ClearBoard-1

The ClearBoard-1 prototype has been used in experimental sessions since 1990. We observed effortless focus switching between the task and the partner's face. Users could read their partner's facial expression, achieve eye contact, and utilize their awareness of the direction of their partner's gaze. Easy eye contact, even during drawing-intensive activities, increased the feeling of intimacy and copresence. No subjects reported difficulty with the mirror-reversed image of the partner. This may be because faces are quite symmetric, or because we are accustomed to seeing our own reversed image in mirrors.

We found that ClearBoard provides a capability we call "gaze awareness": the ability to monitor the direction of a partner's gaze and thus his or her focus of attention. A ClearBoard user can tell which screen objects the partner is gazing at during a conversation more easily and precisely than is possible in an ordinary meeting environment with a whiteboard.

To understand the implications of gaze awareness, we conducted a collaborative problem-solving experiment on ClearBoard using the "river crossing problem" [18]. This experiment confirmed that it was easy for the instructor to tell which side of the river the student was gazing at. This information was quite useful in understanding the student's thinking process and in providing advice. The importance of eye contact in the design of face-to-face communication tools is often discussed. However, we believe gaze awareness is a more general and more important notion: eye contact can be seen as a special case of gaze awareness.

An interesting, though less critical, misunderstanding occurred when users drew directly over their partner's image, playfully adding eyeglasses or a mustache, for example. Clearly, they had a "WYSIWIS" (what you see is what I see) expectation, not realizing that although the drawing is shared, the facial images are not: each person sees only the other's image. Thus, the ClearBoard metaphor is not always entirely assimilated.

Design of ClearBoard-2

In using the ClearBoard-1 prototype, we found several problems. The projected video image of a drawing is not sufficiently clear, and the lack of recording capabilities is an obstacle to reusing the work results. To overcome these problems, we decided to design a new computer-based prototype, "ClearBoard-2" [18]. Instead of color paint markers, ClearBoard-2 provides users with "TeamPaint," a multiuser computer-based paint editor, and digitizer pens.


TeamPaint is a groupware application for shared drawing. It runs on networked Macintosh computers, and it is based on a replicated architecture. TeamPaint offers several functions, including recording of working results, easy manipulation of marks, and the use of data contained in computer files. TeamPaint provides an intuitive interface based on the metaphor of drawing on a sketch pad with a color pencil as shown in Figure 13.

Each user is provided with individual layers and by default can modify only his or her own layers. All members see the composite image of all the layers. Because each layer is isolated from the others, no access control is necessary. TeamPaint has no floor-control mechanism, yet it enables simultaneous gesturing and drawing by multiple users. Gestures, in the form of cursor movements, and through them the drawing process, are visually shared by all members.(8)
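The layer discipline described above can be sketched in a few lines. The following Python sketch is illustrative only (the class and method names are our invention, not TeamPaint's actual code); it shows how per-user layers yield conflict-free simultaneous drawing without any floor control:

```python
class SharedCanvas:
    """Minimal sketch of TeamPaint-style layering: each user draws only
    on his or her own layer, so no locking or floor control is needed,
    while everyone sees the composite of all layers."""

    def __init__(self):
        self.layers = {}                              # user -> {(x, y): color}

    def draw(self, user, x, y, color):
        self.layers.setdefault(user, {})[(x, y)] = color

    def erase(self, user, x, y):
        # A user can remove only marks on his or her own layer.
        self.layers.get(user, {}).pop((x, y), None)

    def composite(self):
        merged = {}
        for user in sorted(self.layers):              # deterministic order
            merged.update(self.layers[user])          # later layers on top
        return merged

board = SharedCanvas()
board.draw("A", 1, 1, "black")     # A's original sketch
board.draw("B", 1, 1, "red")       # B annotates the same spot on B's layer
print(board.composite()[(1, 1)])   # B's annotation is composited on top
```

Because B's annotation lives on a separate layer, A's original mark survives underneath it, mirroring the observation from TWS-1 that overlaid layers keep each person's own work intact.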

ClearBoard-2 System and Its Use

Using TeamPaint, transparent digitizer sheets, and electronic pens, we implemented the computer-based prototype, ClearBoard-2. Figure 14 shows the ClearBoard-2 prototype in use, and Figure 15 shows the system architecture of the prototype. The composite drawing image of TeamPaint is overlaid on the face images with a special video overlay board. The mixed RGB video image is projected onto the screen's rear surface. TeamPaint makes it easy to get a new blank sheet and makes drawing marks easier to see. The lower screen angle decreases arm fatigue, but gives the impression that the partner is under the screen, rather than behind it as in ClearBoard-1.

The use of RGB video and the chroma-keying overlay technique does increase image clarity. Furthermore, the capability of recording results and reusing the data produced in previous sessions or from any other application program promises to add tremendous value to an already practical tool. Through the use of ClearBoard-2, it was often observed that the user's gaze follows the partner's pen movements. We confirmed that "gaze awareness" is as well supported in ClearBoard-2 as it was in ClearBoard-1. One can easily tell which object on the TeamPaint screen the partner is looking at.

We see the evolution from ClearBoard-1 to ClearBoard-2 as being very important. Computer and video-communication technologies have, until now, evolved independently. Although they have been loosely coupled using arbitrary multi-window interfaces in many desktop multimedia conferencing systems, they have never been integrated seamlessly from the user's cognitive point of view. ClearBoard-2 succeeds in naturally integrating the technology of computer-based groupware with that of video conferencing. We expect that the seamless integration of computer and video communication technologies will be an integral part of the next generation of collaboration media.

Moreover, ClearBoard-2 can be seen as an instance of the paradigm shift from traditional HCI (Human-Computer Interaction) to HHI (Human-Human Interaction) mediated by computers. We are interacting not with computers, but through computers.

We believe the ClearBoard design is not only beyond the traditional desktop metaphor based on a multi-window interface, but also suggests a direction of "beyond being there" [14]. We expect ClearBoard to be useful both as a collaboration medium and as a vehicle to investigate the nature of dynamic human interaction.

Summary and Future Work

We have presented an evolution of our collaboration media design from TWS-1 to TWS-2 to ClearBoard-1 to ClearBoard-2. TWS-1 demonstrates a new usage of real-time video for collaboration, by providing distributed users with a seamless shared workspace. Using a translucent video overlay technique, real-time information such as hand gestures and handwritten comments can be shared, as can information contained in printed materials and computer files. Users can continue to use their favorite application programs or desktop tools, so there is only a minor cognitive seam between individual workspaces and shared workspaces. TWS-2, a redesigned version that uses N-ISDN Basic Rate Interface, demonstrated the advantage of this application over ordinary videophones given the same bandwidth limitation.

In order to integrate the shared workspace and the interpersonal space seamlessly, we designed ClearBoard. ClearBoard-1 permits co-workers in two different locations to draw with color markers while maintaining direct eye contact and the use of natural gestures. Through experiments, we discovered that ClearBoard also supported the important feature of "gaze awareness." In order to offer new functions, such as recording of working results, easy manipulation of marks, and the use of data contained in computer files, we designed a computer-drawing version, ClearBoard-2. ClearBoard-2 supports shared drawing with TeamPaint and electronic pens.

Through the iterative design of these collaboration media, we believe it is most important to respect the skills that people use in everyday life [4]. Our design focuses on basic skills such as drawing, gesturing, talking, gaze reading, and using computers. We believe skill-based design will lead to cognitive seamlessness.

We are now very much interested in how the next generation of collaboration media may augment the process of collaborative creation by people such as artists, musicians, designers and children. NTT's vision video, "Seamless Media Design" illustrates our expectation of future collaboration media based on the ClearBoard concept.(9) Figure 16 shows snapshots of a session by children from the vision video.

Broadband Digital Network

Although all these prototype systems except TWS-2 were implemented using hybrid (analog video + digital data) networks, hybrid networks clearly have serious limitations of scale.

We expect the new international telecommunication standard--B-ISDN (Broadband Integrated Services Digital Network) and ATM (Asynchronous Transfer Mode) [20]--will provide a universal and scalable infrastructure for various collaborative applications including TeamWorkStation and ClearBoard. ATM is expected to be a common technology for both local- and wide-area networks. ATM also provides "bandwidth-on-demand" to meet the requirements of various applications.

Although N-ISDN (Narrowband Integrated Services Digital Network) provides users with a fixed communication bandwidth, we expect ATM technology to let users dynamically change bandwidth on demand, along with the balance between the frame rate and resolution of motion pictures, based on the content and usage of the video. For example, a TWS session using a detailed blueprint for a new building may require more bandwidth for higher resolution of shared documents than a TWS meeting with shared sheets of blank paper for freehand drawing. Competitive negotiation tasks may require both higher frame rate and higher resolution, to read a colleague's subtle facial expressions rather than documents. ClearBoard requires much more communication bandwidth (higher resolution, higher frame rate, and less delay) than TWS, since ClearBoard presents a life-size image of the partner and users want to read subtle and quick changes in a partner's gaze.
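The trade-off between frame rate and resolution under a fixed budget can be made concrete with simple arithmetic. The compression figure below is an assumption chosen for illustration, not a property of any particular CODEC:

```python
# Rough frame-rate vs. resolution trade-off under a fixed bit budget.
# The bits-per-pixel figure is an illustrative assumption.

def max_frame_rate(bandwidth_bps, width, height, bits_per_pixel):
    """Frames per second achievable if each compressed frame costs
    width * height * bits_per_pixel bits on average."""
    return bandwidth_bps / (width * height * bits_per_pixel)

BUDGET = 128_000          # N-ISDN Basic Rate: two 64Kb/s B channels
BPP = 0.1                 # assumed average compressed bits per pixel

# CIF vs. quarter-CIF (QCIF) under the same budget:
print(round(max_frame_rate(BUDGET, 352, 288, BPP), 1))   # CIF: ~13 fps
print(round(max_frame_rate(BUDGET, 176, 144, BPP), 1))   # QCIF: 4x the rate
```

Because pixel count scales quadratically with linear resolution, halving the picture size quadruples the achievable frame rate, which is exactly the balance a bandwidth-on-demand network would let an application renegotiate as the session shifts between document sharing and face-to-face discussion.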

Since required bandwidth changes dynamically both within a single application depending on the contents and usage of video, and among various applications, rapid reassignment of bandwidth on demand will be a critical feature to support seamless transitions among various collaboration modes.

From Multimedia to Seamless Media

"Multimedia" is a big buzzword in the computer and communication industries. As a result, the number of cables behind a computer, the number of features users need to understand, and the number of incompatible data formats are increasing beyond the limits of human cognitive capability. A variety of media (such as text, video, audio, and graphics) and services (video-on-demand, video conferencing, electronic newspapers) are becoming available through a single powerful desktop computer and a broadband communication network called the "information superhighway." However, each medium and service is still separate from the others; they are not seamlessly integrated from the user's cognitive point of view.

The communication channels of human beings are inherently multimodal and seamless. It does not make much sense to decompose the representation of information into primitive data types such as text, video, audio, and graphics, and to stress the "multiness" of the media. For example, we speak, gesture, and draw simultaneously in a design meeting. We have great skill in expressing ideas and understanding each other in everyday contexts using all these media as a whole. We believe the multiness of media is not the main issue; the core issue in designing new applications is integrating them into seamless media that hide the various low-level representations, the discontinuities among primitive media, and the complexity of the underlying technologies. "Multimedia" sounds like a premature label for a stage of media evolution from monomedia toward seamless media.

Toward Ubiquitous Media

We hope that ClearBoard will change the concept of a wall from being a passive partition to being a dynamic collaboration medium that integrates distributed real and virtual spaces. We are now exploring a vision of new architectural spaces where all the surfaces including walls, ceilings, windows, doors and desktops become active surfaces through which people can interact with other spaces, both real and virtual. In these spaces, both computers and video must be inherently ubiquitous media [6, 25]. Many challenges exist to achieve a seamless extension of spaces and their interconnections. Nevertheless, our design will be based on the natural skills and social protocols people are using in everyday life to manipulate and interact with information, artifacts and one another.


We would like to thank Masaaki Ohkubo and Takashi Yagi of NTT for their contributions to building the TWS-1 and TWS-2 prototypes. We appreciate the contributions of Naomi Miyake of Chukyo University and Jonathan Grudin of the University of California, Irvine to the observational and experimental phases of this research. We thank George Fitzmaurice and Beverly Harrison of the University of Toronto for their careful comments on a draft of this article. We greatly appreciate discussions with members of the Ontario Telepresence Project, in particular the stimulating discussions with William Buxton on the principles of skill-based design and ubiquitous media.

1 A videotape that introduces the complete history of this collaboration media design is available in the ACM SIGGRAPH Video Review, Issue 87: CSCW '92 Technical Video Program, Item 6 [15].

2 "Media space"; originated by Xerox PARC [2] is the environment that integrates video, audio, and computer technologies, allowing individual and groups to work together despite being distributed geographically and temporally. Recent development include Cruiser (Bellcore), RAVE (Rank Xerox EuroPARC), and CAVECAT/Teleypresence (University of Toronto) [21].

3 William Buxton of the University of Toronto presented this design principle in an invited lecture at the IFIP WG8.4 conference in Crete in September 1990.

4 In April 1988, NTT started an ISDN Basic Rate Interface service (called INS-Net 64) in Japan, using existing metallic cables to provide two 64Kb-per-second B channels and one 16Kb-per-second D channel. In June 1989, an ISDN Primary Rate Interface service (called INS-Net 1500) was started using fiber-optic cables; INS-Net 1500 provides channels of up to 1.5Mb per second. Since the introduction of these ISDN services, the number of subscriber lines has grown steadily, and there were more than 200,000 INS-Net 64 subscriber lines in Japan at the beginning of 1994.

5 The latest version of TWS-2 is available via Ethernet as well as N-ISDN.

6 If both members need to see fine details, we expect facsimile transmission to be the technological partner that offsets the weakness of TWS-2 in sharing detailed documents.

7 This shared video drawing technique, which allows remote partners to draw directly over the video image of their coworkers' drawing surface, was originally demonstrated in VideoDraw [22].

8 Awareness based on such a small tele-pointer may have limitations. Actual hand gestures have much more power of expression, and with ClearBoard-2, the real hand and pen gesture images that lie behind the tele-pointer augment the awareness provided by TeamPaint.

9 This video will be available in the ACM SIGGRAPH Video Review, CSCW '94 Technical Video Program (October 1994).


1. Baecker, R., Ed. Readings in Groupware and Computer-Supported Cooperative Work. Morgan Kaufmann, San Mateo, Calif., 1993.

2. Bly, S.A., Harrison, S.R., and Irwin, S. Media spaces: Video, audio, and computing. Commun. ACM 36, 1 (Jan. 1993), 28-47.

3. Brittan, D. Being there: The promise of multimedia communications. MIT Tech. Rev. (May/June 1992), 42-50.

4. Buxton, W. Human skills in interface design. In L.W. MacDonald and J. Vince, Eds., Interacting with Digital Environments, Wiley, New York, 1-12.

5. Buxton, W. Telepresence: Integrating shared task and person spaces. In Proceedings of Graphics Interface '92, Morgan Kaufmann, Los Altos, Calif., 1992, pp. 123-129.

6. Buxton, W. Space-function integration and ubiquitous media. Commun. ACM. To be published.

7. Coleman, D., Ed. Proceedings of Groupware '93, Morgan Kaufmann, San Mateo, Calif., 1993.

8. Ellis, C.A., Gibbs, S.J. and Rein, G.L. Groupware: Some issues and experiences. Commun. ACM 34, 1 (Jan. 1991), 38-58.

9. Galegher, J., Kraut, R. and Egido, C. Intellectual Teamwork: Social and Technological Foundations of Cooperative Work. Lawrence Erlbaum, Hillsdale, N.J., 1990.

10. Greenberg, S., Ed. Computer-Supported Cooperative Work and Groupware. Academic Press, London, 1991.

11. Greif, I., Ed. Computer-Supported Cooperative Work: A Book of Readings. Morgan Kaufmann, San Mateo, Calif., 1988.

12. Grudin, J. Why CSCW applications fail: Problems in the design and evaluation of organizational interfaces. In Proceedings of CSCW '88, ACM, New York, 1988, pp. 85-93.

13. Grudin, J. CSCW introduction. Commun. ACM 34, 12 (Dec. 1991), 30-34.

14. Hollan, J. and Stornetta, S. Beyond being there. In Proceedings of CHI '92, ACM, New York, 1992, pp. 119-125.

15. Ishii, H., Arita, K., and Kobayashi, M. Toward seamless collaboration media: From TeamWorkStation to ClearBoard. SIGGRAPH Video Review, CSCW '92 Technical Video Program, Issue 87, Item 6, ACM, New York, 1992.

16. Ishii, H., Arita, K., and Yagi, T. Beyond videophones: TeamWorkStation-2 for Narrowband ISDN. In Proceedings of ECSCW '93, Kluwer Academic Publishers, Dordrecht, Netherlands, Sept. 1993, pp. 325-340.

17. Ishii, H. and Kobayashi, M. ClearBoard: A seamless medium for shared drawing and conversation with eye-contact. In Proceedings of CHI '92, ACM, May 1992, pp. 525-532.

18. Ishii, H., Kobayashi, M., and Grudin, J. Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Trans. Inf. Syst. 11, 4 (Oct. 1993), 349-375.

19. Ishii, H. and Miyake, N. Toward an open shared workspace: Computer and video fusion approach of TeamWorkStation. Commun. ACM 34, 12 (Dec. 1991), 37-50.

20. Lyles, B. Media spaces and Broadband ISDN. Commun. ACM 36, 1 (Jan. 1993), 46-47.

21. Mantei, M., Baecker, R., Sellen, A., Buxton, W., and Milligan, T. Experiences in the use of a media space. In Proceedings of CHI '91. ACM, New York, 1991, pp. 203-208.

22. Tang, J.C. and Minneman, S.L. VideoDraw: A video interface for collaborative drawing. ACM Trans. Inf. Syst. 9, 2 (Apr. 1991), 170-184.

23. Tanigawa, H., Arikawa, T., Masaki, S., and Shimamura, K. Personal multimedia-multipoint teleconference system. In Proceedings of INFOCOM '91, IEEE Communications Society, 1991, pp. 1127-1134.

24. Watabe, K., Sakata, S., Maeno, K., Fukuoka, H., and Ohmori, T. Distributed multiparty desktop conferencing system: MERMAID. In Proceedings of CSCW '90, ACM, New York, 1990, pp. 27-38.

25. Weiser, M. The computer for the twenty-first century. Sci. Am. (Sept. 1991), 94-104.


ATM: Asynchronous Transfer Mode

B-ISDN: Broadband Integrated Services Digital Network

CIF: Common Intermediate Format (352 pixels/line x 288 lines/picture)

CODEC: Equipment to code and decode the video and audio data for transmission through a digital network.

H.261: The video compression standard recommended by the CCITT (International Telegraph and Telephone Consultative Committee) for transmitting moving pictures.

ISDN: Integrated Services Digital Network

INS-Net 64: INS-Net 64 service provides two 64Kb per second information channels (B channels) and one 16Kb per second signaling channel (D channel). Using the existing metallic cable, INS-Net 64 service requires only the addition of a compact DSU (Digital Service Unit).

INS-Net 1500: INS-Net 1500 service offers digital communications at 64Kb per second, 384Kb per second, and 1.536Mb per second. This service is accessed by means of optical fiber cable and a DSU (Digital Service Unit).
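As a back-of-the-envelope illustration of why a CODEC implementing H.261 is essential at these channel rates (the arithmetic below is ours, not from the article; the 4:2:0 sampling rate of 12 bits per pixel and the 30 frames-per-second rate are common assumptions for CIF video):

```python
# Illustrative arithmetic: uncompressed CIF video vs. ISDN channel rates.
# Assumptions (not stated in the article): 4:2:0 chroma subsampling
# (12 bits/pixel) and 30 frames per second.
CIF_WIDTH, CIF_HEIGHT = 352, 288      # CIF: 352 pixels/line x 288 lines
BITS_PER_PIXEL = 12                   # 4:2:0 sampling
FRAMES_PER_SECOND = 30

bits_per_frame = CIF_WIDTH * CIF_HEIGHT * BITS_PER_PIXEL
uncompressed_bps = bits_per_frame * FRAMES_PER_SECOND

INS_NET_64_BPS = 2 * 64_000           # two bonded B channels: 128Kb/s
INS_NET_1500_BPS = 1_536_000          # fastest INS-Net 1500 channel

print(f"Uncompressed CIF: {uncompressed_bps / 1e6:.1f} Mb/s")
print(f"Compression for INS-Net 1500: {uncompressed_bps / INS_NET_1500_BPS:.0f}:1")
print(f"Compression for INS-Net 64:   {uncompressed_bps / INS_NET_64_BPS:.0f}:1")
```

Even the fastest INS-Net 1500 channel requires roughly 24:1 compression of raw CIF video, and a two-B-channel INS-Net 64 connection requires nearly 300:1, which is why the H.261 CODEC sits between the camera and the network.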

HIROSHI ISHII is a visiting assistant professor at the Computer Systems Research Institute of the University of Toronto and a senior research engineer supervising the CSCW research project at NTT Human Interface Laboratories in Japan. His current research interests focus on human-computer interaction (HCI), CSCW, and the role of CSCW technologies in cross-cultural communication. email: ishii.chi

MINORU KOBAYASHI is a researcher at NTT Human Interface Laboratories, engaged in the design of ClearBoard-1 and -2. His current research interests include HCI, CSCW, video-based groupware design, and new input/output devices for visual communication. email: minoru

KAZUHO ARITA is a research engineer at NTT Human Interface Laboratories, engaged in CSCW research and the development of TeamWorkStation-1 and -2. email: arita

Authors' Present Address: NTT Human Interface Laboratories, 1-2356 Take, Yokosuka-Shi, Kanagawa, 238-03 Japan
COPYRIGHT 1994 Association for Computing Machinery, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Title Annotation: Special Issue: Internet Technology; includes glossary of terms
Authors: Ishii, Hiroshi; Kobayashi, Minoru; Arita, Kazuho
Publication: Communications of the ACM
Date: Aug. 1, 1994