GUIs: The Next Generation

Alan Morse and George Reynolds

Overcoming current growth limits in UI development.

Over the years, UIs have grown to handle progressively more complicated tasks, making those tasks more accessible and tractable. To see this, we need only compare the early line-oriented text editors with the document preparation systems that are widely available today.

Computer novices are able to generate documents that only experts could have produced just 10 years ago.

Much of this progress has been in appliance applications, which, although potentially complex, are straightforward compared to other problems arising today. These appliance applications typically deal with static domains with a simple structure, comprising a small number (10 to 20) of object classes. New applications will need to handle a dynamic (time-varying) state and a more complex structure, with hundreds of object classes. We already see this in some integrated PC applications, where spreadsheets and graphing packages can be combined into documents so that changes to one will be immediately reflected in the other. Moreover, there are industrial applications in which the number of object classes and degree of dynamism is an order of magnitude greater. As the number of interface components increases and the problem becomes more complex, we find that the old interface styles no longer work. The problem scales up, requiring qualitative changes in the approaches to UI development.

Fortunately, humans have a history of dealing with increasing problem complexity, and have developed metastrategies for reducing the problems to manageable dimensions. In this article we discuss these metastrategies and how they may be applied to the problems that arise when UIs must address complex application domains.

General Theory

Our central theme is that as a problem scales up, the strategies for dealing with the problem change. Complexity is primarily an issue of scale. A familiar example to every programmer is the complexity of interfaces as a function of the number of potential end users. Consider the differences among the UIs for tools developed for the following:

* the programmer's own use

* the programmer's immediate coworkers

* a single known customer, who may represent a class of end-users

* a set of unknown customers with known technical training

* a large, unconstrained set of customers, with little technical training

Each of these requires a different strategy for defining and implementing the UI, with progressively greater associated costs. Many other dimensions affect the usability of a strategy, including time (to develop, to use, or for the system to respond to input), the number of potential actions, the number of other applications the user will use, the percentage of time the user will spend using this application, the number of items users must keep in short-term memory, and the number of objects and object classes to be manipulated.

Putting this more mathematically (but still informally): we may address a problem using any of a set of strategies, each of which has an associated cost function g(f_1(n_1), f_2(n_2), ..., f_m(n_m)), where the n_i characterize the sizes of the problem components and the f_i(n_i) are the costs associated with each component. The cost function g reflects the combined effect of the component costs, typically by adding or multiplying the f_i(n_i). For a given strategy/problem combination, one term (one of the f_i(n_i)) usually dominates. Thus we summarize the cost function as O(f(n)).

This is because, in a given situation, problems scale in particular ways, so that most of the terms stay constant. The function associated with the component that scales is the one whose behavior comes to dominate the cost function. It is the cost associated with this dominant component that drives the selection of a strategy from the set of possible problem-solving strategies.
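
To make the informal notation concrete, here is a purely illustrative instance (the symbols a and b and the particular growth rates are ours, not from the text): a strategy whose cost combines a linear and a quadratic component, where the component that scales comes to dominate.

    % Illustrative only: one linear and one quadratic cost component.
    \documentclass{article}
    \begin{document}
    Suppose a strategy has the cost function
    \[
      g\bigl(f_1(n_1), f_2(n_2)\bigr) = f_1(n_1) + f_2(n_2),
      \qquad f_1(n_1) = a\,n_1, \quad f_2(n_2) = b\,n_2^{2}.
    \]
    If the environment holds $n_1$ roughly constant while $n_2$ grows, the
    quadratic term eventually dominates, and the cost of the strategy is
    summarized as $O(f_2(n_2)) = O(n_2^{2})$. A competing strategy with a
    higher constant overhead but a linear $f_2$ becomes preferable once
    $n_2$ passes the crossover point.
    \end{document}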

However, when the environment changes, the problem may change the way it scales, making a given strategy inappropriate. For instance, the change may cause the scaling of components that have polynomial or exponential cost functions, degrading the effectiveness of the strategy.

This forces strategic shifts, which can be evolutionary, as in adjustments within a strategy, or revolutionary, as in trying a completely different approach. There is usually an incentive to continue with a strategy, because there has usually been some investment in it. Naturally, the greater the investment, the greater the resistance to fundamental change. (These changes are akin to the paradigm shifts described by Kuhn [8]. As with paradigm shifts, a new generation of developers or products may be the one that effects the change.) This inertia is good in that it forces strategies to be more fully developed. It is bad in that changes that ought to happen may never happen. For example, the Dvorak (American Simplified) keyboard has never supplanted the QWERTY keyboard, despite frequent demonstrations of its superiority.

When changing strategies, we use metastrategies, which guide the way we might refine strategies to adapt to changing conditions. Gelernter [6] describes these in relation to how they may be applied to user interfaces for the display of extremely large, complex data sets. There are three commonly applied ways of reducing complexity due to problem scaling (parenthetical terms match those used in [6]):

* Clustering (Uncoupling). This involves grouping related components with high interaction and separating them from clusters with which they interact little. This affects the cost function by constraining and containing high-cost components in a manageable way. It may also eliminate components of the cost function by pushing them into a different cluster. The pitfalls of this approach are that it limits component interaction in a way that may not be adaptive, and that it may be difficult for users to internalize the clustering. Effective clustering requires intimate knowledge of the user and the application domain, since it relies on seeing patterns in the problem domain that will be understandable to the user.

* Hierarchy (Recursive Simplicity). This metastrategy complements clustering by providing a mechanism for composing the clusters and organizing them for comprehensibility. It reduces the cost of a strategy by abstracting and isolating cost-function components and thereby changing their characteristics. As with clustering, there is a conceptual cost for the user, who must understand the hierarchy and may miss buried detail.

* Remapping (Espalier, or training to a trellis). This involves transforming the problem to a different domain whose rules make it more tractable, resulting in a different, better-behaved cost function. Since there is a cost associated with the mapping, the cost of mapping back and forth must be small compared with the benefit of solving the problem in the simpler domain. Moreover, the mapping must preserve information. A special case of this is regularization, or mapping the domain to a regular pattern (as in training a plant to a trellis). This is effective because a complex problem can be solved in a domain governed by a simple set of rules.

The clustering and hierarchy metastrategies usually produce evolutionary strategic shifts. The remapping metastrategy usually produces revolutionary strategic shifts. Thus, when we discover that a strategy is starting to fail, we start by identifying the driving component, the term that is growing in the cost equation and coming to dominate it. Once we identify the term, we apply the metastrategies to bring it under control--either by limiting the damage or by eliminating the component. First we try evolutionary changes, usually involving the clustering and hierarchy metastrategies, and if they are ineffective, we try revolutionary changes, involving the remapping metastrategy. We will illustrate these metastrategies, showing their use during the evolution of UIs.

History of UIs

We will briefly review the history of UIs to illustrate how these metastrategies have been applied. We will show how some interaction strategies could not scale up to handle larger applications and how the strategies were replaced.

It is through the UI that the nature of the application domain becomes known to the user. Throughout the 1980s, as the representational power of the technology increased, more application domains became accessible to monitoring and control via GUIs. However, as the representational power increased, so did the difficulty and cost of managing these representations.

With the arrival of time-sharing systems in the early 1970s, the primary interface device was the TTY-type terminal. This device limited developers to prompt and command-language style interfaces. The prompt style of interface (where the application asks questions that the user must answer) is suitable for small applications for naive users. Its advantages are that first-time users find it easy and that it is simple to develop. Its limitations are that the locus of control is in the computer and that the interaction sequence is inflexible and quickly becomes tedious. The evolutionary response is to cluster the prompts into hierarchical menus. The revolutionary response is to remap the locus of control to the user, providing a command-based interface.

The command-based interface continues to be a powerful interface style. It is easy to develop, powerful, and flexible. Its limitations include requiring keyboard entry, which is error-prone and can be tiresome; requiring the user to remember the set of possible legal inputs and the state of the system being controlled; and promoting a command-oriented (verb-object) view of the application. Each of these limitations has led to seminal developments in UI technology, both evolutionary and revolutionary. These developments are summarized in Table 1.

Evolutionary responses to the keyboard limitation include:

* supporting command abbreviation, which takes advantage of the clustering of the command space

* clustering the command space differently

* allowing the user to cluster the command space by providing a macro or scripting facility

The last of these is the most important, because it gives the end user partial control of the interface design. Furthermore, it starts to abstract the interface from the application, allowing modification of the interface by someone who did not develop the application.

The replacement of TTY-based terminals with CRTs made a more revolutionary response possible, namely, the substitution of a menu-based interface for the command-based interface. This involves remapping the command space, which is abstract and must be modeled in users' heads, into a concrete, visible representation. Menu-based interfaces further abstract the interface from the application, because they force the developer to consider the command space as a separate entity. This naturally leads to specialization in development teams, where some developers focus on the application domain and others focus on the interface.

The evolutionary responses to the memory limitations were straightforward applications of the clustering and hierarchy metastrategies. They involved creating new commands for querying the system to get the list of currently appropriate commands and to get reports of the current state of the system.

The revolutionary response to the state memory limitation is to represent the current system state to the user. The most common example of this is the screen text editor, which represents a dramatic improvement over the earlier line-oriented text editors. (As with menu systems, this required the replacement of TTY-based terminals with CRTs.) This naturally evolved into the WYSIWYG (what you see is what you get) style of interface that is common today.

Finally, command-based interfaces are limited in that they promote a command-oriented view of the application. This means that commands are usually verb-object: actions to be performed on entities named in command arguments. It is a limitation in that verbs must have object selection criteria built in (for example, 'delete character', 'delete word', and 'delete line' in a text editor). An evolutionary modification involves replacing it with an object-verb orientation, which separates commands into two components: object specification and action specification. However, changing to an object-verb orientation does not remove the other limitation of the verb-object approach: it requires users to remember which actions are associated with which objects. This is not an obvious limitation until you try to translate a command-based interface into a WYSIWYG interface and combine it with a menu-based interface. Here, the verb-object approach requires users to pick an action from the menu and then choose the object from the domain representation. This forces a modal approach, in which the user specifies an action and then must go into a selection mode for defining the objects of the action. The object-verb approach eliminates the mode and, when combined with menus and a domain representation, results in a direct manipulation interface [1, 14], usually combined with a Windows Icons Menus Pointer (WIMP) interface.

The combined effect of these changes led to a separation of the interface from the application. The Seeheim model [7] formalized this, separating UI development from functional development and defining three communicating components of an application (a rough sketch follows the list):

* the functional core, which does the domain-specific work

* the presentation component, which displays the graphical representation of the domain and its related interaction objects

* the dialogue controller, which manages the communication between the functional core and the presentation component and manages the appearance of the interaction objects
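
To make the separation concrete, here is a minimal sketch of our own (not from the Seeheim paper), in which the "functional core" is a trivial counter. The point is only that the core and the presentation never call each other directly; the dialogue controller mediates all traffic between them.

    /* Sketch of the Seeheim separation in a single file; the "domain" is a
     * trivial counter, chosen only to keep the example self-contained.    */
    #include <stdio.h>
    #include <string.h>

    /* ---- Functional core: domain-specific work, knows nothing of the UI ---- */
    static int counter = 0;
    static void core_increment(void) { counter++; }
    static int  core_value(void)     { return counter; }

    /* ---- Presentation component: displays state, collects raw input ---- */
    static void present_state(int value) { printf("counter = %d\n", value); }
    static int  present_read(char *buf, size_t len)
    {
        printf("command (inc/quit)> ");
        fflush(stdout);
        return fgets(buf, (int)len, stdin) != NULL;
    }

    /* ---- Dialogue controller: routes between core and presentation ---- */
    int main(void)
    {
        char buf[32];
        present_state(core_value());
        while (present_read(buf, sizeof buf)) {
            if (strncmp(buf, "quit", 4) == 0)
                break;
            if (strncmp(buf, "inc", 3) == 0)
                core_increment();           /* translate user token to core call */
            present_state(core_value());    /* reflect new state back to the user */
        }
        return 0;
    }

Replacing the counter with a real functional core would leave the presentation component untouched; that isolation is the point of the model.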

The Seeheim paper marked a turn toward the UI management system as an important market and research topic. Recently, the UI Developer's Workshop [3] of SIGCHI revisited the Seeheim model in the light of the rapid infusion of new technology.

During the eight years since the introduction of this model, the underlying technology has evolved into distributed, heterogeneous environments, which must be managed by the interface, and the requirements imposed by the application domains have become far more complicated. GUI development is now a routine part of systems as diverse as the space station and transportation system monitoring and control. In addition, the last 10 years have seen the development of X [17] as an industry standard and of Motif widget technology (from OSF [15]) as a standard model for interface development. Currently, at least 20 interactive development tools (IDTs) and UI management systems (UIMSs) for interface development, all based on widget technology, are available on the market. Indeed, these tools are becoming significant for speeding development and thereby improving developer productivity.

Have we come any closer to really easing the development of systems that must cope with large application domains? Often, the answer is yes. The applications for which this is true tend to be in the area of direct manipulation interfaces, such as document preparation and drawing editors. The evolution from vt100/text style interfaces to graphic workstation/Motif style interfaces has been a process of progressive abstraction for coping with the inherent complexity of the interface. Here, the Seeheim model has to some extent been realized, but real progress has been achieved only in the area of the presentation component of the interface (i.e., the physical appearance of the interface). Application developers still face the same problem as developers of 15 years ago: large application domains present the potential user of a Motif interface with an overwhelming amount of information, which, if not organized and managed effectively, makes the interface unusable.

Limits to Growth

We describe some areas in which we feel the current strategies will start to fail. We then describe possible strategy changes to address those failures.

Point-and-Click Interfaces in UI Development Tools

Clearly, IDTs/UIMSs have taken some steps in the right direction: better layout editors, interpretive environments, and a start at families of reusable interface components. However, point-and-click interfaces are limited in that they are essentially equivalent to programming languages that support only sequencing; that is, they do not have anything even as simple as a test-and-branch, much less more structured control constructs. These control structures are necessary to automate many of the simple design decisions users make. There are two promising approaches to addressing the limits of point-and-click, direct manipulation interfaces: indirect manipulation and changing tool domains.

Indirect Manipulation: Attribute Clusters and Other Abstractions. The problem with a direct manipulation interface in a WYSIWYG world is that you spend too much time interacting with the objects directly. In this world, a point-and-click interface requires you to repeatedly 1) select objects and then 2) select the operations to perform on those objects. As the number of objects increases, this approach becomes tedious. Further complicating the issue is the fact that for computer-aided tasks, such as evolutionary software development, you redo more than you do. As the number of interface and domain objects increases, the point-and-click approach becomes unwieldy.

A simple solution derives from reversing an old saying: A word is worth a thousand pictures.

An example is 'picture' in the preceding sentence. In our fascination with the graphical capabilities of computers, we have lost sight of millennia of linguistic evolution, that is, of applying symbols to experiences. Words effectively classify experiences, and in the same way names can classify groups of objects, which can then be manipulated simultaneously. This is indirect manipulation, in which you directly manipulate an abstraction that controls the behavior or appearance of the actual objects. A common example is the paragraph formats or style sheets seen in document preparation systems.

The advantage of this approach is that it takes a very large data space, the space of all possible attributes of an object, and allows the users to reduce that to a few semantically significant attribute sets. The disadvantage is that users must learn the abstraction, which takes time and energy. Therefore, the ability to cluster attributes and objects and name those clusters must be consonant with the style of use of the tool.
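
The following sketch shows the essence of the mechanism (the structures and field names are ours, not any particular product's): a named style clusters several attributes, every paragraph points at a style rather than carrying its own attributes, and one edit to the cluster indirectly changes every object bound to it.

    /* Sketch: a named attribute cluster ("style") manipulated indirectly.
     * Editing one style record changes every paragraph bound to it.       */
    #include <stdio.h>

    struct style {               /* the abstraction the user manipulates */
        const char *name;
        int         point_size;
        int         indent;
    };

    struct paragraph {           /* domain objects refer to a style by pointer */
        const char   *text;
        struct style *style;
    };

    int main(void)
    {
        struct style body = { "Body", 10, 0 };
        struct paragraph doc[] = {
            { "First paragraph...",  &body },
            { "Second paragraph...", &body },
            { "Third paragraph...",  &body },
        };
        int i;

        body.point_size = 12;    /* one indirect edit, many objects affected */
        for (i = 0; i < 3; i++)
            printf("%-22s %s/%dpt\n", doc[i].text,
                   doc[i].style->name, doc[i].style->point_size);
        return 0;
    }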

Changing Tool Domains. One promising solution to complexity derives from the remapping strategy we have described. A UIMS that supports complex problem domains must support multiple isomorphic tools. (Isomorphic means structure-preserving: changing between tools does not lose information.) Since developing an interface is a heterogeneous, phased activity, different cost functions obtain at different points in the process. Tools that address the different phases may have cost-benefit curves, as in Figure 1.

Here, the editor tool could be a WYSIWYG editor for building dialog boxes and laying out interface screens, and the programming tool could be an API for the same task. The editor exists in a point-and-click domain, which is equivalent to a programming language with only sequence statements. Such a tool loses its advantage when applied to highly repetitive tasks with minor variations. Such tasks are trivial in the programming domain, since they are easy to parameterize and iterate. However, the programming domain has a significantly higher start-up cost.
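
The difference between the two domains is easiest to see on such a repetitive task. In the hypothetical sketch below, create_button() stands in for whatever layout call a real toolkit or UIMS API would provide; sixteen nearly identical buttons that would each require separate point-and-click operations in the editor collapse into a short parameterized loop, once the start-up cost of learning the API has been paid.

    /* Sketch: the same repetitive layout task in the "programming domain".
     * create_button() is a stand-in for a real toolkit call, not an actual API. */
    #include <stdio.h>

    static void create_button(const char *label, int x, int y, int w, int h)
    {
        /* A real implementation would call into the toolkit; here we just log. */
        printf("button \"%s\" at (%d,%d) size %dx%d\n", label, x, y, w, h);
    }

    int main(void)
    {
        char label[32];
        int i;
        /* 16 nearly identical buttons: tedious by point-and-click,
         * trivial to parameterize and iterate in code.               */
        for (i = 0; i < 16; i++) {
            sprintf(label, "Channel %d", i + 1);
            create_button(label, 10, 10 + i * 30, 120, 25);
        }
        return 0;
    }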

Database suppliers have recognized this principle for a long time: most relational databases come with at least three tool domains: 1) a 3GL domain, APIs for programming languages; 2) a 4GL domain, SQL; and 3) an interactive tool, such as an ad hoc query facility. Myers [11] reports on a workshop on languages for developing UIs that explores this further, including a variant of the graph shown.

Apple's HyperCard system [16] has a neat, but partial, solution to this problem, providing a layered interface that allows graduated access to the more powerful (and more complicated) programming capabilities. However, the HyperCard technology is inherently limited 'in the large'. The problem comes from combining an iconic style of development with a programming style of development. Selecting an icon gives the developer the option of entering a script program to be executed when the interface enters a certain state. For large applications, this style of development quickly fragments the program in a way that is difficult to maintain. See Nielsen et al. [12] for a discussion of HyperCard as a programming system.

Ideally, the tool domains should be completely interchangeable, allowing free access to multiple views of the interface development, as in Figure 2.

In practice, partial strategies can be effective. For instance, you can combine clustering and remapping, breaking the problem into clusters and separating out portions of it for remapping. For code-generation tools, which are essentially one-way remapping tools, you can break the problem into pieces that you plan to map once and pieces that you plan to map often. For the pieces that you plan to map once, you generate a separate interface description, generate the code, and edit the code as necessary. Once you modify the code, you have cemented your decision to move into the programming domain, but you have a head start because you have short-circuited some of the start-up time.

Complete isomorphism, however, may be impractical. In our experience with DataViews (a data visualization tool for interfaces to real-time systems, which has an editor component and a subroutine interface component), we found that some things that were easy to do in the programming domain were difficult to do in the interactive editor domain, and therefore we never implemented the corresponding editor functionality. Thus, only a subset of the structure is isomorphic.

As with any remapping approach, there is a mapping cost. The cost comes from several sources. Developing the mapping tool in the first place requires resources, which may be difficult to pry from other projects. The mapping tool complicates the development task, because there are multiple user models to consider. Finally, users must learn these various models and decide when to use them. Usually, they exhibit a hysteresis effect, sticking with a tool or strategy beyond its point of maximum utility. Thus, the multiple tools must leverage existing technologies and standards as much as possible. Since tools abound in the programming domain, at least one domain should be symbolic and textual, allowing access from at least one programming language.

WYSIWYG is Good, but YANTSWIBTC

"What you see is what you get" is good, but "you also need to see what is behind the curtain."

While WYSIWYG representations are important in interactive tools, they can be limiting if they are the central representation. First, if the WYSIWYG representation is the object of a direct manipulation style interface, it will have the problems described earlier. Second, it hides the abstractions that structure what you see. Since the abstractions may not have an obvious correlation to what is displayed, users have to rely on memory or on an understanding of internal structures to infer them. Human memory is a notoriously expensive cost component in any strategy:

Whenever information needed to do a task is readily available in the world, the need for us to learn it diminishes... In general, people structure the environment to provide a considerable amount of the information required for something to be remembered. [14]

Therefore, these abstractions should rightly be the central representation, the object of any direct manipulation. For example, X-Designer (an interactive interface design tool for Motif) presents users with a tree abstraction of the widget hierarchy, and this abstraction is the object of the direct manipulation. The WYSIWYG representation echoes the abstraction, but it is a side effect of the abstraction rather than the object of manipulation.
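
A small sketch of that arrangement (the types below are ours, not X-Designer's internals): the widget hierarchy is the primary, editable data structure, and the visible layout is produced by a rendering pass over it, so the picture is derived from the abstraction rather than the other way around.

    /* Sketch: the widget hierarchy as the primary representation; the visible
     * layout is derived from it by a rendering pass, not edited directly.     */
    #include <stdio.h>

    struct node {
        const char  *class_name;
        const char  *instance_name;
        struct node *child;     /* first child  */
        struct node *sibling;   /* next sibling */
    };

    static void render(const struct node *n, int depth)
    {
        for (; n != NULL; n = n->sibling) {
            printf("%*s%s (%s)\n", depth * 2, "", n->instance_name, n->class_name);
            render(n->child, depth + 1);   /* the "WYSIWYG" view follows the tree */
        }
    }

    int main(void)
    {
        struct node ok     = { "PushButton",    "ok",     NULL,    NULL };
        struct node cancel = { "PushButton",    "cancel", NULL,    &ok  };
        struct node form   = { "Form",          "form",   &cancel, NULL };
        struct node shell  = { "TopLevelShell", "demo",   &form,   NULL };
        render(&shell, 0);
        return 0;
    }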

GUIs are not Graphical

Essentially, the UIMS tools available today focus on the command/control domain, without integrating the application representation domain. What the market considers a GUI is little more than a glorified menu system, with no graphics. This leaves the graphical representation of the application domain as an exercise for the developer, who must resort to primitive graphics calls, say to Xlib. A true GUI would include a complete representation of the system being controlled and would support the ability to interact with the system through that representation. The UI development tool would not only have to support graphical modeling, it would also have to support describing the data being modeled and the relationship of the graphics to the data.

What happens when the number of data elements and graphical components increases? There are now three elements to deal with: data, graphics, and relationships. Each presents its own problems when numbers increase. Summarizing where scaling issues will arise:

* Graphical modeling. As the number of interface components increases, the need to develop and manage them becomes progressively more onerous.

* Interface dialogue management. Any industrial-strength UI is more than a collection of graphical components. It is an organization that simplifies the presentation of the components. The design needs to deal with the purpose and relationship of the components, a task that is usually the domain of a dialogue specification language.

* Data acquisition and archival. While this is not directly an interface task, it is correlated with it. Any sophisticated application domain will need to handle data management. What is the data you need and when do you need it? Can you gather it and present it in a timely fashion?

* Data description. In order for graphical representation to work, you must relate data to graphics. For complicated and large data sets, developers would prefer to avoid duplicate entry of information. They do not want to specify the data with a CASE tool and then have to enter the same information into a UIMS.

* Relating data to graphics. As the number of data and graphical elements increases, the number of possible relationships increases as the product of the number of elements. In reality, the number is smaller because of inherent problem domain constraints, but the problem is nevertheless there (see the sketch following this list).
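
As one illustration of the last two points, the binding between data and graphical elements can itself be made an explicit table that drives display updates, so the data description is entered once and reused. The structures and names below are our own sketch, not a description of any existing tool.

    /* Sketch: an explicit table relating named data points to graphical
     * elements, so the data description drives the display update.        */
    #include <stdio.h>

    struct datum   { const char *name; double value; };
    struct element { const char *widget; };
    struct binding { struct datum *d; struct element *e; };

    static void refresh(struct binding *b, int n)
    {
        int i;
        for (i = 0; i < n; i++)   /* a real tool would redraw the widget here */
            printf("update %-12s from %-12s = %6.2f\n",
                   b[i].e->widget, b[i].d->name, b[i].d->value);
    }

    int main(void)
    {
        struct datum   temp  = { "boiler_temp",  87.5 };
        struct datum   press = { "boiler_press",  2.1 };
        struct element dial  = { "temp_dial" };
        struct element bar   = { "press_bar" };
        struct binding table[] = { { &temp, &dial }, { &press, &bar } };

        refresh(table, 2);
        temp.value = 91.0;        /* new data arrives...                      */
        refresh(table, 2);        /* ...and the same table drives the update  */
        return 0;
    }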

Since expecting a single development team to address all these issues is usually impractical, most of the time applications will be required to integrate several tools and strategies.

Reusable Interface Components. A few years ago the issue of reusability started to be promoted as a way of addressing the software crisis. One such proposal was for software ICs [4], which would be analogous to hardware ICs in that product development would involve the selection and integration of third-party components. This would distribute tested technology in bite-size components. This was supposed to be an improvement on the existing situation, which was presumably more akin to software board-sets. The approach did not take hold, because the integrating infrastructure was not in place. Hardware ICs are not viable if users do not share certain conventions, such as standard voltages, power consumption, and heat dissipation requirements. In the same way, software ICs require conventions constraining the ways they will be glued together.

We believe the time is ripe for the software IC approach to work in a specific domain: graphical interface components for Xt-based applications. Here, the X Intrinsics model (Xt) defines the infrastructure and the conventions governing its use. Good or bad, the model has been widely disseminated, and a consensus is evolving as to its proper use. Potential interface components include soft instruments, graphs, 3D data visualization, layout (container) widgets, images, and video.

Widespread availability of such widgets raises another scale-related problem: how do customers find the widgets they need? With larger, more monolithic products, a linear search through the ads in a few industry-focused magazines is a reasonable strategy. When the number of products rises by a factor of 10 or 100, this is no longer feasible. New marketing approaches, such as catalogs, preferably vendor-independent, become necessary. Another approach, mentioned earlier, is to collect widgets into families or to provide tools for creating variations on a theme, allowing developers to take the last step of customizing the widget to the requirements. See [13] for examples of other usability problems with reuse in current programming environments.

Too Many Features, What Do I Do Next?

The more features and options there are in a product, the more likely users are to choose suboptimal approaches. Tool developers naturally try to provide as many options as possible to their customers, but this creates classes of users: those who know how to use the options and those who do not. Effective use of any power tool requires learning it: the more features it has, the more difficult it is to learn. Since the market pulls products in the direction of adding features, it pulls products toward more complexity. Every feature has a cost, even if it is unused or ignored. Thus, as the number of features scales up, the cognitive load on the users (in this case, the UI developers) increases, and the users become progressively less likely to use the product effectively.

Rule-Based Design Aids. Artificial intelligence has matured to the point that rule-based decision aids are practical and can be viably integrated with other products. These can effectively reduce the number of options available to users by constraining them to conform to design guidelines. A well-known example is the rules governing chip design, which have been integrated into ECAD packages. This greatly reduces the set of possible actions a user may take by clustering the potential actions into groups, related by their conformance to design guidelines. Another example of this approach is demonstrational interfaces [10], which infer subsequent actions from initial actions. This is especially useful in cases in which the user would like to perform some action repetitively. The system infers from the first two cases that there is a repetitive component, prompts the user to confirm it, and gives the user the option of using a shortcut.
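
A toy sketch of the inference step (not Myers's algorithm, just the flavor of the idea): after two demonstrated placements, the system computes the step between them and proposes the continuation of the series for the user to confirm.

    /* Toy sketch of a demonstrational shortcut: given two example positions,
     * infer the step and propose the next placements for confirmation.      */
    #include <stdio.h>

    int main(void)
    {
        int x1 = 10, y1 = 20;          /* first demonstrated placement  */
        int x2 = 10, y2 = 50;          /* second demonstrated placement */
        int dx = x2 - x1, dy = y2 - y1;
        int i;

        printf("Detected a repetitive placement (step %d,%d).\n", dx, dy);
        printf("Continue the series? Proposed next positions:\n");
        for (i = 1; i <= 3; i++)       /* the user would confirm or cancel */
            printf("  (%d,%d)\n", x2 + i * dx, y2 + i * dy);
        return 0;
    }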

"I Have a Need for Speed"(1)

We said earlier that features have a cost, even if they are not used. This is true not only from a cognitive perspective but also from a performance perspective. Features slow systems down, even if they are not used, because the code must be able to handle their potential use. There is a tendency to minimize the concern for performance by noting that CPU performance is increasing at a dramatic rate and we have power to spare. However, remember Parkinson's Law, "Work expands to fill the time available," the truth of which has been verified countless times in the software industry. Users are extremely sensitive to poor performance in an interface. Interactive tools that are responsive are preferable to tools that are flashy but slow.

Distributed Processing and Multiapplication Interoperability. One way to improve performance is to distribute work, dedicating more resources to the more time-critical components of the system. The X Intrinsics already do this by handling the immediate feedback to user input. More generally, this leads to applications becoming collections of distributed services, each with its own set of time constraints. Since these services can be provided by multiple vendors, interoperability becomes an issue. Can the services stay out of one another's way? Can they communicate information effectively? The Object Request Broker (ORB) promoted by the Object Management Group [20] is an important attempt to standardize the passing of objects among disparate applications and services. Other enabling technologies include RPCs, the client/server model, and Microsoft's Object Linking and Embedding (OLE).

Another way to distribute the resources is to create specialized hardware for standard components. For example, the X standard made it possible to create X-terminals, which can off-load some of the graphics work.

Software Development and Government Regulation

Until recently, government regulation was not much of a factor in UI development. We mention it as an example of how an apparently irrelevant factor can become relevant.

There are a number of legal issues, relating to liability and intellectual property, which have yet to be resolved. The current debate around widening patent protection to cover software may significantly alter the nature of programming and interface development in the years to come. The Economist [5], in its inimitable style, summarizes the effect of this widening protection.

... the most immediate benefit goes to those who do the arguing rather than to those who do the innovating. There is lots of arguing left to do.

This obviously acts as a limiting factor in current user interface development strategies, because it increases the cost of design decisions. Designs must be checked against a growing list of prior technology to make sure they do not violate legal restrictions. Since software development can involve thousands of design decisions, any change that significantly increases the cost of those decisions will have an adverse effect.

The first response of the well-heeled to this situation is to start arguing. We see this in the recent growth in the number of software patents, many of which are considered by computer professionals to be specious.

The response from the rest of us is to look to outside technology suppliers in any questionable area. For instance, the threat of look-and-feel copyright lawsuits has been a motivating force behind the standardization of GUIs. In this way, developers distribute the legal risk for aspects of the development that are beyond their areas of expertise. This will promote software reuse and application interoperability, which we mentioned earlier.

Summary

As computer technology has advanced to support more complicated applications, the UIs have had to scale to keep pace. Any time a strategy or approach scales up, it runs the risk of reaching practical limits, because of some badly behaved components of its cost function. We have seen this throughout the history of UI development and we are facing it again. When we try to scale UI tools to model the application domain, we encounter new limits. Here we have described some of these limits and possible shifts in strategy to address those limits. We summarize these strategic shifts in Table 2.

The strategic shifts we discussed were indirect manipulation, multiple isomorphic tool domains, making WYSIWYG a secondary representation, using reusable, third-party interface components, and rule-based design aids. This is not an exhaustive list and perhaps does not include the most important items; it represents our best guess. As with all attempts at prognostication, it may overlook significant factors.

Nevertheless, we feel the principles illustrated are universal: when problems scale up, the strategies for dealing with them must change. Moreover, there are a small number of metastrategies for changing those strategies: clustering, hierarchy, and remapping--strategies that have been proved repeatedly throughout the history of software development.

(1) This is actually a quote of a quote: David Mandelkern quoting Tom Cruise in the movie Top Gun.

References

[1.] Baecker, R.M. and Buxton, W. Readings in Human-Computer Interaction. Morgan Kaufman, Inc., Los Altos, Calif., 1987.

[2.] Bass, L., et al. The arch model: Seeheim revisited.

[3.] Bass, L. and Coutaz, J. Developing Software for the User Interface. Addison-Wesley, Reading, Mass., 1991.

[4.] Cox, B. Object-Oriented Programming: An Evolutionary Approach. Addison-Wesley, Reading, Mass., 1986.

[5.] The Economist. Policing Thoughts, Aug. 22, 1992.

[6.] Gelernter, D. Mirror Worlds. Oxford University Press, Oxford, 1991.

[7.] Green, M.W. The design of graphical interfaces. Computer Systems Research Institute Tech. Rep. CSRI-170, University of Toronto, Canada, 1985.

[8.] Kuhn, T. The Structure of Scientific Revolutions. University of Chicago Press, Chicago, 1962.

[9.] Morse, A., Visualizing Near Realtime. Sun Tech J. (July/Aug. 1990).

[10.] Myers, B.A. Demonstrational interfaces: A step beyond direct manipulation. IEEE Comput. (Aug. 1992).

[11.] Myers, B.A. Report on the CHI'91 Workshop on Languages for Developing User Interfaces. ACM SIGPLAN Notices 27, 12 (Dec. 1992).

[12.] Nielsen, J., Freher, I. and Nymand, H.O. The learnability of HyperCard as an object-oriented programming system. Behavior Inf. Tech. 10, 2 (March/Apr. 1991), 111-120.

[13.] Nielsen, J. and Richards, J.T. The experience of learning and using Smalltalk. IEEE Softw. 6, 3 (May 1989), 73-77.

[14.] Norman, D.A. The Design of Everyday Things. Doubleday, New York, 1988.

[15.] Open Software Foundation, OSF/Motif Programmers Reference Manual, Revision 1.1, Open Software Foundation, Cambridge, Mass., 1991.

[16.] Schafer, D. HyperTalk Programming. Hayden Books, Indianapolis, 1988.

[17.] Scheifler, R.W. and Gettys, J. The X Window System. ACM Trans. Graph. 5, 2 (Apr. 1986), 79-109.

[18.] Shneiderman, B. Direct Manipulation: A Step Beyond Programming Languages. IEEE Comput. (Aug. 1983).

[19.] Shneiderman, B. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Reading, Mass., 1987.

[20.] Soley, R.M. Object Management Architecture Guide. Object Management Group, OMG TC Document 90.9.1, Framingham, Mass., 1990.