Article to appear in ACM Computing Surveys 28(4), December 1996. Copyright © 1996 by the Association for Computing Machinery, Inc. See the permissions statement below.


Strategic Directions in
Human Computer Interaction


Revised; Final version of November 13, 1996

Edited by*:
Brad Myers, Carnegie Mellon University, bam@cs.cmu.edu
Jim Hollan, University of New Mexico, hollan@cs.unm.edu
Isabel Cruz, Tufts University, ifc@cs.brown.edu

*This article is based on the results of the Human Computer Interaction Working Group of the ACM Workshop on Strategic Directions in Computing Research, and was authored by:
Steve Bryson, NASA Ames Research Center, Dick Bulterman, CWI, Tiziana Catarci, University of Rome, Wayne Citrin, University of Colorado Boulder, Isabel Cruz, Tufts University (co-chair and editor), Ephraim Glinert, RPI, Jonathan Grudin, University of California Irvine, Jim Hollan, University of New Mexico (editor), Yannis Ioannidis, University of Wisconsin-Madison, Rob Jacob, Tufts University, Bonnie John, Carnegie Mellon University, David Kurlander, Microsoft Research, Brad Myers, Carnegie Mellon University (co-chair and editor), Dan Olsen, Carnegie Mellon University, Randy Pausch, University of Virginia, Stuart Shieber, Harvard University, Ben Shneiderman, University of Maryland College Park, John Stasko, Georgia Tech, Gary Strong, NSF, Kent Wittenburg, Bellcore.


Abstract: Human Computer Interaction (HCI) is the study of how people design, implement, and use interactive computer systems, and how computers affect individuals, organizations, and society. HCI is a research area of increasingly central significance to computer science, other scientific and engineering disciplines, and an ever expanding array of application domains. This more prominent role follows from the widely perceived need to expand the focus of computer science research beyond traditional hardware and software issues to attempt to better understand how technology can more effectively support people in accomplishing their goals.

At the same time that a human-centered approach to system development is of growing significance, factors conspire to make the design and development of systems even more difficult than in the past. This increased difficulty follows from the disappearance of boundaries between applications as we start to support people's real activities; between machines as we move to distributed computing; between media as we expand systems to include video, sound, graphics, and communication facilities; and between people as we begin to realize the importance of supporting organizations and group activities.

This report summarizes selected strategic directions in human computer interaction research, sets them within an historical context of research accomplishments, and tries to convey not only the significance but the excitement of the field.

Categories and Subject Descriptors: H.1.2 [Information Systems]: Human Factors; H.5 [Information Systems]: Information Interfaces and Presentation;

General Terms: Human Factors




1. Retrospective

Human-Computer Interaction (HCI) is the study of how people design, implement, and use interactive computer systems, and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer's actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.

This report describes the historical and intellectual foundations of HCI, and then summarizes selected strategic directions in human-computer interaction research. Previous important reports on HCI directions include the results of the 1991 [Sibert 93] and 1994 [Strong 94] NSF studies on HCI in general, and the 1994 NSF study on the World-Wide Web [Foley 94].

1.1. Importance of HCI

Users expect highly effective and easy-to-learn interfaces, and developers now realize the crucial role the interface plays. Surveys show that over 50% of the design and programming effort on projects is devoted to the user interface portion [Myers 92]. The human-computer interface is critical to the success of products in the marketplace, as well as the safety, usefulness, and pleasure of using computer-based systems.

There is substantial empirical evidence that employing the processes, techniques, and tools developed by the HCI community can dramatically decrease costs and increase productivity. For example, one study [Karat 90] reported that usability engineering [Nielsen 93b] produced savings of $41,700 in a small application used by 23,000 marketing personnel, and of $6,800,000 for a large business application used by 240,000 employees. Savings were attributed to decreased task time, fewer errors, greatly reduced user disruption, reduced burden on support staff, elimination of training, and avoidance of changes in software after release. Another analysis estimates the mean benefit of finding each usability problem at $19,300 [Mantei 88]. A usability analysis of a proposed workstation saved a telephone company $2 million per year in operating costs [Gray 93]. A mathematical model based on eleven studies suggests that using software that has undergone thorough usability engineering will save a small project $39,000, a medium project $613,000, and a large project $8,200,000 [Nielsen 93a]. By estimating all the costs associated with usability engineering, another study found that the benefits can be up to 5000 times the cost [Nielsen 93a].
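
To give a feel for the kind of model mentioned above, the commonly cited form of the Nielsen-Landauer problem-discovery curve estimates the number of usability problems uncovered by i independent evaluations as N(1 - (1 - lambda)^i). The sketch below uses assumed values for N and lambda purely for illustration; they are not figures taken from the cited studies.

    # Illustrative sketch of a usability problem-discovery curve of the form
    # found(i) = N * (1 - (1 - lam) ** i), as popularized by Nielsen and Landauer.
    # N and LAM below are assumptions for illustration, not data from [Nielsen 93a].

    def problems_found(n_total, lam, evaluators):
        """Expected number of problems uncovered by `evaluators` independent tests."""
        return n_total * (1 - (1 - lam) ** evaluators)

    N = 40      # assumed total number of usability problems in the interface
    LAM = 0.31  # assumed probability that one evaluation finds a given problem
    for i in (1, 3, 5, 10):
        print(f"{i:2d} evaluations -> {problems_found(N, LAM, i):5.1f} problems expected")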

There are also well-known catastrophes that have resulted from not paying enough attention to the human-computer interface. For example, the complicated user interface of the Aegis tracking system was a contributing cause of the erroneous downing of an Iranian passenger plane, and the USS Stark's inability to cope with Iraqi Exocet missiles was partly attributed to the human-computer interface [Neumann 91]. Problems with the interfaces of military and commercial airplane cockpits have been named as a likely cause of several crashes, including the Cali crash of December 1995 [Ladkin 96]. Sometimes the implementation of the user interface can be at fault: a number of people died from radiation overdoses caused in part by faulty cursor-handling code in the Therac-25 [Leveson 93].

Effective user interfaces to complex applications are indispensable. Recognition of their importance in other disciplines is increasing, and with it the interdisciplinary collaboration needed to fully address many challenging research problems. For example, for artificial intelligence technologies such as agents, speech, and learning and adaptive systems, effective interfaces are fundamental to general acceptance. HCI subdisciplines such as information visualization and algorithm animation are used in computational geometry, databases, information retrieval, parallel and distributed computation, electronic commerce and digital libraries, and education. HCI requirements resulting from multimedia, distributed computing, real-time graphics, multimodal input and output, ubiquitous computing, and other new interface technologies shape the research problems currently being investigated in disciplines such as operating systems, databases, and networking. New programming languages such as Java result from the need to program new types of distributed interfaces on multiple platforms. As more and more of software designers' time and code are devoted to the user interface, software engineering must increase its focus on HCI.

1.2. History

HCI research has been spectacularly successful, and has fundamentally changed computing. Just one example is the ubiquitous graphical interface. Another example is that virtually all software written today employs user interface toolkits and interface builders. Even the spectacular growth of the World-Wide Web is a direct result of HCI technology: applying hypertext technology to browsers allows one to traverse a link across the world with a click of the mouse. It is interface improvements more than anything else that triggered this explosive growth.

In this section we give a brief summary of the research that underlies a few selected HCI advances. By "research," we mean exploratory work at universities and government and industrial research labs (such as Xerox PARC) that is not directly related to products. Figure 1 shows a summary time line. Of course, deeper analysis would reveal much interaction between these three activity streams. For a more complete history, see [Myers 96a]. It is important to appreciate that years of research, typically government-funded, are involved in creating and making these technologies ready for widespread use. The same will be true for the HCI technologies that will provide the interfaces of tomorrow.


Figure 1: Summary time-lines for some of the technologies discussed in this article.


Direct Manipulation of Graphical Objects: The now ubiquitous direct manipulation interface was first demonstrated by Ivan Sutherland in SketchPad [Sutherland 63], the basis of his 1963 MIT PhD thesis. SketchPad supported manipulation of objects using a light-pen, including grabbing objects, moving them, changing their size, and using constraints. It contained the seeds of myriad important interface ideas. The system was built at Lincoln Labs with support from the Air Force and NSF. William Newman's Reaction Handler [Newman 68], created at Imperial College, London (1966-67), provided direct manipulation of graphics and introduced "Light Handles," a form of graphical potentiometer that was probably the first "widget." Another early system was AMBIT/G (implemented at MIT's Lincoln Labs, 1968, ARPA funded). It employed, among other interface techniques, iconic representations, gesture recognition, dynamic menus, selection of icons by pointing, and moded and mode-free styles of interaction. Smith coined the term "icons" in his 1975 Stanford PhD thesis on Pygmalion [Smith 77] (funded by ARPA and NIMH), and later popularized icons as one of the chief designers of the Xerox Star [Smith 82]. Many of the interaction techniques popular in direct manipulation interfaces, such as how objects and text are selected, opened, and manipulated, resulted from research at Xerox PARC in the 1970s. The concept of direct manipulation interfaces for everyone was envisioned by Alan Kay of Xerox PARC in a 1977 article about the "Dynabook" [Kay 77]. The first commercial systems to make extensive use of direct manipulation were the Xerox Star (1981) [Smith 82], the Apple Lisa (1982) [Williams 83], and the Macintosh (1984) [Williams 84]. Ben Shneiderman at the University of Maryland coined the term "Direct Manipulation" in 1982, identified its components, and gave it psychological foundations [Shneiderman 83]. The concept was elaborated by other researchers (e.g., [Hutchins 85]).

Windows: Multiple tiled windows were demonstrated in Engelbart's NLS in 1968. Early research at Stanford on systems like COPILOT (1974) [Swinehart 74] and at MIT with the EMACS text editor (1974) also demonstrated tiled windows. Alan Kay proposed the idea of overlapping windows in his 1969 University of Utah PhD thesis [Kay 69] and they first appeared in his 1974 Smalltalk system [Goldberg 79] at Xerox PARC, and soon after in the InterLisp system [Teitelman 79]. One of the first commercial uses of windows was on LMI and Symbolics Lisp Machines (1979), which grew out of MIT AI Lab projects. The main commercial systems popularizing windows were the Xerox Star (1981), the Apple Lisa (1982), and most importantly the Apple Macintosh (1984). Microsoft's original window managers were tiled, but eventually were overlapping. The X Window System, a current international standard, was developed at MIT in 1984 [Scheifler 86]. For a survey of window managers, see [Myers 88].

Hypertext: The idea of hypertext is credited to Vannevar Bush's famous MEMEX idea from 1945 [Bush 45], but his proposal to implement it using microfilm was never tried. Engelbart's NLS system [Engelbart 68] at the Stanford Research Institute in 1965 made extensive use of linking (funding from ARPA, NASA, and Rome ADC). The "NLS Journal," one of the first on-line journals, included full linking of articles. Ted Nelson coined the term "hypertext" in 1965 [Nelson 65]. The Hypertext Editing System, jointly designed by Andy van Dam, Ted Nelson, and two students at Brown University (funding from IBM), was distributed extensively [van Dam 69]. The ZOG project (1977) from CMU was another early hypertext system, funded by ONR and DARPA [Robertson 77]. Ben Shneiderman's Hyperties was the first system in which highlighted items in the text could be clicked on to go to other pages (1983, Univ. of Maryland) [Koved 86]. HyperCard from Apple (1988) significantly helped to bring the idea to a wide audience. There have been many other hypertext systems through the years. The spectacular growth of the World-Wide Web is a direct result of Tim Berners-Lee's application of hypertext as the interface to mostly existing capabilities of the Internet. This work was done in 1990 while he was at the government-funded European Particle Physics Laboratory (CERN).

UIMSs and Toolkits: The first User Interface Management System (UIMS) was William Newman's Reaction Handler [Newman 68], created at Imperial College, London (1966-67, with SRC funding). Most of the early work took place at universities (University of Toronto with Canadian government funding; George Washington University with NASA, NSF, DOE, and NBS funding; Brigham Young University with industrial funding). The term UIMS was coined by David Kasik at Boeing (1982) [Kasik 82]. Early window managers such as Smalltalk (1974) and InterLisp, both from Xerox PARC, came with a few widgets, such as popup menus and scrollbars. The Xerox Star (1981) was the first commercial system to have a large collection of widgets and to use dialog boxes. The Apple Macintosh (1984) was the first to actively promote its toolkit for use by other developers to enforce a consistent interface. An early C++ toolkit was InterViews [Linton 89], developed at Stanford (1988, industrial funding). Much of the current research is being performed at universities, including Garnet [Myers 90] and Amulet [Myers 96b] at CMU (ARPA funded), MasterMind [Neches 93] at Georgia Tech (ARPA funded), and Artkit [Hudson 96] at Georgia Tech (funding from NSF and Intel).

There are, of course, many other examples of HCI research that should be included in a complete history, including work that led to drawing programs, paint programs, animation systems, text editing, spreadsheets, multimedia, 3D, virtual reality, interface builders, event-driven architectures, usability engineering, and a very long list of other significant developments [Myers 96a]. Although our brief history here has had to be selective, what we hope is clear is that there are many years of productive HCI research behind our current interfaces and that it has been research results that have led to the successful interfaces of today.

For the future, HCI researchers are developing interfaces that will greatly facilitate interaction and make computers useful to a wider population. These technologies include: handwriting and gesture recognition, speech and natural language understanding, multiscale zoomable interfaces, "intelligent agents" to help users understand systems and find information, end-user programming systems so people can create and tailor their own applications, and much, much more. New methods and tools promise to make the process of developing user interfaces significantly easier but the challenges are many as we expand the modalities that interface designers employ and as computing systems become an increasingly central part of virtually every aspect of our lives.

As HCI has matured as a discipline, a set of generally agreed-upon principles has emerged that is taught in undergraduate and graduate courses on HCI (e.g., see [Greenberg 96]). These principles should be taught to every CS undergraduate, since virtually all programmers will be involved in designing and implementing user interfaces during their careers. These principles are described in other publications, such as [Hewett 92], and include task analysis, user-centered design, and evaluation methods.

1.3. Foundations of the Field

The intellectual foundations of HCI derive from a variety of fields: computer science, cognitive psychology, social psychology, perceptual psychology, linguistics, artificial intelligence, and anthropology. Decades of research in perceptual and cognitive psychology were distilled by pioneers in HCI, beginning in the 1960s (e.g., [Shackel 69]), and several workers have explored the relationship between these sciences and the demands of design (e.g., [Barnard 91] [Landauer 91]).

One influential early effort was directed at producing an "engineering model of human performance" able to make quantitative predictions that can contribute to design (the Model Human Processor [Card 83]). Drawing also on research into human problem solving [Ernst 69] [Newell 72], this led to the GOMS family of analysis techniques that make quantitative predictions of skilled performance. Extensions and refinements of these models have continued to draw on basic psychological theories [Olson 90]. In addition, the needs of HCI have given rise to new psychological theories, e.g., Polson and Lewis's theory of learning through exploration that predicts behavior in walk-up-and-use interfaces and other applications where exploration is the norm [Polson 90].
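
To give the flavor of such engineering models, the sketch below applies Keystroke-Level-Model-style operator times (the KLM is one member of the GOMS family) to a hypothetical point-and-type task. The operator durations are approximate values commonly quoted in the HCI literature, and the task decomposition is invented for illustration.

    # Minimal Keystroke-Level Model (KLM) style estimate, one member of the GOMS
    # family. Operator times are approximate textbook values; the task below is
    # a hypothetical example, not an analysis from the cited work.

    OPERATOR_TIME = {           # seconds, approximate
        "K": 0.28,  # press a key (average typist)
        "P": 1.10,  # point with the mouse to a target
        "B": 0.10,  # press or release a mouse button
        "H": 0.40,  # home hands between keyboard and mouse
        "M": 1.35,  # mental preparation
    }

    def predict_time(operators):
        """Sum operator times for a sequence such as 'M P B' (decide, point, click)."""
        return sum(OPERATOR_TIME[op] for op in operators.split())

    # Hypothetical task: decide, point at a field, click, then type a 4-letter name.
    task = "M P B M K K K K"
    print(f"Predicted skilled execution time: {predict_time(task):.2f} s")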

Donald Norman and his colleagues applied knowledge from the psychology of perception, attention, memory, and motor control to human-computer interaction and design in a series of influential papers and books (e.g., [Norman 86] [Norman 90]). The think-aloud protocol technique [Ericsson 84], developed in cognitive psychology to assist human problem solving research [Newell 72], influenced early HCI work and has become a valuable usability engineering method [Nielsen 93b, p. 195]. Requirements-setting for HCI uses techniques from anthropology (e.g., ethnographic techniques, [Blomberg 93]). Evaluation uses experimental techniques long established in experimental psychology. Social psychology contributes methods for discourse analysis (e.g. [Clark 85]), interviewing, and questionnaires. Using methods researched and validated in other scientific fields allows HCI to move quickly to robust, valid results that are applicable to the more applied area of design.

The intellectual foundations of HCI also include the development of object-oriented programming. This style of programming comes from early work on Simula but was further developed and refined in Smalltalk as a natural way to implement user interfaces [Kay 77]. Early HCI software work drew on compiler theories such as the conceptual/semantic/syntactic/lexical model and on parser technologies. Constraint systems and solvers were developed to ease UI implementation in systems ranging from SketchPad [Sutherland 63] and ThingLab [Borning 81] to Amulet [Myers 96b] and Artkit [Hudson 96].
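
As a hint of why constraints ease interface implementation, the toy fragment below sketches a one-way constraint that automatically recomputes a dependent value when the values it depends on change. It is an illustration in the spirit of such systems, not code from any of them.

    # Toy one-way constraint in the spirit of constraint-based UI toolkits such as
    # Garnet or Amulet: a dependent value is recomputed automatically whenever a
    # value it depends on changes. Illustration only, not toolkit code.

    class Slot:
        def __init__(self, value=None):
            self.value = value
            self._dependents = []

        def set(self, value):
            self.value = value
            for reevaluate in self._dependents:
                reevaluate()           # update anything that depends on this slot

    class Constraint:
        """Keeps target.value equal to formula(*sources) whenever a source changes."""
        def __init__(self, target, formula, *sources):
            self.target, self.formula, self.sources = target, formula, sources
            for s in sources:
                s._dependents.append(self.evaluate)
            self.evaluate()

        def evaluate(self):
            self.target.value = self.formula(*(s.value for s in self.sources))

    # Keep a label centered over a rectangle: label_x follows rect_x + rect_w / 2.
    rect_x, rect_w, label_x = Slot(10), Slot(80), Slot()
    Constraint(label_x, lambda x, w: x + w / 2, rect_x, rect_w)
    rect_x.set(30)                     # moving the rectangle updates the label
    print(label_x.value)               # -> 70.0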

Current widely used interaction techniques, such as how menus and scroll bars work, have been refined through years of research and experimentation. We now know how to provide effective control using the mouse and keyboard for 2D interfaces.

2. Prospective

Although we are encouraged by past research success in HCI and excited by the potential of current research, we want to emphasize how central a strong research effort is to the future practical use of computational and network technologies. For example, popular discussion of the National Information Infrastructure (NII) envisions the development of an information marketplace that can enrich people's economic, social, cultural, and political lives. The success of such an information marketplace, and indeed of many other applications, requires solutions to a series of significant research issues that all revolve around better understanding how to build effective human-centered systems. The following sections discuss selected strategic themes, technology trends, and opportunities to be addressed by HCI research.

2.1. Strategic Themes

If one steps back from the details of current HCI research, a number of themes are visible. Although we cannot do justice here to these or to other themes that arose in our workshop discussions, it is clear that HCI research has now started to crystallize as a critical discipline, intimately involved in virtually all uses of computer technologies and decisive to successful applications. Here we expand on just a few themes:

  • Universal Access to Large and Complex Distributed Information: As the "global information infrastructure" expands at unprecedented rates, there are dramatic changes taking place in the kind of people who access the available information and the types of information involved. Virtually all entities (from large corporations to individuals) are engaged in activities that increasingly involve accessing databases, and their livelihood and/or competitiveness depend heavily on the effectiveness and efficiency of that access. As a result, the potential user community of database and other information systems is becoming startlingly large and rather nontechnical, with most users bound to remain permanent novices with respect to many of the diverse information sources they can access. It is therefore urgently necessary and strategically critical to develop user interfaces that require minimal technical sophistication and expertise by the users and support a wide variety of information-intensive tasks [Silberschatz 96].

    Information-access interfaces must offer great flexibility in how queries are expressed and how data are visualized; they must be able to deal with several new kinds of data, e.g., multimedia, free text, documents, the Web itself; and they must permit several new styles of interaction beyond the typical two-step query-specification/result-visualization loop, e.g., data browsing, filtering, and dynamic and incremental querying (a small sketch of dynamic query filtering appears after this list). Fundamental research is required on visual query languages, user-defined and constraint-based visualizations, visual metaphors, and generic and customizable interfaces, and advances seem most likely to come from collaborations between the HCI and database research communities.

    Information-discovery interfaces must support a collaboration between humans and computers, e.g., for data mining. Because of our limited memory and cognitive abilities, the growing volume of available information has increasingly forced us to delegate the discovery process to computers, greatly underemphasizing the key role played by humans. Discovery should instead be viewed as an interactive process in which the system gives users the support necessary to analyze terabytes of data, and users give the system the feedback necessary to better focus its search. Fundamental issues for the future include how best to allocate tasks between people and computers, how to create systems that adapt to different kinds of users, and how to support the changing context of tasks. The system could also suggest discovery techniques and visualizations appropriate to the characteristics of the data, and help integrate what are currently separate tools into a homogeneous environment (see [Brachman 96] [Keim 95]).

  • Education and Life-Long Learning: Computationally assisted access to information has important implications for education and learning as evidenced in current discussions of "collaboratories" and "virtual universities." Education is a domain that is fundamentally intertwined with human-computer interaction. HCI research includes both the development and evaluation of new educational technologies such as multimedia systems, interactive simulations, and computer-assisted instructional materials. For example, consider distance learning situations involving individuals far away from schools. What types of learning environments, tools, and media effectively deliver the knowledge and understanding that these individuals seek? Furthermore, what constitutes an effective educational technology? Do particular media or types of simulations foster different types of learning? These questions apply not only to K-12 and college students, but also to adults through life-long learning. Virtually every current occupation involves workers who encounter new technologies and require additional training. How can computer-assisted instructional systems engage individuals and help them to learn new ideas? HCI research is crucial to answering these important questions.

  • Electronic Commerce: Another important theme revolves around the increasing role of computation in our economic life and highlights central HCI issues that go beyond usability to concerns with privacy, security, and trust. Although, as with most Internet technologies, there is currently much hyperbole, over the next decade the commercialization of the Internet may mean that digital commerce replaces much traditional commerce [Margolis 96]. The Internet makes possible services that could potentially be quite adaptive and responsive to consumer wishes. Digital commerce may require dramatic changes to internal processes as well as the invention of new processes [Lynch 96]. For digital commerce to be successful, the technology surrounding it will have to be affordable, widely available, simple to use, and secure. Interface issues are, of course, key.

  • End-User Programming: An important reason that the WWW has been so successful is that everyone can create his or her own pages. With the advent of WYSIWYG HTML page-editing tools, this will become even easier. However, "active" pages that use forms, animations, or computation still require a professional programmer to write the necessary code in a programming language like Perl or Java. The situation is the same on the desktop, where applications are becoming increasingly programmable (e.g., by writing Visual Basic scripts for Microsoft Word), but only by those with training in programming. Applying the principles and methods of HCI to the design of programming languages and programming systems for end-users should bring to everyone the ability to program Web pages and desktop applications.

    End-user programming will be increasingly important in the future. No matter how successful interface designers are, systems will still need to be customized to the needs of particular users. Although there will likely be generic structures that can be shared, for example in an email filtering system, such systems and agents will always need to be tailored to meet personal requirements. The use of various scripting languages to meet such needs is widespread, but better interfaces for end-user programming, and a better understanding of it, are needed.

  • Information Visualization: This area focuses on graphical mechanisms designed to show the structure of information and improve the cost structure of access. Previous approaches have studied novel visualizations for information, such as the "Information Visualizer" [Card 91] [Robertson 93], history-enriched digital objects for displaying graphical abstractions of interaction history [Hill 94], and dotplots for visualizing self-similarity in millions of lines of text and code [Church 93]. Other approaches provide novel techniques for displaying data, e.g., dynamic queries [Shneiderman 94], visual query languages [Cruz 94], zoomable interfaces for supporting multiscale interfaces [Bederson 96], and lenses to provide alternative views of information [Bier 93]. Another branch of research is studying automatic selection of visualizations based on properties of the data and the user's tasks (e.g., [Mackinlay 86] [Roth 94]).

    The importance of information visualization will increase as people gain access to larger and more diverse sources of information (e.g., digital libraries, large databases), which are becoming universally available with the WWW. Visualizing the WWW itself and other communication networks is also an important aim of information visualization systems (see, for example, [Catarci 96]). The rich variety of information may be handled by giving users the ability to tailor the visualization to a particular application, to the size of the data set, or to the device (e.g., 2D vs. 3D capabilities, large vs. small screens). Research challenges include making the specification, exploration, and evolution of visualizations interactive and accessible to a variety of users. Tools should be designed that support a range of tailoring capabilities, from specifying visualizations from scratch to making minor adaptations of existing visualizations. Combining automatic generation of visualizations with user-defined approaches is another interesting open problem, for example when the user-defined visualization is underconstrained.

    One fundamental issue for information visualization is how to characterize the expressiveness of a visualization and judge its adequacy for representing a data set. For example, the "readability" of a visualization of a graph may depend on (often conflicting) aesthetic criteria, such as the minimization of edge crossings and of the area of the graph, and the maximization of symmetries [DiBattista 94] (a sketch of one such readability measure, the edge-crossing count, appears after this list). For other types of visualization, the criteria are quite ad hoc. Therefore, more foundational work is needed to establish general principles (see, for example, [FADIVA 96]).

  • Computer-Mediated Communication: Examples of computer-mediated communication range from work that led to extraordinarily successful applications such as email to that involved in newer forms of communication via computers, such as real-time video and audio interactions. Research in Computer Supported Cooperative Work (CSCW) confronts complex issues associated with integration of several technologies (e.g., telephone, video, 3D graphics, cable, modem, fax, email), support for multi-person activities (which have particularly difficult interface development challenges), and issues of security, privacy, and trust.

    The unpredicted shift to the Internet, intranets, and the World-Wide Web has ended a period in which attention centered on the interaction between an individual and a computer system, with relatively little regard for group and organizational contexts. Computer-mediated human communication raises a host of new interface issues. Additional challenges arise in coordinating the activities of computer-supported group members, either by providing shared access to common on-line resources and letting people structure their work around them, or by formally representing work processes to enable a system to guide the work. The CSCW subcommunity of human-computer interaction has grown rapidly, drawing from diverse disciplines. Social theory and social science, management studies, communication studies, and education are among the relevant areas of knowledge and expertise. Techniques drawn from these areas, including ethnographic approaches to understanding group activity, have become important adjuncts to more familiar usability methods.

    Mounting demands for more function, greater availability, and interoperability affect requirements in all areas. For example, the great increase in accessible information shifts the research agenda toward more sophisticated information retrieval techniques. Approaches to dealing with the new requirements through formal or de facto standards can determine where research is pointless, as well as where it is useful. As traditional applications are integrated into the Web, social aspects of computing are extended.
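
To ground the dynamic and incremental querying mentioned in the list above, the following sketch shows the core of a dynamic query filter in the spirit of [Shneiderman 94]: each adjustment of a range "slider" immediately re-filters the visible result set. The records and attribute names are invented for illustration.

    # Sketch of dynamic query filtering: every adjustment of a range filter
    # immediately re-computes the displayed result set. Data are invented.

    RECORDS = [
        {"title": "House A", "price": 120_000, "rooms": 3},
        {"title": "House B", "price": 245_000, "rooms": 5},
        {"title": "House C", "price": 180_000, "rooms": 4},
    ]

    class RangeFilter:
        def __init__(self, attribute, low, high):
            self.attribute, self.low, self.high = attribute, low, high

        def matches(self, record):
            return self.low <= record[self.attribute] <= self.high

    def visible(records, filters):
        """Records satisfying every active filter; re-run after each slider move."""
        return [r for r in records if all(f.matches(r) for f in filters)]

    price = RangeFilter("price", 0, 200_000)
    rooms = RangeFilter("rooms", 3, 6)
    print([r["title"] for r in visible(RECORDS, [price, rooms])])  # Houses A and C
    price.high = 150_000     # the user drags the slider; the display updates at once
    print([r["title"] for r in visible(RECORDS, [price, rooms])])  # House A only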
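
The graph-readability criteria mentioned in the information visualization item can also be made concrete. The sketch below counts pairwise edge crossings in a straight-line drawing, one of the aesthetic measures discussed in [DiBattista 94]; the node positions form a toy layout chosen only to illustrate the computation.

    # Sketch of one graph-readability measure: the number of pairwise edge
    # crossings in a straight-line drawing. The layout below is a toy example.

    def ccw(a, b, c):
        """Twice the signed area of triangle abc; the sign gives orientation."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def properly_cross(p1, p2, p3, p4):
        """True if segment p1-p2 strictly crosses p3-p4 (touching cases ignored)."""
        return (ccw(p1, p2, p3) * ccw(p1, p2, p4) < 0 and
                ccw(p3, p4, p1) * ccw(p3, p4, p2) < 0)

    def crossing_count(positions, edges):
        count = 0
        for i in range(len(edges)):
            for j in range(i + 1, len(edges)):
                a, b = edges[i]
                c, d = edges[j]
                if len({a, b, c, d}) == 4 and properly_cross(
                        positions[a], positions[b], positions[c], positions[d]):
                    count += 1
        return count

    positions = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (0, 2)}
    edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C"), ("B", "D")]
    print(crossing_count(positions, edges))   # the two diagonals cross once -> 1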

2.2. Technological Trends

Again, the number and variety of trends identified in our discussions outstrip the space we have here to report them. One can see large general trends that are moving the field from concerns about connectivity, as the networked world becomes a reality, to compatibility, as applications increasingly need to run across different platforms and code begins to move over networks as easily as data, to issues of coordination, as we understand the need to support multiperson and organizational activities. We limit our discussion here to a few instances of these general trends.

  • Computational Devices and Ubiquitous Computing: One of the most notable trends in computing is the increase in the variety of computational devices with which users interact. In addition to workstations and desktop personal computers, users are faced with (to mention only a few) laptops, PDAs, and LiveBoards [Elrod 94]. In the near future, Internet telephony will be universally available, and the much-heralded Internet appliance may allow interactions through the user's television and local cable connection. In the more distant future, wearable devices may become more widely available. All these technologies have been considered under the heading of "Ubiquitous Computing" [Weiser 93] because they involve using computers everywhere, not just on desks.

    The introduction of such devices presents a number of challenges to the discipline of HCI. First, there is the tension between the design of interfaces appropriate to the device in question and the need to offer a uniform interface for an application across a range of devices. The computational devices differ greatly, most notably in the sizes and resolutions of displays, but also in the available input devices, the stance of the user (is the user standing, sitting at a desk, or on a couch?), the physical support of the device (is the device sitting on a desk, mounted on a wall, or held by the user, and is the device immediately in front of the user or across the room?), and the social context of the device's use (is the device meant to be used in a private office, a meeting room, a busy street, or a living room?). On the other hand, applications offered across a number of devices need to offer uniform interfaces, both so that users can quickly learn to use a familiar application on new devices, and so that a given application can retain its identity and recognizability, regardless of the device on which it is operating.

    Development of systems meeting the described requirements will involve user testing and research into the design of displays and input devices, as well as into the design of effective interfaces, but some systems have already begun to address these problems. Some browsers for the World-Wide Web attempt to offer interfaces that are appropriate to the devices on which they run and yet offer some uniformity. At times this can be difficult. For example, the frames feature of HTML causes a browser to attempt to divide up a user's display without any knowledge of the characteristics of that display. Although building applications that adapt their interfaces to the characteristics of the device on which they are running is one potential direction of research in this area, perhaps a more promising one is to separate the interface from the application and give the responsibility of maintaining the interface to the device itself. A standard set of protocols would allow the application to negotiate the setup of an interface, and later to interact with that interface and, indirectly, with the user (a small sketch of such a negotiation appears after this list). Such multimodal architectures could address the problems of generating an appropriate interface, as well as providing better support for users with specific disabilities. The architectures could also be distributed, and the building blocks of forthcoming distributed applications could become accessible from assorted computational devices.

  • Speed, Size, and Bandwidth: The rate of increase of processor speed and storage (the transistor density of semiconductor chips doubles roughly every 18 months according to Moore's law) suggests a bright future for interactive technologies. An important constraint on utilizing the full power afforded by these technological advances, however, may be network bandwidth. Given the overwhelming trends towards global networked computing, and even the network as computer, the implications of limited bandwidth deserve careful scrutiny. The bottleneck is the "last mile" connecting the Internet to individual homes and small offices [Bell 96]. Individuals who do not get access through large employers may be stuck at roughly the present bandwidth rate (28,800 bits per second) at least until the turn of the century. The rate needed for delivery of television-quality video, one of the promises of the National Information Infrastructure, is 4-6 megabits per second, more than a hundred times that amount. What are the implications for strategic HCI research of potentially massive local processing power together with limited bandwidth?

    Increases in processor speed and memory suggest that if information can be collected and cached from the network and/or local sources, local interactive techniques based on signal processing and work context could be utilized to the fullest. With advances in speech and video processing, interfaces that actively watch, listen, catalog, and assist become possible. With increased CPU speed we might design interactive techniques based on work context rather than isolated event handling. Fast event dispatch becomes less important than helpful action. Tools might pursue multiple redundant paths, leaving the user to choose and approve rather than manually specify. We can afford to "waste" time and space on indexing information and tasks that may never be used, solely for the purpose of optimizing user effort. With increased storage capacity it becomes possible to store every piece of interactive information that a user or even a virtual community ever sees. The processes of sifting, sorting, finding, and arranging increase in importance relative to the editing and browsing that characterize today's interfaces. When it is physically possible to store every paper, e-mail, voice-mail, and phone conversation in a user's working life, the question arises of how to provide effective access.

  • Speech, Handwriting, Natural Language, and Other Modalities: The use of speech will increase the need to allow user-centered presentation of information. Whereas the form and mode of the output generated by computer-based systems are currently defined by the system designer, a new trend may be to increasingly allow the user to determine the way in which the computer will interact and to support multiple modalities at the same time. For instance, the user may determine that in a given situation textual natural language output is preferred to speech, or that pictures may be more appropriate than words. These distinctions will be made dynamically, based on the abilities of the user or the limitations of the presentation environment. As the computing environment used to present data becomes distinct from the environment used to create or store information, interface systems will need to support information adaptation as a fundamental property of information delivery.

  • 3D and Virtual Reality: Another trend is the migration from a two-dimensional presentation space (or a 2 1/2-dimensional space, in the case of overlapping windows) to a three-dimensional space. One beginning of this trend in conventional presentation environments is the definition of the Virtual Reality Modeling Language (VRML). Further evidence is the use of integrated 3D input and output control in virtual reality systems. The notions of selecting and interacting with information will need to be revised, and techniques for navigation through information spaces will need to be radically altered from the present page-based models. Three-dimensional technologies offer significant opportunities for human-computer interfaces. Application areas that may benefit from three-dimensional interfaces include training and simulation, as well as interactive exploration of complex data environments.

    A central aspect of three-dimensional interfaces is "near-real-time" interactivity, the ability of the system to respond quickly enough that the effect of direct manipulation is achieved. Near-real-time interactivity implies strong performance demands that touch on all aspects of an application, from data management through computation to graphical rendering. Designing interfaces and applications to meet these demands in an application-independent manner presents a major challenge to the HCI community. Maintaining the required performance in the context of an unpredictable user-configured environment implies a "time-critical" capability, in which the system automatically and gracefully degrades quality in order to maintain performance. The design of general algorithms for time-critical applications is a new area and a significant challenge.
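
One common shape of such a time-critical capability is sketched below: the rendering loop measures each frame and adjusts a level-of-detail setting to stay within a frame-time budget. The budget, timings, and adjustment policy are illustrative assumptions, not a prescription.

    # Sketch of a time-critical rendering loop: visual quality (a level-of-detail
    # setting) is traded for a guaranteed frame rate. All numbers are assumptions.

    import time

    FRAME_BUDGET = 1.0 / 30.0          # target: 30 frames per second

    def render(scene, level_of_detail):
        """Stand-in for a real renderer; cost grows with the requested detail."""
        time.sleep(0.002 * level_of_detail)

    def time_critical_loop(scene, frames=100):
        detail = 10                                  # start at full detail
        for _ in range(frames):
            start = time.perf_counter()
            render(scene, detail)
            elapsed = time.perf_counter() - start
            if elapsed > FRAME_BUDGET and detail > 1:
                detail -= 1                          # degrade gracefully to hold frame rate
            elif elapsed < 0.5 * FRAME_BUDGET and detail < 10:
                detail += 1                          # headroom available: restore quality
        return detail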
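
Returning to the device-diversity discussion earlier in this section, the fragment below sketches the kind of interface negotiation suggested there: the application describes its interface abstractly, and each device realizes it with controls suited to its own display and input hardware. The protocol, class names, and widget mappings are hypothetical.

    # Hypothetical interface negotiation: the application publishes an abstract
    # interface description, and each device maps it to concrete controls.

    from dataclasses import dataclass

    @dataclass
    class AbstractControl:
        purpose: str          # e.g. "choose-one", "trigger"
        label: str
        options: tuple = ()

    class DeviceRenderer:
        """Each device supplies its own mapping from abstract controls to widgets."""
        def __init__(self, name, mapping):
            self.name, self.mapping = name, mapping

        def realize(self, controls):
            return [f"{self.mapping[c.purpose]}: {c.label}" for c in controls]

    dialog = [AbstractControl("choose-one", "Paper size", ("A4", "Letter")),
              AbstractControl("trigger", "Print")]
    desktop = DeviceRenderer("desktop", {"choose-one": "drop-down list",
                                         "trigger": "push button"})
    pda = DeviceRenderer("pda", {"choose-one": "radio-button column",
                                 "trigger": "stylus-tap button"})
    for device in (desktop, pda):
        print(device.name, device.realize(dialog))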

2.3. Design and Evaluation Methods

Design and evaluation methods have evolved rapidly as the focus of human-computer interaction has expanded. Contributing to this are the versatility of software and the downward price and upward performance spiral, which continually extend the range of software applications. The challenges overshadow those faced by designers using previous media and assessment methods. Design and evaluation for a monochrome, ASCII, stand-alone PC was challenging, and even that still does not routinely use more than ad hoc methods and intuition. New methods are needed to address the complexities of multimedia design, of supporting networked group activities, and of responding to routine demands for ever-faster turnaround times.

More rapid evaluation methods will remain a focus, manifest in recent work on cognitive walkthrough [Wharton 94], heuristic evaluation [Nielsen 94], and other modifications of earlier cognitive modeling (e.g., [John 97]) and usability engineering approaches. Methods to deal with the greater complexity of assessing use in group settings are moving from research into the mainstream. Ethnographic observation, participatory design, and scenario-based design are being streamlined [Schuler 93]. Contextual inquiry and design is an example of a method intended to quickly obtain a rich understanding of an activity and transfer that understanding to all design team members [Holtzblatt 93].

As well as developing and refining the procedures of design and evaluation methods, we need to understand the conditions under which they work. Are some better for individual tasks, and some excellent for supporting groupware? Are some useful very early in the conceptual phase of design, others best when a specific interface design has already been detailed, and some applicable only once a prototype exists? In addition, for proven and promising techniques to become widespread, they need to be incorporated into the education of UI designers. Undergraduate curricula should require such courses for a subset of their students; continuing education courses need to be developed to address the needs of practicing designers.

2.4. Tools

All the forms of human-computer interaction discussed here will need to be supported by appropriate tools. The interfaces of the future will use multiple modalities for input and output (speech and other sounds, gestures, handwriting, animation, and video), multiple screen sizes (from tiny to huge), and have an "intelligent" component ("wizards" or "agents" to adapt the interface to the different wishes and needs of the various users). The tools used to construct these interfaces will have to be substantially different from those of today. Whereas most of today's tools support widgets such as menus and dialog boxes well, these will be only a tiny fraction of the interfaces of the future. Instead, the tools will need to access and control, in some standard way, the main application data structures and internals, so that the speech system and agents can know what the user is talking about and doing. If the user says "delete the red truck," the speech system needs access to the objects to see which one is to be deleted. Otherwise, each application will have to deal with its own speech interpretation, which is undesirable. Furthermore, an agent might notice that this is the third red truck that was deleted, and propose to delete the rest. If the user confirms, the agent will need to be able to find the remaining trucks that meet the criteria. Increasingly, future user interfaces will be built around standardized data structures or "knowledge bases" to make these facilities available without requiring each application to rebuild them.
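
A minimal sketch of the kind of shared object registry this implies appears below. The object model, attribute names, and query operations are hypothetical, invented to illustrate how a speech front end and an agent could both resolve a reference like "the red truck" against application data.

    # Hypothetical shared object registry exposed by an application so that a
    # speech interpreter or an agent can resolve references such as "the red
    # truck" without application-specific code. All names here are invented.

    class AppObject:
        def __init__(self, kind, **attributes):
            self.kind = kind
            self.attributes = attributes

    class ObjectRegistry:
        def __init__(self):
            self._objects = []

        def register(self, obj):
            self._objects.append(obj)

        def find(self, kind=None, **attributes):
            """Return objects matching a kind and attribute values, e.g. color='red'."""
            return [o for o in self._objects
                    if (kind is None or o.kind == kind)
                    and all(o.attributes.get(k) == v for k, v in attributes.items())]

        def remove(self, obj):
            self._objects.remove(obj)

    registry = ObjectRegistry()
    for color in ("red", "blue", "red"):
        registry.register(AppObject("truck", color=color))

    # Speech front end: "delete the red truck" -> look up the candidates, then act.
    red_trucks = registry.find(kind="truck", color="red")
    # An agent noticing repeated deletions could offer to delete the remaining ones.
    for truck in red_trucks:
        registry.remove(truck)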

In addition, tools of the future should incorporate the design and evaluation methods discussed in Section 2.3, so that these procedures are supported by the system-building tools themselves. This would make the evaluation of ideas extremely easy for designers, allowing ubiquitous evaluation to become a routine aspect of system design.

3. Conclusions

Although some areas of computer science are maturing and perhaps no longer have the excitement they once did, the now widely felt concern with developing human-centered systems, that is, systems that more effectively support people in accomplishing their tasks, is bringing HCI to the center of computer science. We have never had more interest, positive publicity, and recognition of the importance of the area, and this attention is warranted. We now have a solid foundation of principles and results to teach in courses and on which to base today's user interface design and tomorrow's research. As computing systems become increasingly central to our society, HCI research will continue to grow in importance. The field can expect a stream of rapidly changing technological developments, challenges associated with integrating research from multiple disciplines, and crucially important problems to address. We look forward to many exciting new HCI research results in the future, as well as to the benefits associated with their application.

4. References

    [Barnard 91]
    Barnard, P. "The Contributions of Applied Cognitive Psychology to the Study of Human-Computer Interaction." In Shackel, B. and Richardson, S. (Eds.), Human Factors for Informatics Usability, Cambridge University Press, pp. 151-182. 1991.
    [Bederson 96]
    B. Bederson, J. Hollan, K. Perlin, J. Meyer, D. Bacon, and G. Furnas. "Pad++: A Zoomable Graphical Sketchpad for Exploring Alternate Interface Physics," Journal of Visual Languages and Computing, 7, pp. 3-31, 1996.
    [Bell 96]
    G. Bell, and J. Gemmell. "On Ramp Prospects for the Information Superhighway Dream," Communications of the ACM, Vol. 39, no. 7, pp. 55-61, July 1996.
    [Bier 93]
    E. A. Bier, M. C. Stone, K. Pier, W. Buxton, and T. D. DeRose. "Toolglass and Magic Lenses: The See-Through Interface," SIGGRAPH Computer Graphics Proceedings, pp 73-80, 1993.
    [Blomberg 93]
    Blomberg, J., Giacomi, J., Mosher, A., and Swenton-Wall, P. "Ethnographic field methods and their relation to design." In D. Schuler and A. Namioka (Eds.), Participatory design: Principles and practices, pp. 123-155. Hillsdale, NJ: Lawrence Erlbaum Associates. 1993.
    [Borning 81]
    Alan Borning, "The Programming Language Aspects of ThingLab, a Constraint-Oriented Simulation Laboratory", ACM Transactions on Programming Languages and Systems, (3)4: 353-387, October 1981.
    [Brachman 96]
    R.J. Brachman, T. Anand. "The Process of Knowledge Discovery in Databases." In: U.M. Fayad, G. Piatetsky-Shapiro, P. Smyth, R. Uthurusamy, editors, Advances in Knowledge Discovery and Data Mining, AAAI Press, pp. 37-57, 1996.
    [Bush 45]
    V. Bush. "As We May Think." The Atlantic Monthly. 176(July). pp. 101-108. 1945.
    [Card 83]
    Card, S. K., Moran, T. P., and Newell, A. The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates, 1983.
    [Card 91]
    S. K. Card, G. G. Robertson, and J. D. Mackinlay. "The Information Visualizer, an Information Workspace." Proceedings of the ACM Conference on Human Factors in Computing Systems, CHI '91, 1991, pp. 181-188.
    [Catarci 96]
    T. Catarci and I. F. Cruz (eds.). "Special Issue on Information Visualization", ACM Sigmod Record 25(4), 1996.
    [Church 93]
    K. W. Church and J. I. Helfman "Dotplot: A Program for Exploring Self-Similarity in Millions of Lines of Text and Code." Journal of Computational and Graphical Statistics, 1993, 2, 153-174.
    [Clark 85]
    Clark, H. H. "Language use and language users." In G. Lindzey & E. Aronson (Eds.), The Handbook of Social Psychology (3rd ed.). New York: Knopf. 1985.
    [Cruz 94]
    Isabel F. Cruz, "User-defined Visual Query Languages", IEEE 10th International Symposium on Visual Languages (VL '94), pp. 224-231, St. Louis, October 1994.
    [DiBattista 94]
    G. Di Battista, P. Eades, R. Tamassia, and I. G. Tollis. "Algorithms for Drawing Graphs: an Annotated Bibliography." Computational Geometry: Theory and Applications, vol. 4, no. 5, pp. 235-282, 1994.
    [Elrod 94]
    Scott Elrod, Richard Bruce, Rich Gold, David Goldberg, Frank Halasz, William Janssen, David Lee, Kim McCall, Elin Pedersen, Ken Pier, John Tang, and Brent Welch. "Liveboard: A large interactive display supporting group meetings, presentations and remote collaboration." In Proc. of the Conference on Computer Human Interaction, CHI'92, pp. 599-607, May 1992.
    [Engelbart 68]
    D. Engelbart and W. English "A Research Center for Augmenting Human Intellect." 1968. Reprinted in ACM SIGGRAPH Video Review, 1994. 106.
    [Ericsson 84]
    Ericsson, K. A., and Simon, H. A. Protocol Analysis: Verbal reports as data. Cambridge, MA: MIT Press. 1984.
    [Ernst 69]
    Ernst, G. W., and Newell, A. GPS: A case study in generality and problem-solving. New York: Academic Press, 1969.
    [FADIVA 96]
    European (ESPRIT) Working Group "FADIVA, Foundations of Advanced 3D Information Visualization", 1996. http://www-cui.darmstadt.gmd.de:80/visit/Activities/Fadiva/
    [Foley 94]
    Jim Foley and James Pitkow, eds, Research Priorities For The World-Wide Web. October 31, 1994. http://www.cc.gatech.edu/gvu/nsf-ws/report/Report.html
    [Goldberg 79]
    Goldberg, A. and Robson, D. "A Metaphor for User Interface Design," in Proceedings of the 12th Hawaii International Conference on System Sciences. 1979. 1. pp. 148-157.
    [Gray 93]
    Gray, W. D., John, B. E., and Atwood, M. E. "Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance." Human-Computer Interaction, 8, 1993, pp. 237-309.
    [Greenberg 96]
    S. Greenberg. "Teaching Human Computer Interaction to Programmers." ACM interactions. 3(4), July, 1996. pp. 62-76.
    [Hewett 92]
    T. T. Hewett et al., eds., ACM SIGCHI Curricula for Human-Computer Interaction. ACM Press, New York, NY, 1992. ACM Order Number: 608920.
    [Hill 94]
    W. Hill and J. D. Hollan, "History-Enriched Digital Objects: Prototypes and Policy." The Information Society, 1994, 10, 139-145.
    [Holtzblatt 93]
    Holtzblatt, K. and Jones, S. "Contextual inquiry: A participatory technique for system design." In A. Namioka and D. Schuler (eds.), Participatory design: Principles and practices. Hillsdale, NJ: Lawrence Erlbaum Associates. 1993.
    [Hudson 96]
    S. E. Hudson, I. Smith, "Ultra-Lightweight Constraints," ACM Symposium on User Interface Software and Technology, UIST'96, November 6-8, 1996. Seattle, WA. pp. 147-155.
    [Hutchins 85]
    E. L. Hutchins, J. D. Hollan, and D. A. Norman. "Direct Manipulation Interfaces." Human-Computer Interaction. 1, 1985, 311-338.
    [John 97]
    John, B. E. and Kieras, D. E. "Using GOMS for user interface design and evaluation: Which technique?" ACM Transactions on Computer-Human Interaction. To appear, 1997.
    [Karat 90]
    C. M. Karat, "Cost-Benefit Analysis of Usability Engineering Techniques," Proceedings of the Human Factors Society 34th Annual Meeting, Volume 2, Orlando, FL, Oct. 1990, pp. 839-843.
    [Kasik 82]
    D. J. Kasik. "A User Interface Management System." Computer Graphics, 16(3), Boston, MA. 1982.
    [Kay 69]
    A. Kay. The Reactive Engine. PhD Thesis, Electrical Engineering and Computer Science, University of Utah. 1969.
    [Kay 77]
    A. Kay. "Personal Dynamic Media." IEEE Computer. 10(3): 31-42. 1977.
    [Keim 95]
    D. Keim. Visual Support for Query Specification and Data Mining. PhD. Thesis, Institute for Computer Science, Ludwig Maximilians University, Munich, Germany, 1995.
    [Koved 86]
    L. Koved and B. Shneiderman, "Embedded menus: Selecting items in context," Communications of the ACM. 29(4), April, 1986, pp. 312-318.
    [Ladkin 96]
    P. Ladkin, "Inside Risks Forum", http://csrc.ncsl.nist.gov//rskforum/risks18.010, 1996.
    [Landauer 91]
    Landauer, T. "Let's Get Real: A Position Paper on the Role of Cognitive Psychology in the Design of Humanly Useful and Usable Systems." In Carroll, J. (Ed.) Designing Interaction. Cambridge University Press, pp. 60-73. 1991.
    [Leveson 93]
    N. G. Leveson and C. S. Turner, "An Investigation of the Therac-25 Accidents," IEEE Computer, Jul, 1993, 26(7), pp. 18-41.
    [Linton 89]
    M. A. Linton, J. M. Vlissides, et al. "Composing user interfaces with InterViews." IEEE Computer. 22(2): 8-22. 1989.
    [Lynch 96]
    D.C. Lynch, C. Daniel and L. Lundquist. Digital Money: The New Era of Internet Commerce. New York: John Wiley and Sons, 1996.
    [Mackinlay 86]
    J. Mackinlay, "Automating the Design of Graphical Presentations of Relational Information," ACM Transactions on Graphics, 5(2), Apr. 1986, pp. 110-141.
    [Mantei 88]
    M. M. Mantei and T. J. Teorey, "Cost/Benefit Analysis for Incorporating Human Factors in the Software Lifecycle," CACM, Apr. 1988, 31(4), pp. 428-439.
    [Margolis 96]
    B. Margolis. "Digital commerce: the future of retailing." Direct Marketing. Jan. 1996:41(6).
    [Myers 88]
    B. A. Myers. "A Taxonomy of User Interfaces for Window Managers." IEEE Computer Graphics and Applications 8(5): 65-84. 1988.
    [Myers 90]
    B. A. Myers, D. A. Giuse, et al. "Garnet: Comprehensive Support for Graphical, Highly-Interactive User Interfaces." IEEE Computer. 23(11): 71-85. 1990.
    [Myers 92]
    B. A. Myers and M. B. Rosson. "Survey on User Interface Programming," Proceedings SIGCHI'92: Human Factors in Computing Systems. Monterey, CA, May 3-7, 1992, pp. 195-202.
    [Myers 96a]
    B. A. Myers. A Quick History of Human Computer Interaction. Carnegie Mellon University School of Computer Science Technical Report CMU-CS-96-163 and Human Computer Interaction Institute Technical Report CMU-HCII-96-103, August, 1996.
    [Myers 96b]
    B. A. Myers, R. C. Miller, R. McDaniel, and A. Ferrency, "Easily Adding Animations to Interfaces Using Constraints," ACM Symposium on User Interface Software and Technology, UIST'96, November 6-8, 1996. Seattle, WA. pp. 119-128.
    [Neches 93]
    R. Neches, J. Foley, P. Szekely, P. Sukaviriya, P. Luo, S. Kovacevic, and S. Hudson, "Knowledgeable Development Environments Using Shared Design Models," Proceedings of the ACM SIGCHI 1993 International Workshop on Intelligent User Interfaces, Orlando, FL, Jan, 1993, pp. 63-70.
    [Nelson 65]
    Nelson, T. "A File Structure for the Complex, the Changing, and the Indeterminate," in Proceedings ACM National Conference. 1965. pp. 84-100.
    [Neumann 91]
    P. G. Neumann, "Inside Risks: Putting on Your Best Interface," CACM, Mar. 1991, 34(3), p. 138.
    [Newell 72]
    Newell, A. and Simon, H. A. Human problem solving. Englewood Cliffs, NJ: Prentice-Hall. 1972
    [Newman 68]
    W. M. Newman. "A System for Interactive Graphical Programming." AFIPS Spring Joint Computer Conference. 1968.
    [Nielsen 93a]
    J. Nielsen and T. K. Landauer, "A Mathematical Model of the Finding of Usability Problems," Proceedings INTERCHI'93: Human Factors in Computing Systems, Amsterdam, The Netherlands, Apr, 1993, pp. 206-213.
    [Nielsen 93b]
    Jakob Nielsen, Usability Engineering, Boston: Academic Press, 1993.
    [Nielsen 94]
    Jakob Nielsen, "Heuristic Evaluation." In J. Nielsen and R. L. Mack (Eds.), Usability Inspection Methods. New York: John Wiley & Sons. pp. 25-62. 1994.
    [Norman 86]
    Norman, D. A., "Cognitive engineering," In D. A. Norman and S. W. Draper (Eds.), User centered system design. Hillsdale, NJ: Lawrence Erlbaum Associates. 1986.
    [Norman 90]
    Norman, D. A. The design of everyday things. New York: Doubleday Currency. 1990
    [Olson 90]
    Olson, J. and Olson, G. "The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS," Human-Computer Interaction, 5, pp. 221-265. 1990.
    [Polson 90]
    Polson, P., and Lewis, C. "Theory-based design for easily learned interfaces." Human-Computer Interaction, 5, pp. 191-220. 1990.
    [Robertson 93]
    George G. Robertson, Stuart K. Card, and Jock D. Mackinlay, "Information Visualization Using 3D Interactive Animation," Communications of the ACM, vol. 36, no. 4, pp. 56-71, April 1993.
    [Robertson 77]
    G. Robertson, A. Newell, et al. ZOG: A Man-Machine Communication Philosophy, Carnegie Mellon University Technical Report. 1977.
    [Roth 94]
    S. F. Roth, J Kolojejchick, J. Mattis, and J. Goldstein, "Interactive Graphic Design Using Automatic Presentation Knowledge." ACM Proceedings Human Factors in Computing Systems, CHI'94, 112-117, 1994.
    [Scheifler 86]
    R. W. Scheifler and J. Gettys "The X Window System." ACM Transactions on Graphics 5(2): 79-109. 1986.
    [Schuler 93]
    Schuler, D., and Namioka, A. (Eds.). Participatory design: Principles and practices. Hillsdale, NJ: Lawrence Erlbaum Associates. 1993.
    [Shackel 69]
    Shackel, B. "Man-computer interaction-The contribution of the human sciences," Ergonomics, 12(4), pp. 485-499. 1969.
    [Shneiderman 83]
    B. Shneiderman. "Direct Manipulation: A Step Beyond Programming Languages." IEEE Computer 16(8): 57-69. 1983.
    [Shneiderman 94]
    Shneiderman, B., "Dynamic queries for visual information seeking," IEEE Software, 11(6) 1994, pp. 70-77.
    [Sibert 93]
    John Sibert and Gary Marchionini, "Human-Computer Interaction Research Agendas." Behaviour and Information Technology, 12(2), Mar-Apr, 1993. pp. 67-135.
    [Silberschatz 96]
    A. Silberschatz, M. Stonebraker, and J. Ullman, eds., "Database Research: Achievements and Opportunities Into the 21st Century." ACM Sigmod Record 25(1): pp. 52-63, 1996. http://bunny.cs.uiuc.edu/sigmod/sigmod_record/3-96/lagunita.ps
    [Smith 77]
    D. C. Smith. Pygmalion: A Computer Program to Model and Stimulate Creative Thought. Basel, Stuttgart, Birkhauser Verlag. 1977.
    [Smith 82]
    D. C. Smith, E. Harslem, et al. "The Star User Interface: an Overview." Proceedings of the 1982 National Computer Conference, AFIPS. 1982.
    [Strong 94]
    G.W. Strong, NSF Workshop on New Directions in Human-Computer Interaction Education, Research and Practice. Sept. 11, 1994. http://www.sei.cmu.edu/arpa/hci/directions/TitlePage.html
    [Sutherland 63]
    I. E. Sutherland. "SketchPad: A Man-Machine Graphical Communication System." AFIPS Spring Joint Computer Conference. 1963
    [Swinehart 74]
    D. C. Swinehart. Copilot: A Multiple Process Approach to Interactive Programming Systems. Computer Science Department, Stanford University. 1974.
    [Teitelman 79]
    W. Teitelman. "A Display Oriented Programmer's Assistant." International Journal of Man-Machine Studies. 11: 157-187. 1979.
    [van Dam 69]
    A. van Dam, S. Carmody, T. Gross, T. Nelson, and D. Rice "A Hypertext Editing System for the /360." Proceedings Conference in Computer Graphics, University of Illinois, 1969.
    [Weiser 93]
    Mark Weiser. "Some computer science issues in ubiquitous computing." CACM, 36(7), July 1993, 74-83.
    [Wharton 94]
    Wharton, C., Rieman, J., Lewis, C., and Polson, P. "The Cognitive Walkthrough Method: A practitioner's guide". In J. Nielsen and R. L. Mack (Eds.), Usability Inspection Methods. New York: John Wiley & Sons. pp. 105-140. 1994.
    [Williams 83]
    G. Williams. "The Lisa Computer System." Byte Magazine 8(2): 33-50. 1983.
    [Williams 84]
    G. Williams. "The Apple Macintosh Computer." Byte 9(2): 30-54. 1984.


    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or permissions@acm.org.