Article to appear in ACM Computing Surveys 28(4), December 1996. Copyright © 1996 by the Association for Computing Machinery, Inc. See the permissions statement below.
*This article is based on the results of the Human Computer Interaction Working Group of the ACM Workshop on Strategic Directions in Computing Research, and was authored by:
Steve Bryson, NASA Ames Research Center, Dick Bulterman, CWI, Tiziana Catarci, University of Rome, Wayne Citrin, University of Colorado Boulder, Isabel Cruz, Tufts University (co-chair and editor), Ephraim Glinert, RPI, Jonathan Grudin, University of California Irvine, Jim Hollan, University of New Mexico (editor), Yannis Ioannidis, University of Wisconsin-Madison, Rob Jacob, Tufts University, Bonnie John, Carnegie Mellon University, David Kurlander, Microsoft Research, Brad Myers, Carnegie Mellon University (co-chair and editor), Dan Olsen, Carnegie Mellon University, Randy Pausch, University of Virginia, Stuart Shieber, Harvard University, Ben Shneiderman, University of Maryland College Park, John Stasko, Georgia Tech, Gary Strong, NSF, Kent Wittenburg, Bellcore.
Abstract: Human Computer Interaction (HCI) is the study of how people design, implement, and use interactive computer systems, and how computers affect individuals, organizations, and society. HCI is a research area of increasingly central significance to computer science, other scientific and engineering disciplines, and an ever expanding array of application domains. This more prominent role follows from the widely perceived need to expand the focus of computer science research beyond traditional hardware and software issues to attempt to better understand how technology can more effectively support people in accomplishing their goals.
At the same time that a human-centered approach to system development is of growing significance, factors conspire to make the design and development of systems even more difficult than in the past. This increased difficulty follows from the disappearance of boundaries between applications as we start to support people's real activities; between machines as we move to distributed computing; between media as we expand systems to include video, sound, graphics, and communication facilities; and between people as we begin to realize the importance of supporting organizations and group activities.
This report summarizes selected strategic directions in human computer interaction research, sets them within an historical context of research accomplishments, and tries to convey not only the significance but the excitement of the field.
Categories and Subject Descriptors: H.1.2 [Information Systems]: Human Factors; H.5 [Information Systems]: Information Interfaces and Presentation;
General Terms: Human Factors
Human-Computer Interaction (HCI) is the study of how people design, implement, and use interactive computer systems, and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer's actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.
This report describes the historical and intellectual foundations of HCI, and then summarizes selected strategic directions in human-computer interaction research. Previous important reports on HCI directions include the results of the 1991 [Sibert 93] and 1994 [Strong 94] NSF studies on HCI in general, and the 1994 NSF study on the World-Wide-Web [Foley 94].
1.1. Importance of HCI
Users expect highly effective and easy-to-learn interfaces and developers now realize the crucial role the interface plays. Surveys show that over 50% of the design and programming effort on projects is devoted to the user interface portion [Myers 92]. The human-computer interface is critical to the success of products in the marketplace, as well as the safety, usefulness, and pleasure of using computer-based systems.
There is substantial empirical evidence that employing the processes, techniques, and tools developed by the HCI community can dramatically decrease costs and increase productivity. For example, one study [Karat 90] reported savings due to the use of usability engineering [Nielsen 93b] of $41,700 in a small application used by 23,000 marketing personnel, and $6,800,000 for a large business application used by 240,000 employees. Savings were attributed to decreased task time, fewer errors, greatly reduced user disruption, reduced burden on support staff, elimination of training, and avoidance of changes in software after release. Another analysis estimates the mean benefit for finding each usability problem at $19,300 [Mantei 88]. A usability analysis of a proposed workstation saved a telephone company $2 million per year in operating costs [Gray 93]. A mathematical model based on eleven studies suggests that using software that has undergone thorough usability engineering will save a small project $39,000, a medium project $613,000 and a large project $8,200,000 [Nielsen 93a]. By estimating all the costs associated with usability engineering, another study found that the benefits can be up to 5000 times the cost [Nielsen 93a].
There are also well-known catastrophes that have resulted from not paying enough attention to the human-computer interface. For example, the complicated user interface of the Aegis tracking system was a contributing cause to the erroneous downing of an Iranian passenger plane, and the USS Stark's inability to cope with Iraqi Exocet missiles was partly attributed to the human-computer interface [Neumann 91]. Problems with the interfaces of military and commercial airplane cockpits have been named as a likely cause for several crashes, including the Cali crash of December 1995 [Ladkin 96]. Sometimes the implementation of the user interface can be at fault. A number of people died from radiation overdoses partially as a result of faulty cursor handling code in the Therac-25 [Leveson 93].
Effective user interfaces to complex applications are indispensable. The recognition of their importance in other disciplines is increasing and with it the necessary interdisciplinary collaboration needed to fully address many challenging research problems. For example, for artificial intelligence technologies such as agents, speech, and learning and adaptive systems, effective interfaces are fundamental to general acceptance. HCI subdisciplines such as information visualization and algorithm animation are used in computational geometry, databases, information retrieval, parallel and distributed computation, electronic commerce and digital libraries, and education. HCI requirements resulting from multimedia, distributed computing, real-time graphics, multimodal input and output, ubiquitous computing, and other new interface technologies shape the research problems currently being investigated in disciplines such as operating systems, databases, and networking. New programming languages such as Java result from the need to program new types of distributed interfaces on multiple platforms. As more and more of software designers' time and code are devoted to the user interface, software engineering must increase its focus on HCI.
HCI research has been spectacularly successful, and has fundamentally changed computing. Just one example is the ubiquitous graphical interface. Another example is that virtually all software written today employs user interface toolkits and interface builders. Even the spectacular growth of the World-Wide Web is a direct result of HCI technology: applying hypertext technology to browsers allows one to traverse a link across the world with a click of the mouse. It is interface improvements more than anything else that triggered this explosive growth.
In this section we give a brief summary of the research that underlies a few selected HCI advances. By "research," we mean exploratory work at universities and government and industrial research labs (such as Xerox PARC) that is not directly related to products. Figure 1 shows a summary time line. Of course, deeper analysis would reveal much interaction between these three activity streams. For a more complete history, see [Myers 96a]. It is important to appreciate that years of research, typically government-funded, are involved in creating and making these technologies ready for widespread use. The same will be true for the HCI technologies that will provide the interfaces of tomorrow.
Figure 1: Summary time-lines for some of the technologies discussed in this article.
Direct Manipulation of Graphical Objects: The now ubiquitous direct manipulation interface was first demonstrated by Ivan Sutherland in Sketchpad [Sutherland 63]. This system was the basis of his 1963 MIT PhD thesis. Sketchpad supported manipulation of objects using a light-pen, including grabbing objects, moving them, changing size, and using constraints. It contained the seeds of myriad important interface ideas. The system was built at Lincoln Labs with support from the Air Force and NSF. William Newman's Reaction Handler [Newman 68], created at Imperial College, London (1966-67), provided direct manipulation of graphics, and introduced "Light Handles," a form of graphical potentiometer, which was probably the first "widget." Another early system was AMBIT/G (implemented at MIT's Lincoln Labs, 1968, ARPA funded). It employed, among other interface techniques, iconic representations, gesture recognition, dynamic menus, selection of icons by pointing, and moded and mode-free styles of interaction. Smith coined the term "icons" in his 1975 Stanford PhD thesis on Pygmalion [Smith 77] (funded by ARPA and NIMH) and Smith later popularized icons as one of the chief designers of the Xerox Star [Smith 82]. Many of the interaction techniques popular in direct manipulation interfaces, such as how objects and text are selected, opened, and manipulated, resulted from research at Xerox PARC in the 1970's. The concept of direct manipulation interfaces for everyone was envisioned by Alan Kay of Xerox PARC in a 1977 article about the "Dynabook" [Kay 77]. The first commercial systems to make extensive use of Direct Manipulation were the Xerox Star (1981) [Smith 82], the Apple Lisa (1982) [Williams 83] and Macintosh (1984) [Williams 84]. Ben Shneiderman at the University of Maryland coined the term "Direct Manipulation" in 1982, identified its components, and gave it psychological foundations [Shneiderman 83]. The concept was elaborated by other researchers (e.g. [Hutchins 85]).
Windows: Multiple tiled windows were demonstrated in Engelbart's NLS in 1968. Early research at Stanford on systems like COPILOT (1974) [Swinehart 74] and at MIT with the EMACS text editor (1974) also demonstrated tiled windows. Alan Kay proposed the idea of overlapping windows in his 1969 University of Utah PhD thesis [Kay 69] and they first appeared in his 1974 Smalltalk system [Goldberg 79] at Xerox PARC, and soon after in the InterLisp system [Teitelman 79]. One of the first commercial uses of windows was on LMI and Symbolics Lisp Machines (1979), which grew out of MIT AI Lab projects. The main commercial systems popularizing windows were the Xerox Star (1981), the Apple Lisa (1982), and most importantly the Apple Macintosh (1984). Microsoft's original window managers used tiled windows, but later versions adopted overlapping windows. The X Window System, a current international standard, was developed at MIT in 1984 [Scheifler 86]. For a survey of window managers, see [Myers 88].
Hypertext: The idea for hypertext is credited to Vannevar Bush's famous MEMEX idea from 1945 [Bush 45], but his idea of implementing this using microfilm was never tried. Engelbart's NLS system [Engelbart 68] at the Stanford Research Institute (SRI) in 1965 made extensive use of linking (funding from ARPA, NASA, and Rome ADC). The "NLS Journal," one of the first on-line journals, included full linking of articles. Ted Nelson coined the term "hypertext" in 1965 [Nelson 65]. The Hypertext Editing System, jointly designed by Andy van Dam, Ted Nelson, and two students at Brown University (funding from IBM) was distributed extensively [van Dam 69]. The ZOG project (1977) from CMU was another early hypertext system, and was funded by ONR and DARPA [Robertson 77]. Ben Shneiderman's Hyperties was the first system where highlighted items in the text could be clicked on to go to other pages (1983, Univ. of Maryland) [Koved 86]. HyperCard from Apple (1988) significantly helped to bring the idea to a wide audience. There have been many other hypertext systems through the years. The spectacular growth of the World-Wide Web is a direct result of Tim Berners-Lee's application of hypertext as the interface to mostly existing capabilities of the Internet. This work was done in 1990 while he was at the government-funded European Particle Physics Laboratory (CERN).
UIMSs and Toolkits: The first User Interface Management System (UIMS) was William Newman's Reaction Handler [Newman 68] created at Imperial College, London (1966-67 with SRC funding). Most of the early work took place at universities (University of Toronto with Canadian government funding; George Washington University with NASA, NSF, DOE, and NBS funding; Brigham Young University with industrial funding). The term UIMS was coined by David Kasik at Boeing (1982) [Kasik 82]. Early window managers such as Smalltalk (1974) and InterLisp, both from Xerox PARC, came with a few widgets, such as popup menus and scrollbars. The Xerox Star (1981) was the first commercial system to have a large collection of widgets and to use dialog boxes. The Apple Macintosh (1984) was the first to actively promote its toolkit for use by other developers to enforce a consistent interface. An early C++ toolkit was InterViews [Linton 89], developed at Stanford (1988, industrial funding). Much of current research is now being performed at universities, including Garnet [Myers 90] and Amulet [Myers 96b] at CMU (ARPA funded), MasterMind [Neches 93] at Georgia Tech (ARPA funded), and Artkit [Hudson 96] at Georgia Tech (funding from NSF and Intel).
There are, of course, many other examples of HCI research that should be included in a complete history, including work that led to drawing programs, paint programs, animation systems, text editing, spreadsheets, multimedia, 3D, virtual reality, interface builders, event-driven architectures, usability engineering, and a very long list of other significant developments [Myers 96a]. Although our brief history here has had to be selective, what we hope is clear is that there are many years of productive HCI research behind our current interfaces and that it has been research results that have led to the successful interfaces of today.
For the future, HCI researchers are developing interfaces that will greatly facilitate interaction and make computers useful to a wider population. These technologies include: handwriting and gesture recognition, speech and natural language understanding, multiscale zoomable interfaces, "intelligent agents" to help users understand systems and find information, end-user programming systems so people can create and tailor their own applications, and much, much more. New methods and tools promise to make the process of developing user interfaces significantly easier but the challenges are many as we expand the modalities that interface designers employ and as computing systems become an increasingly central part of virtually every aspect of our lives.
As HCI has matured as a discipline, a set of principles is emerging that are generally agreed upon and that are taught in courses on HCI at the undergraduate and graduate level (e.g., see [Greenberg 96]). These principles should be taught to every CS undergraduate, since virtually all programmers will be involved in designing and implementing user interfaces during their careers. These principles are described in other publications, such as [Hewett 92], and include task analysis, user-centered design, and evaluation methods.
1.3. Foundations of the Field
The intellectual foundations of HCI derive from a variety of fields: computer science, cognitive psychology, social psychology, perceptual psychology, linguistics, artificial intelligence, and anthropology. Decades of research in perceptual and cognitive psychology were distilled by pioneers in HCI, beginning in the 1960s (e.g., [Shackel 69]), and several workers have explored the relationship between these sciences and the demands of design (e.g., [Barnard 91] [Landauer 91]).
One influential early effort was directed at producing an "engineering model of human performance" able to make quantitative predictions that can contribute to design (the Model Human Processor [Card 83]). Drawing also on research into human problem solving [Ernst 69] [Newell 72], this led to the GOMS family of analysis techniques that make quantitative predictions of skilled performance. Extensions and refinements of these models have continued to draw on basic psychological theories [Olson 90]. In addition, the needs of HCI have given rise to new psychological theories, e.g., Polson and Lewis's theory of learning through exploration that predicts behavior in walk-up-and-use interfaces and other applications where exploration is the norm [Polson 90].
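The flavor of such quantitative prediction can be sketched with a Keystroke-Level Model (KLM) calculation in the spirit of the GOMS family. The operator times below are approximate values of the kind reported in the HCI literature, and the example task is invented for illustration; a real analysis would calibrate these figures for the interface and users under study.

```python
# Sketch of a Keystroke-Level Model (KLM) prediction, in the spirit of
# the GOMS family of techniques. Operator times (in seconds) are
# approximate values of the kind found in the literature -- treat them
# as illustrative assumptions, not calibrated measurements.
OPERATORS = {
    "K": 0.28,   # press a key or button (skilled typist, average)
    "P": 1.10,   # point at a target with the mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation for a subtask
}

def predict_time(sequence):
    """Predicted skilled-performance time for a sequence of operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical task: delete a file by pointing at its icon and pressing
# Delete -- prepare mentally, home on mouse, point, click, home on
# keyboard, press the key.
task = ["M", "H", "P", "K", "H", "K"]
print(round(predict_time(task), 2))  # → 3.81
```

Comparing such predicted totals across candidate designs is what lets an analyst choose between interfaces before either is built.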
Donald Norman and his colleagues applied knowledge from the psychology of perception, attention, memory, and motor control to human-computer interaction and design in a series of influential papers and books (e.g., [Norman 86] [Norman 90]). The think-aloud protocol technique [Ericsson 84], developed in cognitive psychology to assist human problem solving research [Newell 72], influenced early HCI work and has become a valuable usability engineering method [Nielsen 93b, p. 195]. Requirements-setting for HCI uses techniques from anthropology (e.g., ethnographic techniques, [Blomberg 93]). Evaluation uses experimental techniques long established in experimental psychology. Social psychology contributes methods for discourse analysis (e.g. [Clark 85]), interviewing, and questionnaires. Using methods researched and validated in other scientific fields allows HCI to move quickly to robust, valid results that are applicable to the more applied area of design.
The intellectual foundations of HCI also include the development of object-oriented programming. This style of programming comes from early work on Simula but was further developed and refined in Smalltalk as a natural way to implement user interfaces [Kay 77]. Early HCI software work drew on compiler theories such as the conceptual/semantic/syntactic/lexical model and parser technologies. Constraint systems and solvers were developed to ease UI implementation in systems ranging from Sketchpad [Sutherland 63] and ThingLab [Borning 81] to Amulet [Myers 96b] and Artkit [Hudson 96].
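The essence of such constraint systems is that interface state is declared as relations the system keeps satisfied automatically. The following minimal sketch of a one-way (formula-based) constraint is in the general style such toolkits popularized; the class and slot names are invented and do not reflect any real toolkit's API.

```python
# Minimal one-way dataflow constraint sketch (invented API, not that of
# any real toolkit): a slot's value may be a formula over other slots,
# and reads always see an up-to-date value.
class Slot:
    def __init__(self, value=None, formula=None):
        self.value = value
        self.formula = formula  # callable that computes the value

    def get(self):
        if self.formula is not None:
            self.value = self.formula()  # recompute lazily on each read
        return self.value

# Keep a label centered over a box, however the box moves or resizes.
box_left = Slot(100)
box_width = Slot(80)
label_left = Slot(formula=lambda: box_left.get() + box_width.get() // 2)

print(label_left.get())  # → 140
box_left.value = 200     # move the box...
print(label_left.get())  # → 240  ...and the label follows automatically
```

Real systems add dependency tracking and incremental re-evaluation so that only affected formulas are recomputed, but the declarative flavor is the same.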
Current widely used interaction techniques, such as how menus and scroll bars work, have been refined through years of research and experimentation. We now know how to provide effective control using the mouse and keyboard for 2D interfaces.
Although we are encouraged by past research success in HCI and excited by the potential of current research, we want to emphasize how central a strong research effort is to future practical use of computational and network technologies. For example, popular discussion of the National Information Infrastructure (NII) envisions the development of an information marketplace that can enrich people's economic, social, cultural, and political lives. For such an information marketplace, or, in fact, many other applications, to be successful requires solutions to a series of significant research issues that all revolve around better understanding how to build effective human-centered systems. The following sections discuss selected strategic themes, technology trends, and opportunities to be addressed by HCI research.
2.1. Strategic Themes
If one steps back from the details of current HCI research, a number of themes become visible. Although we cannot hope to do justice here to these or the other themes that arose in our workshop discussions, it is clear that HCI research has now started to crystallize as a critical discipline, intimately involved in virtually all uses of computer technologies and decisive to successful applications. Here we expand on just a few themes:
Information-access interfaces must offer great flexibility on how queries are expressed and how data are visualized; they must be able to deal with several new kinds of data, e.g., multimedia, free text, documents, the Web itself; and they must permit several new styles of interaction beyond the typical, two-step query-specification/result-visualization loop, e.g., data browsing, filtering, and dynamic and incremental querying. Fundamental research is required on visual query languages, user-defined and constraint-based visualizations, visual metaphors, and generic and customizable interfaces, and advances seem most likely to come from collaborations between the HCI and database research communities.
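The flavor of dynamic and incremental querying can be sketched simply: each adjustment of a filter control immediately re-filters the data set, so users explore by direct manipulation rather than through a two-step query/result loop. The data set and field names below are invented for illustration.

```python
# Sketch of dynamic querying: each adjustment of a range "slider"
# immediately re-filters the data, replacing the two-step
# query-specification/result-visualization loop with continuous
# exploration. The data and field names are invented.
houses = [
    {"city": "Rome", "price": 120_000},
    {"city": "Pisa", "price": 95_000},
    {"city": "Rome", "price": 210_000},
]

def dynamic_query(data, field, low, high):
    """Return the rows currently inside the slider's range."""
    return [row for row in data if low <= row[field] <= high]

# Dragging the upper price bound downward narrows the visible results
# incrementally, with immediate visual feedback at every step.
print(len(dynamic_query(houses, "price", 0, 250_000)))  # → 3
print(len(dynamic_query(houses, "price", 0, 150_000)))  # → 2
print(len(dynamic_query(houses, "price", 0, 100_000)))  # → 1
```

A production system would pair such filtering with an incrementally updated visualization, so the display changes continuously as the control moves.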
Information-discovery interfaces must support a collaboration between humans and computers, e.g., for data mining. Because of our limited memory and cognitive abilities, the growing volume of available information has increasingly forced us to delegate the discovery process to computers, greatly underemphasizing the key role played by humans. Discovery should be viewed as an interactive process in which the system gives users the necessary support to analyze terabytes of data, and users give the system the feedback necessary to better focus its search. Fundamental issues for the future include how best to divide tasks between people and computers, create systems that adapt to different kinds of users, and support the changing context of tasks. Also, the system could suggest appropriate discovery techniques and data visualizations depending on data characteristics, and help integrate what are currently different tools into a homogeneous environment (see [Brachman 96] [Keim 95]).
End-user programming will be increasingly important in the future. No matter how successful interface designers are, systems will still need to be customized to the needs of particular users. Although there will likely be generic structures, for example, in an email filtering system, that can be shared, such systems and agents will always need to be tailored to meet personal requirements. The use of various scripting languages to meet such needs is widespread, but better interfaces and understandings of end-user programming are needed.
The importance of information visualization will increase as people have access to larger and more diverse sources of information (e.g., digital libraries, large databases), which are becoming universally available with the WWW. Visualizing the WWW itself and other communication networks is also an important aim of information visualization systems (see, for example, [Catarci 96]). The rich variety of information may be handled by giving the users the ability to tailor the visualization to a particular application, to the size of the data set, or to the device (e.g., 2D vs. 3D capabilities, large vs. small screens). Research challenges include making the specification, exploration, and evolution of visualizations interactive and accessible to a variety of users. Tools should be designed that support a range of tailoring capabilities: from specifying visualizations from scratch to minor adaptations of existing visualizations. Incorporating automatic generation of information visualization with user-defined approaches is another interesting open problem, for example when the user-defined visualization is underconstrained.
One fundamental issue for information visualization is how to characterize the expressiveness of a visualization and judge its adequacy to represent a data set. For example, the "readability" of a visualization of a graph may depend on (often conflicting) aesthetic criteria, such as the minimization of edge crossings and of the area of the graph, and the maximization of symmetries [DiBattista 94]. For other types of visualization, the criteria are quite ad hoc. Therefore, more foundation work is needed for establishing general principles (see, for example, [FADIVA 96]).
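One of the readability criteria mentioned above, minimizing edge crossings, is directly computable for a straight-line drawing of a graph, which is what makes it usable as an objective function in layout algorithms. The coordinates below are invented for illustration.

```python
# Count edge crossings in a straight-line drawing of a graph -- one of
# the "readability" criteria for graph visualization mentioned above.
# Node coordinates are invented for illustration.
from itertools import combinations

def ccw(a, b, c):
    """Signed area test: positive if a->b->c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """Proper intersection: each segment's endpoints straddle the other."""
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def crossing_count(pos, edges):
    count = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) == 4:  # skip edges sharing an endpoint
            if segments_cross(pos[a], pos[b], pos[c], pos[d]):
                count += 1
    return count

# The two diagonals of a square drawing cross exactly once.
pos = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (0, 2)}
edges = [("A", "C"), ("B", "D")]
print(crossing_count(pos, edges))  # → 1
```

A layout algorithm can search for node positions that drive this count down, though as the text notes such criteria often conflict with area minimization and symmetry.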
The unpredicted shift of focus to the Internet, intranets, and the World-Wide Web has ended a period in which the focus was on the interaction between an individual and a computer system, with relatively little attention to group and organizational contexts. Computer-mediated human communication raises a host of new interface issues. Additional challenges arise in coordinating the activities of computer-supported group members, either by providing shared access to common on-line resources and letting people structure their work around them, or by formally representing work processes to enable a system to guide the work. The CSCW subcommunity of human-computer interaction has grown rapidly, drawing from diverse disciplines. Social theory, social science, management studies, communication studies, and education are among the relevant areas of knowledge and expertise. Techniques drawn from these areas, including ethnographic approaches to understanding group activity, have become important adjuncts to more familiar usability methods.
Mounting demands for more function, greater availability, and interoperability affect requirements in all areas. For example, the great increase in accessible information shifts the research agenda toward more sophisticated information retrieval techniques. Approaches to dealing with the new requirements through formal or de facto standards can determine where research is pointless, as well as where it is useful. As traditional applications are integrated into the Web, social aspects of computing are extended.
2.2 Technological Trends
Again, the number and variety of trends identified in our discussions outstrip the space we have here for reporting. One can see large general trends that are moving the field from concerns about connectivity, as the networked world becomes a reality, to compatibility, as applications increasingly need to run across different platforms and code begins to move over networks as easily as data, to issues of coordination, as we understand the need to support multiperson and organization activities. We limit our discussion here to a few instances of these general trends.
The introduction of this widening range of computational devices presents a number of challenges to the discipline of HCI. First, there is the tension between the design of interfaces appropriate to the device in question and the need to offer a uniform interface for an application across a range of devices. The computational devices differ greatly, most notably in the sizes and resolutions of displays, but also in the available input devices, the stance of the user (is the user standing, sitting at a desk, or on a couch?), the physical support of the device (is the device sitting on a desk, mounted on a wall, or held by the user, and is the device immediately in front of the user or across the room?), and the social context of the device's use (is the device meant to be used in a private office, a meeting room, a busy street, or a living room?). On the other hand, applications offered across a number of devices need to offer uniform interfaces, both so that users can quickly learn to use a familiar application on new devices, and so that a given application can retain its identity and recognizability, regardless of the device on which it is operating.
Development of systems meeting the described requirements will involve user testing and research into design of displays and input devices, as well as into design of effective interfaces, but some systems have already begun to address these problems. Some browsers for the World-Wide Web attempt to offer interfaces that are appropriate to the devices on which they run and yet offer some uniformity. At times this can be difficult. For example, the frames feature of HTML causes a browser to attempt to divide up a user's display without any knowledge of the characteristics of that display. Although building applications that adapt their interfaces to the characteristics of the device on which they are running is one potential direction of research in this area, perhaps a more promising one is to separate the interface from the application and give the responsibility of maintaining the interface to the device itself. A standard set of protocols would allow the application to negotiate the setup of an interface, and later to interact with that interface and, indirectly, with the user. Such multimodal architectures could address the problems of generating an appropriate interface, as well as providing better support for users with specific disabilities. The architectures could also be distributed, and the building blocks of forthcoming distributed applications could become accessible from assorted computational devices.
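One way such a negotiation might look is sketched below: the application describes its interface abstractly, and each device realizes the description with whatever interaction techniques it supports. The protocol, element names, and device mappings here are all invented for illustration; no actual standard is implied.

```python
# Sketch of separating interface from application: the application
# publishes an abstract description of its interface, and each device
# "realizes" it with the interaction techniques it actually supports.
# The element names and mappings are invented for illustration.
ABSTRACT_UI = [
    {"element": "choice", "label": "Size", "options": ["S", "M", "L"]},
    {"element": "trigger", "label": "Order"},
]

def realize(description, device):
    """Map abstract interface elements onto device-specific widgets."""
    mapping = {
        "desktop": {"choice": "drop-down menu", "trigger": "button"},
        "phone":   {"choice": "spoken prompt",  "trigger": "voice command"},
    }
    return [(mapping[device][e["element"]], e["label"])
            for e in description]

print(realize(ABSTRACT_UI, "desktop"))
# → [('drop-down menu', 'Size'), ('button', 'Order')]
print(realize(ABSTRACT_UI, "phone"))
# → [('spoken prompt', 'Size'), ('voice command', 'Order')]
```

Because the application never names a concrete widget, the same description can serve a workstation, a hand-held device, or a speech-only interface, which is the point of moving interface responsibility to the device.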
Increases in processor speed and memory suggest that interactive techniques based on signal processing and work context could be exploited fully, provided the relevant information can be collected and cached from the network and/or local sources. With advances in speech and video processing, interfaces that actively watch, listen, catalog, and assist become possible. With increased CPU speed we might design interactive techniques based on work context rather than isolated event handling. Fast event dispatch becomes less important than helpful action. Tools might pursue multiple redundant paths, leaving the user to choose and approve rather than manually specify. We can afford to "waste" time and space on indexing information and tasks that may never be used, solely for the purpose of optimizing user effort. With increased storage capacity it becomes potentially possible to store every piece of interactive information that a user or even a virtual community ever sees. The processes of sifting, sorting, finding and arranging increase in importance relative to the editing and browsing that characterizes today's interfaces. When it is physically possible to store every paper, e-mail, voice-mail and phone conversation in a user's working life, the question arises of how to provide effective access.
A central aspect of three-dimensional interfaces is "near-real-time" interactivity, the ability for the system to respond quickly enough that the effect of direct manipulation is achieved. Near-real-time interactivity implies strong performance demands that touch on all aspects of an application, from data management through computation to graphical rendering. Designing interfaces and applications to meet these demands in an application-independent manner presents a major challenge to the HCI community. Maintaining the required performance in the context of an unpredictable user-configured environment implies a "time-critical" capability, where the system automatically and gracefully degrades quality in order to maintain performance. The design of general algorithms for time-critical applications is a new area and a significant challenge.
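A toy sketch of such time-critical graceful degradation follows: after each frame, the system adjusts a level-of-detail parameter so rendering stays within a frame-time budget. The cost model and numbers are invented; a real system would measure actual frame times and trade off many quality dimensions at once.

```python
# Sketch of time-critical graceful degradation: after each frame,
# adjust the level of detail so rendering time stays within budget.
# The cost model and constants are invented for illustration; a real
# system would measure actual frame times.
TARGET_FRAME_TIME = 1 / 30          # seconds; aim for ~30 frames/second

def frame_cost(detail, scene_complexity):
    """Stand-in for measured rendering time at a given level of detail."""
    return scene_complexity * detail / 1e6

def adjust_detail(detail, measured, target=TARGET_FRAME_TIME):
    """Degrade quality when over budget; restore it when there is slack."""
    if measured > target:
        return max(1, int(detail * target / measured))  # degrade
    return min(100, detail + 1)                         # creep back up

detail = 100
for complexity in [200, 800, 800, 200]:  # user wanders into a dense area
    measured = frame_cost(detail, complexity)
    detail = adjust_detail(detail, measured)
print(detail)  # → 43; detail dropped sharply under load, then recovered
```

The key design choice is that quality, not responsiveness, is the variable that gives way: the frame rate is held roughly constant so the illusion of direct manipulation survives.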
2.3 Design and Evaluation Methods
Design and evaluation methods have evolved rapidly as the focus of human-computer interaction has expanded. Contributing to this are the versatility of software and the downward price and upward performance spiral, which continually extend the applications of software. The challenges overshadow those faced by designers using previous media and assessment methods. Even design and evaluation for a monochrome, ASCII, stand-alone PC was challenging, and such work still does not routinely draw on more than ad hoc methods and intuition. New methods are needed to address the complexities of multimedia design, of supporting networked group activities, and of responding to routine demands for ever-faster turnaround times.
More rapid evaluation methods will remain a focus, manifest in recent work on cognitive walkthrough [Wharton 94], heuristic evaluation [Nielsen 94], and other modifications of earlier cognitive modeling (e.g., [John 97]) and usability engineering approaches. Methods to deal with the greater complexity of assessing use in group settings are moving from research into the mainstream. Ethnographic observation, participatory design, and scenario-based design are being streamlined [Schuler 93]. Contextual inquiry and design is an example of a method intended to quickly obtain a rich understanding of an activity and transfer that understanding to all design team members [Holtzblatt 93].
As well as developing and refining the procedures of design and evaluation
methods, we need to understand the conditions under which they work. Are
some better for individual tasks, some excellent for supporting groupware?
Are some useful very early in the conceptual phase of design, others best
when a specific interface design has already been detailed, and some
restricted to when a prototype is in existence? In addition, for proven
and promising techniques to become widespread, they need to be incorporated
into the education of UI designers. Undergraduate curricula should require
such courses for a subset of their students; continuing education courses
need to be developed to address the needs of practicing designers.
All the forms of computer-human interaction discussed here will need to be supported by appropriate tools. The interfaces of the future will use multiple modalities for input and output (speech and other sounds, gestures, handwriting, animation, and video), multiple screen sizes (from tiny to huge), and have an "intelligent" component ("wizards" or "agents" to adapt the interface to the different wishes and needs of the various users). The tools used to construct these interfaces will have to be substantially different from those of today. Whereas most of today's tools support widgets such as menus and dialog boxes well, these will be a tiny fraction of the interfaces of the future. Instead, the tools will need to access and control in some standard way the main application data structures and internals, so the speech system and agents can know what the user is talking about and doing. If the user says "delete the red truck," the speech system needs access to the objects to see which one is to be deleted. Otherwise, each application will have to deal with its own speech interpretation, which is undesirable. Furthermore, an agent might notice that this is the third red truck that was deleted, and propose to delete the rest. If confirmed, the agent will need to be able to find the rest of the trucks that meet the criteria. Increasingly, future user interfaces will be built around standardized data structures or "knowledge bases" to make these facilities available without requiring each application to rebuild them.
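The "delete the red truck" scenario can be sketched to show why standardized access to application objects matters. Everything below is hypothetical (the class names, the toy command grammar): the key point is that the resolver lives outside the application and works against a standard object store, so a shared speech or agent layer could serve many applications rather than each one reimplementing interpretation.

```python
class AppObject:
    """An application object exposed through the standard interface."""
    def __init__(self, kind, color):
        self.kind, self.color = kind, color

class ObjectStore:
    """Standardized access to an application's main data structures."""
    def __init__(self):
        self.objects = []
    def add(self, obj):
        self.objects.append(obj)
    def find(self, kind=None, color=None):
        return [o for o in self.objects
                if (kind is None or o.kind == kind)
                and (color is None or o.color == color)]
    def delete(self, obj):
        self.objects.remove(obj)

def interpret(command, store):
    """Toy resolver for commands of the form 'delete the <color> <kind>'.
    Returns the number of objects deleted."""
    words = command.lower().split()
    if words[:2] == ["delete", "the"] and len(words) == 4:
        matches = store.find(kind=words[3], color=words[2])
        for m in matches:
            store.delete(m)
        return len(matches)
    return 0
```

The same store supports the agent scenario: after noticing repeated deletions, an agent could call `store.find(kind="truck", color="red")` to locate the remaining matches and propose deleting them.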
In addition, tools of the future should incorporate the design and
evaluation methods discussed in section 2.3. These procedures should be
supported by the system-building tools themselves. This would make the
evaluation of ideas extremely easy for designers, allowing ubiquitous
evaluation to become a routine aspect of system design.
Although some areas of computer science are maturing and perhaps no
longer have the excitement they once did, the current widely felt
concern with developing human-centered systems, that is, those
that more effectively support people in accomplishing their tasks, is
bringing HCI to the center of computer science. We have never had more
interest, positive publicity, and recognition of the importance of the
area. And it is warranted. We now have a solid foundation of
principles and results to teach in courses and from which to base
today's user interface design and tomorrow's research. As computing
systems become increasingly central to our society, HCI research will
continue to grow in importance. The field can expect a stream of
rapidly changing technological developments, challenges associated
with integrating research from multiple disciplines, and crucially
important problems to address. We look forward to many exciting new
HCI research results in the future as well as the benefits associated
with their application.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or firstname.lastname@example.org.