What Adults Should Know about Information Technology
and How They Should Learn It

Mary Shaw

Computer Science Department
Carnegie Mellon University

January 1998


Public literacy in information technology (IT) has three important aspects: appreciation of the fundamental concepts underlying computing and information, functional competency in personal computer use, and citizen literacy – the critical ability to interpret IT results and to hold informed opinions on public issues. Here I concentrate on the latter two. For these, the ability of the concepts to give pragmatic guidance about dealing with the world matters more than the purity, elegance, or detailed correctness of the concepts. The most important target audience is current adults. Since they have, for the most part, left the school systems, we must find creative ways to reach them.

The natural sciences offer some useful examples. Many people seek to understand some of the deep results such as relativity, string theory, and genetics. Often, they do so out of intellectual fascination – to appreciate the elegance of the theories rather than because of their everyday utility. At the same time, most people rely on simpler, more mundane models for guidance on how things in the world work. These include, for example, some simplified version of Newtonian mechanics and qualitative gas laws. These pragmatic models shape the way people manipulate their immediate environment (how hard to push a child on a swing) and make decisions about public events (how much faith to place in claims about speed and collision forces).

I begin with some observations about the demographics of computer use and about the pragmatic models that people use to deal with natural phenomena. Then I list some of the important concepts in each of the three categories above. I close with suggestions about how IT, and hence the body of relevant IT concepts, is changing and a plea to target literacy efforts at adults as well as at children.

The problem with computers is their success

Over the past decade, the demographics of computing have changed dramatically. Computers are now widespread, nearly ubiquitous. Most personal computers are used by people who neither have, nor want, nor should need extensive special training in computing. Business computers are often in the hands of information users, no longer under the exclusive control of a centralized information systems department. Instead of gaining access to IT only through professional intermediaries, vast numbers of people are responsible for their own computing – often with little systematic training or support. This disintermediation, or direct connection of users to their IT services, has created new problems for IT system designers.

If all these computers are to be genuinely useful, their owners or handlers must be able to control the IT services effectively. They must understand how to express their problems; they must be able to set up and adapt the computations; and they must have reasonable assurance of the correctness of their results – without actually writing programs. This must hold across a wide range of problems, spanning business solutions, scientific and engineering calculations, and document and image preparation. Furthermore, owners of personal computers must be able to carry out the tasks formerly delegated to system administrators, such as configuration, upgrade, backup, and recovery.

The means for this disintermediation have been the availability of affordable hardware coupled with application-targeted software that produces information and computing in a form that the end user can understand and control. The IT carriers – the "killer apps" – have been spreadsheets, the world-wide web, integrated office suites, and interactive environments such as MUDs. To a lesser degree, Visual Basic has enabled people with minimal programming experience to create useful software that fits their own needs. These applications have become winners in the marketplace because they put a genuinely useful capability in the hands of people with practical problems.

The problem of supplying computing to real people is not yet solved, of course. We are still plagued by bad documentation, incompatible formats, configuration problems, and generally inscrutable systems. I regularly marvel that so many people do succeed in configuring and upgrading their own computer systems. From the anecdotal evidence, it's easy to conclude that everyone who uses a personal computer productively either is a highly trained IT professional or knows one well enough to ask for favors.

Good-enough models

Most of the general public gets along quite adequately with imperfect models of natural phenomena. Most people can operate appliances without understanding electricity and magnetism, drive cars without understanding internal combustion and gear ratios, and tune radios and TVs without understanding signal propagation and processing. They can, for the most part, avoid the inherent hazards of the devices. Many people can even make minor repairs to these devices with only hazy understanding of the phenomena.

The models that guide people in using these devices are often vastly simplified and sometimes outright wrong. They are

Examples of models that are simply wrong but "work" anyhow include

Not all analogy-based explanations provide good models, of course. A recent example in which the analogy doesn't seem to help is the use of an island to explain copyright law at Copyright Bay (http://www.nmjc.cc.nm.us/copyrightbay/). The information at this site may be fine, but the island-harbor-reef analogy doesn't do much to help explain or relate the elements.

While I won't go so far as to suggest that we invent false models, I do think we should be relaxed about the details. It's very important to get lots of people onto the right track and prepared to refine their understanding incrementally. It is much less important to get a few people in command of subtle details. We should seek advice on what types of model work for people and take this into account when we plan what to teach.

Citizen literacy concepts

These are the concepts related to critical evaluation of IT in public settings: evaluating the credibility of claims about IT, understanding the risks and benefits of adopting IT, and so on. IT literacy should imply deep enough understanding to hold an informed opinion on topics such as

Some of these can be addressed by analogy to the physical world, but in other cases the analogies are misleading – most notably when close human surveillance is absent, when identification of other parties is impractical, and when the scales of time and quantity are radically different from physical scales. Some of the concepts required for this type of IT literacy are
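A small worked example (not from the original essay; the chain-letter setting and the fanout figure are invented for illustration) can make concrete why physical intuitions about scale mislead. A paper chain letter forwarded to ten friends per hop costs postage and days per hop; an electronic version costs essentially nothing per copy, so a handful of hops reaches a million inboxes in minutes:

```python
# Hypothetical illustration of exponential fanout: each recipient
# forwards a message to `fanout` new people per hop. With digital
# copying, the per-copy cost is near zero, so the only limit on
# growth is the exponent itself.
def recipients_after(hops: int, fanout: int = 10) -> int:
    """New recipients reached at the given hop (fanout ** hops)."""
    return fanout ** hops

print(recipients_after(1))  # 10 recipients after one hop
print(recipients_after(6))  # 1000000 recipients after six hops
```

The physical analogy (mail takes time and money) fails precisely because the quantities grow at a rate the physical world never exhibits.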

Functional competency concepts

These are the concepts that allow people to set up and control their own computers, computations, communications, and bodies of information. They include both models to explain IT and specific skills pertinent to the care and handling of computers.

Fundamental concepts

These are the concepts related to the intellectual foundations of computing, communications, and information. The public should understand these for the same reasons they should understand the foundations of physics or biology. Since I expect most respondents to focus here, I'll just list some topics:

Setting Priorities

It’s easy enough to list things that people "ought" to know. However, curriculum design is at heart a resource-allocation problem, with content (however measured) as the scarce resource. The situation is even worse for adults who have left the educational system: it’s harder to get their attention, and you get less of it. In response to a challenge from Al Aho, here are my top 10 choices for the information technology concepts that responsible adults should know. Each concept is elaborated with some concrete examples of content that might be included and a rationale; note that the important part is the core concept, not so much the possible specifics. Skills should follow along, providing examples that make the concepts real and memorable. This list does not address the question of how much a high school graduate should know about each concept – I’d have to say, "enough to be useful, but constrained by available curriculum space."

  1. Anatomy of computing and information systems
  2. Nature of electronic content as an economic good
  3. Abstraction and generalization
  4. Representation
  5. Persistence and transience
  6. Large-scale information processing
  7. Scale
  8. Operational procedures
  9. Time, sequence, and concurrency (builds on #8, Operational procedures)
  10. Modeling (builds on #3, Abstraction and generalization, and #4, Representation)

Future developments

It’s always risky to predict the marketplace. One place to look for ideas is the relation between the people who use computing and the computing they use. One trend is toward reducing the degree of hands-on control the individual user maintains. We have already seen substantial disintermediation – that is, increasing numbers of people have direct access to their computers and software. At present, their computing is dominated by individual interactive computations; this allows them to monitor the results as they go. It is more challenging to set up standalone processes that run unmonitored. This requires generalization – description of a policy for an open-ended set of computations, not just manipulation of instances. We can see small beginnings of such independent processes in mail filters, automatic check payments, and the daemons that select articles from news feeds. What will be required to enable large numbers of users to set up autonomous software agents with larger responsibilities? At what point will large numbers of people trust the internet and electronic commerce mechanisms enough to carry out individual transactions? How about enabling an autonomous software agent to carry out a series of transactions? What about composing your own agent from a number of available components?
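The instance-versus-policy distinction can be sketched in a few lines. The example below is hypothetical (the message fields and folder names are invented): instead of filing each message by hand, the user authors one rule that applies to an open-ended stream of future messages – exactly the kind of generalization a mail filter asks of its user.

```python
# A minimal sketch of a user-authored filtering policy. Rather than
# acting on one message (an instance), the function describes how to
# classify any message, including ones that do not yet exist.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str

def folder_for(msg: Message) -> str:
    """A policy: map every possible message to a destination folder."""
    if "invoice" in msg.subject.lower():
        return "bills"
    if msg.sender.endswith("@example.org"):
        return "colleagues"
    return "inbox"

print(folder_for(Message("pat@example.org", "Lunch?")))  # colleagues
```

Writing such a rule demands that the user think about a whole class of future inputs rather than the one in front of them – the conceptual leap the paragraph above identifies.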

A second development trend is toward increasing each user’s interaction with other people. The original personal computers were isolated devices serving individuals. The addition of (intermittent) telephone connections supported sporadic communication such as e-mail, newsgroups, and classical World-Wide Web use (i.e., pull technology responding to individual user clicks). Permanent network connections, as they become widespread and affordable, will allow push technology to provide information (and entertainment) as it becomes available from its purveyor. Given enough bandwidth, they will also support real-time interactions as required for cooperative work, business, and multiparty games. We can expect one of the side effects of this trend to be a fusion between computing and entertainment. This will, of course, require infrastructure development in bandwidth, 3D display, intellectual property protection, and electronic commerce – the usual stuff of computer science research. Beyond that, though, what new capabilities will the consumer need? What will be required to make entertainment both interactive and multiparty? How can individuals become producers as well as consumers of computer-based entertainment?

A third development trend is away from the full-function general-purpose computer. It currently takes three forms: (a) network computers, essentially thin-to-the-point-of-anorexia clients that allow whoever's responsible for the server to worry about system maintenance; (b) palm-tops, PDAs, and upscale pagers that trade functionality for size and weight; and (c) embedded computers specialized to a particular application such as the dozen or so in your car or the one in your microwave oven. This trend moves away from the need to understand IT; the IT functions are embodied in the product that serves some particular function.


The big problem with IT literacy isn’t reaching people in classrooms; it’s reaching the people who aren’t in classrooms and aren’t likely to be. This group includes many current decision-makers. However, virtually all the position papers and discussion that addressed delivery at all seemed to assume that IT education would be carried out through traditional schooling – explicit teaching based on a plan for introducing certain ideas at certain ages. A few considered library-based approaches. Some of the discussion mentioned preparation for lifelong learning, but the flavor was that adults would attend formal classes. The underlying assumptions of this discussion would lead us to ignore a problem that is currently much more pressing than teaching K-12, namely how to help adults understand IT.

We can’t afford to wait 30 years for children taught in traditional institutions to turn 40 or 50. The target for IT literacy education must be the public at large, not just school students. This implies that much of the education must be accomplished outside the school system. Classrooms and libraries are "pull" technologies – students come deliberately to the institution, presumably having decided to participate. To reach the adults, we need to figure out how to "push" ideas to the public at large. This will require the expertise of public educators: science museum educators, journalists, writers, game builders, movie producers, and entertainers. Reaching this population will require content organization that is, compared with formal courses, more selective, more meticulous about prerequisite knowledge, finer-grained, situational, and gentle-slope.

Let’s also distinguish carefully between teaching and learning. Learning is something a student does. It can take place in the total absence of an instructor. Teaching is creating activities or settings that make it more likely for learning to take place. Still, the strongest correlation between learning and any of the other elements of education is with time on task – the amount of time the student spends attentively engaged in practicing with the content. It follows that finding ways to engage students with learning about IT is crucial. Adult students who seek out courses (the pull case) are often highly motivated. The push case is harder – the delivery has to draw in the target learner. To reach the public at large will require the expertise of, for example, science museum educators, marketing experts, and product designers (the design-for-usability types). Recent publicity campaigns concerning the hazards of undercooked hamburgers and the proper use of ABS brakes come to mind; have they actually been effective?

Rather than asking of each concept "at what age should it first be introduced," it may be more fruitful to establish which of the concepts depend on others, and hence in what order they should be introduced. For example, we might construct a prerequisite tree of concepts. I hope that this will turn out to be shallow and bushy, at least for the concepts of functional competency and citizen literacy. If so, it will ease the learning curve; individuals will be better able to pick up the knowledge they most need, as they need it. This is most critical for adult learners, because curriculum designers have much less control over the order in which an individual will be receptive to topics.
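A prerequisite tree is just a dependency graph, and any ordering that respects it can be computed mechanically. The sketch below encodes only the two dependencies actually stated in the top-10 list above (all other concepts are left independent, which is exactly the "shallow and bushy" shape hoped for); the use of Python's topological sorter is my illustration, not part of the original essay.

```python
# Encode the stated prerequisites: each concept maps to the set of
# concepts that should be learned first. Concepts with no entry have
# no prerequisites, so a learner can start with any of them.
from graphlib import TopologicalSorter

prerequisites = {
    "Time, sequence, and concurrency": {"Operational procedures"},
    "Modeling": {"Abstraction and generalization", "Representation"},
}

# static_order() yields a sequence in which every prerequisite appears
# before any concept that builds on it.
order = list(TopologicalSorter(prerequisites).static_order())
print(order)
```

The flatter the graph, the more freedom a curriculum (or an adult learner picking things up on demand) has in choosing where to begin.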

Over the past few years, personal computer products have become easier to install, and their user models have become more consistent (though they have also become more complex through feature accretion). Despite these recent improvements, they are still not simple. It may be productive to enlist the major hardware and software developers (especially application developers) in defining models of IT literacy concurrently with redesign of their own products. That is, perhaps some of the problems of functional competency can be alleviated by design changes in the computer and software systems.