Marvin A. Sirbu
Carnegie Mellon University
Similarly, when we look at the technology issues behind the phrase "information infrastructure," many techniques are used to meet an ever wider range of service demands. The National Telecommunications and Information Administration has defined the information infrastructure as "all of the facilities and instrumentalities engaged in delivering and disseminating information throughout the nation," including not only the telecommunications (e.g., telephony) industry but also the mass media (broadcasting and cable television), the Postal Service, publishing, printing, and the production and distribution systems of the motion picture industry. It would take an encyclopedia to review all of the technologies used in providing our communications infrastructure. In this paper, we have a much more modest agenda. First, we shall attempt to provide a framework within which to situate a discussion of communications technologies. Second, we will identify some of the key technical developments that have occurred in the last decade and the critical future developments that are expected to substantially alter the nature of our communications infrastructure. Third, we examine the current political debate over communications infrastructure and discuss how technology developments are both framing and likely to alter the terms of the debate in coming years.
Communications Network Architecture
The third element is the first level switching or concentration point, sometimes referred to as an end office. This first level of switching may be a telephone company switching office providing residential service and business Centrex, or it may be a corporate owned Private Branch Exchange (PBX). Such a node serves two important functions. First, it may make connections between two users' access lines, allowing them to communicate with each other. Second, it can concentrate traffic from multiple users onto a single high capacity communications link which carries traffic to a higher level switching node. Because high capacity communications links exhibit substantial economies of scale, network providers can realize great savings in transmission as a result of this concentration function. Moreover, the same inter-switch link can be shared sequentially by many different callers, as the conversations of first one subscriber and then another are carried across the trunk.
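The savings from concentration can be made concrete with the classic Erlang B formula, which gives the probability that a call arriving at a trunk group finds every trunk busy. The sketch below is illustrative only: the 1% blocking target and the traffic volumes are assumptions, not figures from the text. It shows the economy of scale the paragraph describes: ten times the traffic can be carried with far fewer than ten times the trunks.

```python
def erlang_b(traffic_erlangs, trunks):
    """Blocking probability when `traffic_erlangs` of load is offered
    to a group of `trunks` shared circuits (Erlang B recursion)."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def trunks_needed(traffic_erlangs, target_blocking=0.01):
    """Smallest trunk group that meets the blocking target."""
    n = 1
    while erlang_b(traffic_erlangs, n) > target_blocking:
        n += 1
    return n

# Economies of concentration: ten times the offered traffic needs
# far fewer than ten times the trunks at the same 1% blocking target.
small = trunks_needed(10)    # 18 trunks suffice for 10 erlangs
large = trunks_needed(100)   # well under 180 trunks for 100 erlangs
```

The same arithmetic underlies the trunk sharing described above: because conversations come and go, a trunk group sized for statistical demand, not for the sum of all subscribers, carries the traffic.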
Higher level switching nodes, interconnected by high capacity links, provide long distance transport between regions served by end office switching nodes.
Finally, there is the logical element of the network indicated by the box labeled control. The control function provides the intelligence for setting up the switching paths needed to interconnect two parties that wish to communicate. While Figure 1 shows it as a centralized abstraction, control can be distributed either to each interface or to each switching node.
From this simple abstraction, a number of profound observations can immediately be made. First, communications network costs per user are typically dominated by the investment in the "last mile" -- the link between the user's terminal and the first concentration point. Capital in the last mile is dedicated to a single user, and not shared, as contrasted with switching nodes and inter-office trunks whose costs are shared by the traffic of many callers. The capital investment of the nation's local exchange carriers in what is referred to as the "local loop" is more than the investment in all other parts of the network combined.
Second, high capacity communications links will make their first appearance in the long distance trunking part of the network. Even if the traffic of each individual user is small, the aggregation of traffic from many simultaneous users can require large capacity transmission lines. Moreover, the high costs associated with high speed transmission can be justified in the trunking plant, where costs are shared over many users; only when these costs are greatly reduced can we expect to see high capacity links migrate out towards the user from the carrier's end offices.
Third, as we imagine new end-to-end services such as digital data transmission or video dial tone, changes are required throughout the network: in terminal equipment, loop plant, switching, control, and inter-office trunking. The pace of service introduction is determined by the last of these items to be upgraded. Because of the capital invested in the local loop, there is a great incentive to find ways to use existing local loops to deliver new services. Conversely, would-be competitors with the existing local exchange carriers must focus their attention on reducing costs for the last mile--if not the last few hundred meters--if they hope to be successful.
While there are many companies today that sell only point to point communications channels, a communications infrastructure generally means a network. A network implies a set of channels linked by some form of switching that enables any two parties connected to the network to send signals between them.
Finally, at the highest level, there are complete services, such as Plain Old Telephone Service (POTS), electronic mail, video telephony, or enhanced services involving protocol conversion and interaction with stored data. Complete services require many ancillary functions ranging from sophisticated billing and reporting to directories or complex data processing.
Charles Jonscher has observed that the businesses of providing each of these different levels of added value are very different indeed. The transmission business consists of delivering a highly standardized commodity--each bit transmitted is the same as every other bit. Success in a commodity business requires low cost production. That in turn requires investment in state of the art technology for production -- i.e., transmission facilities. The successful vendor of transmission focuses on process innovation as opposed to product innovation. Commodity businesses are also characterized by capital intensity and large economies of scale.
At the other extreme is the business of providing services, particularly enhanced services such as electronic mail, protocol conversion, and information services. These services are characterized by a high degree of customization for each end user or vertical market segment. Successful participants in this end of the business will focus on product, not process, innovation. Skilled system developers, not capital, are the scarce resource, and comparative advantage requires a focus on the customer and his needs as much as on the processes of production.
The networking part of the business is intermediate between these two. While not as much a commodity business as transmission, there is much more standardization than in enhanced services. The cost of switching nodes is increasingly dominated by design and software costs, which exhibit significant economies of scale. At the same time, various traffic types require different switching technologies; thus there is significant variation among networks optimized for different traffic types or different peak channel speeds.
We will return to these distinctions as we examine more carefully the major trends in the underlying technologies of communications and their implications for information infrastructure.
The new data traffic differs from voice traffic in two fundamental ways. First, it is largely bursty traffic. That is, unlike voice traffic, where an open network connection will typically be used almost continuously by somebody speaking, data traffic flows in fits and starts. A user types a few characters at a terminal, receives a screenful of data in response, and then may pause for many seconds while he or she studies the information received. Second, whereas a digital channel with a peak speed of 64 kbps can carry a 3 Khz voice channel, many data applications require much higher peak speeds to provide the desired quality of service. Consider a doctor examining an image sent electronically from a Magnetic Resonance Imaging (MRI) scanner. A single image might consist of 2000 by 2000 picture elements, each requiring 24 bits to encode a full range of color, for a total of 12 megabytes of data. To transmit that data in a time comparable to the rate at which a doctor can flip through a series of film images requires a peak transmission rate of several hundred megabits per second.
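The arithmetic behind the MRI example can be checked in a few lines. The image dimensions and bits per pixel are taken from the text; the half-second viewing time is our assumption for what "flipping through" images implies, not a figure from the text.

```python
# One uncompressed MRI image, using the figures given in the text:
pixels = 2000 * 2000                     # picture elements per image
bits_per_pixel = 24                      # full range of color
image_bits = pixels * bits_per_pixel     # 96,000,000 bits
image_megabytes = image_bits / 8 / 1e6   # 12.0 megabytes, as stated

# Assume the doctor moves to a new image every half second:
seconds_per_image = 0.5
peak_bps = image_bits / seconds_per_image  # 192,000,000 b/s, i.e. ~200 Mbps
```

At that assumed viewing rate the required peak rate is indeed several hundred megabits per second, some three thousand times the 64 kbps of a voice channel.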
A simple way to understand this demand for higher speed networks is to look at both the typical size for a "chunk" of information needed by an application, and the elapsed time the user is willing to wait for it--known as the latency. When the user is a person, acceptable latencies are measured in seconds; when the user is a computer however, waiting one millisecond might mean 50,000 wasted instruction cycles for a typical workstation. This concept is illustrated in Figure 2 which shows chunk sizes and latencies for a variety of applications. The axes are drawn to a logarithmic scale, covering many orders of magnitude. The diagonal lines correspond to transmission speeds of 64 kbps--one voice channel--and 45 megabits per second. They illustrate how the combination of low latencies and larger chunk sizes leads to demands for networks where a single user can consume hundreds of megabits per second, even if only for a brief burst of traffic.
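The chunk-size-over-latency relationship can be expressed directly. The chunk sizes and latencies below are illustrative assumptions in the spirit of Figure 2, not values read from the figure.

```python
def required_bps(chunk_bytes, latency_seconds):
    """Peak rate needed to deliver one chunk within the acceptable latency."""
    return chunk_bytes * 8 / latency_seconds

# Assumed (chunk size in bytes, acceptable latency in seconds):
apps = {
    "screenful of text": (2_000, 1.0),     # 2 KB within 1 s
    "scanned page image": (500_000, 2.0),  # 0.5 MB within 2 s
    "MRI image": (12_000_000, 0.5),        # 12 MB within 0.5 s
}
rates = {name: required_bps(size, lat) for name, (size, lat) in apps.items()}
# The resulting rates span 16 kbps to 192 Mbps: four orders of magnitude,
# which is why single users can briefly demand hundreds of megabits per second.
```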
In the meantime, significant progress has been made on two fronts which will prolong the viability of the existing copper plant. First, advances in image compression technology make it possible to encode VCR quality video at bit rates as low as 1.5 Mbps, and broadcast quality at rates below 5 Mbps. Second, advances in digital signal processing have raised the ceiling on what can be transmitted over a copper loop with acceptable error rates. AT&T and other manufacturers have announced a technology known as Asymmetric Digital Subscriber Loop (ADSL), which, by installing appropriate interfaces at the subscriber's premises and in the central office, would allow a majority of existing copper plant to carry 3-4 Mbps downstream--enough for a single broadcast quality video signal or two VCR quality signals--while carrying a lower speed voice/signaling channel upstream. [13]
These developments suggest that the inevitability of fiber to the home and the optimal time schedule for deployment are still quite uncertain. Moreover, we may well see a mixed scenario in which fiber is used out to a neighborhood concentration point, and copper pairs for the last few hundred meters. Shortening the copper link allows it to support higher data rates.
For medium to large business customers, however, fiber has clearly proved its worth. Large business users, because they concentrate traffic from many offices, can make use of high capacity links from their premises to the carrier's central office. While technology similar to ADSL can provide up to 24 channels on one or two copper pairs, increasingly, business users need the capacity of fiber. Fiber not only provides more capacity than copper today, but also promises easy expansion of capacity simply by installing more capable electronics in the future. The local exchange carriers are rapidly installing fiber rings in major metropolitan areas with alternate path routing in the event of a cable break.
The demand by businesses for fiber-based access has induced a number of new companies to construct metropolitan fiber networks to compete with the local exchange carriers. By providing quick service, competitive prices, and an alternate path for reliability, these Competitive Access Providers (CAPs) have gained many satisfied customers. The CAPs may eventually become full fledged competitors to the local exchange carriers; to date, however, they have been limited--by regulation as well as by strategic choice--to bypassing the LECs and providing leased access to the interexchange carriers. The FCC has recently issued a tentative decision and notice of proposed rule making which would allow the CAPs to begin to compete in the provision of switched access to the interexchange carriers.
While the telephone companies wrestle with their copper vs fiber dilemma, two other approaches to providing the access link to the subscriber continue to garner attention: the use of coaxial cable of the type already installed to some 60% of U.S. homes for the carriage of cable television, and radio technology.
Earlier generations of cable television networks used a tree and branch architecture (Figure 4) which distributes a common set of video channels to all households in a franchise area. The constant branching of the cables coupled with normal attenuation with distance requires the installation of numerous amplifiers in the network to boost the signal power to adequate levels. When 30 or more of these amplifiers are cascaded together, they introduce distortion in the signal, especially at higher frequencies, thus limiting the capacity of the network. Older networks may support as few as 12 channels, though more recent systems go up to 450 or 500 MHz, or about 70 video channels.
Cable Television Networks
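The capacity penalty of a long amplifier cascade can be illustrated with a simple noise model. This sketch captures only additive noise accumulation: with each amplifier exactly offsetting the preceding cable loss, carrier level stays constant while amplifier noise powers add, so carrier-to-noise ratio falls by 10*log10(n) dB over n amplifiers. The 60 dB per-amplifier figure is an assumption, and real cascades are further limited by the composite distortion products the text mentions, which this model ignores.

```python
import math

def cascade_cnr_db(per_amp_cnr_db, n_amplifiers):
    """Carrier-to-noise ratio (dB) after a cascade of n identical
    amplifiers, assuming noise powers add and carrier level is constant."""
    return per_amp_cnr_db - 10 * math.log10(n_amplifiers)

# With an assumed 60 dB CNR contribution per amplifier:
single = cascade_cnr_db(60, 1)    # 60.0 dB
cascade = cascade_cnr_db(60, 30)  # about 45.2 dB after 30 amplifiers
```

Even under this optimistic noise-only model, a 30-amplifier cascade gives up some 15 dB, which is why cutting the cascade length matters so much to channel capacity.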
As the cable franchises come up for renewal, and the operators are asked to upgrade their networks, many are installing optical fiber backbones from the headend to an optical network interface (ONI) point much closer to the subscriber (Figure 5). This design drastically reduces the number of amplifiers needed, thus increasing network capacity to as much as 1 GHz, or about 160 six-MHz video channels. At the same time, it allows for different combinations of channels to be sent on a backbone to any particular neighborhood.
Fiber Backbone Cable Networks
Besides telephone company provided fiber, or cable TV company fiber/coax hybrids, a third alternative for the local access link is the radio spectrum. Already some 7 million subscribers make use of wireless telephone service provided by one of the two franchised cellular telephone operators in each metropolitan area, or about 5% of the total number of wired loops. In a cellular telephone system subscriber terminals are linked to a base station by radio signals rather than wires. These base stations are then linked by wire or microwave to a switching node. As mobile subscribers move from the area or "cell" served by one base station to a cell served by another, the wireless access link is automatically switched to the new base station. (Figure 6). Increasingly, cellular operators are finding that some subscribers use their cellular service as their primary telephone line, abandoning wireline service altogether. While existing cellular service was designed to support automobile-based telephones, work is rapidly progressing on technology and standards for so-called Personal Communications Services (PCS) which will support personal telephones the size of a cigarette pack. The FCC has recently issued a Notice of Proposed Rule Making (NPRM) asking for comment on whether it should grant PCS licenses for individual cities, as it did with cellular, for regions as large as LATAs, or grant one or more nationwide licenses of spectrum to PCS providers.
Cellular Telephone Network
In order to conserve spectrum, most proposals for new radio access technology assume the use of speech compression to reduce the channel rate to 16 or 32 kbps, which limits the usefulness of these access links for data by comparison with 64 kbps wireline channels.
Radio spectrum is basically an access technology; someone must still supply the switching. Treating wireless only as a loop substitute, the switching could be handled by the existing LECs; that is, the wireless operator would simply hand over the traffic collected at the base stations to the existing carriers; or, the cellular operator could install its own switching center to serve customers directly. A long distance company could choose to compete for the new radio licenses, build a radio access network, and handle all switching in existing or expanded toll switching offices. Finally, there is potentially great synergy between cable television companies and wireless telephony providers: excess fiber in the cable operator's plant can be used to interconnect local radio base stations to a central switching center. [21,22]
To date, wireless local access has focused on voice communications. However, as companies like AT&T, Apple and Sharp introduce their "Personal Digital Assistants", handheld computers designed to link their owners via wireless access to an enormous web of information and computational power, the demand for wireless data services seems poised to explode. Limitations in available spectrum at the frequencies currently used for cellular constrain data rates to 32 kbps and below. New research on wireless access at frequencies around 30 GHz, however, may make possible wideband wireless access links. These higher frequencies are currently more costly to exploit, and more susceptible to interference due to inclement weather.
The significance of this litany of local loop developments is that the simple notion of five years ago, that a single integrated broadband network based on fiber access was the "obvious" path for future telecommunications network development, is today not nearly so obvious.
Packet switching was invented in the 1960s to respond to the traffic requirements of data. Instead of reserving a dedicated circuit for the length of a call, a packet switching network allows many users to share transmission capacity by breaking up information into small chunks, called packets, and then using a transmission line to alternately send packets from several different users. The concept was used first in private networks built by end users. Until recently, users had to build their own data networks by leasing full period channels from the telephone carriers and adding their own premises-based data switching equipment. Leasing full period channels is costly, however, when traffic has a high peak requirement but relatively low average throughput. This creates demand for switched data services from carriers, who can take advantage of traffic statistics to provide high peak capacity to each user while charging only for the average throughput actually consumed.
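The statistical advantage described above can be shown with a small simulation. All of the parameters are illustrative assumptions: 100 users who each burst at 64 kbps but are active only 10% of the time, sharing a link with only 24 channels' worth of capacity rather than the 100 channels dedicated circuits would require.

```python
import random

def overflow_fraction(n_users=100, peak_bps=64_000, duty_cycle=0.1,
                      link_bps=1_536_000, slots=10_000, seed=1):
    """Fraction of time slots in which bursty users' aggregate demand
    exceeds a shared link sized far below the sum of their peak rates."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    overflows = 0
    for _ in range(slots):
        # Each user is independently active in this slot with
        # probability `duty_cycle`.
        active = sum(rng.random() < duty_cycle for _ in range(n_users))
        if active * peak_bps > link_bps:
            overflows += 1
    return overflows / slots

# A link with 24 channels of capacity serves 100 bursty users, and the
# aggregate demand almost never exceeds it.
shortfall = overflow_fraction()
```

With an average of only 10 users active at once, demand exceeds the 24-channel link in a vanishingly small fraction of slots; this is the traffic-statistics argument for carrier-provided switched data service.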
The first carrier services meeting these objectives were public switched data networks (PSDNs), which carried data in packets of 128 characters at speeds up to 48 kbps. These served primarily terminal-to-mainframe computer traffic.
The introduction of desktop computers created a demand for high speed switching of bursty traffic between machines. The idea of distributing the switching function among all the attached machines, rather than having a central switch, led to the development of Local Area Network technology. In early LANs, the transmission medium is configured as a bus or ring and its capacity is shared by all of the users as they transmit their data in high speed bursts. Each node on the LAN is responsible for assuring that its transmissions do not interfere with any others. Very quickly many companies found themselves with campus-wide LANs capable of efficiently handling bursty traffic at speeds of 4 Mbps or more. By comparison, the data services offered by the public carriers were slow and not well suited to computer-to-computer--as opposed to terminal-to-computer--traffic. Corporate users again had to rely on leased lines and premises based switches ("routers") to link LANs at their various locations. (Figure 7.)
Corporate Data Network Using Leased Lines and Premises Switching
The CCITT standard version of cell relay is called Asynchronous Transfer Mode (ATM) and is the basis of proposed Broadband Integrated Services Digital Network (BISDN) standards at 155 Mbps. Interexchange carriers are expected to begin offering ATM switching services in 1994. By 1992, several LAN vendors had begun offering ATM switches as successors to LAN hubs for linking increasingly powerful workstations at speeds above 100 Mbps. An interim step to customer use of cell relay is Switched Multimegabit Data Service, which provides a LAN-like packet switching interface familiar to end users on top of a cell relay infrastructure.
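The cell format the CCITT standardized is fixed at 53 bytes: a 5-byte header and a 48-byte payload. A few lines of arithmetic show what that implies at the 155 Mbps BISDN rate; the cell rate below is computed against the raw line rate, ignoring SONET framing overhead for simplicity.

```python
CELL_BYTES = 53       # one ATM cell: 5-byte header + 48-byte payload
PAYLOAD_BYTES = 48

def cells_for(message_bytes):
    """Fixed-size cells needed to carry a message (ceiling division)."""
    return -(-message_bytes // PAYLOAD_BYTES)

payload_efficiency = PAYLOAD_BYTES / CELL_BYTES      # about 0.906
cells_per_second = int(155.52e6 / (CELL_BYTES * 8))  # raw line rate,
                                                     # ignoring SONET framing
```

Fixed-size cells make switching hardware simple and fast, at the cost of roughly 9% header overhead plus padding in the final cell of every message.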
While there are some who still question whether ATM switching will prove to be the optimum solution for integrated broadband networks, the concept has such momentum within the international telecommunications community that it is virtually certain to be widely deployed. There are also some who question whether a single integrated network will actually prove to be as cost effective as several networks each specialized for various traffic types. ATM switching, if successful, will provide carriers with the ability to offer "bandwidth on demand"--i.e. to carry all types of traffic over a common transmission and switching infrastructure.
The local exchange carriers have begun to introduce Common Channel Signalling (CCS) in their own networks to provide such services as Call Return, Repeat Call and Caller ID. Common Channel Signalling is also a prerequisite for full deployment of Integrated Services Digital Networks, which bring the common channel signalling right to the end user's terminal. However, deployment has been slow, partly due to disputes between the RHCs and Judge Greene over whether they can transport CCS data across Local Access and Transport Area (LATA) boundaries or must interconnect with the Inter Exchange Carriers (IECs) in each LATA.
Further innovations in the control of switching are being developed by the local exchange carriers under the heading "Advanced Intelligent Network" (AIN). The goal of AIN is to make it easier for carriers to offer advanced call control features on a customized basis.
Common channel signalling is also central to Personal Communications Services. With PCS, a user would have a single telephone number for his portable terminal which would never change, no matter where the customer traveled. Using a sophisticated control system based on CCS, calls to an individual's number would always be routed to the radio base station nearest his portable handset anywhere in the country, and eventually throughout the world.
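The control logic behind this routing can be sketched as a toy location register: a database, reachable over the CCS network, that maps a permanent personal number to whichever base station currently serves the handset. All of the class, number, and station names below are hypothetical illustrations, not part of any standard.

```python
class LocationRegister:
    """Toy stand-in for the CCS-accessible database that tracks which
    radio base station currently serves each permanent personal number."""

    def __init__(self):
        self._serving = {}

    def register(self, number, base_station):
        # Invoked each time the handset signs on at a new base station.
        self._serving[number] = base_station

    def route_call(self, number):
        # Queried at call setup to choose the destination base station.
        return self._serving.get(number)

# Hypothetical subscriber number and base station names:
hlr = LocationRegister()
hlr.register("412-555-0100", "pittsburgh-cell-17")
hlr.register("412-555-0100", "chicago-cell-3")  # the subscriber travels
station = hlr.route_call("412-555-0100")        # routed to the Chicago cell
```

The permanent number never changes; only the register's entry does, which is why a CCS query at call setup is enough to find the subscriber anywhere the register's coverage extends.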
As call control becomes more sophisticated, however, the problems posed by multiple providers sharing a geographic area become more complex. For example, for PCS to function properly, information on my whereabouts may need to be shared among multiple service providers if I am to receive calls in any jurisdiction. It is quite likely that the diffusion of competition in the local loop will be paced by the difficulties in resolving control issues, not merely problems of the interconnection of transport facilities. It is perhaps interesting to note that the National Science Foundation in its recent solicitation for proposals to provide services in support of the Interagency Interim National Research and Education Network (IINREN) has advocated the separation of the "Routing Authority" from the provider of transport. One might envision operation of the software systems for the intelligent network eventually being separated from the competing service providers and operated either by a cooperative association, or by an independent party.
Given connectivity among millions of students and researchers who clearly have a need for information sharing, how far have we come in making information sharing simple and available to the average user? The answer is: not very far at all. Mail and bulletin boards are the most developed applications, accounting for some 15% of total network traffic. Bulletin boards on selected topics, many having to do with computers, provide a way for information in a particular field to circulate rapidly among interested researchers. Yet the usefulness of mail is limited by the total absence of a directory system for finding out someone's electronic mail address.
Many universities, and even individuals, have taken to making information available to others via anonymous retrieval using the File Transfer Protocol. FTP accounts for some 50% of all traffic on the NSFNET. While the amount of information thus available is enormous, the tools for finding out what information is available are still quite primitive.
A number of separate projects undertaken at different universities are beginning to provide models for how overall structuring of information access might be accomplished. Each of these projects incorporates a client-server model in which software on a user's machine (the "client") talks to software on one or more "server" machines connected to the Internet. These servers may be repositories of both indexing information and of data, or they may be organized in a hierarchy in which the records at one server index data stored at yet another server. The Wide Area Information System (WAIS), developed at Thinking Machines Inc., uses a notion of indexes and documents at each server. A client searches across one or more indexes and identifies documents of interest, which can then be retrieved. A "document of interest" might be a reference to another server with its own index to be searched. WAIS uses sophisticated weighted searching techniques to support searching one or more indexes for articles similar to a previously retrieved article of interest. The Gopher system, developed at the University of Minnesota, uses a menu interface to allow users to search across many different servers for documents of interest. Each server can be an entry point into the entire space of documents, since menus are organized as an unrooted graph, not as a tree. The Mercury project at Carnegie Mellon University has developed a client-server system for retrieving citations to journal literature and then fetching facsimile images of journal pages.
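The pattern these projects share, servers holding either index records that point elsewhere or the documents themselves, can be sketched in a few lines. The server names, topics, and documents below are invented stand-ins, and an in-memory dictionary takes the place of network transport.

```python
# Each "server" is either an index (topic -> where a document lives)
# or a repository of documents; the dictionary stands in for the network.
servers = {
    "campus-index": {
        "packet networks": ("minn-docs", "doc-42"),   # (server, document id)
        "image compression": ("cmu-docs", "doc-7"),
    },
    "cmu-docs": {"doc-7": "Notes on image compression"},
    "minn-docs": {"doc-42": "A survey of packet networks"},
}

def retrieve(index_server, topic):
    """Resolve a topic through an index server, then fetch the document
    from whichever server actually stores it."""
    doc_server, doc_id = servers[index_server][topic]
    return servers[doc_server][doc_id]

paper = retrieve("campus-index", "packet networks")
```

Because an index entry can just as well point at another index server, the same two-step lookup composes into the hierarchies and graphs that WAIS and Gopher navigate.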
Another form of information sharing, somewhat more transparent than FTP, which involves copying a file from one machine to another, is provided by the Andrew File System marketed by Transarc, Inc. The Andrew File System allows file servers at multiple institutions to share a single hierarchical name space from which users can read and write copies of files no matter where in the world they are located. Thus, two co-authors collaborating on a paper can always have access to the most current version, rather than sending copies to each other via mail or FTP.
Each of these systems gives a hint as to the kind of robust information sharing that might be possible as we develop a true information--as opposed to communications--infrastructure, but there are many unresolved problems. Among them:
* Development of standardized methods for information finding: White Pages directories, Yellow Pages, information indexes
* Development of widely standardized methods for retrieving information which may be scattered across hundreds of different hosts.
* Mechanisms for security and authentication.
* Development of billing and accounting systems which can track the transfer of intellectual property and provide a mechanism for compensating authors and maintainers.
* Development of standard document representation formats which go beyond ASCII text and allow sharing of graphics, images, voice annotation, animated sequences and video.
Research and demonstration prototypes of systems solving these problems have been called for in the Information Infrastructure and Technology Act of 1992 introduced by Senator Gore as a follow-on to the High Performance Computing and Communications Act of 1991 which has funded network infrastructure.
The proponents of competition have generally argued in favor of competition in all segments of the communications network: in the local loop as well as in the switching and long haul portions of the network; in transport and networking as well as in enhanced services.
The view that vigorous competition is the desirable state of affairs for the nation's telecommunications was at first fiercely resisted by the existing local exchange carriers. More recently, they have come to accept the idea of competition as long as there is a "level playing field." In casting about for a political argument that could help to swing the debate away from competition, the local exchange carriers have recently focused on the code word "infrastructure." By refocusing the debate on universal service, and a "public good" model of networks, the existing carriers have highlighted the risks of competition, and created a climate in which they may be better able to expend ratepayer funds to invest in networks for the future. The fact that such investments tend to raise barriers to entry is merely a side benefit.
The heart of the LEC argument is based on the notion that a single integrated broadband network based on fiber optic transmission facilities and Asynchronous Transfer Mode switching will be able to meet all the varying needs for communications services, from simple POTS to the most exotic supercomputer interconnection. The implicit assumption behind such arguments is that integrated broadband networks have such economies of scale and scope that there can be little room for multiple providers each offering differentiated services oriented towards specific market niches. Further, they argue, any restrictions on the type of services that telephone companies can carry -- information services, video services--would inhibit the realization of these scale and scope economies to the detriment of the consumer. In its most extreme form, the argument takes the position that the scale and scope economies extend beyond transport and networking to the provision of information itself. Thus carriers must also be free to originate content on their integrated broadband networks if the full benefits are to be realized.
This argument is reflected in legislation such as the Burns-Dole bill (S.1200) or the recent telecommunications policy legislation in New Jersey and appears frequently in articles written in telephony trade magazines.
In return for permission to provide the full range of networking services, voice data and video, the telephone companies would continue to operate as common carriers, providing non-discriminatory access to their network to all users. This is the bargain inherent in the FCC's recent video dialtone decision, which authorizes telephone company entry into the delivery of video services, but only on a common carrier basis. The common carrier model is seen as the best approach to fostering competition among information providers. Further, the single integrated common carrier would be in a better position to insure universal service by rate averaging among its customers.
The adherents to the competition paradigm envision a much different future. They question the extent of economies of scale and scope in the communications infrastructure. If economies of scale and scope are limited, then there is room for multiple providers of competing communications networks. Moreover, in a period of great technological ferment, competition is seen as more likely to insure that technological opportunities are exploited. Thus, the way to insure that our communications infrastructure is based on the most cost effective technologies, is to encourage competition at all levels.
If the communications infrastructure is provided on a competitive basis, then the job of regulators is greatly simplified. Regulators no longer have to monitor costs in detail to determine if prices provide only "reasonable" profits: the pressures of competition can be relied on to reduce prices to cost-based levels. If the communications infrastructure is provided on a monopoly basis, great care must be taken to insure that monopolization of transmission or networking does not lead to monopolization of enhanced or information services, for, as we have discussed above, these are likely to be better provided by many smaller firms which are more flexible and more customer oriented. In a competitive environment, there would be less need to be wary of carrier participation in the preparation of content, since no one carrier would control the only avenue for information dissemination.
To what extent does our review of technology shed light on the above debate? First, the case for a single, fiber optic-based integrated broadband network is still economically uncertain at the present time. Moreover, cable companies are sufficiently well positioned that they may well be able to evolve their networks to carry integrated broadband traffic more cost effectively than can the telephone companies. Indeed, U.S. West recently issued a Request for Information (RFI) to the traditional cable TV vendors for equipment and architectures that would allow the carrier to upgrade its loops using a mix of fiber and coaxial cable.
Second, Jim Utterback has observed that whenever a radically new technology appears that threatens to displace an entrenched alternative, it often stimulates rapid productivity improvement in the older technology which staves off its demise. In the years immediately following the invention of the electric lightbulb, gas lamp manufacturers realized a sixfold improvement in light output through research on better wicks and other improvements. In much the same way, we are seeing rapid improvements in the carrying capacity of copper which may well enable the lead broadband product--entertainment video--to be delivered to the home without the need to install fiber.
Third, the most difficult problems of multiple networks are likely to be in interworking the control systems, particularly as these become more and more sophisticated. Little thought has been given to how AIN services might be delivered in a competitive local environment.
Fourth, we have paid far more attention to date to the development of the transport and networking layers of our information infrastructure than to the services layer, where numerous unresolved issues must be addressed if information is to be readily available and shareable throughout the society. Developing simple user metaphors which allow information to be found may not appear as exciting as work on gigabit networks, but it will probably have far more impact on the efficiency and competitiveness of US firms and educational institutions.