
A Primer on Internet Technology

C. The logic of the Internet

At the physical layer, discussed later in this chapter, the Internet includes millions of networked computers and smart devices joined together by fiber-optic pipes and other transmission media into a worldwide “network of networks.” At the logical layer, the Internet consists of a common computer “addressing” scheme and a set of protocols for the accurate and efficient transmission of packet-switched data across different computer networks. Those protocols—known collectively as the “TCP/IP suite” (for “transmission control protocol” and “Internet protocol”)—enable each packet in a transmission to “tell” the packet switches it encounters where it is headed and enable the computers on each end to confirm that the message has been accurately transmitted and received.6 “The Internet” is defined as the combination of these characteristics—the IP-based addressing system and the interconnected network of networks that rely on TCP/IP as a common logical layer standard.7

Together, these elements of the Internet enable a computer in one corner of the world to find a different computer in another corner of the world and exchange information that can be “understood” by the applications software loaded onto the computers at each end of the transmission.

Indeed, the two computers (or smart devices) may use entirely different hardware and operating systems. One might be a powerful server running on the UNIX operating system, and the other might be an IBM-compatible personal computer running Windows XP or an Apple Macintosh running its own operating system. The critical point is that all computers connected to the Internet speak the same logical layer language: TCP/IP. As noted, this language is used by all in the Internet world and, like English, is owned by no one.

We will make this point more concrete with a simplified example.

Suppose that you wish to download a particular webpage—that is, load it into your computer’s memory and call it up on your screen. To do that, you type in the webpage’s domain name, such as “www.amazon.com.” This domain name is just a user-friendly shorthand for the real information needed to reach the website: the IP address of the computer hosting that site. The IP address—four numerical sequences separated by dots (in this case, 207.171.163.30)—performs much the same function as the number you dial in an ordinary phone call: it designates the computer you are trying to reach. Like the number assigned to a mobile phone, an IP address is not location-specific. It can be accessed from anywhere and it could be located anywhere.
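By way of illustration, the translation from domain name to IP address is something any networked program can request. The short Python sketch below asks the operating system’s resolver, which in turn consults the domain name servers described next, for the address behind a given name; the domain shown is merely a placeholder, and the address returned will vary over time and from network to network.

```python
import socket

# Ask the system's resolver to translate a human-friendly domain name
# into the numerical IP address actually used to reach the host.
# (The domain below is only an illustrative placeholder; the address
# returned will differ over time and across networks.)
domain = "www.amazon.com"
ip_address = socket.gethostbyname(domain)
print(f"{domain} resolves to {ip_address}")
```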

In our simplified example, your computer finds the IP address for the website by transmitting the domain name to a special type of computer on the Internet called a domain name server, whose job it is to keep track of which domain names correspond to which IP addresses. (Domain names—valuable commodities in the world of e-commerce—are allocated by private companies such as Verisign under the general supervision of a nonprofit entity known as the Internet Corporation for Assigned Names and Numbers, or “ICANN.”8) The message transmitted in your Web inquiry is broken down into discrete packets—strings of 1s and 0s—which may fly off individually in several directions in search of the fastest, least congested route to the computer running that website. Messages are compartmentalized this way for the sake of efficiency. By analogy, as John Naughton explains in his masterful history of the Internet:

[Nobody] would contemplate moving a large house in one piece from New York to San Francisco. The obvious way to do it is to disassemble the structure, load segments on to trucks and then dispatch them over the interstate highway network. The trucks may go by different routes—some through the Deep South, perhaps, others via Kansas or Chicago. But eventually, barring accidents, they will all end up at the appointed address in San Francisco. They may not arrive in the order in which they departed, but provided all the components are clearly labelled the various parts of the house can be collected together and reassembled.9

In an Internet packet, the labeling function is performed by an address header—the 1s and 0s that appear in preassigned slots near the beginning of each packet and convey information about the packet’s destination.

Ideally, the related packets in a message will end up in the right place in short order because various packet switches throughout the Internet’s telecommunications infrastructure—the routers—are constantly exchanging information about the most efficient way to reach a particular destination. Additional preassigned slots within an Internet packet contain other standard information, such as where the packet originated, how many bits it contains, and how it relates to the other packets within the same transmission. The “TCP/IP protocol suite” is, in essence, the set of rules governing which slots within a packet will contain what information about a packet’s destination, source, length, and so forth. Together, those rules enable the computers on each end to ensure the efficient transmission of the data “cargo”—the substance of the transmission—across the Internet.
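The notion of “preassigned slots” can be made concrete with a brief sketch. The Python fragment below packs a simplified 20-byte IPv4 header, following the field layout defined in RFC 791, and then reads those slots back out. It illustrates the layout only; the addresses are examples, and real routers examine these fields in specialized hardware and software rather than in code like this.

```python
import socket
import struct

# Layout of the 20-byte IPv4 header (RFC 791): each field occupies a
# preassigned slot, so any router or end host knows where to look.
IPV4_FORMAT = "!BBHHHBBH4s4s"  # network byte order

# Build a sample header: version/header length, type of service, total
# length, identification, flags/fragment offset, TTL, protocol (6 = TCP),
# checksum (left at 0 in this sketch), source and destination addresses.
header = struct.pack(
    IPV4_FORMAT,
    (4 << 4) | 5,                       # version 4, header length of 5 words
    0,                                  # type of service
    40,                                 # total packet length in bytes
    0x1234,                             # identification
    0,                                  # flags and fragment offset
    64,                                 # time to live
    6,                                  # protocol: TCP
    0,                                  # header checksum (omitted here)
    socket.inet_aton("192.0.2.10"),     # source address (documentation range)
    socket.inet_aton("207.171.163.30"), # destination address
)

# Any recipient can unpack the same slots and recover the same fields.
fields = struct.unpack(IPV4_FORMAT, header)
print("total length:", fields[2])
print("source:", socket.inet_ntoa(fields[8]))
print("destination:", socket.inet_ntoa(fields[9]))
```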

The “TCP” in TCP/IP governs the assembly and reassembly of the packets at each end (including checking for errors such as missing packets), while the “IP” is responsible for moving packets of data from one node to another.10
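As a toy illustration of the reassembly role just described, the Python sketch below collects pieces of a message that arrive out of order, each labeled with its position, and puts them back together once every piece is present; if a piece were missing, it would simply report which one. This mirrors the house-moving analogy rather than the real TCP algorithm, which relies on sequence numbers, acknowledgments, and retransmission.

```python
# A cartoon of reassembly at the receiving end: pieces of a message
# arrive out of order, each labeled with its position, and the receiver
# puts them back together.  (Real TCP also acknowledges received data
# and retransmits anything that goes missing.)
arrived = {}   # position label -> payload
expected = 4   # how many pieces the message contains

for seq, payload in [(2, "net"), (0, "The "), (3, "works."), (1, "Inter")]:
    arrived[seq] = payload

missing = [n for n in range(expected) if n not in arrived]
if missing:
    print("still waiting for pieces:", missing)   # TCP would ask again
else:
    message = "".join(arrived[n] for n in range(expected))
    print("reassembled message:", message)        # "The Internetworks."
```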

The high-capacity computer hosting the website you contact could be in the same city or halfway across the globe; you do not know, and it often makes no difference to you. When it receives your inquiry, that computer, known as a server (in that it “serves” you, the “client”), sends a burst of digital packets back to you. Again, the 1s and 0s in the header of each packet contain addressing information to ensure that the return message reaches your computer. Other 1s and 0s identify the content of the webpage using protocols specific to the World Wide Web. Your computer is able to translate those 1s and 0s into pictures and words only because it is outfitted with client software (a browser such as Netscape Navigator or Internet Explorer) designed to “understand” the meaning of 1s and 0s transmitted from distant websites. This is a critical point: the telecommunications facilities of the Internet itself—and, more generally, the Internet’s physical and logical layers—do not “know” what those 1s and 0s mean; they simply send the 1s and 0s your way and let your computer software figure out the rest.
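The division of labor just described can be seen in a few lines of client code. The Python sketch below plays a minimal version of the browser’s role: it asks a web server for a page and receives back a stream of bytes that the network itself never interprets. The host name is only a placeholder for this sketch, and turning the returned bytes into pictures and words is left to real browser software.

```python
import http.client

# Play a minimal version of the browser's role: request a page and
# receive raw bytes.  The Internet's transport machinery just delivers
# the bytes; making sense of them is the client software's job.
# ("www.example.com" is simply a placeholder host for this sketch.)
connection = http.client.HTTPConnection("www.example.com", 80, timeout=10)
connection.request("GET", "/")
response = connection.getresponse()

raw_bytes = response.read()           # the 1s and 0s sent back by the server
print("status:", response.status)     # e.g., 200
print("first bytes of the page:", raw_bytes[:60])
connection.close()
```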

The Internet’s effectiveness depends on universal agreement on the non-proprietary protocols to be used to translate information into 1s and 0s and back again. As discussed below, this agreement is the legacy of the government’s early sponsorship of the Internet’s antecedents combined with the power of network effects once the Internet assumed public stature.

The important point for now is that, because the Internet’s core logical layer standards are not owned by any firm, any operator of a data network can connect to the Internet. Similarly, because the intelligence of the Internet is provided primarily by the devices connected to it, and not by centralized switches, any developer of an “application,” such as a file sharing program like Napster, is free to make her work available via the Internet and give all Internet users access to it. Consequently, the creator of new media content, such as a short film, can rely on the Internet to distribute her work, thereby displacing such traditional intermediaries as movie theaters or television networks. To summarize, these two related features of the Internet’s open architecture—the openness of its protocols and the ability of anyone to develop applications and content for it—help explain the Internet’s spectacular growth.

The Internet’s generally open architecture is no accident. The engineers who developed the basic protocols for the Internet self-consciously promoted an end-to-end design principle that gave maximal control to end users and minimized the intelligence necessary to operate the Internet itself.11 When applied faithfully, this principle means that packets are delivered on a first come, first served basis without regard to their content, origin, or destination, and are free from any intermediate error checking or filtering. This end-to-end feature makes TCP/IP a quintessential form of common carriage. But it also means that, as originally conceived, the Internet was an imperfect medium for real-time applications such as voice conversations and video-conferencing because it provided no assurance that bits would arrive in time for such applications to work with the requisite quality of service.

Because of these characteristics, the Internet is sometimes described as the circuit-switched telephone network “turned inside out.” The intelligence in a telephone network resides in centralized switches, which tightly control the range of permissible applications and the quality of service for each application. Telephone company engineers conserve switching and transport capacity by limiting the bandwidth available to those making telephone calls. In the Internet world, by contrast, the principal limit on bandwidth—if no one else is using the network—is the overall capacity of the routers and pipes between point A and point B. If these facilities are congested, you don’t get an “all circuits busy” signal as you might when placing a telephone call at peak calling hours on Mother’s Day. Rather, you simply face delay and the potential degradation of whatever application you are trying to run.

As noted in chapter 2, ISPs and network service providers have begun deviating from the end-to-end principle by, for example, filtering out unwanted traffic and assigning priority within their IP networks to packets (identified by special headers) associated with certain real-time applications. At a high level of abstraction, they have effectively made these networks function somewhat more like traditional telephone networks.


These steps are designed both to improve service quality for voice and video-conferencing applications and to cope with the proliferation of spam and threats to network security. In this respect, some of the interconnected networks that make up the Internet are becoming “smarter” and have thus reduced the need for inefficiently duplicative circuit-switched networks for voice services. But they have done so only by compromising, to some extent, the end user’s traditional control over her Internet experience and raising a set of policy concerns addressed in chapter 5.12