
Fundamental Concepts

Learning Objectives

After reading this chapter, you should be able to:

• Understand the relationship between business drivers and network engineering
• Understand the difference between circuit and packet switching
• Understand the advantages and disadvantages of circuit and packet switching
• Understand the basic concept of network complexity and complexity tradeoffs

Networks were always designed to do one thing: carry information from one attached system to another. The discussion (or perhaps argument) over the best way to do this seemingly simple task has been long-lived, sometimes accompanied by more heat than light, and often intertwined with people and opinions of a rather absolute kind. This history can be roughly broken into multiple, and often overlapping, stages, each of which asked a different question:

• Should networks be circuit switched or packet switched?
• Should packet switched networks use fixed- or variable-sized frames?
• What is the best way to calculate a set of shortest paths through a network?
• How should packet switched networks interact with Quality of Service (QoS)?
• Should the control plane be centralized or decentralized?


Some of these questions have long since been answered, most often by blending the more extreme elements of each position into a sometimes messy, but generally always useful, solution. Some of these questions are, on the other hand, still active, particularly the last one. Perhaps, in twenty years' time, readers will be able to look on this last question as being answered, as well.

This chapter will describe the basic terms and concepts used in this book from within the framework of these historical movements or stages within the world of network engineering. By the end of this chapter, you will be ready to tackle the first two parts of this book: the forwarding plane and the control plane. The third part, an overview of network design, builds on the first two parts. The final part of this book looks at a few specific technologies and movements likely to shape the future, not only of network engineering, but even of the production and processing of information overall.

Art or Engineering?

One question that must be asked, up front, is whether network engineering is an art, or truly engineering. Many engineering fields begin as more of an art. For instance, in the early 1970s, working on and around electronics (tubes, "coils," and transformers) was largely considered an art. By the mid-1980s, electronics had become ubiquitous, and this began a commoditization process harsher than any standardization. Electronics was then considered more engineering than art. By the 2010s, electronics became "just the stuff that makes up computers." There is still some art in the designing and troubleshooting of electronics, but, by and large, their creation became more focused on engineering principles. The problems have moved from "how do you do that," to "what is the cheapest way to do that," or "what is the smallest way to do that," or some other problem that would have been considered second order in the earlier days. Perhaps one way to phrase the movement in electronics is in ratios. Perhaps (and these are very rough estimates), electronics started at around 80% art and 20% engineering, and has now moved to 80% engineering and 20% art.

What about network engineering? Will it pass through the same phases, eventually moving into the 80% engineering, 20% art range? This seems doubtful for several reasons. Network engineering works in a largely virtual space; although there are wires and devices, the protocols, data, and functionality are all laid on top of the physical infrastructure, rather than being the physical infrastructure. Unlike electronics, where you can point to a physical object and say, "this is the product," a network is not a physical thing. To put this another way, the network is a conceptual "thing" built using a wide array of individual components connected together through protocols and data models. This means design choices are almost infinitely variable and malleable. Each problem can be approached, assessed, and designed much more specifically than in electronics. So long as there are new problems to solve, there will be new solutions developed, deployed, and, eventually, removed from networks. Perhaps a useful comparison is between applications and the various kinds of computing devices; no matter how standardized computing devices become, there is still an almost infinite selection of software applications to run on top.

Figure 1-1 will be useful in illustrating this "fit" between the network and the business from one perspective.

Figure 1-1 Business to Technology Fit, First Perspective (the line-shaded regions represent overpaying for infrastructure; the darker solid-shaded regions represent lost business opportunity)

In Figure 1-1, the solid gray curved line is business growth. The dashed black line running vertical and horizontal is network capacity. There are many times when the network is overprovisioned, costing the business money to maintain unused capacity; these are shown in the gray line-shaded regions. There are other times when the network is under strain. In these darker gray solid-shaded regions, the business could grow faster, but the network is holding it back. One of the many objectives of network architecture and design (this is more of an architecture issue than strictly a design issue; see The Art of Network Architecture) is to bring these lines closer together. Accomplishing this part of the work requires creativity and future-thinking problem-solving skills. The engineer must ask questions like "How can the network be built so it scales larger and smaller to move with the business's requirements?" This is more than just scale and size, however; it is possible the nature of the business may even change over time, driving changes in applications, operational procedures, and operational pace. The network must have an architecture capable of changing as needed without introducing ossification, or the hardening of systems and processes that will eventually cause the network to fail in a catastrophic way. This part of working on networks is often considered more art than engineering, and it is not likely to change until the entire business world changes in some way.

Figure 1-2 illustrates another way in which businesses drive network engineering as an art.

In Figure 1-2, time runs from the left to the right, and feature count from the bottom to the top. What the chart expresses is the additional features added to a product over time. Network operator A will start out needing a somewhat small feature set, but the feature set required will increase over time; the same will hold true of the other three networks. The feature sets required to run any of these networks will always overlap to some degree, and they will also always be different to some degree. If a vendor wants to be able to sell a single product (or product line) and cater to all four networks, it will need to implement every unique feature required by each network. The entire set of features is depicted by the peak of the chart on the right side. For each of the networks, some percentage of the features available in any product will be unnecessary, also known as code bloat.

Figure 1-2 Features versus Usage in Networking Products (feature count over time for networks A through D)


Even though these features are not being used, each one will still represent security vulnerabilities, code that must be tested, code that interacts with features that are being used, etc. In other words, each one of these unused features is actually a liability for the network operator. The ideal solution might be to custom build equipment for each network, containing just the features required; but this is often not a choice available to either the vendor or the network operator. Instead, network engineers must somehow balance between required features and available features, and this process is definitely more a form of art than engineering.

So long as there are mismatches between the way networks can be built and the way businesses use networks, there will always be some interplay between art and engineering in networking. The percentage of each one will vary based on the network, tools, and the time within the network engineering field, of course, but the art component will probably always be more strongly represented in the networking field than it is in fields like electronics engineering.

Note

Some people might object to the use of the word art in this section. It is easy enough to replace art with craft, however, if this makes the concepts in this section easier to understand.

Circuit Switching

The first large discussion in the computer networking world was whether networks should be circuit switched or packet switched. The basic difference between these two is the concept of a circuit: do the transmitter and receiver "see" the network as a single wire, or connection, preconfigured (or set up) with a specific set of properties before they begin communicating? Or do they "see" the network as a shared resource, where information is simply generated and transmitted "at will"? The former is considered circuit switched, while the latter is considered packet switched. Circuit switching tends to provide more traffic flow and delivery guarantees, while packet switching tends to deliver data at a much lower cost; this is the first of many tradeoffs you will encounter in network engineering. Figure 1-3 will be used to illustrate circuit switching, using Time Division Multiplexing (TDM) as an example.

Figure 1-3 Time Division Multiplexing Based Circuit Switching

In Figure 1-3, the total bandwidth of the links between any two devices is split up into eight equal parts; A is sending data to E using time slot A1 and to F using time slot A2; B is sending data to E using time slot B1 and to F using time slot B2. Each piece of information is a fixed length, so each one can be put into a single time slot in the ongoing data stream (hence, each block of data represents a fixed amount of time, or slot, on the wire). Assume there is a controller someplace assigning a slot on each of the segments the traffic will traverse:

For [A,E] traffic:
• At C: slot 1 from A is switched to slot 1 toward D
• At D: slot 1 from C is switched to slot 1 toward E

For [A,F] traffic:
• At C: slot 4 from A is switched to slot 4 toward D
• At D: slot 4 from C is switched to slot 3 toward F

For [B,E] traffic:
• At C: slot 4 from B is switched to slot 7 toward D
• At D: slot 7 from C is switched to slot 4 toward E

For [B,F] traffic:
• At C: slot 2 from B is switched to slot 2 toward D
• At D: slot 2 from C is switched to slot 1 toward F

None of the packet processing devices in the network need to know which bit of data is going where; so long as C takes whatever is in slot 1 in A's data stream in each time frame and copies it to slot 1 in its outgoing stream toward D, and D copies it from slot 1 inbound from C to slot 1 outbound to E, traffic transmitted by A will be delivered at E. There is an interesting point to note about this kind of traffic processing: to forward the traffic, none of the devices in the network actually need to know what the source or destination is. The blocks of data being transmitted through the network do not need to contain source or destination addresses; decisions about where they are headed, and where they are coming from, are all based on the controller's knowledge of open slots in each link. The set of slots assigned to any particular device-to-device communication is called a circuit, because it is bandwidth and network resources committed to the communications between the one pair of devices.
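To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of slot-switching table a controller might install at C. The table layout and link names are hypothetical, chosen only to mirror the slot assignments listed above; this is not how any real TDM system is implemented.

```python
# Hypothetical slot-switching table at device C (Figure 1-3).
# Each entry maps (inbound link, inbound slot) to (outbound link, outbound slot);
# the controller installs these entries before any traffic flows.
slot_table_c = {
    ("A", 1): ("D", 1),  # [A,E] traffic
    ("A", 4): ("D", 4),  # [A,F] traffic
    ("B", 4): ("D", 7),  # [B,E] traffic
    ("B", 2): ("D", 2),  # [B,F] traffic
}

def switch_slot(table, in_link, in_slot, block):
    # The device never inspects the data block itself; the (link, slot)
    # pair alone determines where the fixed-size block is copied next.
    out_link, out_slot = table[(in_link, in_slot)]
    return out_link, out_slot, block

print(switch_slot(slot_table_c, "A", 1, b"voice-sample"))
# -> ('D', 1, b'voice-sample')
```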

The primary advantages of circuit switched networks include:

• The devices do not need to read a header, or do any complex processing, to switch packets. This was extremely important in the early days of networking, when hardware had much lower transistor and gate counts, line speeds were lower, and the time to process a packet in the device was a large part of the overall packet delay through the network.

• The controller knows the available bandwidth and traffic being pushed toward the edge devices everywhere in the network. This makes it somewhat simple, given there is actually enough bandwidth available, to engineer traffic to create the most optimal paths through the network possible.

There are also disadvantages, including:

• The complexity of the controller ramps up significantly as the network and the services it offers grow in scale. The load on the controller can become overwhelming, in fact, causing network outages.

• The bandwidth on each link is not used optimally. In Figure 1-3, the blocks of time (or cells) containing an * are essentially wasted bandwidth. The slots are assigned to a particular circuit ahead of time: slots used for the [A,E] traffic cannot be "borrowed" for the [A,F] traffic even when A has nothing to transmit toward E.

• The time required to react to changes in topology can be quite long in network terms; the local device must discover the change, report it to the controller, and the controller must reconfigure every network device along the path of each affected traffic flow.

TDM systems contributed a number of ideas to the development of the networks used today. In particular, TDM systems molded much of the early discussion on breaking data into packets for transmission through the network, and laid the groundwork for much later work in QoS and flow control. One rather significant idea these early TDM systems bequeathed to the larger networking world is network planes.

Note

Quality of Service is briefly considered in a later section in this chapter, and then in more depth in Chapter 8, "Quality of Service," later in this book.

Specifically, TDM systems are divided into three planes:

• The control plane is the set of protocols and processes that build the information necessary for the network devices to forward traffic through the network. In circuit switched networks, the control plane is a completely separate plane; there is normally a separate network between the controller and the individual devices (though not always, particularly in newer circuit switched systems).

• The data plane (also known as the forwarding plane) is the path of information through the network. This includes decoding the signal received on a wire into frames, processing them, and pushing them back onto the wire, encoded according to the physical transport system.

• The management plane is focused on managing the network devices, including monitoring the available memory, monitoring queue depth, and monitoring when the device drops information being transmitted through the network, etc. It is often difficult to distinguish between the management and control planes in a network. For instance, if a device is manually configured to forward traffic in a particular way, is this a management plane function (because the device is being configured) or a control plane function (because this is information about how to forward information)?

Note

This question does not have a definitive answer. Throughout this book, however, anything that impacts the way traffic is forwarded through the network is considered part of the control plane, while anything that impacts the physical or logical state of the device, such as interface state, is considered part of the management plane. Do not expect these definitions to hold true in the real world.


Note

Frame Relay, SONET, ISDN, and X.25 are examples of circuit switched technology, some of which are still deployed at the time of writing. See the "Further Reading" section for suggested sources for learning about these technologies.

Packet Switching

In the early to mid-1960s, packet switching was "in the air." A lot of people were rethinking the way networks had been built until then, and were considering alternatives to the circuit switched paradigm. Paul Baran, working for the RAND Corporation, proposed a packet switching network as a solution for survivability; around the same time, Donald Davies, in the UK, proposed the same type of system. These ideas made their way to the Lawrence Livermore Laboratory, leading to the first packet switched network (called Octopus) being put into operation in 1968. The ARPANET, an experimental packet switched network, began operation not long after, in 1970.

Packet Switched Operation

Note

The actual process of switching a packet is discussed in greater detail in Chapter 7, "Packet Switching."

The essential difference between circuit switching and packet switching is the role individual network devices play in the forwarding of traffic, as Figure 1-4 illustrates.

Figure 1-4 Packet Switched Network

In Figure 1-4, A produces two blocks of data. Each of these includes a header describing, at a minimum, the destination (represented by the H in each block of data). This complete bundle of information, the original block of data and the header, is called a packet. The header can also describe what is inside the packet, and include any special handling instructions forwarding devices should take when processing the packet; these are sometimes called metadata, or "data about the data in the packet."

There are two packets produced by A: A1, destined to E; and A2, destined to F. B sends two packets as well: B1, destined to F, and B2, destined to E. When C receives these packets, it reads a small part of the packet header, often called a field, to determine the destination. C then consults a local table to determine which outbound interface the packet should be transmitted on. D does likewise, forwarding the packet out the correct interface toward the destination.

This way of forwarding traffic is called hop-by-hop forwarding, because each device in the network makes a completely independent decision about where to forward each individual packet. The local table each device consults is called a forwarding table; this normally is not one table, but many tables, potentially including a Routing Information Base (RIB) and a Forwarding Information Base (FIB).
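A rough sketch of this hop-by-hop lookup follows. This is not any real RIB or FIB implementation; an actual forwarding table would use longest-prefix matching on addresses rather than exact host keys, and the interface names are invented.

```python
# Hypothetical forwarding table at device C (Figure 1-4), mapping a
# destination read from the packet header to an outbound interface.
fib_c = {
    "E": "eth0",  # toward D, which forwards on to E
    "F": "eth0",  # toward D, which forwards on to F
}

def forward(fib, packet):
    # Each device makes this decision independently, using only its
    # own local table; no controller is consulted.
    return fib[packet["destination"]]

packet_a1 = {"destination": "E", "payload": b"..."}
print(forward(fib_c, packet_a1))  # -> 'eth0'
```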

Note

These tables, how they are built, and how they are used, are explained more fully in Chapter 7, "Packet Switching."

In the original circuit switched systems, the control plane is completely separate from packet forwarding through the network. With the move from circuit to packet switching, there was a corresponding move from centralized controller decisions to a distributed protocol running over the network itself. For the latter, each node is capable of making its own forwarding decisions locally. Each device in the network runs the distributed protocol to gain the information needed to build these local tables. This model is called a distributed control plane; the idea of a control plane was thus simply transferred from one model to the other, although the two do not actually mean the same thing.


Note

Packet switching networks can use a centralized control plane, and circuit switching networks can use distributed control planes. At the time packet switched networks were first designed and deployed, however, they typically used distributed control planes. Software-Defined Networks (SDNs) brought the concept of centralized control planes back into the world of packet switched networks.

The first advantage the packet switched network has over a circuit switched network is the hop-by-hop forwarding paradigm. As each device can make a completely independent forwarding decision, packets can be dynamically forwarded around changes in the network topology, eliminating the need to communicate with the controller and await a decision. So long as there are at least two paths between the source and the destination (the network is two-connected), packets handed to the network by the source will eventually be handed to the destination by the network.

The second advantage the packet switched network has over a circuit switched network is the way the packet switched network uses bandwidth. In the circuit switched network, if a particular circuit (really a time slot in the TDM example given) is not used, then the slot is simply wasted. In hop-by-hop forwarding, each device can best use the bandwidth available on each outbound link to carry the necessary traffic load. While this is locally more complex, it is globally simpler, and it makes better use of network resources.

The primary disadvantage of packet switched networks is the additional complexity required, particularly in the forwarding process. Each device must be able to read the packet header, look up the destination in a table, and then forward the information based on the table lookup results. In early hardware, these were difficult, time-consuming tasks; circuit switching was generally faster than packet switching. As hardware has improved over time, the speed of switching a variable length packet is generally close enough to the speed of switching a fixed length packet that there is little difference between packet and circuit switching.

Flow Control in Packet Switched Networks

In a circuit switched network, the controller allocates a specific amount of bandwidth to each circuit by assigning time slots from the source to the destination. What happens if the transmitter wants to send more traffic than the allocated time slots will support? The answer is simple: it cannot. In a sense, then, the ability to control the flow of packets through the network is built in to a circuit switched network; there is no way to send more traffic than the network can forward, because the "space" the transmitter has at its disposal for sending information is pre-allocated.

What about packet switched networks? If all the links in the network shown in Figure 1-4 have the same link speed, what happens if both A and B want to use the entire link capacity toward C? How will C decide how to send it all to D on a link that is half as large as the traffic it needs to handle? Here is where traffic flow control techniques can be used. Typically, they are implemented as a separate protocol/rule set "riding on top of" the underlying network, helping to "organize" packet transmission by building a virtual circuit between the two communicating devices.

Note

Flow and error control are discussed in detail in Chapter 2, "Data Transport Problems and Solutions."

The Transmission Control Protocol (TCP) provides flow control for Internet Protocol (IP) based packet switched networks. This protocol was first specified in 1973 by Vint Cerf and Bob Kahn.
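As a rough illustration of the flow control idea only (this is not TCP's actual mechanism, which adds sequence numbers, acknowledgments, retransmission, and congestion control, and is covered in Chapter 2), a sender can cap the amount of unacknowledged data outstanding at any moment:

```python
# Toy sliding-window flow control: never more than `window`
# unacknowledged segments are outstanding at once.
def send_all(segments, window, transmit, wait_for_ack):
    in_flight = 0
    for seg in segments:
        while in_flight >= window:
            wait_for_ack()   # block until the receiver frees window space
            in_flight -= 1
        transmit(seg)
        in_flight += 1

# A trivial demonstration with an always-ready "receiver."
send_all(["s1", "s2", "s3"], window=2,
         transmit=lambda s: print("sent", s),
         wait_for_ack=lambda: None)
```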

The Protocol Wars

In the development of packet switched networks, a number of different protocols (or protocol stacks) were developed. Over time, all of them have been abandoned in favor of the IP suite of protocols. For instance, Banyan Vines had its own protocol suite based on IP, called the Vines Internet Protocol (VIP), and Novell Netware had its own protocol suite based on a protocol called IPX. Other standards bodies created standard protocol suites as well, such as the International Telecommunications Union's (ITU) suite of protocols built around Connectionless Mode Network Service (CLNS).

Why did all of these protocol suites fall by the wayside? Some of them were proprietary, and many governments and large organizations rejected proprietary solutions to packet switched networking for a wide range of reasons. The proprietary solutions were often not as well thought out, either, as they were generally developed and maintained by a small group of people. Standards-based protocols can be more complex, but they also tend to be developed and maintained by a larger group of experienced engineers. The protocol suite based on CLNS was a serious contender for some time, but it just never really caught on in the global Internet, which was becoming an important economic force at the time.


There were some specific technical reasons for this; for instance, CLNS does not number wires, but hosts. The aggregation of reachability information (concepts covered in more detail later in this book) is therefore limited in many ways.

An interesting reference for the discussion between the CLNS and IP protocol suites is The Elements of Networking Style.1

1. Padlipsky, The Elements of Networking Style and Other Essays and Animadversions on the Art of Intercomputer Networking (New York: Prentice-Hall, 1985).

Fixed Versus Variable Length Frames

In the late 1980s, a new topic of discussion washed over the network engineering world: Asynchronous Transfer Mode (ATM). The need for ever higher speed circuits, combined with slow progress in switching packets individually based on their destination addresses, led to a push for a new form of transport that would, eventually, reconfigure the entire set (or stack, because each protocol forms a layer on top of the protocol below, like a "stacked cake") of protocols used in modern networks. ATM combined the fixed length cell (or packet) size of circuit switching with a header from packet switching (although greatly simplified) to produce an "in between" technology solution. There were two key points to ATM: label switching and fixed cell sizes; Figure 1-5 illustrates the first.

Figure 1-5 Label Switching

In Figure 1-5, G sends a packet destined to H. On receiving this packet, A examines a local table and finds the next hop toward H is C. A's local table also specifies a label, shown as L, rather than "just" information about where to forward the packet. A inserts this label into a dedicated field at the head of the packet and forwards it to C. When C receives the packet, it does not need to read the destination address in the header; rather, it just reads the label, which is a short, fixed length field. The label is looked up in a local table, which tells C to forward traffic to D for destination H. The label is very small, and is therefore easy for the forwarding devices to process, making switching much faster.

The label can also "contain" handling information for the packet, in a sense. For instance, if there are actually two streams of traffic between G and H, each one can be assigned a different label (or set of labels) through the network. Packets carrying one label can be given priority over packets carrying another label, so the network devices do not need to look at any fields in the header to determine how to process a particular packet.
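The label lookup can be sketched in the same style as the earlier forwarding table example. The label values and next hops below are hypothetical, not drawn from any real ATM (or MPLS) implementation:

```python
# Hypothetical label table at device C (Figure 1-5). The inbound label
# alone selects the next hop and the outbound label; the header's
# destination address is never examined at this hop.
label_table_c = {
    100: ("D", 200),  # one stream of G-to-H traffic
    101: ("D", 201),  # a second G-to-H stream, perhaps higher priority
}

def label_switch(table, in_label, packet):
    next_hop, out_label = table[in_label]   # short, fixed-length lookup
    return next_hop, out_label, packet

print(label_switch(label_table_c, 100, b"payload"))
# -> ('D', 200, b'payload')
```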

This can be seen as a compromise between packet and circuit switching. While each packet is still forwarded hop by hop, a virtual circuit can also be defined by the label path through the network. The second key point was that ATM was also based on a fixed size cell: each packet was limited to 53 octets of information. Fixed size cells may seem to be a minor issue, but fixed size packets can make a huge performance difference. Figure 1-6 illustrates some factors involved in fixed cell sizes.

Figure 1-6 Fixed Cell Sizes

In Figure 1-6, packet 1 (A1) is copied from the network into memory on a line card or interface, LC1; it then travels across the internal fabric inside B (between memory locations) to LC2, being finally placed back onto the network at B's outbound interface. It might seem trivial from such a diagram, but perhaps the most important factor in the speed at which a device can switch/process packets is the time it takes to copy the packet across any internal paths between memory locations. The process of copying information from one place in memory to another is one of the slowest operations a device can undertake, particularly on older processors. Making every packet the same length (a fixed cell size) allowed code optimizations around the copy process, dramatically increasing switching speed.

Note

The process of switching a packet across an internal fabric is considered in Chapter 7, "Packet Switching."

Packet 2's path through B is even worse from a performance perspective; it is copied off the network into local memory first. When the destination port is determined by looking in the local forwarding table, the code processing the packet realizes the packet must be fragmented to fit into the largest packet size allowed on the outbound [B,C] link. The inbound line card, LC1, divides the packet into two packets, A1 and A2, creating a second header and adjusting any values in the headers as needed. These two packets are then copied across the fabric to the outbound line card, LC2, in two separate operations. By using fixed size cells, ATM avoided the performance cost of fragmenting packets incurred (at the time ATM was being proposed) by almost every other packet switching system.
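The cost of fragmentation can be seen even in back-of-the-envelope form; the sketch below (with invented sizes) simply counts the packets, and therefore the extra headers and fabric copies, that one oversized packet turns into:

```python
import math

def fragment_count(payload_octets, outbound_max_payload):
    # Each fragment needs its own header and its own copy across the
    # internal fabric, so this count drives the performance cost.
    return math.ceil(payload_octets / outbound_max_payload)

print(fragment_count(1500, 1000))  # -> 2, as with A1 and A2 above
```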

ATM did not, in fact, start at the network core and work its way to the network edge. Why not? The first answer lies in the rather strange choice of cell size. Why 53 octets? The answer is simple, and perhaps a little astounding. ATM was supposed to replace not only packet switched networks, but also the then-current generation of voice networks based on circuit switched technologies. In unifying these two technologies, providers could offer both sorts of services on a single set of circuits and devices.

What amount of information, or packet size, is ideal for carrying voice traffic? Around 48 octets. What amount of information, or packet size, is the minimum that makes any sort of sense for data transmission? Around 64 octets. Fifty-three octets was chosen as a compromise between these two sizes; it would not be perfect for voice transmission, as 5 octets of every cell carrying voice would be wasted. It would not be perfect for data traffic, because the most common packet size, 64 octets, would need to be split into two cells to be carried across an ATM network. A common line of thinking, at the time these deliberations were being held, was that the data transport protocols would be able to adjust to the slightly smaller cell size, hence making 53 octets an optimal size to support a wide variety of traffic. The data transport protocols, however, did not adjust. Each 53-octet cell carries only 48 octets of data, so to transport a 64-octet block of data, the first cell would carry 48 octets, and the second would carry just the remaining 16 octets, with 32 octets of empty space. Providers discovered 50% or more of the bandwidth available on ATM links was consumed by empty cells, effectively wasted bandwidth. Hence, data providers stopped deploying ATM, voice providers never really started deploying it, and ATM died.
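The waste is straightforward to quantify. The following sketch assumes the standard ATM split of each 53-octet cell into a 5-octet header and 48 octets of payload:

```python
import math

CELL_OCTETS = 53
PAYLOAD_OCTETS = 48  # 53-octet cell minus the 5-octet header

def atm_efficiency(packet_octets):
    # Fraction of transmitted octets carrying actual data, once the
    # packet is padded out to a whole number of cells.
    cells = math.ceil(packet_octets / PAYLOAD_OCTETS)
    return packet_octets / (cells * CELL_OCTETS)

print(f"{atm_efficiency(64):.0%}")  # 64 octets -> 2 cells -> ~60% efficient
```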

What is interesting is how the legacy of projects like ATM lives on in other protocols and ideas. The label switching concept was picked up by Yakov Rekhter and other engineers, and developed into tag switching. This kept many of ATM's fundamental advantages: the quick lookup in the forwarding path, and the bundling of metadata about packet handling into the label itself. Tag switching eventually became Multiprotocol Label Switching (MPLS), which provides not only faster lookup, but also stacks of labels and virtualization. The basic idea was thus taken and expanded, impacting modern network protocols and designs in significant ways.

Note

MPLS is discussed in Chapter 9, “Network Virtualization.”

The second legacy of ATM is the fixed cell size. For many years, the dominant network transport suite, based on TCP and IP, has allowed network devices to fragment packets while forwarding them. This is a well-known way to degrade the performance of a network, however. A do not fragment bit was added to the IP header, telling network devices to drop packets rather than fragmenting them, and serious efforts were put into discovering the largest packet that can be transmitted through the network between any pair of devices. A newer generation of IP, called IPv6, removed fragmentation by network devices from the protocol specification.

Calculating Loop-Free Paths

Overlapping many of these previous discussions within the network engineering world was another issue that often made it more difficult to decide whether packet or circuit switching was the better solution: how should loop-free paths be computed in a packet switched network?

As packet switched networks have, throughout the history of network engineering, been associated with distributed control planes, and circuit switched networks have been associated with centralized control planes, the issue of computing loop-free paths efficiently had a major impact on deciding whether packet switched networks were viable or not.


Note

Loop-free paths are discussed in Part II, “The Control Plane.”

In the early days of network engineering, the available processing power, memory, and bandwidth were often in short supply. Table 1-1 provides a little historical context.

Table 1-1 History of Computing Power, Memory, and Bandwidth

Year   MIPS                             Memory (Cost/MB)   Bandwidth (LAN)
1984   3.2 (Motorola 68010)             1331               2Mb/s
1987   6.6 (Motorola 68020)             154                10Mb/s
1990   44 (Motorola 68040)              98                 16Mb/s
1996   541 (Intel Pentium Pro)          8                  100Mb/s
1999   2,054 (Intel Pentium III)        1                  100Mb/s
2006   49,161 (Intel Core 2, 4 cores)   0.1                4Gb/s
2014   238,310 (Intel i7, 4 cores)      0.001              100Gb/s

In 1984, when many of these discussions were occurring, any difference in the amount of processor and memory required by two ways of calculating loop-free paths through a network would have a material impact on the cost of building a network. When bandwidth is at a premium, reducing the number of bits a control plane requires to transfer the information needed to calculate a set of loop-free paths through a network makes a real difference in the amount of user traffic the network can handle. Reducing the number of bits required for the control plane to operate also makes a large difference in the stability of the network at lower bandwidths.

For instance, using a Type Length Value (TLV) format to describe control plane information carried across the network adds a few octets of information to the overall packet length; but in the context of a 2Mbps link, aggravated by a chatty control plane, the costs could far outweigh the longer-term advantage of protocol extensibility.
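As a rough illustration of this tradeoff, a minimal TLV encoder might look like the following; the type code is invented for the example:

```python
import struct

def encode_tlv(tlv_type, value):
    # One octet of type and one octet of length precede the value:
    # two octets of overhead per field, traded for extensibility.
    return struct.pack("!BB", tlv_type, len(value)) + value

# Hypothetical type code 1 = "neighbor address"; 4 octets of data
# cost 6 octets on the wire.
field = encode_tlv(1, bytes([10, 0, 0, 1]))
print(len(field))  # -> 6
```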

Note

TLVs are discussed in Chapter 2, “Data Transport Problems and Solutions.”


The protocol wars were rather heated at some points; entire research projects were undertaken, and papers written, about why and how one protocol was better than another. As an example of the kind of back and forth these arguments generated, a shirt seen at the Internet Engineering Task Force (IETF) during the time the Open Shortest Path First (OSPF) Protocol was being developed said: IS-IS = 0. The "IS-IS" here refers to Intermediate System-to-Intermediate System, a control plane (routing protocol) originally developed by the International Organization for Standardization (ISO).

There was a wide variety of mechanisms proposed to solve the problems of calculating loop-free paths through a network; ultimately, three general classes of solutions have been widely deployed and used:

• Distance Vector protocols, which calculate loop-free paths hop by hop based on the path cost (a minimal sketch of this idea follows the list)
• Link State protocols, which calculate loop-free paths across a database synchronized across the network devices
• Path Vector protocols, which calculate loop-free paths hop by hop based on a record of previous hops
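The sketch below illustrates the distance vector idea referenced in the list above; the node names and costs are hypothetical, and real protocols add split horizon, timers, and much more:

```python
# One distance vector step: a node merges a neighbor's advertised
# costs with its own cost to reach that neighbor, keeping the cheaper
# path for each destination.
def merge(local, via_cost, neighbor_table, next_hop):
    for dest, cost in neighbor_table.items():
        candidate = via_cost + cost
        if dest not in local or candidate < local[dest][0]:
            local[dest] = (candidate, next_hop)
    return local

routes_a = {"B": (1, "B")}           # A's current table: B is 1 hop away
advert_from_b = {"C": 1, "D": 2}     # B advertises its costs to C and D
print(merge(routes_a, 1, advert_from_b, "B"))
# -> {'B': (1, 'B'), 'C': (2, 'B'), 'D': (3, 'B')}
```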

The discussion over which protocol is best for each specific network, and for what particular reasons, still persists; it is probably a never-ending conversation, as there is (probably) no final answer to the question. Instead, as with fitting a network to a business, there will probably always be some degree of art (or craft) involved in making a particular control plane work on a particular network. Much of the urgency in the question, however, has been drawn out by the increasing speed of networks: in processing power, memory, and bandwidth.

Quality of Service

As real-time traffic started to be carried over packet switched networks, QoS became a major problem. Voice and video both rely on the network being able to carry traffic between hosts quickly (having low delay), and with small amounts of variability in interpacket spacing (jitter). Discussions around QoS actually began in the early days of packet switched networking, but reached a high point around the time ATM was being considered. In fact, one of the main advantages of ATM was the ability to closely control the way in which packets were handled as they were carried over a packet switched network. With the failure of ATM in the market, two distinct lines of thought emerged about applications that require strong controls on jitter and delay:

• These applications would never work on packet switched networks; these kinds of applications would always need to be run on a separate network.
• It is just a matter of finding the right set of QoS controls to allow such applications to run on packet switched networks.

Note

Quality of Service is discussed in detail in Chapter 8, “Quality of Service.”

The primary application most providers and engineers were concerned about was voice, and the fundamental question came down to this: is it possible to provide decent voice over a network also carrying large file transfers and other "non-real-time" traffic? Complex schemes were invented to allow packets to be classified and marked (called QoS marking) so network devices would know how to handle them properly. Mapping systems were developed to carry these QoS markings from one type of network to another, and a lot of time and effort were put into researching queueing mechanisms (the order in which packets are sent out on an interface). A sample chart of one QoS system, showing the mapping between applications and QoS markings (Figure 1-7), will suffice to illustrate the complexity of these systems.
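As a tiny illustration of the queueing side of this work, a strict-priority output queue can be sketched in a few lines; the class numbers are invented here, not any standard marking scheme:

```python
import heapq

# Toy strict-priority output queue: the lowest class number is always
# transmitted first, regardless of arrival order.
queue = []
heapq.heappush(queue, (2, "bulk-segment"))
heapq.heappush(queue, (0, "voice-sample"))
heapq.heappush(queue, (1, "video-frame"))

while queue:
    marking, packet = heapq.heappop(queue)
    print(marking, packet)  # voice first, then video, then bulk
```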

The increasing link speeds, shown previously in Table 1-1, had two effects on the discussion around QoS:

• Faster links will (obviously) carry more data. As any individual voice and video stream becomes a shrinking part of the overall bandwidth usage, the need to strongly balance the use of bandwidth between different applications became less important.

• The amount of time required to move a packet from memory onto the wire through a physical chip is reduced with each increase in bandwidth.

As available bandwidth increased, the need for complex queueing strategies to counter jitter became less important. This increase in speed has been augmented by newer queueing systems that are much more effective at managing different kinds of traffic, reducing the necessity of marking and handling traffic in a fine-grained fashion.

Figure 1-7 QoS Planning and Mapping (a sample mapping of traffic classes, such as voice, interactive video, streaming video, call signaling, network management, and bulk data, to markings and bandwidth percentages)

These increases in bandwidth were often enabled by changing from copper to glass fiber. Fiber not only offers larger bandwidths, but also more reliable transmission of data. The way physical links are built also evolved, making them more resistant to breakage and other material problems. A second factor increasing bandwidth availability was the growth of the Internet. As networks became more common and more connected, a single link failure had a lesser impact on the amount of available bandwidth and on the traffic flows across the network.

As processors became faster, it became possible to develop systems where dropped and delayed packets would have less effect on the quality of a real-time stream. Increasing processor speeds also made it possible to use very effective compression algorithms, reducing the size of each stream. On the network side, faster processors meant the control plane could compute a set of loop-free paths through the network faster, reducing both the direct and indirect impacts of link and device failures.

Ultimately, although QoS is still important, it can be much simplified. Four to six queues are often enough to support even the most difficult applications. If more are needed, some systems can now either engineer traffic flows through a network or actively manage queues, to balance between the complexity of queue management and application support.

The Revenge of Centralized Control Planes

In the 1990s, in order to resolve many of the perceived problems with packet switched networks, such as complex control planes and QoS management, researchers began working on a concept called Active Networking. The general idea was that the control plane for a packet switched network could, and should, be separated from the forwarding devices in order to allow the network to interact with the applications running on top of it.

The basic concept of separating the control and data planes more distinctly in packet switching networks was again considered in the formation of the Forwarding and Control Element Separation (ForCES) working group in the IETF. This working group was primarily concerned with creating an interface applications can use to install forwarding information onto network devices. The working group was eventually shut down in 2015, and its standards were never widely implemented.

In 2006, researchers began looking for a way to experiment with control planes in packet switched networks without the need to code modifications on the devices themselves; this was a particular problem, as most of these devices were sold by vendors as unmodifiable appliances (or black boxes). The eventual result was OpenFlow, a standard interface that allows applications to install entries directly in the forwarding table (rather than the routing table; this is explained more fully in several places in Part I of this book, "The Data Plane"). The research project was picked up as a feature by several vendors, and a wide array of controllers has been created by vendors and open source projects. Many engineers believed OpenFlow would revolutionize network engineering by centralizing the control plane.

The reality is likely to be far different; what is likely to happen is what has always happened in the world of data networking: the better parts of a centralized control plane will be consumed into existing systems, and the fully centralized model will fall to the wayside, leaving in its path changed ideas about how the control plane interacts with applications and the network at large.

Complexity

The technologies described thus far (circuit and packet switching, control planes, and QoS) are very complex. In fact, there appears to be no end to the increasing complexity in networks, particularly as applications and businesses become more demanding. This section will consider two specific questions in relation to complexity and networks:

  • What is network complexity?
  • Can network complexity be “solved”?

The final parts of this section will consider a way of looking at complexity as a set of tradeoffs.

Why So Complex?

While the most obvious place to begin might be with a definition of complexity, it is actually more useful to consider why complexity is required in a more general sense. To put it more succinctly: is it possible to "solve" complexity? Why not just design simpler networks and protocols? Why does every attempt to make anything simpler in the networking world end up apparently making things more complex in the long run?

For instance, by tunneling on top of (or through) IP, the control plane's complexity is reduced, and the network is made simpler overall. Why then do tunneled overlays end up containing so much complexity?

There are two answers to this question. First, human nature being what it is, engineers will always invent ten different ways to solve the same problem. This is especially true in the virtual world, where new solutions are (relatively) easy to deploy, it is (relatively) easy to find a problem with the last set of proposed solutions, and it is (relatively) easy to move some bits around to create a new solution that is "better than the old one." This is particularly true from a vendor perspective, when building something new often means being able to sell an entirely new line of products and technologies, even if those technologies look very much like the old ones. The virtual space, in other words, is partially so messy because it is so easy to build something new there.

The second answer, however, lies in a more fundamental problem: complexity is necessary to deal with the uncertainty involved in difficult to solve problems. Figure 1-8 illustrates.

Figure 1-8 Complexity, Effectiveness, and Robustness (solution effectiveness and robustness plotted against increasing complexity)

Adding complexity seems to allow a network to handle future requirements and unexpected events more easily, as well as provide more services over a smaller set of base functions. If this is the case, why not simply build a single protocol running on a single network able to handle all the requirements potentially thrown at it, and any sequence of events you can imagine? A single network running a single protocol would certainly reduce the number of moving parts network engineers need to deal with, making all our lives simpler, right? In fact, there are a number of different ways to manage complexity, for instance:

1. Abstract the complexity away, to build a black box around each part of the system, so each piece and the interactions between these pieces are more immediately understandable.

2. Toss the complexity over the cubicle wall: move the problem out of the networking realm into the realm of applications, or coding, or a protocol. As RFC1925 says, "It is easier to move a problem around (e.g., by moving the problem to a different part of the overall network architecture) than it is to solve it."

3. Add another layer on top, to treat all the complexity as a black box by putting another protocol or tunnel on top of what's already there. Returning to RFC1925, "It is always possible to add another level of indirection."

4. Become overwhelmed with the complexity, label what exists as "legacy," and chase some new shiny thing perceived to be able to solve all the problems in a much less complex way.

5. Ignore the problem and hope it will go away. Arguing for an exception "just this once," so a particular business goal can be met, or some problem fixed, within a very tight schedule, with the promise that the complexity issue will be dealt with "later," is a good example.


Each of these solutions, however, has a set of tradeoffs to consider and manage. Further, at some point, any complex system becomes brittle: robust yet fragile. A system is robust yet fragile when it is able to react resiliently to an expected set of circumstances, but an unexpected set of circumstances will cause it to fail. To give an example from the real world: knife blades are required to have a somewhat unique combination of characteristics. They must be hard enough to hold an edge and cut, and yet flexible enough to bend slightly in use, returning to their original shape without any evidence of damage, and they must not shatter when dropped. It has taken years of research and experience to find the right metal to make a knife blade from, and there are still long and deeply technical discussions about which material is right for specific properties, under what conditions, etc.

"Trying to make a network proof against predictable problems tends to make it fragile in dealing with unpredictable problems (through an ossification effect as you mentioned). Giving the same network the strongest possible ability to defend itself against unpredictable problems, it necessarily follows, means that it MUST NOT be too terribly robust against predictable problems. Not being too robust against predictable problems is necessary to avoid the ossification issue, but not necessarily sufficient to provide for a robust ability to handle unpredictable network problems." —Tony Przygienda

Complexity is necessary, then: it cannot be “solved.”

Defining Complexity

Given complexity is necessary, engineers are going to need to learn to manage it in some way, by finding or building a model or framework. The best place to begin in building such a model is with the most fundamental question: What does complexity mean in terms of networks? Can you put a network on a scale and have the needle point to "complex"? Is there a mathematical model into which you can plug the configurations and topology of a set of network devices to produce a "complexity index"? How do the concepts of scale, resilience, brittleness, and elegance relate to complexity? The best place to begin in building a model is with an example.

Control Plane State versus Stretch

What is network stretch? In the simplest terms possible, it is the difference between the shortest path in a network and the path traffic between two points actually takes. Figure 1-9 illustrates this concept.

Figure 1-9 A Small Network to Illustrate State and Stretch

Assuming the cost of each link in this network is 1, the shortest physical path between Routers A and C will also be the shortest logical path: [A,B,C]. What happens, however, if the metric on the [A,B] link is changed to 3? The shortest physical path is still [A,B,C], but the shortest logical path is now [A,D,E,C]. The differential between the shortest physical path and the shortest logical path is the distance a packet being forwarded between Routers A and C must travel; in this case, the stretch can be calculated as (4 [A,D,E,C]) − (3 [A,B,C]), for a stretch of 1.
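The same calculation takes only a few lines of Python, here using hop counts as the measure (measurement choices are discussed in the next section):

```python
# Stretch for the Figure 1-9 example: hops on the shortest logical
# path minus hops on the shortest physical path.
physical_path = ["A", "B", "C"]      # still shortest physically
logical_path = ["A", "D", "E", "C"]  # shortest logically once [A,B] costs 3

def hop_count(path):
    return len(path) - 1

print(hop_count(logical_path) - hop_count(physical_path))  # -> 1
```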

How Is Stretch Measured?

The way stretch is measured depends on what is most important in any given situation, but the most common way is by comparing hop counts through the network, as is used in the examples here. In some cases, it might be more important to consider the metric along two paths, the delay along two paths, or some other metric, but the important point is to measure it consistently across every possible path to allow for accurate comparison between paths.

It is sometimes difficult to differentiate between the physical topology and the logical topology. In this case, was the [A,B] link metric increased because the link is actually a slower link? If so, whether this is an example of stretch, or an example of simply bringing the logical topology in line with the physical topology, is debatable.

In line with this observation, it is much easier to define policy in terms of stretch than almost any other way. Policy is any configuration that increases the stretch of a network. Using Policy-Based Routing, or Traffic Engineering, to push traffic off the shortest physical path and onto a longer logical path to reduce congestion on specific links, for instance, is a policy; it increases stretch.

Increasing stretch is not always a bad thing. Understanding the concept of stretch simply helps us understand various other concepts and put a framework around complexity and optimization tradeoffs. The shortest path, physically speaking, is not always the best path.

Stretch, in this illustration, is very simple; it impacts every destination, and every packet flowing through the network. In the real world, things are more complex. Stretch is actually per source/destination pair, making it very difficult to measure on a network-wide basis.

Defining Complexity: A Model

Three components (state, optimization, and surface) are common in virtually every network or protocol design decision. These can be seen as a set of tradeoffs, as illustrated in Figure 1-10 and described in the list that follows.

Figure 1-10 The Plane of the Possible (the corners of the tradeoff are state, optimization, and interaction surfaces; achievable designs lie on the plane of the possible, beyond which is the realm of the impossible)

• Increasing optimization always moves toward more state or more interaction surfaces.
• Decreasing state always moves toward less optimization or more interaction surfaces.
• Decreasing interaction surfaces always moves toward less optimization or more state.

These are not ironclad rules, of course; they are contingent on the specific network, protocols, and requirements, but they are generally true often enough to make this a useful model for understanding tradeoffs in complexity.

Interaction Surfaces

While state and optimization are fairly intuitive, it is worthwhile to spend just a moment more on interaction surfaces. The concept of interaction surfaces is difficult to grasp primarily because it covers such a wide array of ideas. Perhaps an example would be helpful; assume a function that

  • Accepts two numbers as input
  • Adds them
  • Multiplies the resulting sum by 100
  • Returns the result

This single function can be considered a subsystem in some larger system. Now assume you break this single function into two functions, one of which does the addition, and the other of which does the multiplication. You have created two simpler functions (each one only does one thing), but you have also created an interaction surface between the two functions; you have created two interacting subsystems within the system where there used to be only one.
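A trivial sketch of this example in Python (the function names are arbitrary):

```python
# The original subsystem: one function, no internal interaction surface.
def compute(a, b):
    return (a + b) * 100

# The same work split into two simpler functions. Each is easier to
# understand alone, but the call between them is a new interaction
# surface: add() must produce exactly what scale() expects.
def add(a, b):
    return a + b

def scale(total):
    return total * 100

assert compute(2, 3) == scale(add(2, 3))  # -> 500 either way
```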

As another example, assume you have two control planes running on a single network. One of these two control planes carries information about destinations reachable outside the network (external routes), while the other carries destinations reachable inside the network (internal routes). While these two control planes are different systems, they will still interact in many interesting and complex ways. For instance, the reachability to an external destination will necessarily depend on reachability to the internal destinations between the edges of the network. These two control planes must now work together to build a complete table of information that can be used to forward packets through the network.

Even two routers communicating within a single control plane can be considered an interaction surface. This breadth of definition is what makes it so very difficult to define what an interaction surface is.

Interaction surfaces are not a bad thing; they help engineers and designers divide and conquer in any given problem space, from modeling to implementation. At the same time, interaction surfaces are all too easy to introduce without thought.

Managing Complexity through the Wasp Waist

The wasp waist, or hourglass model, is used throughout the natural world, and widely mimicked in the engineering world. While engineers do not often consciously apply this model, it is actually used all the time. Figure 1-11 illustrates the hourglass model in the context of the four-layer Department of Defense (DoD) model that gave rise to the Internet Protocol (IP) suite.

At the bottom layer, the physical transport system, there are a wide array of

protocols, from Ethernet to Satellite. At the top layer, where information is mar-

shaled and presented to applications, there is a wide array of protocols, from

Hypertext Transfer Protocol (HTTP) to TELNET (and thousands of others

besides). A funny thing happens when you move toward the middle of the stack,

however: the number of protocols decreases, creating an hourglass. Why does this

work to control complexity? Going back through the three components of com-

plexity—state, surface, and optimization—exposes the relationship between the

hourglass and complexity.

[Figure 1-11 The DoD Model and the Wasp Waist: a wide Application layer (HTTP, SMTP, SNMP, FTP, TELNET, etc.) and Transport layer (TCP, UDP) narrow to a single Network layer protocol (IP), then widen again at the Physical layer (Ethernet, SONET, Token Ring, Microwave, LTE, Satellite, etc.)]


  • State is divided by the hourglass into two distinct types of state: information about the network and information about the data being transported across the network. While the upper layers are concerned with marshaling and presenting information in a usable way, the lower layers are concerned with discovering what connectivity exists and what the connectivity properties actually are. The lower layers do not need to know how to format an FTP frame, and the upper layers do not need to know how to carry a packet over Ethernet—state is reduced at both ends of the model.

  • Surfaces are controlled by reducing the number of interaction points between the various components to precisely one—the Internet Protocol (IP). This single interaction point can be well defined through a standards process, with changes to it closely regulated to prevent massive, rapid changes that would ripple up and down the protocol stack.

  • Optimization is traded off by not allowing one layer to reach into another layer, and by hiding the state of the network from the applications. For instance, TCP does not really know the state of the network other than what it can gather from local information. TCP could potentially be much more efficient in its use of network resources, but only at the cost of a layer violation, which opens up difficult-to-control interaction surfaces.
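A toy sketch of this local-information constraint, loosely patterned on TCP's classic additive-increase/multiplicative-decrease behavior (an illustration only, not a real TCP implementation):

    # The sender never sees the network's actual state; it adjusts a local
    # variable, the congestion window, from local signals alone.
    cwnd = 10.0  # congestion window, in segments

    def on_ack():
        """An acknowledgment arrived: assume the path has spare capacity."""
        global cwnd
        cwnd += 1.0 / cwnd  # additive increase

    def on_loss():
        """A segment was lost: infer congestion somewhere along the path."""
        global cwnd
        cwnd = max(1.0, cwnd / 2.0)  # multiplicative decrease

Everything the sender "knows" about the network is inferred from acknowledgments and losses; the actual state of every queue along the path stays hidden below the waist.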

The layering of a stacked network model is, then, a direct attempt to control the

complexity of the various interacting components of a network.
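The same pattern appears in miniature wherever one narrow, well-defined interface decouples many components above from many components below. A minimal sketch, with purely illustrative class names:

    from abc import ABC, abstractmethod

    class Link(ABC):
        """Any physical transport: Ethernet, satellite, LTE, and so on."""
        @abstractmethod
        def transmit(self, bits: bytes) -> None: ...

    class Ethernet(Link):
        def transmit(self, bits: bytes) -> None:
            print(f"ethernet frame carrying {len(bits)} bytes")

    class IP:
        """The waist: every application above and every link below meets here."""
        def __init__(self, link: Link) -> None:
            self.link = link

        def send(self, payload: bytes) -> None:
            self.link.transmit(b"ip-header" + payload)

    # An application protocol (HTTP, SMTP, ...) only ever sees IP.send();
    # swapping Ethernet for a satellite link changes nothing above the waist.
    IP(Ethernet()).send(b"GET /")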

Complexity and Tradeoffs

A very basic law of complexity might be stated thus: in any complex system, there will exist sets of three-way tradeoffs. The State/Optimization/Surface (SOS) model described here is one such set. Another, more familiar to engineers who work primarily with databases, is Consistency/Availability/Partition tolerance (the CAP theorem). Yet another, found in a much wider range of contexts, is Quick/Cost/Quality (the classic "fast, cheap, good: pick any two"). These are not components of complexity, but what might be called the consequents of complexity. Engineers need to be adept at spotting these kinds of tradeoff triangles, accurately understanding the "corners" of each triangle, determining where along the plane of the possible the best solution lies, and being able to articulate why some solutions simply are not possible or desirable.

If you have not found the tradeoffs, you have not looked hard enough is a good rule of thumb to follow in all engineering work.


Final Thoughts

This chapter is not intended to provide detail, but rather to frame key terms within the

scope of the history of computer network technology. The computer networking world

does not have a long history (for example, human history reaches back at least 6,000 years,

and potentially many millions, depending on your point of view), but this history still

contains a set of switchback turns and bumpy pathways, often making it difficult for the

average person to understand how and why things work the way they do.

With this introduction in hand, it is time to turn to the first topic of interest in

understanding how networks really work—the data plane.

Further Reading

Brewer, Eric. “Towards Robust Distributed Systems.” Presented at the ACM Sym-

posium on the Principles of Distributed Computing, July 19, 2000. http://

www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf.

Buckwalter, Jeff T. Frame Relay: Technology and Practice. 1st edition. Reading, MA:

Addison-Wesley Professional, 1999.

Cerf, Vinton G., and Edward Cain. “The DoD Internet Architecture Model.” Com-

puter Networks 7 (1983): 307–18.

Gorrell, Mike. “Salt Lake County Data Breach Exposed Info of 14,200 People.”

The Salt Lake Tribune. Accessed April 23, 2017. http://www.sltrib.com/

home/3705923-155/data-breach-exposed-info-of-14200.

Ibe, Oliver C. Converged Network Architectures: Delivering Voice over IP, ATM,

and Frame Relay. 1st edition. New York: Wiley, 2001.

Kumar, Balaji. Broadband Communications: A Professional’s Guide to ATM, Frame

Relay, SMDS, SONET, and BISDN. New York: McGraw-Hill, 1995.

“LAN Emulation.” Microsoft TechNet. Accessed August 4, 2017. https://

technet.microsoft.com/en-us/library/cc976969.aspx.

“LAN Emulation (LANE).” Cisco. Accessed August 4, 2017. http://

www.cisco.com/c/en/us/tech/asynchronous-transfer-mode-atm/lan-emulation-

lane/index.html.

Padlipsky, Michael A. The Elements of Networking Style and Other Essays and

Animadversions on the Art of Intercomputer Networking. Prentice-Hall, 1985.

Russell, Andrew L. “OSI: The Internet That Wasn’t.” Professional Organization.

IEEE Spectrum, September 27, 2016. https://spectrum.ieee.org/tech-history/

cyberspace/osi-the-internet-that-wasnt.


“Understanding the CBR Service Category for ATM VCs.” Cisco. Accessed

June 10, 2017. http://www.cisco.com/c/en/us/support/docs/asynchronous-

transfer-mode-atm/atm-traffic-management/10422-cbr.html.

White, Russ, and Jeff Tantsura. Navigating Network Complexity: Next-Generation

Routing with SDN, Service Virtualization, and Service Chaining. Indianapolis,

IN: Addison-Wesley Professional, 2015.

Review Questions

  1. One specific realm where different business assumptions can be clearly seen

is in choosing to use a small number of large network devices (such as a chas-

sis-based router that supports multiple line cards) or using a larger number of

smaller devices (so-called pizza box, or one rack unit, routers having a fixed

number of interfaces available) to build a campus or data center network. List

a number of different factors that might make one option more expensive than

the other, and then explain, for each option, what sorts of business conditions might dictate its use.

  2. One “outside representation” of code bloat in software applications is the nerd knob. While there are many definitions, a nerd knob is generally considered to be a configuration command that modifies some small, specific point in the way a protocol or device operates. There are actually some

research papers and online discussions around the harm from nerd knobs; you

can also find command sets from various network devices across a number of

software releases through many years. In order to see the growth in complexity

in network devices, trace the number of available commands, and try to judge

how many of these would be considered nerd knobs versus major features. Is

there anything you can glean from this information?

  3. TDM is not the only kind of multiplexing available; there is also Frequency

Division Multiplexing (FDM). Would FDM be useful for dividing a channel in

the same way that TDM is? Why or why not?

  4. What is an inverse multiplexer, and what would it be used for?

  5. Read the two references to ATM LAN Emulation (LANE), in the “Further

Reading” section. Describe the complexity in this solution from within the

complexity model; where are state and interaction surfaces added, and what

sort of optimization is being gained with each addition? Do you think the ATM

LANE solution presents a good set of tradeoffs for providing the kinds of ser-

vices it is designed to offer versus something like a shared Ethernet network?


  6. Describe, in human terms, why delay and jitter are bad in real-time (interac-

tive) voice and video communications. Would these same problems apply to

recorded voice and video stored and played back at some later time? Why or

why not?

  7. How would real-time (interactive) voice and video use the network differently

than a large file transfer? Are there specific points at which you can compare

the two kinds of traffic, and describe how the network might need to react dif-

ferently to each traffic type?

  8. The text claims the “wasp waist” is a common strategy used in nature to man-

age complexity. Find several examples in nature. Research at least one other set

of protocols (protocol stack) than TCP/IP, such as Banyan Vines, Novell’s IPX,

or the OSI system. Is there a “wasp waist” in these sets of protocols, as well?

What is it?

  9. Are there wasp waists in other areas of computing, such as the operating sys-

tems used in personal computers, or mobile computing devices (such as tablets

and mobile phones)? Can you identify them?

  10. Research some of the arguments against removing fragmentation from the

Internet Protocol in IPv6. Summarize the points made by each side. Do you

agree with the final decision to remove fragmentation?
