Data Transport Problems and Solutions

Learning Objectives

After reading this chapter, you should be able to:

• Understand the concept of marshaling data, different marshaling options, and the tradeoffs between them

• Understand the concepts of dictionaries, grammars, and metadata within the context of data marshaling

• Understand the concepts of fixed length fields, type length values, and shared data dictionaries

• Understand the difference between error detection and error correction

• Understand the fundamental concepts of checking for errors in data

• Understand the relationship between addressing and multiplexing

• Understand the basic theory behind multicast and anycast

• Understand flow control mechanisms, including windowed flow control

When transport protocols dream, do they dream of applications? They probably should, as the primary purpose of a network is to support applications—and the primary resource that applications need from a network is data moved from one process (or processor) to another. But how can data be transmitted over a wire, or through the air, or over an optical cable?

Chapter 2 Data Transport Problems and Solutions

Perhaps it is best to begin with a more familiar example: human language. The

authors of this book wrote it using formatting, language, and vocabulary enabling you

to read and understand the information presented. What problems does a language

need to overcome to make the communication, this writing and reading, possible?

Thoughts must be captured in a form that allows them to be retrieved by a receiver. In human languages, information is packaged into words, sentences, paragraphs, chapters, and books. Each level of this division implies some unit of information, and some organizational system. For instance, sounds or ideas are encapsulated into letters or symbols; sounds or ideas are then combined into words; words are combined into sentences, etc. Sentences follow a particular grammatical form so you can decode the meaning from the symbols. This encoding of thoughts and information into symbols, formatted in a way that allows a reader (receiver) to retrieve the original meaning, will be called marshaling the data in this book.

One aspect of marshaling is definitional—the process of associating one set of

symbols to a particular meaning. Metadata, or data about the data, allows you to

understand how to interpret information in a flow or stream.

There must be some way of managing errors in transmission or reception. Suppose you have a pet dog who likes to chase after a particular ball. The ball drops out of a basket one day, and bounces into the street. The dog chases and appears to be heading directly into the path of an oncoming car. What do you do? Perhaps you shout “Stop!”—and then maybe “No!”—and perhaps “Stay!” Using several commands that should result in the same action—the dog stopping before he runs into the street—is making certain the dog has correctly received, and understood, the message. Shouting multiple messages will, you hope, ensure there is no misunderstanding in what you are telling the dog to do.

This is, in fact, a form of error correction. There are many kinds of error correction built into human language. For instance, yu cn prbbly stll rd ths sntnce. Human languages overspecify the information they contain, so a few missed letters do not cause the entire message to be lost. This overspecification can be considered a form of forward error correction. This is not the only form of error correction human languages contain, however. They also contain questions, which can be asked to verify, validate, or gain missing bits or context of information previously “transmitted” through the language.

There must be some way to talk to one person, or a small group of people, using a single medium—air—within a larger crowd. It is not uncommon to need to talk to one person out of a room full of people. Human language has built-in ways of dealing with this problem in many situations, such as calling someone’s name, or speaking loudly enough to be heard by the person you are directly facing (the implementation of language can be directional, in other words). The ability to speak to one person among many, or a specific subset of people, is multiplexing.

Finally, there must be some way to control the flow of a conversation. With a book, this is a simple matter; the writer produces text in parts, which are then collected into a format the reader can read, and reread, at a completely different pace. Not many people think of a book as a form of flow control, but putting thoughts into written form is an effective way to disconnect the speed of the sender (the speed of writing) from the speed of the receiver (the speed of reading). Spoken language has other forms of flow control, such as “um,” and the glazed-over look in a listener’s eyes when she has lost the line of reasoning a speaker is following, or even physical gestures indicating the speaker should slow down.

To summarize, successful communication systems need to solve four problems:

• Marshaling the data; converting ideas into symbols and a grammar the receiver will understand

• Managing errors, so the ideas are correctly transmitted from the sender to the receiver

• Multiplexing, or allowing a common media or infrastructure to be used for conversations between many different pairs of senders and receivers

• Flow control, or the ability to make certain the receiver is actually receiving and processing the information before the sender transmits more

The following sections examine each of these problems as well as some of the

solutions available in each problem space.

Digital Grammars and Marshaling

Consider the process you are using to read this book. You examine a set of marks created to contrast with a physical carrier, ink on paper. These marks represent certain symbols (or, if you are hearing this book, certain sounds on a white noise background), which you then interpret as letters. These letters, in turn, you can put together using rules of spacing and layout to form words. Words, through punctuation and spacing, you can form into sentences.

At each stage in the process there are several kinds of things interacting:

• A physical carrier onto which the signal can be imposed. This work of representing information against a carrier is grounded in the work of Claude Shannon, and is outside the scope of this book; further reading is suggested in the following section for those who are interested.

• A symbolic representation of units of information used to translate the physical symbols into the first layer of logical content. When you are interpreting symbols, two things are required: a dictionary, which describes the range of possible logical symbols that can correspond to a certain physical state, and a grammar, which describes how to determine which logical symbol relates to this instance of physical state. These two things, combined, can be described as a protocol.



• A way to convert the symbols into words and then the words into sentences. Again, this will consist of two components, a dictionary and a grammar. Again, these can be described as protocols.

As you move “up the stack,” from the physical to the letters to the words to the sentences, etc., the dictionary will become less important, and the grammar, which allows you to convert the context into meaning, more important—but these two things exist at every layer of the reading and/or listening process. The dictionary and grammar are considered two different forms of metadata you can use to turn physical representations into sentences, thoughts, lines of argument, etc.

Digital Grammars and Dictionaries

There really is not much difference between a human language, such as the one you are reading right now, and a digital language. A digital language is not called a language, however; it is called a protocol. More formally:

A protocol is a dictionary and a grammar (metadata) used to translate one kind of information into another.

Protocols do not work in just one direction, of course; they can be used to encode as

well as decode information. Languages are probably the most common form of protocol

you encounter on a daily basis, but there are many others, such as traffic signs; the user

interfaces on your toaster, computer, and mobile devices; and every human language.

Given you are developing a protocol, which primarily means developing a dictionary and a grammar, there are two kinds of optimization you can work toward:

• Resource Efficiency. How many resources are used in encoding any particular bit of information? The less metadata included inline, with the data itself, the more efficient the encoding will be—but the more implementations will rely on dictionaries to decode the information. Protocols that use very small signals to encode a lot of information are generally considered compact.

• Flexibility. In the real world, things change. Protocols must somehow be designed to deal with change, hopefully in a way not requiring a “flag day” to upgrade the protocol.

The metadata tradeoff is one of many you will find in network engineering; either

include more metadata, allowing the protocol to better handle future requirements,

or include less metadata, making the protocol more efficient and compact. A good

rule of thumb, one you will see repeated many times throughout this book, is: if you

have not found the tradeoff, you have not looked hard enough.


What Is a Flag Day?

If you need to switch from one version of a protocol that is installed and running on many computers to a newer version of the same protocol, or perhaps even a different protocol, you have three choices.

First, you can design the protocol (or protocols) so the old and new versions can overlap, or run on the same network at the same time. This is sometimes called the ships in the night solution; the old and new protocols (or versions of the same protocol) do not interact at all.

Second, you can pick a day (and potentially a time, down to the millisecond in some cases) to switch from the old protocol to the new. This is called a flag day. How did this term become attached to this kind of protocol changeover event? In 1966, every system running the Multics operating system needed to be switched from one definition of characters to another, specifically to replace ASCII 1965 with ASCII 1967. A holiday was chosen for the change, which would give all the Multics system administrators a full day to replace their software and have the systems operating for the first business day after the change. The day chosen was Flag Day in the United States, June 14, 1966. Hence the term flag day became forever associated with a change requiring every host in the system to be rebooted at (roughly) the same time to ensure proper operation.1

The most famous flag day is the transition from the Network Control Program (NCP) transport protocol to the Transmission Control Protocol (TCP) on the entire Internet (as it existed then) in 1983. The process of moving from one to the other required an Internet Engineering Task Force (IETF) Request for Comment (RFC) to describe and coordinate the process—RFC801.2

Third, you can design the protocol so a single version can contain multiple versions of the same information, with each version being formatted differently. Senders can send in either format, and receivers should be able to interpret the data in either format. When all the systems have been upgraded to the newer version of the software, the older encoding can be replaced. This mechanism relies heavily on a principle laid out in RFC760:

In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear).3

1. “Flag Day.”

2. Postel, NCP/TCP Transition Plan.

3. Internet Protocol. This quote, or something similar, is attributed to Jon Postel.



A dictionary in a protocol is a table mapping digital patterns to symbols and operations.

Perhaps the most commonly used digital dictionaries are character codes. Table 2-1

replicates part of the Unicode character code dictionary.

Table 2-1 A Partial Unicode Dictionary or Table

Code    Glyph   Decimal   Description      #
U+0030  0       0         Digit Zero       0017
U+0031  1       1         Digit One        0018
U+0032  2       2         Digit Two        0019
U+0033  3       3         Digit Three      0020
U+0034  4       4         Digit Four       0021
U+0035  5       5         Digit Five       0022
U+0036  6       6         Digit Six        0023
U+0037  7       7         Digit Seven      0024
U+0038  8       8         Digit Eight      0025
U+0039  9       9         Digit Nine       0026
U+003A  :       :         Colon            0027
U+003B  ;       &#059;    Semicolon        0028
U+003C  <       &#060;    Less-than sign   0029

Using Table 2-1, if a computer is “reading” an array representing a series of letters, it will print out (or treat in processing) the number 6 if the number in the array is 0023, the number 7 if the number in the array is 0024, etc. This table, or dictionary, relates specific numbers to specific symbols in an alphabet, just like a dictionary relates a word to a range of meanings.

How can the computer determine the difference between the price of a banana

and the letters in the word banana? Through the context of the information. For

instance, perhaps the array in question is stored as a string, or a series of letters; the

array being stored as a string variable type provides the metadata, or the context,

which indicates the values in these particular memory locations should be treated as

letters rather than the numeric values contained in the array. This metadata, acted on

by the computer, provides the grammar of the protocol.
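The role of this kind of metadata is easy to see in a few lines of code: the same raw bytes produce entirely different results depending on whether the surrounding context says “characters” or “number.” This sketch is illustrative only, and is not tied to any particular protocol:

```python
# The same raw octets interpreted through two different "dictionaries."
raw = bytes([0x31, 0x32, 0x33])  # three octets received off the wire

# Context says "string": use the ASCII/Unicode character dictionary.
as_text = raw.decode("ascii")

# Context says "number": treat the octets as one unsigned integer.
as_number = int.from_bytes(raw, "big")

print(as_text)    # the characters "123"
print(as_number)  # the integer 3224115
```

The bytes never change; only the metadata (the variable’s type, or the context) determines which interpretation the computer applies.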

In protocols, dictionaries are often expressed in terms of what any particular field

in a packet contains, and grammars are often expressed in terms of how the packet is

built, or what fields are contained at what locations in a packet.

There are several ways to build dictionaries and basic (first-level) grammars;

several of these will be considered in the following sections.


Fixed Length Fields

Fixed length fields are the simplest of the dictionary mechanisms to explain. The protocol defines a set of fields, what kind of data each field contains, and how large each field is. This information is “baked into” the protocol definition, so every implementation is built to these same specifications, and hence can interoperate with one another. Figure 2-1 illustrates a fixed length field encoding used in the Open Shortest Path First (OSPF) protocol taken from RFC2328.4

The row of numbers across the top of Figure 2-1 indicates the individual bits in

the packet format; each row contains 32 bits of information. The first 8 bits indicate

the version number, the second 8 bits always have the number 5, the following 16

bits contain the total packet length, etc. Each of these fields is further defined in the

protocol specification with the kind of information carried in the field and how it is

encoded. For instance:

• The version number field is encoded as an unsigned integer. This is metadata indicating the dictionary and grammar used for this packet. If the packet format needs to be changed, the version number can be increased, allowing transmitters and receivers to use the correct dictionary and grammar when encoding and decoding the information in the packet.

• The number 5 indicates the kind of packet within the protocol; this is part of a dictionary defined elsewhere in the standards document, so it is simply inserted as a fixed value in this illustration. This particular packet is a Link State Acknowledgment Packet.

Figure 2-1 OSPF Fixed Length Field Definition in the Protocol Specification

  4. Moy, OSPF Version 2, 201.



  • The packet length is encoded as an unsigned integer indicating the number of

octets (or sets of 8 bits) contained in the complete packet. This allows the packet

size to vary in length depending on how much information needs to be carried.

The fixed length field format has several advantages. Primarily, the location of any piece of information within the packet will be the same from packet to packet, which means it is easy to optimize the code designed to encode and decode the information around the packet format. For instance, a common way of processing a fixed length packet format is to create an in-memory data structure matching the packet format precisely; when the packet is read off the wire, it is simply copied into this data structure. The fields within the packet can then be read directly.

Fixed length formats tend to be somewhat compact. The metadata needed to encode and decode the data is carried “outside the protocol,” in the form of a protocol specification. The packets themselves contain only the value, and never any information about the values. On the other hand, fixed length formats can waste a lot of space by padding the fields so they are always the same length. For instance, the decimal number 1 can be represented with a single binary digit (a single bit), while the decimal number 4 requires three binary digits (three bits); if a fixed length field must be able to represent any number between 0 and 4, it will need to be at least 3 bits long, even though two of those bits will sometimes be “wasted” in representing smaller decimal numbers.

Fixed length formats also often waste space by aligning the field sizes on common

processor memory boundaries to improve the speed of processing. A field required

to take values between 0 and 3, even though it only needs two bits to represent the full

set of values, may be encoded as an 8-bit field (a full octet) in order to ensure the field

following is always aligned on an octet boundary for faster in-memory processing.

Flexibility is where fixed length encoding often runs into problems. If some field is defined as an 8-bit value (a single octet) in the original specification, there is no obvious way to modify the length of the field to support new requirements. The primary way this problem is solved in fixed length encoding schemes is through the version number. If the length of a field must be changed, the version number is modified in packet formats supporting the new field length. This allows implementations to use the old format until all the devices in the network are upgraded to support the new format; once they are all upgraded, the entire system can be switched to the new format, whether larger or smaller.
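The “copy the packet straight into a matching data structure” approach described above can be sketched in a few lines. The layout below (an 8-bit version, an 8-bit type, and a 16-bit length) is a simplified stand-in inspired by the OSPF header, not the actual packet format from RFC2328:

```python
import struct

# Simplified fixed length header: 8-bit version, 8-bit packet type,
# 16-bit total length. "!" selects network (big-endian) byte order.
HEADER = struct.Struct("!BBH")

def encode_header(version, packet_type, length):
    # Every field has a fixed size and position, baked into HEADER.
    return HEADER.pack(version, packet_type, length)

def decode_header(buf):
    # Decoding is a single copy out of the buffer; no searching for
    # field boundaries is required, which is why fixed length formats
    # are fast to process.
    return HEADER.unpack_from(buf, 0)

wire = encode_header(2, 5, 44)
print(decode_header(wire))  # (2, 5, 44)
```

Note that the field sizes live only in the shared specification (the `HEADER` definition); the four octets on the wire carry values alone, with no inline metadata.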

Type Length Value

The Type Length Value (TLV) format is another widely used solution to the problem

of marshaling data. Figure 2-2 shows an example from the Intermediate System to

Intermediate System (IS-IS) routing protocol.





Figure 2-2 An Example of a TLV Format from IS-IS

In Figure 2-2, a packet consists of a header, which is normally fixed length, and then a set of TLVs. Each TLV is formatted based on its type code. In this case, there are two TLV types shown (there are many other types in IS-IS; two are used for illustration here). The first type is a 135, which carries Internet Protocol version 4 (IPv4) information. This type has several fields, some of which are fixed length—such as the metric. Others, however, such as the prefix, are variable length; the length of the field depends on the value placed in some other field within the TLV. In this case, the prefix length field determines the length of the prefix field. There are also sub-TLVs, which are similarly formatted, and carry information associated with this IPv4 information. The type 236 is similar to the 135, but it carries IPv6, rather than IPv4, information.

Essentially, the TLV can be considered a complete set of self-contained information carried within a larger packet. The TLV consists of three parts:

• The type code, which describes the format of the data

• The length, which describes the total length of the data

• The value, or the data itself

TLV-based formats are less compact than fixed length formats because they carry more metadata within the packet itself. The type and length information carried in the data provides the information about where to look in the dictionary for information about the formatting, as well as information about the grammar to use (how each field is formatted, etc.). TLV formats trade the additional metadata carried across the wire for the ability to change the formatting of the information being carried by the protocol without requiring every device to upgrade, and for allowing some implementations to choose not to support every possible TLV.



TLVs are generally considered a very flexible way of marshaling data in protocols;

you will find this concept to be almost ubiquitous.
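The three-part structure can be expressed directly in code. This sketch reuses the type codes 135 and 236 from the IS-IS example purely as labels; the one-octet type and length fields are an illustrative simplification, not IS-IS’s actual encoding:

```python
import struct

def encode_tlv(tlv_type, value):
    # Type (1 octet), Length (1 octet), then the Value itself.
    return struct.pack("!BB", tlv_type, len(value)) + value

def decode_tlvs(buf):
    # Walk the buffer one TLV at a time; the length field tells the
    # receiver where each value ends, so TLVs of different sizes can
    # sit back to back in the same packet.
    tlvs, offset = [], 0
    while offset < len(buf):
        tlv_type, length = struct.unpack_from("!BB", buf, offset)
        tlvs.append((tlv_type, buf[offset + 2 : offset + 2 + length]))
        offset += 2 + length
    return tlvs

# Two TLVs of different lengths carried in one packet body.
body = encode_tlv(135, b"\x0a\x00\x00\x00") + encode_tlv(236, b"\x20\x01\x0d\xb8")
for tlv_type, value in decode_tlvs(body):
    print(tlv_type, value.hex())
```

A receiver that does not recognize a type code can still use the length field to skip over the value; this is what makes TLV formats flexible in the face of change.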

Shared Object Dictionaries

One of the major problems with fixed length fields is the fixedness of the field definitions; if you want to modify a fixed length field protocol, you need to bump the version number and modify the packet, or you must create a new packet type with different encodings for the fields. TLV formatting solves this by including metadata inline, with the data being transmitted, at the cost of carrying more information and reducing compactness. Shared compiled dictionaries attempt to solve this problem by placing the dictionary in a sharable file (or library) rather than in a specification. Figure 2-3 illustrates the process.

In Figure 2-3, the process begins with a developer building a data structure to marshal some particular set of data to be transferred across the network. Once the data structure has been built, it is compiled into a function, or perhaps copied into a library of functions (1), and copied over to the receiver (2). The receiver then uses this library to write an application to process this data (3). On the transmitter side, the raw data is encoded into the format (4), and then carried by a protocol across the network to the receiver (5). The receiver uses its shared copy of the data format (6) to decode the data, and pass the decoded information to the receiving application (7).

This kind of system combines the flexibility of the TLV-based model with the compactness of a fixed field protocol. While the fields are fixed length, the field definitions are given in a way that allows for fast, flexible updates when the marshaling format needs to be changed. So long as the shared library is decoupled from the application using the data, the dictionary and grammar can be changed by distributing a new version of the original data structure.

Figure 2-3 Shared Compiled Dictionaries



Would a flag day be required if a new version of the data structure is distributed?

Not necessarily. If a version number is included in the data structure, so the receiver

could match the received data with the correct data structure, then multiple versions

of the data structure could exist in the system at one time. Once no sender is found

using an older data format, the older structure can be safely discarded throughout

the entire system.
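The version-matching idea above can be sketched as a small shared “format library” keyed by version number; the field layouts here are invented purely for illustration:

```python
import struct

# Shared dictionary: version number -> field layout. Both sender and
# receiver hold a copy; distributing a new version just adds an entry,
# and old entries stay until no sender uses them—no flag day required.
FORMATS = {
    1: struct.Struct("!BH"),    # version 1: an 8-bit and a 16-bit field
    2: struct.Struct("!BHH"),   # version 2: adds a second 16-bit field
}

def encode(version, *fields):
    # Prefix the version so the receiver can pick the right layout.
    return bytes([version]) + FORMATS[version].pack(*fields)

def decode(buf):
    version = buf[0]
    return version, FORMATS[version].unpack_from(buf, 1)

print(decode(encode(1, 7, 300)))     # (1, (7, 300))
print(decode(encode(2, 7, 300, 9)))  # (2, (7, 300, 9))
```

The fields themselves remain fixed length and compact; only the dictionary describing them is shared out-of-band and versioned.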


gRPC is an example of a compiled shared library marshaling system; see the

“Further Reading” section for resources.


While the fixed format and TLV systems count on the developers reading the specifications, and writing code as a form of sharing the grammar and dictionary, shared data structure systems, as described in this section, count on the shared dictionary being distributed in some other way. There are many different ways this could be done; for instance, a new version of software can be distributed to all the senders and receivers, or some form of distributed database can be used to ensure all the senders and receivers receive the updated data dictionaries, or some part of an application that specifically manages marshaling data can be distributed and paired with an application that generates and consumes the data. Some systems of this kind transfer the shared dictionary as part of their initial session setup. All of these are possible, and outside the scope of this present text.


Errors

No data transmission medium can be assumed to be perfect. If the transmission medium is shared, like Radio Frequency (RF), there is the possibility of interference, or even datagram collisions. This is where more than one sender attempts to transmit information simultaneously. The result is a garbled message that cannot be understood by the intended receiver. Even a dedicated medium, such as a point-to-point undersea optical (lightwave) fiber cable, can experience errors due to cable degradation or point events—even seemingly insane events, such as solar flares causing radiation, which in turn interferes with data transmission through a copper cable.



There are two key questions a network transport must answer in the area of errors:

• How can errors in the transmission of data be detected?

• What should the network do about errors in data transmission?

The following sections consider some of the possible answers to these questions.

Error Detection

The first step in dealing with errors, whether they are because of a transmission

media failure, memory corruption in a switching device along the path, or any other

reason, is to detect the error. The problem is, of course, when a receiver examines the

data it receives, there is nothing to compare the data to in order to detect the error.

Parity checks are the simplest detection mechanisms. Two complementary parity

checking algorithms exist. With even parity checking, one additional bit is added to

each block of data. If the sum of bits in the block of data is even—that is, if there are

an even number of 1 bits in the data block—the additional bit is set to 0. This preserves

the even parity state of the block. If the sum of bits is odd, the additional bit is set to 1,

which sets the entire block to an even parity state. Odd parity uses the same additional

bit strategy, but it requires the block to have odd parity (an odd number of 1 bits).

As an example, calculate even and odd parity for these four octets of data:

00110011 00111000 00110101 00110001

Simply counting the digits reveals there are 14 1s and 18 0s in this data. To provide

for error detection using a parity check, you add one bit to the data, either making

the total number of 1s in the newly enlarged set of bits even for even parity, or odd

for odd parity. For instance, if you want to add an even parity bit in this case, the

additional bit should be set to 0. This is because the number of 1s is already an even

number. Setting the additional parity bit to 0 will not add another 1, and hence will

not change whether the total number of 1s is even or odd. For even parity, then, the

final set of bits is

00110011 00111000 00110101 00110001 0

On the other hand, if you wanted to add a single bit of odd parity to this set of

bits, you would need to make the additional parity bit a 1, so there are now 15 1s

rather than 14. For odd parity, the final set of bits is

00110011 00111000 00110101 00110001 1


To check whether or not the data has been corrupted or changed in transit, the receiver can simply note whether even or odd parity is in use, add the number of 1s, and discard the parity bit. If the number of 1s does not match the kind of parity in use (even or odd), the data has been corrupted; otherwise, the data appears to be the same as what was originally transmitted.
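The parity calculation and the receiver’s check can be expressed in a few lines; this sketch operates on strings of bits for clarity rather than on real packet buffers:

```python
def parity_bit(bits, even=True):
    # Count the 1s; append whichever bit keeps the total even (or odd).
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

def parity_ok(bits_with_parity, even=True):
    # The receiver simply re-counts the 1s, parity bit included, and
    # checks the total against the kind of parity in use.
    return bits_with_parity.count("1") % 2 == (0 if even else 1)

data = "00110011" "00111000" "00110101" "00110001"  # 14 ones, as in the text

print(parity_bit(data))              # "0": the count is already even
print(parity_bit(data, even=False))  # "1": makes the count odd (15 ones)
print(parity_ok(data + parity_bit(data)))  # True: transmission intact
```

Flipping any single bit of the transmitted string makes `parity_ok` return `False`; flipping an even number of bits, as the text explains next, goes undetected.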

This new bit is, of course, transmitted along with the original bits. What happens if the parity bit itself is somehow corrupted? This is actually okay; assume even parity checking is in place, and a transmitter sends

00110011 00111000 00110101 00110001 0

The receiver, however, receives

00110011 00111000 00110101 00110001 1

The parity bit itself has been flipped from a 0 to a 1. The receiver will count the 1s,

determining there are 15; since even parity checking is in use, the received data will

be flagged as having an error even though it does not. The parity check is potentially

too sensitive to failures, but it is better to err on the side of caution in the case of

error detection.

There is one problem with the parity check: it can detect only a single bit flip in

the transmitted signal. For instance, if even parity is in use, and the transmitter sends

00110011 00111000 00110101 00110001 0

The receiver, however, receives

00110010 00111000 00110101 00110000 0

The receiver will count the number of 1s and find it is 12; since the system is using even parity, the receiver will assume the data is correct and process it normally. However, two bits—the final bit of the first octet and the final bit of the fourth octet—have both been corrupted. If an even number of bits, in any combination, is modified, the parity check cannot detect the change; only when the change involves an odd number of bits can the parity check detect the modification of the data.

The Cyclic Redundancy Check (CRC) can detect a wider range of modifications

in transmitted data by using division (rather than addition) in cycles across the entire

data set, one small piece at a time. Working through an example is the best way to

understand how a CRC is calculated. A CRC calculation begins with a polynomial,

as shown in Figure 2-4.

base polynomial of three terms
x^3 + x^2 + 1
1x^3 + 1x^2 + 0x + 1

Figure 2-4 A Polynomial Used to Calculate a CRC

In Figure 2-4, a three-term polynomial, x^3 + x^2 + 1, is expanded to include all the terms—including terms preceded by 0 (and hence these do not impact the result of the calculation regardless of the value of x). The four coefficients are then used as a binary calculator, which will be used to calculate the CRC.

To perform the CRC, begin with the original binary data set, and add three extra bits (because the original polynomial, without the coefficients, has three terms; hence this is called a three-bit CRC check), as shown here:

10110011 00111001 (original data)
10110011 00111001 000 (with the added CRC bits)

These three bits are required to ensure all the bits in the original data are included

in the CRC; as the CRC moves from left to right across the original data, the last bits

in the original data will be included only if these padding bits are included. Now

begin at the left four bits (because the four coefficients are represented as four bits).

Use the Exclusive OR (XOR) operation to compare the far-left bits against the CRC

bits, and save the result, as shown here:

10110011 00111001 000 (padded data)
1101 (CRC check bits)

01100011 00111001 000 (result of the XOR)



XOR’ing two binary digits results in a 0 if the two digits match, and a 1 if they do not.

The check bits, called a divisor, are moved one bit to the right (some steps can be

skipped here) and the operation is repeated until the end of the number is reached:

10110011 00111001 000
01100011 00111001 000
00001011 00111001 000
00000110 00111001 000
     110 1
00000000 10111001 000
00000000 01101001 000
00000000 00000001 000
                1 101
00000000 00000000 101

The CRC is in the final three bits that were originally added on as padding; this is

the “remainder” of the division process of moving across the original data plus the

original padding. It is simple for the receiver to determine whether the data has been

changed by leaving the CRC bits in place (101 in this case), and using the original

divisor across the data, as shown here:

10110011 00111001 101
1101
01100011 00111001 101
 1101
00001011 00111001 101
    1101
00000110 00111001 101
     110 1
00000000 10111001 101
         1101
00000000 01101001 101
          1101
00000000 00000001 101
                1 101
00000000 00000000 000

If the data has not been changed, the result of this operation should always result

in 0. If a bit has been changed, the result will not be 0, as shown here:

10110011 00111000 101
1101
01100011 00111000 101
 1101
00001011 00111000 101
    1101
00000110 00111000 101
     110 1
00000000 10111000 101
         1101
00000000 01101000 101
          1101
00000000 00000000 101


The CRC might seem like a complex operation, but it plays to a computer’s

strong points—finite length binary operations. If the length of the CRC is set the

same as a standard small register in common processors, say eight bits, calculating
the CRC is a fairly straightforward and quick process. CRC checks have the
advantage of being resistant to multibit changes, unlike the parity check described
earlier.
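The division process above can be sketched in a few lines of code; this is a minimal illustration of the three-bit CRC from the worked example (with the divisor 1101 taken from the polynomial's coefficients), not a production CRC implementation:

```python
def crc3(data_bits: str, divisor: str = "1101") -> str:
    """Compute the 3-bit CRC remainder by repeated XOR (binary long division)."""
    bits = list(data_bits + "000")          # append three padding bits
    for i in range(len(data_bits)):
        if bits[i] == "1":                  # align the divisor under the leftmost 1
            for j, d in enumerate(divisor):
                bits[i + j] = "1" if bits[i + j] != d else "0"
    return "".join(bits[-3:])               # the remainder is the CRC

def crc3_ok(received_bits: str, divisor: str = "1101") -> bool:
    """Receiver check: divide data plus CRC; a zero remainder means no error found."""
    bits = list(received_bits)
    for i in range(len(bits) - 3):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = "1" if bits[i + j] != d else "0"
    return "".join(bits[-3:]) == "000"

print(crc3("1011001100111001"))                  # 101, matching the worked example
print(crc3_ok("1011001100111001" + "101"))       # True: data unchanged
print(crc3_ok("1011001100111000" + "101"))       # False: one bit was flipped
```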


Error Correction

Detecting an error is only half of the problem, however. Once the error is detected,

what should the transport system do? There are essentially three options.

The transport system can simply throw the data away. In this case, the transport is

effectively transferring the responsibility of what to do about the error up to higher-

level protocols or perhaps the application itself. As some applications may need a

complete data set with no errors (think a file transfer system, or a financial transac-

tion), they will likely have some way to discover any missing data and retransmit it.

Applications that do not care about small amounts of missing data (think a voice

stream) can simply ignore the missing data, reconstructing the information at the

receiver as well as possible given the missing information.

The transport system can signal the transmitter that there is an error, and let the

transmitter decide what to do with this information (generally the data in error will

be retransmitted).

The transport system can go beyond throwing data away by including enough
information in the original transmission to determine where the error is and
attempt to correct it. This is called Forward Error Correction (FEC). Hamming
codes, one of the first FEC mechanisms developed, are also among the simplest to

explain. The Hamming code is best explained by example; Table 2-2 will be used

to illustrate.

Table 2-2 An Illustration of the Hamming Code


Position   0001  0010  0011  0100  0101  0110  0111  1000  1001  1010  1011  1100
Contents   P1    P2    1     P4    0     1     1     P8    0     0     1     1
P1         X           X           X           X           X           X
P2               X     X                 X     X                 X     X
P4                           X     X     X     X                             X
P8                                             X     X     X     X     X


In Table 2-2:

  • Each bit in the 12-bit space whose position is a power of two (1, 2, 4, 8, etc.)
is set aside as a parity bit.

  • The 8-bit number to be protected with FEC, 10110011, has been distributed

across the remaining bits in the 12-bit space.

  • Each parity bit is set to 0, and then parity is calculated for each parity bit by
counting the number of 1s in positions whose binary position number has the
same bit set as the parity bit's position. Specifically:

  • P1 has the far-right bit set in its bit number; the other bits in the number

space that also have the far right bit set are included in the parity calculation

(see the second row in the table to find all the bit positions in the number

with the far-right bit set). These are indicated in the table with an X in the

P1 row. The total number of 1s is an odd number, 3, so the P1 bit is set to 1

(this example is using even parity).

  • P2 has the second bit from the right set; the other bits in the number space

that have the second from the right bit set are included in the parity calcula-

tion, as indicated with an X in the P2 row of the table. The total number of

1s is an even number, 4, so the P2 bit is set to 0.

  • P4 has the third bit from the right set; the other bits that have the third
from the right bit set in their position numbers are included in the parity
calculation, as indicated with an X in the P4 row. There are an odd number
of 1s in the marked columns, so the P4 parity bit is set to 1.

To determine if any information has changed, the receiver can check the parity

bits in the same way the sender has calculated them; the total number of 1s in any set

should be an even number, including the parity bit. If one of the data bits has been

flipped, the receiver should never find a single parity error, because each of the bit

positions in the data is covered by multiple parity bits. To discover which data bit is

incorrect, the receiver adds the positions of the parity bits that are in error; the result

is the bit position that has been flipped. For instance, if the bit in position 9, which

is the fifth data bit, is flipped, then parity bits P1 and P8 would be both in error. In

this case, 8 + 1 = 9, so the bit in position 9 is in error and flipping it would correct the

data. If a single parity bit is in error—for example, P1 or P8—then it is that parity bit

which has been flipped, and the data itself is correct.
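The parity layout described above can be sketched in code. This is a minimal illustration of the 12-bit example (even parity, with parity at positions 1, 2, 4, and 8), not a general-purpose Hamming implementation:

```python
def hamming12_encode(data8: str) -> list:
    """Spread 8 data bits across a 12-bit space; positions 1, 2, 4, 8 hold parity."""
    word = [0] * 13                     # index 0 unused; positions are 1..12
    data = iter(data8)
    for pos in range(1, 13):
        if pos not in (1, 2, 4, 8):
            word[pos] = int(next(data))
    for p in (1, 2, 4, 8):              # even parity over the covered positions
        word[p] = sum(word[pos] for pos in range(1, 13)
                      if pos & p and pos != p) % 2
    return word[1:]

def hamming12_correct(bits: list) -> list:
    """Re-check parity; the sum of the failing parity positions locates the flip."""
    word = [0] + list(bits)
    error = sum(p for p in (1, 2, 4, 8)
                if sum(word[pos] for pos in range(1, 13) if pos & p) % 2)
    if error:
        word[error] ^= 1                # flip the bit at the computed position
    return word[1:]

encoded = hamming12_encode("10110011")
corrupted = list(encoded)
corrupted[8] ^= 1                       # flip position 9, the fifth data bit
print(hamming12_correct(corrupted) == encoded)   # True
```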

While the Hamming code is ingenious, there are many bit flip patterns it cannot

detect. A more modern code, such as Reed-Solomon, can detect and correct a wider

range of error conditions while adding less additional information to the data stream.



There are a large number of different kinds of CRC and error correction codes used

throughout the communications world. CRC checks are classified by the number

of bits used in the check (the number of bits of padding, or rather the length of

the polynomial), and, in some cases, the specific application. For instance, the

Universal Serial Bus uses a 5-bit CRC (CRC-5-USB); the Global System for Mobile

Communications (GSM), a widely used cellular telephone standard, uses CRC-3-

GSM; Code Division Multi-Access (CDMA), another widely used cellular tele-

phone standard, uses CRC-6-CDMA2000A, CRC-6-CDMA2000B, and CRC-30;

and some Controller Area Networks (CANs), used to tie together various components

in a vehicle, use CRC-17-CAN and CRC-21-CAN. Some of these various CRC

functions are not a single function, but rather a class, or family, of functions, with

many different codes and options within them.


Multiplexing

You walk into a room and shout, “Joe!” Your friend, Joe, turns around and begins a

conversation on politics and religion (the two forbidden topics, of course, in any

polite conversation). This ability to use a single medium (the air through which your

voice travels) to address one person, even though many other people are using the

same medium for other conversations at the same time, is what is called, in network

engineering, multiplexing. More formally:

Multiplexing is used to allow multiple entities attached to the network to

communicate over a shared network.

Why is the word entities used here instead of hosts? Returning to the “conver-

sation with Joe” example, imagine the one way you can communicate with Joe is

through his teenaged child, who only texts (never talks). In fact, Joe is part of a fam-

ily of several hundred to several thousand people, and all the communications for

this entire family must come through this one teenager, and each person in the fam-

ily has multiple conversations running concurrently, sometimes on different topics

with the same person. The poor teenager must text very quickly, and keep a lot of

information in her head, like “Joe is having four conversations with Mary,” and must

keep the information in each conversation completely separate from the other. This

is closer to how network multiplexing really works; consider:



  • There could be millions (or billions) of hosts connected to a single network, all

sharing the same physical network to communicate with one another.

  • Each of these hosts actually contains many applications, possibly several hun-

dred, each of which can communicate with any of the hundreds of applica-

tions on any other host connected to the network.

  • Each of these applications may, in fact, have several conversations to any other

application running on any other host in the network.

If this is starting to sound complicated, that is because it is. The question this sec-

tion needs to answer, then, is this:

How do hosts multiplex effectively over a computer network?

The following sections consider the most commonly used solutions in this space,

as well as some interesting problems tied up in this basic problem, such as multicast

and anycast.

Addressing Devices and Applications

Computer networks use a series of hierarchically arranged addresses to solve these

problems; Figure 2-5 illustrates.

In Figure 2-5, there are four levels of addressing shown:

  • At the physical link level, there are interface addresses that allow two devices to

address a particular device individually.

  • At the host level, there are host addresses that allow two hosts to address a

particular host directly.

  • At the process level, there are port numbers that, combined with the host address,

allow two processes to address a particular process on a particular device.

[Figure: two hosts shown side by side, each as a stack of interface, host,
process, and conversation levels, with matching levels communicating]

Figure 2-5 Addressing Multiple Levels of Entities in a Network


  • At the conversation level, the set of source port, destination port, source

address, and destination address can be combined to uniquely identify a par-

ticular conversation, or flow.
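As a small sketch of the conversation level, the four values can be modeled as a tuple; the addresses and ports here are hypothetical:

```python
from collections import namedtuple

# A conversation (flow) is uniquely identified by this set of four values.
Flow = namedtuple("Flow", ["src_addr", "src_port", "dst_addr", "dst_port"])

f1 = Flow("203.0.113.5", 51122, "198.51.100.7", 443)
f2 = Flow("203.0.113.5", 51123, "198.51.100.7", 443)
print(f1 == f2)   # False: same hosts and service, but two distinct conversations
```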

This diagram and explanation appear very clean. In real life, things are much

messier. In the most widely deployed addressing scheme, the Internet Protocol (IP),

there are no host-level addresses. Instead, there are logical and physical addresses on

a per interface basis.


IP and IP addressing will be considered in more detail in Chapter 5, “Higher Layer

Data Transports.”

Multiplexing and multiplexing identifiers (addresses) are stacked hierarchically

on top of one another in a network.


Mechanisms that associate one kind of address with another between some layers

will be considered more fully in Chapter 6, “Interlayer Discovery.”

There are some situations, however, in which you want to send traffic to more

than one host at a time; for these situations, there are multicast and anycast. These

two special kinds of addressing will be considered in the following sections.

On Physical Links, Broadcasts, and Failure Domains

The clear-cut model illustrated in Figure 2-5 is made more complex when

you consider the concept of broadcast domains and physical connectiv-

ity. Some media types (notably Ethernet, which is covered in more detail in

Chapter 4, “Lower Layer Transports”) are designed so every device connected

to the same physical link receives every packet transmitted onto the physi-

cal media—hosts just ignore packets not addressed to one of the addresses

associated with the physical interface connected to the physical wire. In mod-

ern networks, however, physical Ethernet wiring rarely allows every device to

receive every other device’s packets; instead, there is a switch in the middle

of the network that blocks packets not destined to a particular device from

being transmitted on the physical wire connected to that host.



In these protocols, however, there are explicit addresses set aside for packets

that should be transmitted to every host that would normally receive every

packet if the switch was not there, or that every host should receive and pro-

cess (normally, this is some form of the all 1s or all 0s version of the address).

These are called broadcasts. Any device that will receive, and process, a broad-

cast sent by a device is said to be part of the device’s broadcast domain. The

concept of a broadcast domain has traditionally been closely associated with a

failure domain, because network failures impacting one device on a broadcast

domain often impact every device on the broadcast domain (see Chapter 23,

“Redundant and Resilient,” for more information on failure domains).

Do not be surprised if you find all of this rather confusing, because it is,

in fact, rather confusing. The basic concepts of broadcasts and broadcast

domains still exist, and are still important in understanding the operation of

a network, but the meaning of the term can change, or even not apply, in

some situations. Be careful when considering any situation to make certain
you really understand where such broadcast domains really exist, and how
specific technologies impact the relationship between physical connectivity,
addressing, and broadcast domains.



Multicast

This short explanation cannot really do justice to the entire scope of solutions

available to build multicast trees; see the “Further Reading” section at the end of

the chapter for more material to consider in this area.

If you have a network like the one shown in Figure 2-6, and you need A to distrib-

ute the same content to G, H, M, and N, how would you go about doing this?

You could either generate four copies of the traffic, sending one stream to each

of the receivers using normal (unicast) forwarding, or you could somehow send the

traffic to a single address that the network knows to replicate so all four hosts receive

a copy. This latter option is called multicast, which means using a single address

to transmit traffic to multiple receivers. The key problem to solve in multicast is to

forward and replicate traffic as it passes through the network so each receiver who is

interested in the stream will receive a copy.







Figure 2-6 A Multicast Example





The set of devices interested in receiving a stream of packets from a multicast

source is called a multicast group. This can be a bit confusing because the address

used to describe the multicast stream is also called a multicast group in some sit-

uations. The two uses are practically interchangeable in that the set of devices

interested in receiving a particular set of multicast packets will join the multicast

group, which, in effect, means listening to a particular multicast address.


In cases where the multicast traffic is bidirectional, this problem is much more

difficult to solve. For instance, assume there is a requirement to build a multicast

group with every host in the network shown in Figure 2-6 except N, and further

that any multicast transmitted to the multicast group’s address be delivered to

every host within the multicast group.

The key problem for multicast to solve can be broken into two problems:

  • How do you discover which devices would like to receive a copy of traffic trans-

mitted to the multicast group?



  • How do you determine which devices in the network should replicate the traf-

fic, and on which interfaces they should send copies?

One possible solution is to use local requests to build a tree through which the

multicast traffic should be forwarded through the network. An example of such a

system is Sparse Mode in Protocol Independent Multicast (PIM). In this
process, each device sends a join message for the multicast streams it is interested in;

these joins are passed upstream in the network until the sender (the host sending

packets through the multicast stream) is reached. Figure 2-7 is used to illustrate

this process.

In Figure 2-7:

  1. A is sending some traffic to a multicast group (address); call it Z.
  2. N would like to receive a copy of Z, so it sends a request (a join) to its upstream

router, D, for a copy of this traffic.

  3. D does not have a source for this traffic, so it sends a request to the routers it is

connected to for a copy of this traffic; in this case, the only router D sends the

request to is B.











Figure 2-7 Sparse Mode Multicast


At each hop, the router receiving the request will place the interface on which it

received the request into its Outbound Interface List (OIL), and begin forwarding
traffic for the given multicast group received on any other interface. In this

way, a path from the receiver to the originator of the traffic can be built; this is called

a reverse path tree.
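The hop-by-hop join process can be sketched as follows; the topology map and node names are illustrative stand-ins, not how PIM actually encodes joins:

```python
# Hypothetical topology from Figure 2-7: each node's upstream neighbor toward A.
upstream = {"N": "D", "D": "B", "B": "A"}

def send_join(receiver: str, oil: dict) -> None:
    """Pass a join upstream; each router adds the requesting interface to its OIL."""
    node = receiver
    while node in upstream:
        parent = upstream[node]
        oil.setdefault(parent, set()).add(node)   # interface toward the requester
        node = parent

oil = {}
send_join("N", oil)
print(oil)   # each router along the path now knows where to replicate traffic
```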

A second option for discovering which hosts are interested in receiving traffic for

a specific multicast group is through some sort of registration server. Each host that

would like to receive a copy of the stream can register its desire with a server. There

are several ways the host can discover the presence of the server, including

  • Treating the multicast group address like a domain name, and looking up the
address of the registration server by querying for the multicast group address

  • Building and maintaining a list, or mapping, of groups to servers in a
local table

  • Using some form of hash algorithm to compute the registration server from
the multicast group address

The registrations can either be tracked by devices on the path to the server, or,

once the set of receivers and transmitters is known, the server can signal the appro-

priate devices along the path which ports should be configured for replicating and

forwarding packets.


Anycast

Another problem multiplexing solutions face is being able to address a specific

instance of a service implemented on multiple hosts using a single address.

Figure 2-8 illustrates.

In Figure 2-8, some service, S, needs to be scaled out to increase its performance.

To accomplish this goal, a second copy of the service has been created, with the two

copies being named S1 and S2. These two copies of the service are running on two

servers, M and N. The problem anycast seeks to solve is this:

How can clients be directed to the most optimal instance of a service?

One way of solving this problem is to direct all the clients to a single device and

have a load balancer split the traffic to the servers based on the topological location

of the client, the load of each server, and other factors. This solution is not always

ideal, however. For instance, what if the load balancer cannot handle all the con-

nection requests generated by the clients who want to reach various copies of the
















Figure 2-8 An Anycast Example

service? What sorts of complexities are going to be added to the network to allow the

load balancer to track the health of the various copies of the service?


Load balancing is considered in Chapter 7, “Packet Switching.”

Anycast solves this problem by assigning the same address to each copy of the

service. In the network illustrated in Figure 2-8, then, M and N would use the same

address to provide reachability to S1 and S2. M and N would have different addresses

assigned and advertised to provide reachability to other services, and to the devices

themselves, as well.

H and K, the first hop routers beyond M and N, would advertise this same address

into the network. When C and D receive two routes to the same destination, they

will choose the closest route in terms of metrics. In this case, if every link in the

network is configured with the same metric, then C would direct traffic sourced from

A, and destined to the service’s address, toward M. D, on the other hand, will direct

traffic sourced from B, and destined to the service’s address, toward N. What hap-

pens if two instances of the service are about the same distance apart? The router

will choose one of the two paths using a local hash algorithm.
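The equal-cost tie-break can be sketched as follows; the route table and the choice of hash are illustrative assumptions, showing only why every packet in a flow follows the same path:

```python
import hashlib

# Hypothetical routes toward the anycast address: (next hop, metric).
routes = [("M", 2), ("N", 2)]

def pick_next_hop(flow_id: str) -> str:
    """Among equal-cost routes, hash the flow so its packets all take one path."""
    best = min(metric for _, metric in routes)
    candidates = [hop for hop, metric in routes if metric == best]
    digest = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
    return candidates[digest % len(candidates)]

# Every packet in the same flow hashes to the same instance of the service.
print(pick_next_hop("A->service") == pick_next_hop("A->service"))   # True
```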



See Chapter 7 for more information about equal cost multipath switching, and

how using a hash ensures the same path is used for each packet in a flow. Rout-

ing is generally stable enough, even in the Internet, to use anycast solutions with

stateful protocols.5

  5. Palsson et al., “TCP over IP Anycast—Pipe Dream or Reality?”

Anycast is often used for large-scale services that must scale by provisioning a lot

of servers to support the single service. Examples include the following:

  • Most large-scale Domain Name Service (DNS) system servers are actually a set
of servers accessible through an anycast address.

  • Many large-scale web-based services, particularly social media and search,
where a single service is implemented on a large number of edge devices.

  • Content caching services often use anycast in distributing and serving
content.

Designed correctly, anycast can provide effective load balancing as well as optimal

performance for services.

Flow Control

Do you remember your great aunt (or was it your second cousin once removed?) who

talked so fast that you could not understand a word she was saying? Some computer

programs talk too fast, too. Figure 2-9 illustrates.

In Figure 2-9:

  • At Time 1 (T1), the sender is transmitting about four packets for every three
the receiver can process. The receiver has a five-packet buffer to store
unprocessed information; there are two packets in this buffer.

  • At T2, the sender has transmitted four packets, and the receiver has processed
three; the buffer at the receiver is now holding three packets.

  • At T3, the sender has transmitted four packets, and the receiver has processed
three; the buffer at the receiver is now holding four packets.

  • At T4, the sender has transmitted four packets, and the receiver has processed
three; the buffer at the receiver is now holding five packets.



send rate → queue → process rate

Figure 2-9 Buffer Overflow Example

The next packet transmitted will be dropped by the receiver because there is no

space in the buffer to store it while the receiver is processing packets so they can be

removed. What is needed is some sort of feedback loop to tell the transmitter to slow

down the rate at which it is sending packets, as illustrated in Figure 2-10.

This kind of feedback loop requires either implicit signaling or explicit signaling

between the receiver and the transmitter. Implicit signaling is more widely deployed.

In implicit signaling, the transmitter assumes the packet has not been received based

on some observation about the traffic stream. For instance, the receiver may acknowl-

edge the receipt of some later packet, or the receiver may simply not acknowledge

receiving a particular packet, or the receiver may not send anything for a long period

of time (in network terms). In explicit signaling, the receiver somehow directly

informs the sender that a specific packet has not been received.


Figure 2-10 A Feedback Loop to Control Packet Flow



Windowing, combined with implicit signaling, is by far the most widely deployed

flow control mechanism in real networks. Windowing essentially consists of the following process:


  1. A transmitter sends some amount of information to the receiver.

  2. The transmitter waits before deciding if the information has been correctly
received or not.

  3. If the receiver acknowledges receipt within a specific amount of time, the
transmitter sends new information.

  4. If the receiver does not acknowledge receipt within a specific amount of time,
the transmitter resends the information.
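These four steps can be sketched as a retransmission loop; the send and acknowledgment-waiting functions are stand-ins for a real transport's machinery:

```python
def send_with_retransmit(send, wait_for_ack, data) -> int:
    """Send, then resend until an acknowledgment arrives; returns attempts used."""
    attempts = 0
    while True:
        send(data)
        attempts += 1
        if wait_for_ack():      # ack arrived within the retransmit timer: done
            return attempts
        # no ack in time: assume the data was lost and retransmit

# Simulated link that loses the first two transmissions:
sent = []
wait_for_ack = lambda: len(sent) >= 3
print(send_with_retransmit(sent.append, wait_for_ack, "segment"))   # 3
```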

Implicit signaling is normally used with windowing protocols by simply not

acknowledging the receipt of a particular packet. Explicit signaling is sometimes

used when the receiver knows it has dropped a packet, when received data contains

errors, data is received out of order, or data is otherwise corrupted in some way.

Figure 2-11 illustrates the simplest windowing scheme, a single packet window.

In a single packet window (also sometimes called a ping pong), the transmitter

sends a packet only when the receiver has acknowledged (shown as an ack in the

illustration) the receipt of the last packet transmitted. If the packet is not received,

the receiver will not acknowledge it. On sending a packet, the sender sets a timer,

A (sender)    B (receiver)

Figure 2-11 A Single Packet Window



normally called the retransmit timer; once this timer wakes up (or expires), the

sender will assume the receiver has not received the packet, and resend it.

How long should the sender wait? There are a number of possible answers to this

question, but essentially the sender can either wait a fixed amount of time, or it can

set a timer based on information inferred from previous transmissions and network

conditions. A simple (and naïve) scheme would be to

  • Measure the length of time between sending a packet and receiving an

acknowledgment, called the Round Trip Time (RTT, though normally written

in the lowercase, so rtt).

  • Set the retransmit timer to this number plus some small amount of buffer time

to account for any variability in the rtt over multiple transmissions.


More information about various ways to calculate the retransmit timer are con-

sidered in Chapter 5.

It is also possible for the receiver to receive two copies of the same information:

  1. A transmits a packet and sets its retransmit timer.
  2. B receives the packet, but either:
       • Is not able to acknowledge receipt because it is out of memory or is
experiencing high processor utilization or some other condition, or
       • Sends an acknowledgment, but the acknowledgment is dropped by a
network device.
  3. The retransmit timer at A times out, so the sender transmits another copy of
the packet.
  4. B receives this second copy of the same information.

How can the receiver detect duplicated data? It does seem possible for the

receiver to compare the packets received to see if there is duplicate information, but

this will not always work—perhaps the sender intended to send the same informa-

tion twice. The usual method of detecting duplicate information is by including

some sort of sequence number in transmitted packets. Each packet is given a unique


sequence number while being built by the sender; if the receiver receives two packets

with the same sequence number, it assumes the data is duplicated and discards the extra copy.
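Duplicate detection by sequence number can be sketched as follows; a minimal illustration only:

```python
delivered, seen = [], set()

def deliver(seq: int, payload: str) -> None:
    """Hand data to the application only the first time a sequence number is seen."""
    if seq in seen:
        return          # duplicate: a retransmitted copy, so discard it
    seen.add(seq)
    delivered.append(payload)

deliver(1, "hello")
deliver(1, "hello")     # the retransmitted copy is discarded
print(delivered)        # ['hello']
```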


A window size of 1, or a ping pong, requires one round trip between the sender

and the receiver for each set of data transmitted. This would generally result in a

very slow transmission rate. If you think of the network as the end-to-end railroad

track, and each packet as a single train car, the most efficient use of the track, and

the fastest transmission speed, is going to be when the track is always full. This is not

physically possible, however, in the case of a network because the network is used by

many sets of senders and receivers, and there are always network conditions that will

prevent the network utilization from reaching 100%. There is some balance between

the increased efficiency and speed of sending more than one packet at a time, and

the multiplexing and “safety” of sending fewer packets at a time (such as one). If

a correct balance point can be calculated in some way, a fixed window flow control

scheme may work well. Figure 2-12 illustrates.

In Figure 2-12, assuming a three-packet fixed window:

  • At T1, T2, and T3, A transmits packets; A does not need to wait for B to
acknowledge anything to send these three packets, as the window size is fixed
at 3.

  • At T4, B acknowledges these three packets, which allows A to transmit another
packet.

  • At T5, B acknowledges this new packet, even though it is only one packet. B
does not need to wait until A has transmitted three more packets to acknowl-
edge a single packet. This acknowledgment allows A to have enough budget to
send three more packets.

  • At T5, T6, and T7, A sends three more packets, filling its window. It must now
wait until B acknowledges these three packets to send more information.

  • At T8, B acknowledges the receipt of these three packets.
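The window budget in this walkthrough can be sketched as a simple counter; a minimal illustration, not a real transport:

```python
WINDOW = 3          # fixed window size, as in Figure 2-12
in_flight = 0       # packets sent but not yet acknowledged

def can_send() -> bool:
    return in_flight < WINDOW

# T1-T3: A sends three packets without waiting for any acknowledgment.
for _ in range(3):
    assert can_send()
    in_flight += 1
print(can_send())   # False: the window is full, so A must wait

in_flight -= 3      # T4: B acknowledges all three packets
print(can_send())   # True: A may send again
```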

In windowing schemes where the window size is more than one, there are four

kinds of acknowledgments a receiver can send to the transmitter:

  • Positive acknowledgment: The receiver acknowledges the receipt of each

packet individually. For instance, if sequence numbers 1, 3, 4, and 5 have been

received, the receiver will acknowledge receiving those specific packets. The

transmitter can infer which packets the receiver has not received by noting

which sequence numbers have not been acknowledged.



A (sender)    B (receiver)

Figure 2-12 An Example of Fixed Window Flow Control

  • Negative acknowledgment: The receiver sends a negative acknowledgment

for packets it infers are missing, or were corrupted when received. For instance,

if sequence numbers 1, 3, 4, and 5 have been received, the receiver may infer

that sequence number 2 is missing and send a negative acknowledgment for

this packet.

  • Selective acknowledgment: This essentially combines positive and nega-

tive acknowledgment, as above; the receiver sends both positive and negative

acknowledgments for each sequence of received information.


  • Cumulative acknowledgment: Acknowledgment of the receipt of a sequence

number implies receipt of all information with lower sequence numbers. For

instance, if sequence number 10 is acknowledged, the information contained

in sequence numbers 1–9 is implied, as well as the information contained in

sequence number 10.
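The inference a transmitter makes from positive and cumulative acknowledgments can be sketched as:

```python
def unacked(acked: set, highest_sent: int) -> list:
    """Infer which sequence numbers the receiver has not acknowledged."""
    return [seq for seq in range(1, highest_sent + 1) if seq not in acked]

# Sequence numbers 1, 3, 4, and 5 were acknowledged; 2 was evidently lost.
print(unacked({1, 3, 4, 5}, 5))   # [2]

# Under cumulative acknowledgment, acking 10 implies 1-9 arrived as well.
cumulative_ack = 10
received = set(range(1, cumulative_ack + 1))
print(sorted(received)[-3:])      # [8, 9, 10]
```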

A third windowing mechanism is called sliding window flow control. This mecha-

nism is very similar to a fixed window flow control mechanism, except the size of the

window is not fixed. In sliding window flow control, the transmitter can dynami-

cally modify the size of the window as network conditions change. The receiver

does not know what size the window is, only that the sender transmits packets, and,

from time to time, the receiver acknowledges some or all of them using one of the

acknowledgment mechanisms described in the preceding list.

Sliding window mechanisms add one more interesting question to the questions

already considered in other windowing mechanisms: What size should the window

be? A naïve solution might just calculate the rtt and set the window size to some multi-

ple of the rtt. More complex solutions have been proposed; some of these will be con-

sidered in Chapter 5, in the discussion of the Transmission Control Protocol (TCP).

Negotiated Bit Rates

Another solution, more often used in circuit switched rather than packet switched

networks, is for the sender, receiver, and network to negotiate a bit rate for any par-

ticular flow. A wide array of possible bit rates have been designed for a number of

different networking technologies; perhaps the “most complete set” is for Asynchro-

nous Transfer Mode (ATM)—look for ATM networks in your nearest networking

history museum, because ATM is rarely deployed in production networks any longer.

The ATM bit rates are:

  • Constant Bit Rate (CBR): The sender will be transmitting packets (or infor-

mation) at a constant rate; hence, the network can plan around this constant

bandwidth load, and the receiver can plan around this constant bit rate. This

bit rate is normally used for applications requiring time synchronization

between the sender and receiver.

  • Variable Bit Rate (VBR): The sender will be transmitting traffic at a variable

rate. This rate is normally negotiated with several other pieces of information

about the flow that help the network and the receiver plan resources, including:

  • The peak rate, or the maximum packets per second the sender plans to
transmit

  • The sustained rate, or the rate at which the sender plans to transmit

  • The maximum burst size, or the largest number of packets the sender
intends to transmit over a very short period of time

Available Bit Rate (ABR): The sender intends to rely on the capability of the

network to deliver traffic on a best-effort basis, using some other form of flow

control, such as a sliding window technique, to prevent buffer overflows and

adjust transmitted traffic to the available bandwidth.
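To make the VBR-style traffic contract concrete, the following Python sketch checks whether arriving packets conform to a sustained rate and a maximum burst size using a token bucket. The class and parameter names are hypothetical; real ATM policing is specified by the Generic Cell Rate Algorithm, which this bucket only approximates.

```python
# An illustrative token-bucket check for a VBR-style traffic contract
# (sustained rate plus maximum burst size). The class and parameter names
# are hypothetical; real ATM policing is specified by the Generic Cell
# Rate Algorithm, which this bucket only approximates.

class TokenBucket:
    def __init__(self, sustained_rate_pps: float, max_burst: int):
        self.rate = sustained_rate_pps   # tokens (packets) earned per second
        self.capacity = max_burst        # bucket depth = largest allowed burst
        self.tokens = float(max_burst)   # start with a full bucket
        self.last = 0.0                  # time of the previous arrival

    def conforms(self, now: float) -> bool:
        """Return True if a packet arriving at time `now` fits the contract;
        non-conforming packets would be marked or dropped by the network."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Contract: 100 packets per second sustained, bursts of at most 10 packets.
bucket = TokenBucket(sustained_rate_pps=100, max_burst=10)
burst = [bucket.conforms(0.0) for _ in range(10)]   # a full burst conforms
eleventh = bucket.conforms(0.0)                     # back-to-back 11th does not
```

The bucket depth enforces the maximum burst size, while the refill rate enforces the sustained rate; the peak rate would be policed by a second, shallower bucket in a real implementation.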

Final Thoughts on Transport

This chapter begins with the fundamentals of understanding the entire scope of the

network engineering problem space: transporting data across the network. Four spe-

cific problems were uncovered by considering the human language space, and several

solutions were presented at a high level:

  • To marshal the data, fixed length and TLV-based systems were considered, as

well as the concepts of metadata, dictionaries, and grammars.

  • To manage errors, two methods were considered to detect errors, parity checks

and the CRC; and one method was considered for error correction, the Ham-

ming Code.

  • To allow multiple senders and receivers to use the same physical media, several

concepts in multiplexing were considered, including multicast and anycast.

  • To prevent buffer overflows, several kinds of windowing were explored, and

negotiated bit rates defined.

Like many other areas you will encounter in this book, the world of transport can

become an entire specialty. Understanding the basics, however, is important for every

network engineer. The next chapter will consider some models that will help you to

put data transport, which is generally associated with forwarding, or the data plane,

into a larger context. Chapters 4 and 5 will consider several different examples of

transport protocols, pulling the concepts in this chapter and the next into real-life
implementations.



Further Reading

Some of these further reading resources are provided to help in answering the study

questions for this chapter.

Conran, Matt. “Know Anycast? Think Before You Talk.” Blog. CacheFly, February

22, 2017.


“Flag Day.” Accessed June 6, 2017.

Gleick, James. The Information: A History, A Theory, A Flood. New York: Vintage,
2012.

“Grpc /.” Accessed June 7, 2017.

Internet Protocol. Request for Comments 791. RFC Editor, 1981. doi:10.17487/RFC0791.


Koopman, P. “32-Bit Cyclic Redundancy Codes for Internet Applications.” In Proceedings
International Conference on Dependable Systems and Networks, 459–68, 2002.
doi:10.1109/DSN.2002.1028931.

Koopman, P., and T. Chakravarty. “Cyclic Redundancy Code (CRC) Polynomial Selection
for Embedded Networks.” In International Conference on Dependable Systems
and Networks, 145–54, 2004. doi:10.1109/DSN.2004.1311885.

Loveless, Josh, Ray Blair, and Arvind Durai. IP Multicast, Volume I: Cisco IP Multi-

cast Networking. 1st edition. Indianapolis, IN: Cisco Press, 2016.

———. IP Multicast, Volume II: Advanced Multicast Concepts and Large-Scale

Multicast Design. 1st edition. Indianapolis, IN: Cisco Press, 2017.

McPherson, Danny R., David Oran, Dave Thaler, and Eric Osterweil. Architectural

Considerations of IP Anycast. Request for Comments 7094. RFC Editor, 2014.


Moon, Todd K. Error Correction Coding: Mathematical Methods and Algorithms.

1st edition. Hoboken, NJ: Wiley-Interscience, 2005.

Morelos-Zaragoza, Robert H. The Art of Error Correcting Coding. 2nd edition.

Chichester; Hoboken, NJ: Wiley, 2006.

Moy, John. “OSPF Specification.” Request for Comment. RFC Editor, October 1989.




———. “OSPF Version 2.” Request for Comment. RFC Editor, April 1998.


Palsson, Bret, Prashanth Kumar, Samir Jafferali, and Zaid Ali Kahn. “TCP over

IP Anycast—Pipe Dream or Reality?” Blog. LinkedIn Engineering Blog,

September 2015.


Postel, J. NCP/TCP Transition Plan. Request for Comments 801. RFC Editor, 1981.


Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communi-

cation. 4th edition. Champaign, IL: University of Illinois Press, 1949.

Soni, Jimmy, and Rob Goodman. A Mind at Play: How Claude Shannon Invented

the Information Age. New York: Simon & Schuster, 2017.

Stone, James V. Information Theory: A Tutorial Introduction. 1st edition. England:

Sebtel Press, 2015.

“Understanding the CBR Service Category for ATM VCs.” Cisco. Accessed June

10, 2017.


Warren, Henry S. Hacker’s Delight. 2nd edition. Upper Saddle River, NJ: Addison-

Wesley Professional, 2012.

Williamson, Beau. Developing IP Multicast Networks, Volume I. Indianapolis, IN:

Cisco Press, 1999.

Review Questions

  1. While TLVs almost always require more space to carry a piece of informa-

tion than a fixed length field, there are some cases where the fixed length field

will be less efficient. Carrying IPv6 addresses is one specific instance of a TLV

being more efficient than a fixed length field. Describe why this is. Comparing

the way routing protocols carry IPv4 and IPv6 addresses is a good place to start

in understanding the answer. In particular, examine the way IPv4 addresses are

carried in OSPF version 2, and compare this with the way these same addresses

are carried in BGP.

  2. Consider the following data types and determine whether you would use a

fixed length field or a TLV to carry each one, and why.

  a. The time and date

  b. A person’s full name

  c. A temperature reading

  d. The square footage of a building

  e. A series of audio or video clips

  f. A book broken down into sections such as paragraphs and chapters

  g. The city and state in an address

  h. The house number or postal code in an address

  3. What is the relationship between the bit error rate (BER) and the amount of

information required to detect and/or repair errors in a data transmission

stream? Can you explain why this might be?

  4. Under some conditions, it makes more sense to send enough information to

correct data on receipt (such as using a Hamming code). In others, it makes

more sense to discover the error and throw the data away. These conditions

would not be just the link type, however, or just the application; they would

be a combination of the two. What link characteristics, combined with what

kinds of application characteristics, would suggest the use of FEC? Which

ones would suggest the use of error detection combined with retransmitting

the data? It might be best to think of specific applications and specific link

types first, and then generalize from there.

  5. How many bit flips can a parity check detect?

  6. Implicit and explicit signaling have different characteristics, or rather different

tradeoffs. Describe at least one positive and one negative aspect of each form

of signaling for error detection and/or correction.

  7. In a large-scale deployment of anycast, it is possible for packets from a single

stream to be delivered to multiple receivers. There are two broad solutions to

this problem; the first is for receivers to force the sender to reset its state

if a packet appears to be misdelivered in this way. Another is to constrict the

interface between the sender and receiver in a way that allows state to be con-

tained to a single transaction. One form of this latter solution is called atomic

transactions, and is often implemented in RESTful interfaces. Consider these

two possible solutions, and describe the kinds of applications, giving specific

examples of applications, that might be better suited for each of these two
solutions.


  8. Would you always consider the dictionary and the grammar forms of metadata?
Why or why not?



  9. Find three other kinds of metadata that do not involve the way the data is
formatted, but rather describe the data in a way that might be useful to an attacker

trying to understand a specific process, such as transferring funds between two

accounts. Is there a specific limit to what might be considered metadata, or is it

more accurate to say “metadata is in the eye of the beholder”?

  10. Consider the negotiated bit rates explained toward the end of the chapter. Is it

possible to truly provide a constant bit rate in a packet switched network? Does

your answer depend on the network conditions? If so, what conditions would

impact the answer to the question?
