Modeling Network Transport
After reading this chapter, you should be able to:

- Understand the value of protocol stack models to network engineering
- Understand the Department of Defense (DoD) network protocol stack model, including the purpose of each layer
- Understand the Open Systems Interconnect (OSI) network protocol stack model, including the purpose of each layer
- Understand the Recursive Internet Architecture (RINA) model, and how it is different from the DoD and OSI models
- Understand the difference between the connection-oriented and connectionless models of protocol operation
The set of problems and solutions considered in the preceding chapter provides some
insight into the complexity of network transport systems. How can engineers engage
with the apparent complexity involved in such systems?
The first way is to look at the basic problems transport systems solve, and under-
stand the range of solutions available for each of those problems. The second is to
build models that will aid in the understanding of transport protocols by
- Helping engineers classify transport protocols by their purpose, the informa-
tion each protocol contains, and the interfaces between protocols
- Helping engineers know which questions to ask in order to understand a par-
ticular protocol, or to understand how a particular protocol interacts with the
network over which it runs, and the applications that it carries information for
- Helping engineers understand how single protocols fit together to make a
Chapter 1, “Fundamental Concepts,” provided a high-level overview of the
transport problem and solution spaces. This chapter will tackle the second way in
which engineers can understand protocols more fully: models. Models are essentially
abstract representations of the problems and solutions considered in the previous
chapter; they provide a more visual and module-focused representation, showing
how things fit together. This chapter will consider this question:
How can transport systems be modeled in a way that allows engineers to
quickly and fully grasp the problems these systems need to solve, as well as
the way multiple protocols can be put together to solve them?
Three specific models will be considered in this chapter:
- The United States Department of Defense (DoD) model
- The Open Systems Interconnect (OSI) model
- The Recursive Internet Architecture (RINA) model
Each of these three models has a different purpose and history. A second form of
protocol classification, connection oriented versus connectionless, will also be con-
sidered in this chapter.
United States Department of Defense (DoD) Model
In the 1960s, the US Defense Advanced Research Projects Agency (DARPA) spon-
sored the development of a packet switched network to replace the telephone net-
work as a primary means of computer communications. Contrary to the myth, the
original idea was not to survive a nuclear blast, but rather to create a way for the vari-
ous computers then being used at several universities, research institutes, and govern-
ment offices to communicate with one another. At the time, each computer system
used its own physical wiring, protocols, and other systems; there was no way to
interconnect these devices in order to even transfer data files, much less create any-
thing like the “world wide web,” or cross-execute software. These original models
were often designed to provide terminal-to-host communications, so you could
install a remote terminal into an office or shared space, which could then be used to
access the shared resources of the system, or host. Much of the original writing
around these models reflects this reality.
One of the earliest developments in this area was the DoD model, shown in Figure 3-1.
The DoD model separated the job of transporting information across a network
into four distinct functions, each of which could be performed by one of many pro-
tocols. The idea of having multiple protocols at each layer was considered somewhat
controversial until the late 1980s, and even into the early 1990s. In fact, one of the key
differences between the DoD and the original incarnation of the OSI model is the
concept of having multiple protocols at each layer.
In the DoD model:
- The physical layer is responsible for getting the 0s and 1s modulated, or serialized, onto the physical link. Each link type has a different format for signaling a 0 or a 1; the physical layer is responsible for translating 0s and 1s into physical signals.
- The internet layer is responsible for transporting data between systems that are not connected through a single physical link. The internet layer, then, provides networkwide addresses, rather than link local addresses, and also provides some means for discovering the set of devices and links that must be crossed to reach these destinations.
- The transport layer is responsible for building and maintaining sessions between communicating devices and providing a common transparent data transmission mechanism for streams or blocks of data. Flow control and reliable transport may also be implemented in this layer, as in the case of TCP.
Figure 3-1 The Four-Layer DoD Model
- The application layer is the interface between the user and the network resources, or specific applications that use and provide data to other devices attached to the network.
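The division of labor among these four layers can be seen in a small encapsulation sketch: each layer wraps the data handed down from the layer above with its own header. The header formats and field values below are invented placeholders for illustration, not real protocol formats.

```python
# A minimal sketch of DoD-style encapsulation. Each layer adds its own
# header to the payload it receives from the layer above; the physical
# layer finally serializes the whole packet into 0s and 1s.

def application_layer(user_data: str) -> bytes:
    # The application produces the data to be carried.
    return user_data.encode("utf-8")

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # The transport layer adds session and multiplexing information.
    header = f"TPT|{src_port}->{dst_port}|".encode()
    return header + payload

def internet_layer(segment: bytes, src: str, dst: str) -> bytes:
    # The internet layer adds networkwide addresses.
    header = f"NET|{src}->{dst}|".encode()
    return header + segment

def physical_layer(packet: bytes) -> str:
    # The physical layer serializes each byte into its bit pattern.
    return "".join(f"{byte:08b}" for byte in packet)

packet = internet_layer(
    transport_layer(application_layer("hello"), 49152, 80),
    "10.0.0.1", "10.0.0.2",
)
bits = physical_layer(packet)
```

Reading the nested calls from the inside out follows the data down the stack; the receiver would strip the same headers in reverse order.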
The application layer, in particular, seems out of place in a model of network
transport. Why should the application using the data be considered part of the trans-
port system? Because early systems considered the human user the ultimate user of
the data, and the application as primarily a way to munge data to be presented to
the actual user. Much of the machine-to-machine processing, heavy processing of
data before it is presented to a user, and simple storage of information in digital
format were not even considered viable use cases. As information was being transferred from one person to another, the application was just considered a part of the transport system.
Two other points might help the inclusion of the application make more sense.
First, in the design of these original systems, there were two components: a terminal
and a host. The terminal was really a display device; the application lived on the host.
Second, the networking software was not thought of as a separate “thing” in the sys-
tem; routers had not yet been invented, nor any other separate device to process and
forward packets. Rather, a host was just connected to either a terminal or another
host; the network software was just another application running on these devices.
Over time, as the OSI model came into more regular use, the DoD model was
modified to include more layers. For instance, in Figure 3-2, a diagram replicated
from a 1983 paper on the DoD model, there are seven layers (seven being a magic
number for some reason). 1
Here three layers have been added:
- The utility layer is a set of protocols living between the more generic transport layer and applications. Specifically, the Simple Mail Transfer Protocol (SMTP), File Transfer Protocol (FTP), and other protocols were seen as being a part of this utility layer.
- The network layer from the four-layer version has been divided into the net-
work layer and the internetwork layer. The network layer represents the differ-
ing packet formats used on each link type, such as radio networks and Ethernet
(still very new in the early 1980s). The internetwork layer unifies the view of
the applications and utility protocols running on the network into a single
internet datagram service.
1. Cerf and Cain, “The DoD Internet Architecture Model.”
Figure 3-2 A Later Version of the DoD Model
- The link layer has been inserted to differentiate between the encoding of infor-
mation onto the various link types and a device’s connection to the physical
link. Not all hardware interfaces provided a link layer.
Over time, these expanded DoD models fell out of favor; the four-layer model is
the one most often referenced today. There are several reasons for this:
- The utility and application layers are essentially duplicates of one another
in most cases. FTP, for instance, multiplexes content on top of the Transmis-
sion Control Protocol (TCP), rather than as a separate protocol or layer in the
stack. TCP and the User Datagram Protocol (UDP) eventually solidified as the
two protocols in the transport layer, with everything else (generally) running
on top of one of these two protocols.
- With the invention of devices primarily intended to forward packets (routers
and switches), the separation between the network and internetwork layers
was overcome by events. The original differentiation was primarily between
lower-speed long haul (wide area) links and shorter-run local area links; rout-
ers generally took the burden of installing links into wide area networks out of
the host, so the differentiation became less important.
- Some interface types simply do not have a way to separate signal encoding from the host interface, as was envisioned in the split between the link and physical layers. Hence these two layers are generally munged into a single “thing” in most implementations.
The DoD model is historically important because
- It is one of the first attempts to codify network functionality into a model.
- It is the model on which the TCP/IP suite of protocols (on which the global
Internet operates) was designed; the artifacts of this model are important in
understanding many aspects of TCP/IP protocol design.
- It had the concept of multiple protocols at any particular layer in the model
“built in.” This set the stage for the overall concept of narrowing the focus of
any particular protocol, while allowing many different protocols to operate at
once over the same network.
Open Systems Interconnect (OSI) Model
In the 1960s, carrying through to the 1980s, the primary form of communications
was the switched circuit; a sender would ask a network element (a switch) to connect
it to a particular receiver, the switch would complete the connection (if the receiver
was not busy), and traffic would be transmitted over the resulting circuit. If this
sounds like a traditional telephone system, this is because it is, in fact, based on the traditional telephone network (now called Plain Old Telephone Service [POTS]). Large
telephone and computer companies were deeply invested in this model, and received
a lot of revenue from systems designed around circuit switching techniques. As the
DoD model (and its set of accompanying protocols and concepts) started to catch on
with researchers, these incumbents decided to build a new standards organization
that would, in turn, build an alternate system providing the “best of both worlds.”
They would incorporate the best elements of packet switching, while retaining the
best elements of circuit switching, creating a new standard that would satisfy every-
one. In 1977, this new standards organization was proposed, and adopted, as part of
the International Organization for Standardization (ISO).
This new ISO working group designed a layered model similar to the pro-
posed (and rejected) packet-based model, grounded in database communications.
The primary goal was to allow intercommunication between the large database-
focused systems dominant in the late 1970s. The committee was divided between
telecom engineers and the database contingent, making the standards complex. The
protocols developed needed to provide for both connection-oriented and connec-
tionless session control, and invent the entire application suite to create email, file
transfer, and many other applications (remember, applications are part of the stack).
For instance, various transport modes needed to be codified to carry a wide array of
services. In 1989—a full ten years later—the specifications were still not completely
done. The protocol had not reached widespread deployment, even though many gov-
ernments, large computer manufacturers, and telecom companies supported it over
the DoD protocol stack and model.
But during the ten years the DoD stack continued to develop; the Internet Engi-
neering Task Force (IETF) was formed to shepherd the TCP/IP protocol stack, pri-
marily for researchers and universities (the Internet, as it was then known, did not
allow commercial traffic, and would not until 1992). With the failure of the OSI
protocols to materialize, many commercial networks, and networking equipment,
turned to the TCP/IP protocol suite to solve real-world problems “right now.”
Further, because the development of the TCP/IP protocol stack was being paid
for under grants by the U.S. government, the specifications were free. There were, in
fact, TCP/IP implementations written for a wide range of systems available because
of the work of universities and graduate students who needed the implementations
for their research efforts. The OSI specifications, however, could only be purchased
in paper form from the ISO itself, and only by members of the ISO. The ISO was
designed to be a “members only” club, meant to keep the incumbents firmly in con-
trol of the development of packet switching technology. The “members only” nature
of the organization, however, worked against the incumbents, eventually playing a
role in their decline.
The OSI model, however, made many contributions to the advancement of net-
working; for instance, the careful attention paid to Quality of Service (QoS) and
routing issues paid dividends in the years after. One major contribution was the con-
cept of clear modularity; the complexity of interconnecting many different systems,
with many different requirements, drove the OSI community to call for clear lines of
responsibility, and well-defined interfaces between the layers.
A second was the concept of machine-to-machine communication. Middle boxes,
then called gateways, now called routers and switches, were explicitly considered
part of the networking model, as shown in Figure 3-3.
You probably do not even need to see this image to remember the OSI model—everyone who’s ever been through a networking class, or studied for a network engineering certification, is familiar with using the seven-layer model to describe the way networks work.
The genius of modeling a network in this way is it makes the interactions
between the various pieces much easier to see and understand. Each pair of layers, moving vertically through the model, interacts through a socket, or Application Programming Interface (API). So to connect to a particular physical port, a piece of code at the data link layer would connect to the socket for that port. This allows the interaction between the various layers to be abstracted and standardized. A piece of software at the network layer does not need to know how to deal with various sorts of physical interfaces, only how to get data to the data link layer software on the same system.

[Figure content: two end systems communicating through an intermediate system; the protocol data units at each layer are bits (physical), frames (data link), packets (network), and segments (transport).]

Figure 3-3 The OSI Model, Including the Concept of an Intermediate System
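This layer-to-layer abstraction can be sketched in code: network layer software written against an abstract data link interface works unchanged over any link type. The class and method names here are illustrative, not part of any real API.

```python
# Sketch of the layering idea: the network layer depends only on an
# abstract data link "socket," never on a specific interface type.

from abc import ABC, abstractmethod

class DataLinkService(ABC):
    """The interface the network layer uses to reach layer 2."""
    @abstractmethod
    def send_frame(self, payload: bytes) -> None: ...

class EthernetLink(DataLinkService):
    def __init__(self) -> None:
        self.wire: list[bytes] = []
    def send_frame(self, payload: bytes) -> None:
        # Real code would add framing and a layer 2 address here.
        self.wire.append(payload)

class SerialLink(DataLinkService):
    def __init__(self) -> None:
        self.wire: list[bytes] = []
    def send_frame(self, payload: bytes) -> None:
        self.wire.append(payload)

def network_send(link: DataLinkService, packet: bytes) -> None:
    # The network layer only knows the abstract interface, so the same
    # code works over Ethernet, serial, or any other link type.
    link.send_frame(packet)
```

Swapping `EthernetLink` for `SerialLink` requires no change to `network_send`; this is exactly the benefit the layered model is meant to deliver.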
Each layer has a specific set of functions to perform.
The physical layer, also called layer 1, is responsible for getting the 0s and 1s mod-
ulated, or serialized, onto the physical link. Each link type will have a different for-
mat for signaling a 0 or 1; the physical layer is responsible for translating 0s and 1s
into these physical signals.
The data link layer, also called layer 2, is responsible for making certain transmit-
ted information is actually sent to the right computer connected to the same link.
Each device has a different data link (layer 2) address that can be used to send traffic
to a specific device. The data link layer assumes each frame within a flow of informa-
tion is separate from all other frames within the same flow, and only provides com-
munication for devices connected through a single physical link.
The network layer, also called layer 3, is responsible for transporting data between
systems not connected through a single physical link. The network layer, then, pro-
vides networkwide (or layer 3) addresses, rather than link local addresses, and also
provides some means for discovering the set of devices and links that must be crossed
to reach these destinations.
The transport layer, also called layer 4, is responsible for the transparent transfer
of data between different devices. Transport layer protocols can either be “reliable,” which means the transport layer will retransmit data lost at some lower layer,
or “unreliable,” which means data lost at lower layers must be retransmitted by some
higher layer application.
The session layer, also called layer 5, does not really transport data, but rather
manages the connections between applications running on two different computers.
The session layer makes certain the type of data, the form of the data, and the reli-
ability of the data stream are all exposed and accounted for.
The presentation layer, also called layer 6, actually formats data in a way to allow
the application running on the two devices to understand and process the data.
Encryption, flow control, and any other manipulation of data required to provide an
interface between the application and the network happen here. Applications inter-
act with the presentation layer through sockets.
The application layer, also called layer 7, provides the interface between the user and the application, which in turn interacts with the network through the presentation layer.
Not only can the interaction between the layers be described in precise terms
within the seven-layer model, the interaction between parallel layers on multiple
computers can be described precisely. The physical layer on the first device can be
said to communicate with the physical layer on the second device, the data link layer
on the first device with the data link layer on the second device, and so on. Just as
interactions between two layers on a device are handled through sockets, interactions
between parallel layers on different devices are handled through network protocols.
Ethernet describes the signaling of 0s and 1s onto a physical piece of wire, a for-
mat for starting and stopping a frame of data, and a means of addressing a single
device among all the devices connected to a single wire. Ethernet, then, falls within
both the physical and data link layers (1 and 2) in the OSI model.
IP describes the formatting of data into packets, and the addressing and other
means necessary to send packets across multiple data link layer links to reach a
device several hops away. IP, then, falls within the network layer (3) of the OSI model.
TCP describes session setup and maintenance, data retransmission, and interaction with applications. TCP, then, falls within the transport and session layers (4 and 5) of the OSI model.
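The protocol-to-layer mapping just described can be captured as a simple lookup table; the sketch below records protocols that straddle layers as a range. This is only an illustration of the mapping in the text, not an exhaustive classification.

```python
# The OSI layer numbers and the protocol placements described above.
OSI_LAYERS = {
    1: "physical", 2: "data link", 3: "network", 4: "transport",
    5: "session", 6: "presentation", 7: "application",
}

# Protocols that span layers are recorded as an inclusive range.
PROTOCOL_LAYERS = {
    "Ethernet": (1, 2),  # signaling plus framing and link addressing
    "IP": (3, 3),        # networkwide addressing and forwarding
    "TCP": (4, 5),       # reliable transport plus session handling
}

def layers_of(protocol: str) -> list[str]:
    low, high = PROTOCOL_LAYERS[protocol]
    return [OSI_LAYERS[n] for n in range(low, high + 1)]
```

For example, `layers_of("Ethernet")` returns both the physical and data link layers, reflecting the dual role described above.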
One of the more confusing points for engineers who only ever encounter the TCP/
IP protocol stack is the different way the protocols designed in/for the OSI stack
interact with devices. In TCP/IP, addresses refer to interfaces (and, in a world of net-
works with a lot of virtualization, multiple addresses can refer to a single interface,
or to an anycast service, or to a multicast data stream, etc.). In the OSI model, how-
ever, each device has a single address. This means the protocols in the OSI model are
often referred to by the types of devices they are designed to connect. For instance,
the protocol carrying reachability and topology (or routing) information through the
network is called the Intermediate System to Intermediate System (IS-IS) protocol,
because it runs between intermediate systems. There is also a protocol designed to
allow intermediate systems to discover end systems; this is called the End System to
Intermediate System (ES-IS) protocol (you did not expect creative names, did you?).
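The addressing difference can be made concrete with a small data model sketch: in the TCP/IP world, addresses attach to interfaces (and an interface may carry several), while an OSI system carries exactly one address for the whole device. The sample addresses below are purely illustrative.

```python
# Sketch of the addressing difference between the two protocol suites.

from dataclasses import dataclass, field

@dataclass
class TcpIpDevice:
    # Addresses belong to interfaces; each interface may have several.
    interfaces: dict[str, list[str]] = field(default_factory=dict)

@dataclass
class OsiSystem:
    # Exactly one address identifies the whole device.
    address: str
    kind: str  # "end system" or "intermediate system"

router = TcpIpDevice(interfaces={
    "eth0": ["192.0.2.1", "192.0.2.2"],   # multiple addresses on one interface
    "eth1": ["198.51.100.1"],
})

is_node = OsiSystem(address="49.0001.0000.0000.0001.00",
                    kind="intermediate system")
```

Because the OSI address names the device rather than an attachment point, its protocols naturally describe conversations between kinds of devices, which is how IS-IS and ES-IS got their names.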
It is one of the sad facts of network engineering history that proponents of the
TCP/IP protocol suite developed an early dislike of the OSI protocol suite, to the
point of rejecting the lessons learned in their development. While this has largely
worn down into a rather more mild bit of fun in more recent years, the years lost
to rejecting a protocol based on its origins, rather than its technical merits, are a
lesson in humility in network engineering. Focus on the ideas, rather than the peo-
ple; learn from everyone and every project you can; do not allow your ego to get in
the way of the larger project, or solving the problem at hand.
Recursive Internet Architecture (RINA) Model
The DoD and OSI models have two particular focal points in common:
- They both contain application layers; this makes sense in the context of the
earlier world of network engineering, as the application and network software
were all part of a larger system.
- They combine the concepts of what data should be contained where with the
concept of what goal is accomplished by a particular layer.
This leads to some odd questions, such as
- The Border Gateway Protocol (BGP), which provides routing (reachability)
between independent entities (autonomous systems), runs on top of the trans-
port layer in both models. Does this make it an application? At the same time,
this protocol is providing reachability information the network layer needs to
operate. Does this make it a network layer protocol?
- IPsec adds information to the Internet Protocol (IP) header, and specifies the encryption of information being carried across the network. Because IP is a network layer protocol, and IPsec (sort of) runs on top of IP, does this make IPsec a transport protocol? Or, because IPsec runs parallel to IP, is it a network layer protocol?
Arguing over these kinds of questions can provide a lot of entertainment at a
technical conference or standards meeting; however, they also point to some amount
of ambiguity in the way these models are defined. The ambiguity comes from the
careful mixture of form and function found in these models; do they describe where
information is contained, who uses the information, what is done to the information, or a specific goal that needs to be met to resolve a specific problem in transporting information through a network? The answer is—all of the above. Or perhaps, none of the above.
This leads to the following observation: there are really only four functions any
data-carrying protocol can serve: transport, multiplexing, error correction, and flow
control. If these sound familiar, they should—because these are the same four func-
tions uncovered in the investigation of human language in Chapter 2, “Data Trans-
port Problems and Solutions.”
There are two natural groupings within these four functions: transport and multi-
plexing, error and flow control. So most protocols fall into doing one of two things:
- The protocol provides transport, including some form of translation from one
data format to another; and multiplexing, the capability of the protocol to
keep data from different hosts and applications separate.
- The protocol provides error control, either through the capability to correct
small errors or to retransmit lost or corrupted data; and flow control, which
prevents undue data loss because of a mismatch between the network’s capa-
bility to deliver data and the application’s capability to generate data.
From this perspective, Ethernet provides transport services and flow control, so
it is a mixed bag concentrated on a single link, port to port (or tunnel endpoint to
tunnel endpoint) within a network. IP is a multihop protocol (a protocol that spans
more than one physical link) providing transport services, while TCP is a multihop
protocol that uses IP’s transport mechanisms and provides error correction and flow
control. Figure 3-4 illustrates the iterative model.
[Figure content: the same two function pairs, transport/multiplex and error/flow control, repeated across link 1, link 2, and link 3.]

Figure 3-4 The RINA Model

Each layer of the model has one of the same two functions, just at a different scope. This model has not caught on widely in network protocol work, but it provides a much simpler view of network protocol dynamics and operations than either the seven- or four-layer models, and it adds in the concept of scope, which is of vital importance in considering network operation. The scope of information is the foundation of network stability and resilience.
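A minimal sketch of the iterative idea follows: the same two function pairs repeat at every scope. The scope labels mirror the Ethernet, IP, and TCP examples above; they are illustrative, not a complete taxonomy.

```python
# Sketch of the RINA view: every layer performs the same pair of
# functions (transport/multiplex, error/flow control), only at a
# different scope.

from dataclasses import dataclass

@dataclass
class Layer:
    scope: str                 # how far this instance of the functions reaches
    transport_multiplex: bool  # transport plus multiplexing services
    error_flow: bool           # error correction plus flow control

stack = [
    Layer("single link (e.g., Ethernet)", True, True),
    Layer("multihop path (e.g., IP)", True, False),
    Layer("end-to-end (e.g., TCP over IP)", False, True),
]

# Every layer provides at least one of the two function pairs; the
# only thing that changes as you move up the stack is the scope.
assert all(layer.transport_multiplex or layer.error_flow for layer in stack)
```

Note that nothing in this description mentions where data lives or which application consumes it; the model classifies purely by function and scope, which is exactly what removes the ambiguity discussed above.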
Connection Oriented and Connectionless
The iterative model also brings the concepts of connection-oriented and connection-
less network protocols out into the light of day again.
Connection-oriented protocols set up an end-to-end connection, including all the
state to transfer meaningful data, before sending the first bit of data. The state could
include such things as the Quality of Service requirements, the path the traffic will
take through the network, the specific applications that will send and receive the
data, the rate at which data can be sent, and other information. Once the connection
is set up, data can be transferred with very little overhead.
Connectionless services, on the other hand, combine the data required to trans-
mit data with the data itself, carrying both in a single packet (or protocol data unit).
Connectionless protocols simply spread the state required to carry data through the
network to every possible device that might need the data, while connection-oriented
models constrain state to only devices that need to know about a specific flow of
packets. The result is that single device or link failures in a connectionless network can be healed by moving the traffic onto another possible path, rather than redoing all the work needed to build the state to continue carrying traffic from source to destination.
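The tradeoff can be sketched in code: a connectionless service carries all delivery state in every packet, while a connection-oriented service installs state during setup and then sends data carrying only a small connection identifier. Everything below is an invented illustration, not a real protocol.

```python
# Contrast of the two models in miniature.

def connectionless_packet(src: str, dst: str, data: bytes) -> dict:
    # All the state needed to deliver the data rides in the packet itself.
    return {"src": src, "dst": dst, "data": data}

class ConnectionOrientedService:
    def __init__(self) -> None:
        self._connections: dict[int, tuple[str, str]] = {}
        self._next_id = 0

    def setup(self, src: str, dst: str) -> int:
        # State is installed once, before the first bit of data is sent.
        self._next_id += 1
        self._connections[self._next_id] = (src, dst)
        return self._next_id

    def send(self, conn_id: int, data: bytes) -> dict:
        # Once the connection exists, data travels with little overhead.
        return {"conn": conn_id, "data": data}

svc = ConnectionOrientedService()
cid = svc.setup("host-a", "host-b")
frame = svc.send(cid, b"hello")
```

The per-packet overhead is lower in the connection-oriented case, but the state in `_connections` must be rebuilt after a failure; the connectionless packet can simply be forwarded along any surviving path.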
Most modern networks are built with connectionless transport models combined
with connection-oriented Quality of Service, error control, and flow control models.
This combination is not always ideal; for instance, Quality of Service is normally
configured along specific paths to match specific flows that should be following
those paths. This treatment of Quality of Service as more connection oriented than
the actual traffic flows being managed causes strong disconnects between the ideal
state of a network and various possible failure modes.
Final Thoughts

Knowing a number of models, and how they apply to various network protocols, can
help you quickly understand a protocol you have not encountered before and diag-
nose problems in an operational network. Knowing the history of the protocol mod-
els can help you understand why particular protocols were designed the way they
were, particularly the problems the protocol designers thought needed to be solved,
and the protocols surrounding the protocol when it was originally designed. Differ-
ent kinds of models abstract a set of protocols in different ways; knowing several
models, and how to fit a set of protocols into each of the models, can help you understand the protocol operation in different ways, rather than a single way, much like seeing a vase in a painting is far different from seeing it as a three-dimensional object.
Of particular importance are the two concepts of connectionless and connection-
oriented protocols. These two concepts will be foundational in understanding flow
control, error management, and many other protocol operations.
The next chapter is going to apply these models to lower layer transport protocols.
Further Reading

Cerf, Vinton G., and Edward Cain. “The DoD Internet Architecture Model.” Computer Networks 7 (1983): 307–18.
Day, J. Patterns in Network Architecture: A Return to Fundamentals. Indianapolis,
IN: Pearson Education, 2007.
Grasa, Eduard. “Design Principles of the Recursive InterNetwork Architecture.” In 3rd FIArch Workshop. Brussels, 2011. http://www.future-internet.eu/fileadmin/
Maathuis, I., and W. A. Smit. “The Battle between Standards: TCP/IP vs. OSI. Victory through Path Dependency or by Quality?” In Standardization and Innovation in Information Technology.
Padlipsky, Michael A. The Elements of Networking Style and Other Essays and Ani-
madversions on the Art of Intercomputer Networking. Prentice-Hall, 1985.
Russell, Andrew L. “OSI: The Internet That Wasn’t.” Professional Organization.
IEEE Spectrum, September 27, 2016. https://spectrum.ieee.org/tech-history/
White, Russ, and Denise Donohue. The Art of Network Architecture: Busi-
ness-Driven Design. 1st edition. Indianapolis, IN: Cisco Press, 2014.
Review Questions

- Research the protocols in the X.25 stack, which predates the three network models described in this chapter. Does the X.25 protocol stack show a layered design? Which layers of the DoD and OSI models does each protocol in the X.25 stack fit into? Can you describe each protocol in terms of the RINA model?
- Research the protocols in the IBM Systems Network Architecture (SNA) stack,
which predates the three network models described in this chapter. Does the
SNA protocol stack show a layered design? Which layers of the DoD and OSI
models does each protocol in the SNA stack fit in to? Can you describe each
protocol in terms of the RINA model?
- Billing is considered in some protocol stacks and models (such as the X.25
stack), and not in others. Why do you think this might be the case? Consider
the way in which network utilization is used in the IP and X.25 stacks, specifi-
cally the use of bandwidth versus packets as a primary measurement system.
- How does a layered network model contribute to the modularity of network design?
- How does a layered network model improve an engineer’s understanding of
how a network works?
- Draw a diagram comparing the DoD and OSI models. Does each layer from
one model fit neatly into the other?
- Consider the OSI and RINA models; can you figure out which services from
the RINA model fit into which layers in the OSI model?
- Consider the connectionless versus connection-oriented models of protocol
operation in light of the State/Optimization/Surface model, specifically in
terms of state and optimization. Can you explain where adding state in a con-
nection-oriented model increases optimal use of network resources? How does
it decrease the optimal use of network resources?
- In older network models, applications were often considered part of the proto-
col stack. Over time, however, applications seem to have been largely separated
out of the network protocol stack, and considered as a “user” or “consumer”
of network services. Can you think of a particular shift in the design of end
hosts in relationship to the applications running on end hosts that would cause
this shift in thinking in network engineering?
- Do you think fixed length packets (or frames, or cells) make more sense from a
protocol design perspective than variable length packets? How much state does
a variable length packet format add compared to a fixed length format? How
much optimization is gained? A useful point of departure for answering this
question would be a list or chart of the average packet lengths carried through
the global Internet.