
06 | Networking

Disclaimer | Attribution

This short "Networking Basics" section is based on my own notes from Computer Networking | A Top-Down Approach (8th edition) by Jim Kurose and Keith Ross.

Only the first chapter, "Chapter 1 | Computer Networks and the Internet", is documented here. I highly recommend this book to anybody starting out with networking.

Computer Networks and the Internet

Computer networks and the Internet are among the most complex systems humanity has ever created, with millions of connected devices, billions of users, and countless applications. At first glance, it might seem overwhelming to understand how this vast network operates. However, there is structure and logic behind it—principles that can make studying computer networking both interesting and accessible.

The goal of the book is to provide a modern introduction to the dynamic field of computer networking, giving you the foundational knowledge needed to grasp not only today’s networks but also those of the future. The first chapter sets the stage by offering an overview of computer networking and the Internet, painting a broad picture before diving into specific details.

We’ll start by examining the hardware and software components that make up a network, beginning at the edge where end systems like laptops and smartphones connect to the network. From there, we’ll explore the core of the network—those links and switches responsible for transporting data—and the physical media that connect everything together. Remember, the Internet is a network of networks, and understanding how these pieces fit together is key.

In the latter half of this chapter, I’ll introduce some abstract concepts, such as the variables affecting data transmission: delay, loss, and throughput. I’ll also discuss architectural principles like protocol layering and service models, which provide the framework for how networks function. Additionally, we’ll touch on security—how networks can be protected from attacks and vulnerabilities—and briefly review the history of computer networking.

Overall, this chapter is designed to give you a solid understanding of what makes computer networking unique, even in such a massive and complex system.

1 | What Is the Internet?

In this book, we'll use the public Internet as our principal vehicle for exploring computer networking and protocols. The Internet can be understood through its nuts-and-bolts hardware and software components, such as routers and switches, as well as its role as a networking infrastructure that supports distributed applications, as illustrated in Figure 1.1. This dual perspective provides a comprehensive view of the Internet's structure and function.

11 | A Nuts-and-Bolts Description

The Internet is a vast network forecast to connect over 28 billion devices by 2022, encompassing traditional computers and a growing array of smartphones, tablets, IoT devices, smart home gadgets, and more. Around 50% of the world's population uses the mobile Internet, a share projected to rise to 75% by 2025. Data travels in packets, which are routed through packet switches much as cars travel through intersections in a transportation network. Communication links use various physical media, such as cables and radio spectrum, each with a different transmission rate.

Internet Service Providers (ISPs) connect users and content providers, offering network access via broadband, LANs, or mobile services. These access ISPs are themselves interconnected through upper-tier ISPs using high-speed infrastructure. Protocols such as TCP and IP, standardized by the IETF in nearly 9000 RFCs, are essential for data transfer, while standards from bodies like IEEE 802 ensure compatibility among network components. This structure allows the Internet's connected ecosystem to operate and grow seamlessly.

12 | A Services Description

The Internet can be viewed from two perspectives: as a network of hardware and software components and as an infrastructure providing services to distributed applications.

  • Distributed Applications | Examples include email, web surfing, mobile apps, streaming services, and social media. These applications run on end systems rather than within the network core.

  • Internet Socket Interface | A set of rules that a sending program must follow so that the Internet can deliver its data to the destination program, much as the postal service requires a letter to be addressed and stamped according to its rules.

  • Postal Service Analogy | Just as Alice must follow the postal service's conventions for her letter to reach Bob, a program must use the socket interface for its data to reach the destination program.

  • Multiple Services | The Internet offers various services, similar to postal services, which will be detailed in future sections of the book.

  • Clarification of Terms | Key concepts like packet switching, routers, communication links, and attaching devices (e.g., thermostats) to the Internet will be explained later.

13 | What Is a Protocol?

A protocol serves as the rulebook for communication between entities in computer networking. It defines the format and sequence of messages exchanged, guiding how devices interact and what actions should be taken upon receiving messages.

  • Communication Rules | Protocols are analogous to human etiquette, establishing expected behaviors in data transmission. They ensure that devices communicate correctly by dictating message formats and response actions.

  • Network Functionality | Without shared protocols, devices cannot effectively interact. This is akin to two people speaking different languages; without a common language, communication breaks down.

  • Examples and Processes | Protocols are integral in processes like web browsing. When you request a webpage, your computer follows specific protocols to send requests, receive responses, and handle data transmission (a minimal sketch of such an exchange follows after this list).

  • Complexity and Importance | Protocols vary in complexity, from simple rules for basic communication to intricate systems for complex tasks. Understanding these protocols is essential for network efficiency, ensuring smooth interactions among diverse devices.
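
To make the web-browsing example above concrete, here is a minimal sketch that speaks a real protocol (HTTP/1.1) over a raw TCP socket. It assumes outbound network access, and example.com is only an illustrative host; the point is that the protocol fixes the exact format and order of the messages exchanged.

```python
# Minimal sketch: speaking a real protocol (HTTP/1.1) over a raw TCP socket.
# Assumes outbound network access; example.com is just an illustrative host.
import socket

HOST = "example.com"
PORT = 80  # well-known port for HTTP

with socket.create_connection((HOST, PORT)) as s:
    # The protocol dictates the format and order of messages:
    # a request line, then headers, then a blank line ending the request.
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    s.sendall(request.encode("ascii"))

    # The server's reply follows the same rulebook: a status line
    # (e.g. "HTTP/1.1 200 OK"), headers, then the page content.
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # the status line
```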

In essence, protocols are the foundation of effective computer networking, providing the necessary structure and guidelines for seamless data exchange.

2 | The Network Edge

The network edge is the access point where devices connect to the Internet, forming the perimeter of the network. These endpoints are referred to as end systems or hosts. End systems include a wide range of devices, such as desktop computers, mobile devices (laptops, smartphones), servers, and increasingly, non-traditional "things" like IoT devices.

End systems can be categorized into two main roles:

  • Clients | Devices that primarily access data or services, such as desktops, laptops, and smartphones.
  • Servers | More powerful machines that provide resources, such as Web servers, email servers, or data centers (like Google's millions of servers across continents).

The expansion of the Internet to include IoT devices highlights its growth beyond traditional computing, emphasizing the evolving connectivity of everyday objects in the network edge.

21 | Access Networks

The landscape of internet access technologies is diverse, each offering unique advantages tailored to different needs and contexts. Here's a structured overview:

  1. Broadband Access Technologies:

    • DSL (Digital Subscriber Line): Utilizes existing telephone lines for high-speed internet access, ideal for remote areas without fiber infrastructure.
    • Cable Internet: Uses the same infrastructure as cable TV, providing reliable and widely available connectivity.
    • FTTH (Fiber to the Home): Offers ultra-fast speeds by running fiber optic cable directly to homes, though it is less common due to higher installation costs.
  2. 5G Fixed Wireless:

    • A cutting-edge solution for high-speed internet without traditional wiring, using 5G technology to connect stationary devices at home or businesses. It addresses infrastructure limitations and offers a cost-effective alternative where fiber isn't feasible.
  3. Enterprise and Home Networks:

    • Ethernet: Dominant in corporate settings, connecting devices via twisted-pair copper wire and ensuring high-speed data transfer for servers and regular users alike.
    • WiFi (IEEE 802.11 standards): An evolving technology providing enhanced speed, range, and reliability across multiple bands (2.4 GHz and 5 GHz, with 6 GHz added in Wi-Fi 6E). It offers a seamless, integrated experience for devices both at home and on the go.
  4. Wide-Area Wireless Access:

    • 3G, LTE, and 5G Mobile Networks: Enable high-speed internet access for mobile devices, crucial for streaming, gaming, and large file downloads. 5G promises faster speeds and improved performance over previous generations.
  5. Integration and Future Trends:

    • Technologies like 5G fixed wireless and advanced WiFi standards are expanding connectivity options. The integration of these technologies ensures a smooth user experience, switching between mobile and stationary connections seamlessly.
    • Future advancements in 5G and beyond aim to enhance speed, reliability, and capacity, addressing the growing demand for high-speed internet.

In conclusion, the choice of internet access technology hinges on factors like availability, cost, required speed, and specific use cases. As technology evolves, more robust and efficient options will emerge, catering to diverse needs in a rapidly connected world.

22 | Physical Media

Introduction to Physical Media

  • Physical media include cables and wires used to transmit data, connecting devices within a network.

Coaxial Cable

  • Consists of concentric copper conductors with special insulation and shielding.
  • Historically used for cable television and now adapted for high-speed internet access via cable modems.

Fiber Optics

  • A thin, flexible medium that conducts pulses of light, offering significant advantages: immunity to electromagnetic interference, very low attenuation over long distances, and resistance to tapping.
  • Widely used in the Internet backbone and long-haul links, but its optical devices remain expensive for short-haul applications such as residential access.

Terrestrial Radio Channels

  • Wireless communication without physical wires, capable of penetrating walls.
  • Used for local area coverage, including Wi-Fi and cellular networks, despite challenges like path loss and interference.

Satellite Links

  • Includes geostationary satellites providing broad coverage but introducing propagation delays.
  • Low-Earth Orbiting (LEO) satellites offer mobile communication and future potential for internet access in remote areas.

Comparison and Applications

  • Coaxial vs. Fiber | Coaxial cable is cost-effective over shorter distances, while fiber provides superior performance over long hauls at a higher cost.
  • The choice of medium depends on factors like cost, performance requirements, and specific network needs.

Emerging Technologies

  • Satellite-based internet access is poised to become more prominent in areas without traditional services, highlighting the evolution of data transmission solutions.

In conclusion, physical media each serve unique roles in networking, from traditional cables to cutting-edge fiber optics, with emerging technologies like satellite links offering new possibilities for connectivity.

3 | The Network Core

The network core is a complex mesh of interconnected packet switches and links that facilitate efficient data transmission across the Internet, ensuring reliability through multiple pathways.

31 | Packet Switching

Introduction to Networking

  • The movement of data packets across networks is facilitated by switches using MAC (Media Access Control) addresses for local communication and routers using IP (Internet Protocol) addresses for global communication.

Addressing and Forwarding

  • MAC Addresses | Used within a local network (LAN) to identify devices on a physical network segment, enabling efficient data transmission between neighboring nodes.
  • IP Addresses | Globally unique identifiers used in the Internet to locate devices across wide-area networks (WANs). Routers use forwarding tables to direct packets based on their destination IP addresses.

Queuing and Congestion

  • When multiple packets arrive at a router faster than the link can handle, they are stored in an output queue. Excessive traffic can lead to congestion, causing delays or packet loss as routers drop packets during overflow conditions.

Forwarding Tables and Routing Protocols

  • Routers maintain forwarding tables that map destination IP addresses to specific outbound links. These tables are dynamically updated by routing protocols, which determine the shortest path to a destination and configure the tables accordingly. This automation allows for efficient and scalable network operation without manual configuration.
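
As a toy illustration, the sketch below implements a forwarding table as a mapping from destination prefixes to outbound links, using longest-prefix matching; the prefixes and link names are invented for the example.

```python
# Toy forwarding table: longest-prefix match on destination IP addresses.
# A real router does this in specialized hardware; this sketch only shows
# the idea. Prefixes and link names below are invented for illustration.
import ipaddress

FORWARDING_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "link-1",
    ipaddress.ip_network("203.0.0.0/16"): "link-2",
    ipaddress.ip_network("0.0.0.0/0"): "link-3",  # default route
}

def forward(dst: str) -> str:
    """Return the outbound link for the longest prefix matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FORWARDING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

print(forward("203.0.113.7"))   # link-1 (most specific match)
print(forward("203.0.200.9"))   # link-2
print(forward("198.51.100.1"))  # link-3 (default route)
```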

32 | Circuit Switching

Circuit switching establishes a dedicated end-to-end connection between devices, with transmission resources reserved until the data transfer is complete. Ideal for real-time services like telephone calls, it guarantees a constant transmission rate once the circuit is established. However, it is less efficient when the reserved capacity is not fully used, since idle resources go to waste.

Packet switching, on the other hand, sends data in packets that are forwarded hop by hop through the network's packet switches. This method leverages statistical multiplexing: many users share a single link on demand, without reserving capacity, so bandwidth left idle by one user is dynamically used by others.
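
The book's classic comparison makes this concrete: a 1 Mbps link shared by users who transmit at 100 kbps but are active only 10% of the time. Circuit switching can admit just 10 such users, while packet switching can admit 35, because the chance that more than 10 are active at once is vanishingly small. A quick check of that probability:

```python
# Circuit vs. packet switching: the classic 1 Mbps link example.
# Each user sends at 100 kbps when active and is active 10% of the time.
from math import comb

LINK_RATE = 1_000_000   # bits per second
USER_RATE = 100_000     # bits per second when active
P_ACTIVE = 0.10
N_USERS = 35            # packet switching admits 35; circuit switching only 10

max_active = LINK_RATE // USER_RATE  # 10 users fit simultaneously

# Binomial probability that more than max_active of the 35 users are
# active at the same time, i.e. that the shared link is oversubscribed.
p_congested = sum(
    comb(N_USERS, k) * P_ACTIVE**k * (1 - P_ACTIVE) ** (N_USERS - k)
    for k in range(max_active + 1, N_USERS + 1)
)
print(f"P(more than {max_active} users active) = {p_congested:.4f}")  # ~0.0004
```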

While circuit switching excels in scenarios requiring simultaneous and predictable data transfer, packet switching offers greater scalability and cost-effectiveness, making it preferred in modern networks despite challenges in handling real-time applications due to potential delays or packet loss. The trend in telecommunications increasingly favors packet switching for its efficiency and flexibility.

33 | A Network of Networks

The Internet can be visualized as a complex hierarchical network composed of multiple layers, each serving specific functions to ensure efficiency, redundancy, and scalability. Here's a structured breakdown:

Multi-Tier Hierarchy (Network Structure 3)

  • Structure | Higher-tier ISPs (like tier-1) interconnect with lower-tier ISPs (access ISPs).
  • Function | Ensures that if one ISP fails, traffic can still be rerouted through other higher-tier providers, enhancing robustness.

Adding PoPs, Multi-Homing, Peering, and IXPs (Network Structure 4)

  • Points of Presence (PoPs) | Entry points where customer ISPs connect into provider networks.
  • Multi-Homing | Redundancy by connecting to multiple providers, crucial for higher-tier ISPs.
  • Peering | Direct connections between ISPs of similar levels to reduce costs and improve efficiency.
  • Internet Exchange Points (IXPs) | Physical locations where multiple ISPs meet to exchange traffic, facilitating efficient peering.

Content-Provider Networks (Network Structure 5)

  • Structure | Content providers like Google have their own global networks, separate from the public Internet, with data centers optimized for performance.
  • Function | Use peering and IXPs to bypass higher tiers, reducing costs and enhancing user experience by placing data centers closer to users.

Integration and Redundancy

  • Content providers integrate with lower-tier ISPs through PoPs or IXPs, optimizing their network layout.
  • Multi-homing and peering ensure redundancy and fault tolerance, allowing the system to withstand ISP failures.

Scalability and Efficiency

  • The hierarchical structure allows for efficient traffic flow and scalability, managing growth without a single point of failure.
  • Peering at different levels (between ISPs or content providers) enhances overall network performance and reduces costs.

In summary, the Internet's layered structure ensures redundancy, efficiency, and scalability, with content providers optimizing their paths through strategic peering and IXPs. This design balances cost-effectiveness with robustness, making the system reliable and scalable for future growth.

4 | Delay, Loss, and Throughput in Packet-Switched Networks

The Internet is viewed as an infrastructure that provides services to distributed applications running on end systems. While the ideal scenario would involve instantaneous, lossless data transfer between any two end systems, reality imposes significant constraints. Computer networks inevitably introduce delays, experience packet loss, and limit throughput—the amount of data transferred per second—due to physical limitations.

Despite these challenges, delay, loss, and constrained throughput raise fascinating and intricate issues in network design and optimization. They not only impose practical limitations but also motivate extensive research; many a PhD thesis has been devoted to addressing them. This section explores these issues in depth, examining how to quantify their impacts and mitigate them to improve network performance.

41 | Overview of Delay in Packet-Switched Networks

In network communication, delays can arise from four main factors: processing, queuing, transmission, and propagation. Each contributes uniquely to the total delay experienced by data packets.

Processing Delay (d_proc)

  • Time a router takes to examine a packet's header and determine where to direct it, including checks such as bit-level error detection.
  • Typically microseconds or less, but it caps how fast a router can forward packets.

Queuing Delay (d_queue)

  • Time packets wait in a queue before transmission.
  • Increases with congestion and traffic volume, leading to bottlenecks.

Transmission Delay (d_trans)

  • Time required to push all of the packet's bits onto the link: d_trans = L/R for a packet of L bits on a link of transmission rate R.
  • Negligible on high-speed links but significant on slow ones, such as dial-up modem connections.

Propagation Delay (d_prop)

  • Time for a bit to propagate from one end of the link to the other: d_prop = d/s, where d is the link's length and s is the propagation speed of the medium (roughly 2 × 10^8 m/s in fiber or copper).
  • Depends on distance, not on the link's transmission rate, and is most pronounced in long-distance networks.
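
Putting the four components together gives the total nodal delay, d_nodal = d_proc + d_queue + d_trans + d_prop, where d_trans = L/R for a packet of L bits on a link of rate R, and d_prop = d/s for a link of length d and propagation speed s. A back-of-the-envelope sketch with illustrative numbers:

```python
# Back-of-the-envelope nodal delay: d_nodal = d_proc + d_queue + d_trans + d_prop.
# All numbers are illustrative assumptions, not measurements.
PACKET_BITS = 1_500 * 8   # L: a 1500-byte packet
LINK_RATE = 10_000_000    # R: 10 Mbps link
DISTANCE = 1_000_000      # d: 1000 km of fiber, in meters
PROP_SPEED = 2e8          # s: roughly 2/3 the speed of light, in m/s

d_proc = 20e-6                       # assumed: tens of microseconds
d_queue = 0.0                        # zero if the queue happens to be empty
d_trans = PACKET_BITS / LINK_RATE    # L/R = 1.2 ms
d_prop = DISTANCE / PROP_SPEED       # d/s = 5 ms

d_nodal = d_proc + d_queue + d_trans + d_prop
print(f"d_trans = {d_trans * 1e3:.2f} ms, d_prop = {d_prop * 1e3:.2f} ms, "
      f"d_nodal = {d_nodal * 1e3:.2f} ms")
```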

Interactions and Real-World Implications:

  • Network Conditions | High-speed links minimize transmission and propagation delays, making processing and queuing critical. Slow links amplify all delays, causing congestion and performance issues.

  • Example (Caravan Analogy) | Each tollbooth represents a router: the time a booth takes to service the entire caravan mirrors transmission delay, while driving between booths mirrors propagation delay. Queues build up whenever cars arrive faster than a booth can service them.

  • Troubleshooting and Optimization | Addressing each delay type is crucial. Tools like QoS policies can prioritize traffic, enhancing performance in systems like cloud computing or gaming where latency is critical.

Conclusion

Understanding these delays helps in diagnosing network issues, optimizing traffic flow, and improving applications such as cloud operations and online gaming by reducing lag and enhancing user experience.

42 | Queuing Delay and Packet Loss

In network communication, queuing delay is a significant factor that arises when packets wait in a queue before they can be transmitted. It is governed by the traffic intensity La/R, where L is the packet length in bits, a is the average packet arrival rate, and R is the link's transmission rate. A key insight is that if La/R exceeds 1, bits arrive faster than they can be transmitted, so the queue grows without bound and queuing delay escalates.

The impact of traffic intensity can be illustrated through examples:

  • Periodic Arrivals | If one packet arrives every L/R seconds, each packet finds the queue empty and experiences no queuing delay. If instead N packets arrive together every N * L/R seconds (burst arrivals), the first packet is transmitted immediately, but the nth must wait (n - 1) * L/R seconds for the packets ahead of it (worked through in the sketch after this list).

  • Random Arrivals | In real-world scenarios, packet arrivals are typically random, making traffic intensity alone insufficient to predict all delay statistics. However, as the traffic intensity approaches 1, even minor increases can significantly escalate delays, highlighting the sensitivity of network performance to congestion.
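
The burst-arrival case is easy to work through numerically. In this sketch (all values illustrative), N packets arrive together and the nth waits for the n - 1 packets ahead of it to be transmitted:

```python
# Burst arrivals: N packets arrive together every N * L / R seconds.
# The first packet waits 0 seconds; the nth waits (n - 1) * L / R.
L = 1_500 * 8    # packet length in bits (1500-byte packets)
R = 1_000_000    # link transmission rate: 1 Mbps
N = 5            # packets per burst

delays = [(n - 1) * L / R for n in range(1, N + 1)]
avg_delay = sum(delays) / N  # equals (N - 1) * L / (2 * R)

print([f"{d * 1e3:.0f} ms" for d in delays])  # 0, 12, 24, 36, 48 ms
print(f"average queuing delay = {avg_delay * 1e3:.0f} ms")  # 24 ms
```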

Additionally, finite queue capacities lead to packet loss when the queue is full and a new packet arrives. This loss can reduce network efficiency and necessitate mechanisms like retransmission to ensure data integrity, emphasizing the importance of handling packet loss in network design.

In essence, queuing delay and packet loss are crucial considerations in network performance, affecting both delay times and data reliability, thus requiring careful management to optimize communication efficiency.

43 | End-to-End Delay

In network communication, understanding end-to-end delay is crucial for assessing performance. It combines the per-node components introduced above: processing delay at each router (d_proc), transmission delay onto each link (d_trans = L/R), and propagation delay across each link (d_prop). Assuming N - 1 routers between source and destination, an uncongested network (negligible queuing delay), and identical links, the total is d_end-end = N * (d_proc + d_trans + d_prop), since the packet crosses N links and accumulates these delays at every hop.
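
Under those assumptions the accumulation is straightforward to compute; the per-link numbers below are illustrative only:

```python
# End-to-end delay across N links (N - 1 routers), identical links,
# negligible queuing: d_end_end = N * (d_proc + d_trans + d_prop).
N = 10                           # links between source and destination
d_proc = 20e-6                   # per-node processing delay, assumed
d_trans = 12_000 / 10_000_000    # L/R: 1500-byte packet on a 10 Mbps link
d_prop = 100_000 / 2e8           # 100 km per link at ~2e8 m/s

d_end_end = N * (d_proc + d_trans + d_prop)
print(f"d_end_end = {d_end_end * 1e3:.1f} ms")  # ~17.2 ms
```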

Traceroute is an essential tool for measuring these delays. It sends a series of packets toward a destination and reports the round-trip delay to each router along the path, showing each hop's contribution to the total. Because queuing delay fluctuates with congestion, measurements vary between packets; in the book's traceroute example, the measured delay to router 12 is actually shorter than the delay to router 11.

Additionally, end systems add delays of their own, such as deliberate transmission delays imposed by shared-medium protocols (e.g., Wi-Fi) and packetization delay in VoIP, both of which affect user experience. Understanding these factors is vital for optimizing network performance and ensuring efficient data transmission.

44 | Throughput in Computer Networks

Throughput refers to the rate at which data is transferred from one point to another over a given period, measured in bits per second. It is crucial because it determines how efficiently and quickly data can be transmitted across a network.

  • Bottleneck Link | The slowest link on a network path limits throughput: no matter how fast the other links are, end-to-end throughput equals the rate of the slowest link (see the sketch after this list).

  • Access Network Dominance | In today's Internet, access networks (such as home broadband links) are often the bottleneck, since their rates are low compared to the high-speed network core.

  • Concurrent Transfers and Shared Links | Multiple simultaneous data transfers can reduce each transfer's throughput because shared links must divide their capacity among all flows, leading to a bottleneck for individual transfers.

  • Additional Factors | Protocols like TCP/IP and processing delays also influence real-world throughput, though they are not exhaustively detailed here.
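
A minimal sketch of the bottleneck rule, with all link rates invented for illustration: end-to-end throughput is the minimum rate along the path, and a shared link divides its rate among concurrent flows.

```python
# Bottleneck throughput: a transfer is limited by the slowest link on its path.
def throughput(link_rates: list[float]) -> float:
    """End-to-end throughput is the minimum link rate along the path."""
    return min(link_rates)

server_side = 100e6   # Rs: server access link, 100 Mbps
core = 1e9            # high-speed core link, 1 Gbps
client_side = 25e6    # Rc: client access link, 25 Mbps

print(throughput([server_side, core, client_side]) / 1e6, "Mbps")  # 25.0

# With 10 flows sharing the core link equally, each flow sees core / 10
# (100 Mbps) on that link, but the client access link still limits the rate:
print(throughput([server_side, core / 10, client_side]) / 1e6, "Mbps")  # 25.0
```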

In essence, understanding the dynamics of these factors is essential for optimizing data transmission in complex networks.

5 | Protocol Layers and Their Service Models

The Internet's complexity, characterized by numerous components such as applications, protocols, end systems, packet switches, and various link media, might seem overwhelming. However, there is a structured approach to organizing network architecture despite this complexity. Layered architectures, such as the Internet's five-layer protocol stack (or the seven-layer OSI reference model), provide a clear framework for managing and understanding these components. Each layer performs specific functions, enabling the Internet to operate efficiently.

Moreover, the defined service models within each protocol layer contribute to the overall functionality of the network. These layers ensure that different parts of the system can communicate and operate effectively, even amidst the intricate web of interconnected systems. This organization allows for easier maintenance and enhancements, making it possible to manage the complexity inherent in modern networks.

51 | Layered Architecture

The Internet's functioning can be understood through its layered structure, each serving a distinct role in data transmission. Here's a breakdown of how these layers work together:

Physical Layer | This is responsible for transmitting raw bitstreams over physical media (e.g., cables or wireless signals). It handles the actual movement of electrical signals and electromagnetic waves.

Link Layer | Manages data exchange between adjacent nodes (devices connected by a link, such as Wi-Fi or Ethernet). It ensures reliable transmission of frames (data packets) across the link, handling error correction and flow control.

Network Layer | Routes datagrams (packets) through networks using routers. It determines the path for data from one network to another, enabling communication across the Internet's interconnected networks.

Transport Layer | Ensures efficient data transfer between systems. TCP is used for reliable transmission with features like error checking and congestion control, while UDP is a faster protocol without such guarantees, useful for applications like streaming video.

Application Layer | Originates and terminates data at user applications. This layer handles the actual data generation, request/response handling, and interpretation of data.

Interaction and Functionality

  • Data originates from the Application Layer, moves down through Transport, Network, and Link layers to the Physical Layer for transmission.
  • Upon reaching the destination, data travels back up through each layer in reverse order, ensuring it reaches the intended application.
  • Each layer handles specific functions without interfering with others, allowing efficient communication.

Examples

  • Sending an email | The Application Layer sends the message, Transport (TCP) ensures delivery, Network routes via routers, Link transfers over Wi-Fi, and Physical transmits bits.
  • Congestion control by TCP slows data transmission during network congestion, affecting only the transport layer without disrupting others.
  • Error handling occurs at each layer | physical deals with media issues, link corrects frame errors, and transport checks segment integrity.

This layered approach ensures that each part of the process is specialized, allowing the Internet to function efficiently and reliably.

52 | Encapsulation

Encapsulation is a fundamental process in computer networking that involves wrapping data with additional information or headers as it traverses through different layers of a network stack. This process is crucial for managing data transmission across complex networks, ensuring that each layer can handle its specific responsibilities without interfering with others.

Here's a detailed breakdown of the encapsulation process and its significance:

Layers and Data Transmission | Data starts at the application layer, where it is first processed. It then moves down through the transport, network, and link layers, each adding their own headers or metadata to the data before transmission.

Encapsulation Process

  • Application Layer | The original data (e.g., a message) is generated here.
  • Transport Layer | Adds transport headers, converting the message into a segment suitable for transmission across networks.
  • Network Layer | Further adds network headers, turning the segment into a datagram that includes routing information.
  • Link Layer | Adds link-layer framing to prepare the data for actual transmission over physical media.
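
A toy sketch of this process, with invented header formats: each layer prepends its own header to the payload handed down from the layer above, and the receiver strips them again in reverse order.

```python
# Toy encapsulation: each layer prepends its own header to the payload from
# the layer above. Real headers are binary and standardized; these strings
# are invented for illustration only.

def transport_encapsulate(message: bytes) -> bytes:
    return b"[TCP src=50000 dst=80]" + message        # message -> segment

def network_encapsulate(segment: bytes) -> bytes:
    return b"[IP src=198.51.100.7 dst=203.0.113.9]" + segment  # -> datagram

def link_encapsulate(datagram: bytes) -> bytes:
    return b"[ETH src=aa:.. dst=bb:..]" + datagram    # -> frame

frame = link_encapsulate(network_encapsulate(transport_encapsulate(b"hello")))
print(frame)
# The receiver de-encapsulates in reverse: the link layer strips its header
# and hands the datagram up, the network layer strips its header, and so on,
# until the application receives the original message.
```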

Reception and De-encapsulation | Upon arrival at the destination system, the reverse process of de-encapsulation occurs. Each layer removes its own headers and passes the cleaned data back up the stack until it reaches the application layer.

Importance of Encapsulation

  • Organization and Management | Each layer only deals with its specific portion of the data, reducing complexity and preventing information overload at any single layer.
  • Error Handling and Security | Layers implement their own error detection and correction mechanisms. Additionally, encryption can be applied at various layers to ensure secure communication.

Handling Large Data | When data is too large for a single transmission, it can be divided into multiple segments or datagrams. These are reassembled at the receiver, ensuring that the integrity of the original data is maintained.

Routers and Switches | These network devices focus on specific lower layers (routers up to layer 3, switches handle layers 1 and 2), reflecting their primary functions of routing and switching packets efficiently.

Performance Considerations | The addition of headers at each layer increases data size, leading to transmission overhead. However, this overhead is necessary for proper network functionality and can be managed effectively with efficient protocols.

In summary, encapsulation is essential for effective data communication across networks by structuring data appropriately for transmission through multiple layers, ensuring accurate and efficient routing, and providing mechanisms for error handling and security.

6 | Networks Under Attack

Malware Threats

  • Risk | Malware can spread through malicious links or files, compromising data integrity and accessibility.
  • Mitigation | Exercise caution with downloads, avoid suspicious attachments, and maintain regular backups.

DDoS Attacks

  • Risk | Overwhelms servers or network infrastructure with bogus traffic, causing downtime and loss of service.
  • Mitigation | Implement protections like content delivery networks or traffic rate limiting to handle spikes efficiently.

Packet Sniffing

  • Risk | Invades privacy by capturing and analyzing network packets.
  • Mitigation | Use encryption and secure protocols (e.g., HTTPS) to protect data in transit, ensuring captured data is unreadable without decryption keys.

IP Spoofing

  • Risk | Malicious packets impersonate trusted sources.
  • Mitigation | Verify IP addresses and use end-point authentication to ensure traffic originates from trusted sources.

Man-in-the-Middle Attacks

  • Risk | Intercepts and possibly alters communication while posing as each party to the other.
  • Mitigation | Employ strong authentication methods like certificates or biometrics to confirm each party's identity.

Original Internet Design

  • Challenge | The Internet was built without inherent security, assuming trust among users.
  • Mitigation | Retroactive security measures are necessary to address modern threats and ensure secure communication.

Conclusion | Addressing these challenges requires a multi-layered approach: caution with downloads, encryption for data protection, verification of sources, strong authentication, and robust network defenses. Understanding these issues is crucial for enhancing network security.