An Introduction to Computer Networks
    • [li]The advantages of the layered network abstraction.
      - Break a complex task of communication into smaller pieces.
      - Lower layers hide the implementation details from higher layers.
      - Lower layers can change implementation without affecting upper layers as long as the interface between layers remains the same.

      [li]A consequence of layering is that a layer at the source host communicates with its peer layer at the destination, without concerning itself with the implementation details of the layers above and below.
      - The socket API provides computer applications with a reusable means to communicate with remote applications.

      [li]In the Internet model, frames at one layer are encapsulated by the next layer down.
      - A frame at one layer consists of a data field - consisting solely of the entire frame from the layer above - and a header portion describing the data field and how it is to be demultiplexed at the other end.
      - In the inbound direction, the data portion of a frame is passed to the layer above, and typically carries information to identify which service should receive the frame.

      [li]When a packet is received by a layer-2 Ethernet switch, the switch makes a forwarding decision based on the destination address in the Ethernet header.

      [li]IP service model.
      - The Internet protocol is an example of a network-layer protocol, and is required for all communications in the Internet.
      - There are currently two main versions of the IP protocol used in the Internet: IP Version 4, and IP Version 6.
      - The Internet protocol is responsible for delivering self-contained datagrams from a source host to the specified destination.
      - IP makes no promise to deliver packets in order, or at all.

      [li]IP service model: IP datagrams contain a time-to-live (TTL) field so that if a packet is routed in a perpetual loop, it will eventually be dropped.

      [li]In an alternative to the Internet Protocol called "ATM", proposed in the 1990s, the source and destination addresses are replaced by a unique "flow" identifier that is dynamically created for each new end-to-end communication. Before the communication starts, the flow identifier is added to each router along the path, to indicate the next hop, and then removed when the communication is over. The consequences:
      - There is state in the network for every communication flow, rather than just for every destination.
      - It does not remove the need for a transport layer: packets can still be dropped or corrupted along the path, so reliable, in-order delivery must still be provided end to end.

      [li]An Internet router is allowed to drop packets when it has insufficient resources -- this is the idea of "best effort" service. There can also be cases when resources are available (e.g., link capacity) but the router drops the packet anyway. Examples of scenarios where a router drops a packet even when it has sufficient resources:
      - A router configured as a firewall, that dictates which packets should be denied.
      - An ISP that limits bandwidth consumed by customers, even though there is available capacity.

      [li]TCP is responsible for providing reliable, in-sequence end-to-end delivery of data between applications. Direct consequences:
      - TCP will retransmit missing data even if the application cannot use it - for example, in Internet telephony a late-arriving retransmission may arrive too late to be useful.
      - TCP saves an application from having to implement its own mechanisms to retransmit missing data, or resequence arriving data.

      [li]TCP service model.
      - TCP delivers a stream of bytes from one end to the other, reliably and in-sequence, on behalf of an application.
      - When a TCP packet arrives at the destination, the data portion is delivered to the service (or application) identified by the destination port number.

      [li]Before communication begins, TCP establishes a new connection using a three-way handshake. This is because:
      - TCP end points maintain state about communications in both directions, and the handshake allows the state to be created and initialized.
      - TCP establishes a stream of bytes in both directions, and the three-way handshake allows both streams to be established and acknowledged.

      [li]UDP is a much simpler protocol compared to TCP. It is connectionless and it does not provide reliable transmission. Consequences:
      - Short request-response transfer, such as DNS, would prefer to use UDP to avoid TCP overheads, such as three-way handshake.
      - UDP is suitable for applications that don't necessarily need a reliable transport (e.g., voice-over-IP, online games)
      - UDP is often used for broadcast communication because it does not require per-recipient state (such as acknowledgement numbers).

      [li]ICMP service model:
      - ICMP messages are typically used to diagnose network problems.
      - Some routers process ICMP messages at lower priority than other packets, and may rate-limit them.
      - ICMP messages can be maliciously used to scan a network and identify network devices.

      [li]"ping" program:
      - ping can be used to measure end-to-end delay.
      - ping can be used to test if a machine is alive.
      - ping can be maliciously used as a way to attack a machine by flooding it with ping requests.
      - ping sends out ICMP ECHO_REQUEST message to the destination.

      [li]"traceroute" program:
      - traceroute can be used to figure out network topology.
      - traceroute works by increasing the TTL values in each successive packet it sends.
      - traceroute can be used to identify incorrect routing tables.

      [li]ICMP is useful in diagnosing network problems. Problems it cannot help diagnose:
      - Test if a web server is sending correct responses to requests.
      - Know the exact link utilization between two routers to see if it is overloaded with packets.

      [li]You own a large gaming company and want to distribute an update to all of your players. The size of the update is 1GB and your server can send data at up to 1GB/s. Your engineers have found that assuming every player can download at 1MB/s leads to very accurate estimates of network performance. Furthermore, they've found you can assume that the server splits its capacity evenly across clients. You have 100,000 players.
      One of your engineers recommends that you distribute the patch by having all users download the full file from your company's server. You walk through this calculation with the engineer and determine it will take 100,000 seconds (~28 hours) for all of your players to download the update. The server can support 1,000 players downloading the update at once by splitting its 1GB/s across 1,000 1MB/s clients. It will take 100 rounds of 1,000 players for everyone (all 100,000) to receive the update. As it takes 1,000 seconds for a 1MB/s connection to download 1GB, each round will take 1,000 seconds. 100 rounds of 1,000 seconds is 100,000 seconds, or ~28 hours.
      Another engineer recommends a different, peer-to-peer, strategy. In this strategy, players who have downloaded the patch allow other players to download it from them. So in the first round, 1,000 players download the patch from the server. In the second round, 1,000 new players download the patch from the server, and 1,000 new players download the patch from the players who have already downloaded it. In the third round, 1,000 new players download the patch from the server, and 3,000 new players download it from players who already have it. The number of players with the patch roughly doubles each round, so the last player receives the update after seven rounds (~7,000 seconds), making the second strategy about fourteen times faster than the first.

      [li]Skype's Rendezvous Service: Allows users not behind a NAT to call users behind a NAT.

      [li]Bittorrent's Tit-for-tat algorithm: Gives download preference to peers that give data to you.

      [li]Perform a traceroute using the web service here. How many hops does it take to reach the server? 1 hop.

      [li]You perform a traceroute from a computer in San Francisco to a computer located in New York and get the following, fictional output:
      1 (  0.363 ms  0.427 ms  0.458 ms
      2 (  0.433 ms  0.439 ms  0.491 ms
      3 (  90.742 ms  90.641 ms  90.683 ms
      4 (  91.535 ms  91.509 ms  91.556 ms
      Your friend, Kristen, reads the output and says that the (geographic) distance between the routers at hops 2 and 3 *must* be very large because there was a long delay between them.
      You perform the traceroute again, this time from a different source in San Francisco and to a different destination in New York, this time getting the following output:
      1 (  0.632 ms  0.627 ms  0.658 ms
      2 (  0.743 ms  0.739 ms  0.791 ms
      3 (  92.742 ms  92.641 ms  92.683 ms
      4 (  193.335 ms  193.109 ms  193.001 ms
      5 (  195.201 ms  195.109 ms  195.813 ms
      Reading this output, your friend, Ewen, says that the routers at hops 3 and 4 *must* also be geographically distant.
      Kristen is right and Ewen is wrong. Information cannot travel faster than the speed of light. Traveling at the speed of light, a packet would need at least ~15ms to get from one end of the country to the other. The only hop in the first traceroute to account for this time is the one Kristen identified. Ewen is wrong because he cannot be sure which of the two hops (or whether both hops) are geographically distant.

      [li]A router is not limited to connecting to at most two other routers; it may connect to many.

      [li]A default route in a routing table specifies where to forward packets whose destination address matches no other rule in the routing table.

      [li]FIN packets are not sent by endpoints to set up a TCP connection; they are used to close one. Connection setup uses SYN packets.

      [li]The link layer sits between the physical and network layers.

      [li]Layering allows engineers to deal with complexity by separating problems into chunks with well-defined interfaces between them.