7.1. Introduction to Network I/O
Network traffic in Linux and every other major operating system is abstracted as a series of hardware and software layers. The link, or lowest, layer contains network hardware such as Ethernet devices. When moving network traffic, this layer does not distinguish types of traffic but just transmits and receives data (or frames) as fast as possible.
Stacked above the link layer is a network layer. This layer uses the Internet Protocol (IP) and Internet Control Message Protocol (ICMP) to address and route packets of data from machine to machine. IP/ICMP make their best-effort attempt to pass the packets between machines, but they make no guarantees about whether a packet actually arrives at its destination.
Stacked above the network layer is the transport layer, which defines the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). TCP is a reliable protocol that guarantees that a message is either delivered over the network or generates an error if the message is not delivered. TCP's sibling protocol, UDP, is an unreliable protocol that deliberately (to achieve the highest data rates) does not guarantee message delivery. UDP and TCP add the concept of a "service" to IP. UDP and TCP receive messages on numbered "ports." By convention, each type of network service is assigned a different number. For example, Hypertext Transfer Protocol (HTTP) is typically port 80, Secure Shell (SSH) is typically port 22, and File Transfer Protocol (FTP) is typically port 21. In a Linux system, the file /etc/services lists these well-known ports and the services assigned to them.
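To see how these service-to-port mappings look in practice, the following short Python sketch (an illustration only; it assumes a Linux host with a standard /etc/services file) looks up a few well-known ports through the system's service database:

    import socket

    # getservbyname() consults the system service database, which is backed
    # by /etc/services on Linux.
    for service, proto in [("http", "tcp"), ("ssh", "tcp"), ("ftp", "tcp")]:
        port = socket.getservbyname(service, proto)
        print(f"{service}/{proto} -> port {port}")

    # The reverse lookup maps a port number back to its service name.
    print(socket.getservbyport(80, "tcp"))    # typically prints 'http'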
The final layer is the application layer. It includes all the different applications that use the layers below to transmit packets over the network. These include applications such as Web servers, SSH clients, or even peer-to-peer (P2P) file-sharing clients such as BitTorrent.
The lowest three layers (link, network, and transport) are implemented or controlled within the Linux kernel. The kernel provides statistics about how each layer is performing, including information about the bandwidth usage and error count as data flows through each of the layers. The tools covered in this chapter enable you to extract and view those statistics.
7.1.1. Network Traffic in the Link Layer
At the lowest levels of the network stack, Linux can detect the rate at which data traffic is flowing through the link layer. The link layer, which is typically Ethernet, sends information into the network as a series of frames. Even though the layers above may have pieces of information much larger than the frame size, the link layer breaks everything up into frames to send it over the network. The maximum size of the data in a frame is known as the maximum transmission unit (MTU). You can use network configuration tools such as ip or ifconfig to set the MTU. For Ethernet, the MTU is commonly 1,500 bytes, although some hardware supports jumbo frames of up to 9,000 bytes. The size of the MTU has a direct impact on the efficiency of the network. Each frame in the link layer has a small header, so using a large MTU increases the ratio of user data to overhead (header). When using a large MTU, however, each frame of data has a higher chance of being corrupted or dropped. For clean physical links, a high MTU usually leads to better performance because it requires less overhead; for noisy links, however, a smaller MTU may actually enhance performance because less data has to be re-sent when a single frame is corrupted.
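As a quick illustration, the following Python sketch (assuming a Linux system and an interface named eth0; substitute your own device name) reads the current MTU that the kernel reports for a device. The same value can be viewed or changed with ip or ifconfig.

    def read_mtu(interface: str) -> int:
        # Each network device exposes its MTU at /sys/class/net/<device>/mtu.
        with open(f"/sys/class/net/{interface}/mtu") as f:
            return int(f.read().strip())

    if __name__ == "__main__":
        print("eth0 MTU:", read_mtu("eth0"))   # eth0 is an assumed device name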
At the physical layer, frames flow over the physical network; the Linux kernel collects a number of different statistics about the number and types of frames:
Transmitted/received: If the frame successfully flowed in to or out of the machine, it is counted as a transmitted or received frame.
Errors: Frames with errors (possibly because of a bad network cable or a duplex mismatch).
Dropped: Frames that were discarded (most likely because of low amounts of memory or buffers).
Overruns: Frames that may have been discarded by the network card because the kernel or the network card was overwhelmed with frames. This should not normally happen.
Frame: Frames that were dropped as a result of problems at the physical level, such as cyclic redundancy check (CRC) errors or other low-level problems.
Multicast: Frames that are not directly addressed to the current system, but rather have been broadcast to a series of nodes simultaneously.
Compressed: Some lower-level interfaces, such as Point-to-Point Protocol (PPP) or Serial Line Internet Protocol (SLIP) devices, compress frames before they are sent over the network. This value indicates the number of these compressed frames.
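These counters are exported by the kernel in /proc/net/dev, which is where many of the tools in this chapter read them. The following Python sketch (an illustration; the column names follow the header the kernel prints in that file) extracts the per-device receive and transmit counters:

    def read_dev_stats():
        # Column names as they appear in the /proc/net/dev header.
        rx_fields = ["bytes", "packets", "errs", "drop", "fifo",
                     "frame", "compressed", "multicast"]
        tx_fields = ["bytes", "packets", "errs", "drop", "fifo",
                     "colls", "carrier", "compressed"]
        stats = {}
        with open("/proc/net/dev") as f:
            for line in f.readlines()[2:]:          # skip the two header lines
                name, counters = line.split(":", 1)
                values = [int(v) for v in counters.split()]
                stats[name.strip()] = {
                    "rx": dict(zip(rx_fields, values[:8])),
                    "tx": dict(zip(tx_fields, values[8:])),
                }
        return stats

    if __name__ == "__main__":
        for device, counts in read_dev_stats().items():
            print(device, "rx errors:", counts["rx"]["errs"],
                  "rx dropped:", counts["rx"]["drop"])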
Several of the Linux network performance tools can display the number of frames of each type that have passed through each network device. These tools often require a device name, so it is important to know how Linux names its network devices. Ethernet devices are named ethN, where eth0 is the first device, eth1 is the second device, and so on. PPP devices are named pppN in the same manner. The loopback device, which is used to network with the local machine, is named lo.
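The following short Python sketch (assuming a Linux system with sysfs mounted) lists the device names the kernel has registered, which is where names such as eth0, ppp0, and lo appear:

    import os

    # Each registered network device appears as a directory under /sys/class/net.
    for device in sorted(os.listdir("/sys/class/net")):
        print(device)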
When investigating a performance problem, it is crucial to know the maximum speed that the underlying physical layer can support. For example, Ethernet devices commonly support multiple speeds, such as 10Mbps, 100Mbps, or even 1,000Mbps. The underlying Ethernet cards and infrastructure (switches) must be capable of handling the required speed. Although most cards can autodetect the highest supported speed and configure themselves appropriately, if a card or switch is misconfigured, performance will suffer. If the higher speed cannot be used, the Ethernet devices often negotiate down to a slower speed, but they continue to function. If network performance is much slower than expected, it is best to verify with tools such as ethtool and mii-tool that the Ethernet speeds are set to what you expect.
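ethtool and mii-tool are the right tools for inspecting and changing these settings; as a rough cross-check, the following Python sketch (assuming a wired device named eth0) reads the negotiated speed, duplex, and link state that the kernel exposes through sysfs. Reading these files may fail for devices without a physical link, such as lo.

    def link_status(interface: str) -> dict:
        # speed is reported in Mbps; duplex is "full" or "half";
        # operstate is "up", "down", and so on.
        status = {}
        for attr in ("speed", "duplex", "operstate"):
            try:
                with open(f"/sys/class/net/{interface}/{attr}") as f:
                    status[attr] = f.read().strip()
            except OSError:
                status[attr] = "unknown"   # no physical link or unsupported
        return status

    if __name__ == "__main__":
        print(link_status("eth0"))         # eth0 is an assumed device name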
7.1.2. Protocol-Level Network Traffic
For TCP or UDP traffic, Linux uses the socket/port abstraction to connect two machines. When connecting to a remote machine, the local application opens a network socket to a port on the remote machine. As mentioned previously, most common network services have an agreed-upon port number, so a given application can connect to the correct port on the remote machine. For example, port 80 is commonly used for HTTP. When loading a Web page, a browser connects to port 80 on the remote machine. The Web server of the remote machine listens for connections on port 80, and when a connection occurs, the Web server sets up the connection for transfer of the Web page.
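The following Python sketch illustrates this exchange from the client side (www.example.com is a placeholder host; any reachable Web server listening on port 80 would do): it opens a TCP socket to port 80 and sends a minimal HTTP request, much as a browser does.

    import socket

    HOST, PORT = "www.example.com", 80   # placeholder host; HTTP's well-known port

    # Open a TCP connection to the remote machine's port 80, issue a bare
    # HTTP/1.0 request, and print the first line of the reply (the status line).
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        response = sock.recv(4096)
        print(response.decode(errors="replace").splitlines()[0])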
The Linux network performance tools can track the amount of data that flows over a particular network port. Because port numbers are unique for each service, it is possible to determine the amount of network traffic flowing to a particular service.
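As a simple illustration of per-port accounting (not one of the tools covered later), the following Python sketch counts established IPv4 TCP connections per local port by reading /proc/net/tcp, the same kernel table that netstat reads; IPv6 sockets appear in /proc/net/tcp6.

    from collections import Counter

    ESTABLISHED = "01"                      # TCP state code used by the kernel

    def connections_per_port():
        counts = Counter()
        with open("/proc/net/tcp") as f:
            for line in f.readlines()[1:]:  # skip the header line
                fields = line.split()
                local_addr, state = fields[1], fields[3]
                if state == ESTABLISHED:
                    port = int(local_addr.split(":")[1], 16)  # port is in hex
                    counts[port] += 1
        return counts

    if __name__ == "__main__":
        for port, count in connections_per_port().most_common():
            print(f"port {port}: {count} connection(s)")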