
# Lecture 23: Networking & OS interaction

Notes by Abby Lyons

## Livelock

Here is a drawing of the machine:

```
          CPU
           |
    Cache hierarchy
           |
        Bus (PCI)
      /    |     \
Primary   Disk   Network
memory           device
```


Let's examine the data movement at the lowest level of this machine:

• 4-32 KB moves between disk and primary memory
• 64-128 B moves between primary memory and cache
• 64-1500 B moves between the network device and primary memory

And the interrupts:

• Disk interrupts: one for read complete, one for write complete. That is one interrupt per 32 KB of data.
• Network interrupts: one for packet arrival, one for completing transmission. One interrupt per packet means 100-1000× more interrupts than the disk for the same amount of data. Networks can send 100 gigabits/second, so with minimum-size packets this works out to on the order of hundreds of millions of interrupts per second. This is bad.
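The interrupt-rate claim is back-of-the-envelope arithmetic: at one interrupt per packet, the interrupt rate is just the line rate divided by the packet size. A quick sketch (the numbers are illustrative, not measured):

```python
# Back-of-the-envelope interrupt rates on a 100 Gbit/s link,
# assuming one interrupt per packet.
LINK_BPS = 100e9  # 100 gigabits per second

def interrupts_per_second(packet_bytes: int) -> float:
    """One interrupt per packet => interrupt rate = line rate / packet size."""
    return LINK_BPS / (packet_bytes * 8)

print(f"{interrupts_per_second(1500):.2e}")  # ~8.3e6, full-size packets
print(f"{interrupts_per_second(64):.2e}")    # ~2.0e8, minimum-size packets
```

So even with full-size 1500-byte packets you see millions of interrupts per second, and minimum-size packets push it to hundreds of millions.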

Livelock: throughput drops to zero because the system is overwhelmed by new work. For networks, this means that as the input packet rate increases, the output packet rate approaches zero. It happens because new packets are handled immediately, even while old packets are still waiting. As a result, if packets arrive quickly enough, the network device keeps interrupting the CPU before any useful work can finish.
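The dynamic is easy to see in a toy model (all constants here are made up for illustration): give the CPU a fixed budget per tick, let interrupt handlers run first, and let only the leftover budget deliver packets.

```python
# Toy model of receive livelock: each tick the CPU has a fixed budget.
# Every arriving packet's interrupt handler runs first (new work is
# privileged); only leftover budget drains the queue of old packets.
def simulate(arrival_rate: int, ticks: int = 100,
             cpu_per_tick: int = 100, irq_cost: int = 2, proc_cost: int = 5):
    queue, delivered = 0, 0
    for _ in range(ticks):
        budget = cpu_per_tick
        # Interrupts preempt everything: pay irq_cost per arrival.
        handled = min(arrival_rate, budget // irq_cost)
        budget -= handled * irq_cost
        queue += handled  # received packets are enqueued, not yet delivered
        # Whatever budget remains goes to actually delivering packets.
        done = min(queue, budget // proc_cost)
        queue -= done
        delivered += done
    return delivered / ticks  # average output rate, packets per tick

print(simulate(arrival_rate=5))   # light load: output keeps up (5.0)
print(simulate(arrival_rate=50))  # overload: interrupts eat all CPU (0.0)
```

At an arrival rate of 50, interrupt handling consumes the entire budget every tick, so nothing is ever delivered even though the machine is 100% busy.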

Where do all these packets come from? A quick TCP refresher:

1. Establish a connection
• Client sends SYN packet.
• Server sends SYN ACK in response.
• Client sends ACK. Connection is now established.
2. Talk to the server
• Client sends ACK. The acknowledgment number means "I have heard all previous pieces in this communication".
• Server sends ACKs so the client knows data is being received. (Not one per packet, but O(number of packets).) This results in a lot of small (64-byte) packets.
• Rinse and repeat.
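The handshake above happens inside the kernel; from user space you only see `connect()` and `accept()` return once it completes. A minimal sketch with Python sockets on localhost (port 0 just asks the OS for any free port):

```python
import socket
import threading

# The three-way handshake is done by the kernel: by the time accept()
# returns on the server and connect() returns on the client, SYN,
# SYN-ACK, and ACK have already been exchanged.
def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()    # handshake completes here
    conn.sendall(conn.recv(1024))  # echo; each data segment gets ACKed
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))  # port 0: any free port
t = threading.Thread(target=serve, args=(listener,))
t.start()

client = socket.create_connection(listener.getsockname())  # sends SYN
client.sendall(b"hello")          # data segment, ACKed by the server
reply = client.recv(1024)
client.close()
t.join()
listener.close()
print(reply)  # b'hello'
```

Every `sendall` here generates data segments and the corresponding ACKs from the peer, which is exactly the flood of small packets the notes describe.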

### Solutions to livelock

Anyway, back to the livelock badness. Let's come up with some solutions:

1. Polling. On every timer interrupt, process every network packet that is available. This still results in livelock eventually, because we are still privileging new work over existing work.
2. Polling, but with a limit on the number of packets processed per timer interrupt (the example given was 5). This works: old work is now guaranteed a share of the CPU.
3. Batched interrupts: the device raises one interrupt for a batch of packets instead of one per packet. With this, we don't need to do polling anymore.
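Solution 2 can be sketched in a few lines (the names `BUDGET` and `timer_tick` are made up for illustration; this is also the flavor of what Linux's NAPI does, with a per-poll budget):

```python
# Polling with a budget: on each timer tick, drain the device's receive
# ring, but process at most BUDGET packets so old work still gets CPU.
from collections import deque

BUDGET = 5

def timer_tick(rx_ring: deque, deliver) -> int:
    """Process at most BUDGET packets from the ring; leftovers wait."""
    processed = 0
    while rx_ring and processed < BUDGET:
        deliver(rx_ring.popleft())
        processed += 1
    return processed

ring = deque(range(12))  # 12 packets queued on the NIC
out = []
while ring:
    timer_tick(ring, out.append)
print(out)  # all 12 delivered: 5 + 5 + 2 across three ticks
```

The cap is the whole trick: no matter how fast packets arrive, each tick spends a bounded amount of time on receive work, so delivery never starves.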

This paper described the problem precisely and showed how to solve it.

Jeffrey C. Mogul and K. K. Ramakrishnan. “Eliminating Receive Livelock in an Interrupt-driven Kernel.” In Proc. USENIX ATC 1996.

> Most operating systems use interface interrupts to schedule network tasks. Interrupt-driven systems can provide low overhead and good latency at low offered load, but degrade significantly at higher arrival rates unless care is taken to prevent several pathologies. These are various forms of receive livelock, in which the system spends all its time processing interrupts, to the exclusion of other necessary tasks. Under extreme conditions, no packets are delivered to the user application or the output of the system.
>
> To avoid livelock and related problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution. We modified an interrupt-driven networking implementation to do so; this eliminates receive livelock without degrading other aspects of system performance. We present measurements demonstrating the success of our approach.

## Direct packet delivery

Fast forward to 2008-ish. We have more CPUs and faster network cards, which bring a new bottleneck: the network card has a single communication channel, and every core must synchronize on it. What to do?

1. Delegate a core to doing networking. This is really bad for the cache: every packet's data must then migrate from the networking core's cache to the cache of whichever core runs the application.

2. Network devices with multiple transmit and receive queues. Packets are split roughly evenly among receive queues. Similarly, one transmit queue per core means no synchronization is necessary among cores.

A whole set of interrelated networking device features enables this, including Receive Side Scaling (RSS). See the Linux kernel documentation on scalable networking.
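The core idea of RSS is just hashing: the NIC hashes each packet's flow tuple and uses the result to pick a receive queue, so every packet of a flow lands on the same queue (and hence the same core). Real hardware uses a Toeplitz hash plus an indirection table; this toy sketch substitutes Python's built-in `hash` to show the mechanism:

```python
# Toy sketch of Receive Side Scaling (RSS): hash the flow's 4-tuple,
# take it mod the number of queues. Same flow -> same hash -> same
# queue -> same core, so cores never contend for one flow's packets.
NUM_QUEUES = 4

def rss_queue(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    flow = (src_ip, src_port, dst_ip, dst_port)
    return hash(flow) % NUM_QUEUES

q1 = rss_queue("10.0.0.1", 12345, "10.0.0.2", 80)
q2 = rss_queue("10.0.0.1", 12345, "10.0.0.2", 80)
assert q1 == q2  # packets of one flow always land on one queue
```

Keeping a flow on one core is what makes per-core receive queues safe: TCP's per-connection state is only ever touched from one core, with no locking between cores on the fast path.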