Server Optimization at Scale: Eliminating Packet Drops and Improving Capacity


Improving the performance of our technology for the benefit of our customers and their audiences is an ongoing effort at Verizon Media, now Edgio. For example, over the past two years, our performance and kernel engineers have eliminated virtually all packet drops (over 98% removed), improved performance health checks on our edge servers by 50%, and increased server capacity by up to 40%.

We’ve coupled these tunings with network automation and organic network expansion (currently over 250 Tbps) to improve the user experience. Careful performance tuning has played a major role in our ability to support rapidly changing and sometimes unpredictable network surges, whether we are delivering software updates for millions of game consoles, streaming live video for major sporting events, or absorbing load shifted to our network by multi-CDN load balancers.

Maintaining quality at scale involves optimizing performance across every part of the Edgio Media Platform tech stack: from its lower layers, at the CPU and NIC, up to the OS and the applications. Ultimately, our goal is always the same: great performance. To get there, we perform data-driven analysis, relying on measurable performance changes to validate our decision-making.

CPU cache optimizations

We run 20,000 servers worldwide, largely on Broadwell and Haswell CPUs, typically with 32 to 40 cores, and we added 12,000 of those servers in the last three years alone. However, most servers are not optimized for our workloads out of the box: simply adding more servers doesn’t make for more efficient operation and can create additional challenges. Effective scaling requires careful optimization of existing components. Tuning one server so that it can process two or three times (or more) as many requests as it can with the default configuration makes a powerful difference to the network’s overall performance.

The switch from early snoop to home snoop

Modern CPUs employ a snoop protocol to guarantee that the local CPU cache is consistent with memory. This lets caches listen for modifications to variables on any CPU and update their versions of these variables accordingly. Not surprisingly, the particular technique used can significantly impact memory performance.

By default, our hardware vendors use a protocol called Early Snoop. It resolves cache coherency with lower latency because all cores can make coherency requests and send out broadcasts simultaneously. We found that our systems generate heavy amounts of simultaneous disk and NIC activity during peak load. This activity produces a high volume of snoop broadcasts, leading to communication bottlenecks that slow IO devices and can eventually halt processing entirely.

By switching to Home Snoop mode, an approach that coalesces snoop requests, we saw a significant reduction in broadcast traffic. The processor’s QuickPath Interconnect (QPI) is no longer starved during periods of simultaneously heavy disk and network IO; furthermore, the packet drops we had seen under Early Snoop were significantly reduced in number.

Changing the snoop protocol is simply a matter of changing a BIOS setting; rebooting 20,000 servers without disrupting customers, however, requires automation. We can make this kind of large-scale deployment change in production partly thanks to Crayfish, our StackStorm-based IT automation platform.
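
To give a feel for how such a rollout can be sequenced, here is a minimal Python sketch of a batched rolling change gated by health checks. This is not Crayfish code: the helper functions (drain, set_bios_snoop_mode, reboot_and_wait, healthy) and the batch size are hypothetical stand-ins for the real StackStorm actions.

```python
import time

BATCH_SIZE = 50  # hypothetical: change only a small slice of the fleet at a time

def drain(host: str) -> None:
    """Hypothetical: remove the host from load balancing so traffic is unaffected."""

def set_bios_snoop_mode(host: str, mode: str) -> None:
    """Hypothetical: flip the BIOS snoop setting via the vendor's out-of-band API."""

def reboot_and_wait(host: str) -> None:
    """Hypothetical: reboot the host and block until it is reachable again."""

def healthy(host: str) -> bool:
    """Hypothetical: run the standard performance health check."""
    return True

def rolling_change(fleet: list[str], mode: str = "home-snoop") -> None:
    for i in range(0, len(fleet), BATCH_SIZE):
        batch = fleet[i:i + BATCH_SIZE]
        for host in batch:
            drain(host)
            set_bios_snoop_mode(host, mode)
            reboot_and_wait(host)
        # Halt the rollout rather than continue past a failing batch.
        if not all(healthy(h) for h in batch):
            raise RuntimeError(f"rollout halted at batch starting with {batch[0]}")
        time.sleep(60)  # let the batch settle before moving on
```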

An unexpected failover event

While we were testing the switch to Home Snoop, a failover occurred: one of our largest media customers, which runs a multi-CDN deployment, experienced a problem with another vendor and moved a significant portion of its traffic to our CDN. This provided an opportunity to test the Home Snoop improvements at scale, and they proved extremely effective.

The figure above shows the effect of the change. The group still using Early Snoop saw a 13.75x increase in drops (55K packet drops per server per day), while the group that had switched to Home Snoop saw an increase of only 4.23x (27K packet drops per server per day). Home Snoop immediately proved its value during the failover event.

Network interface optimization and driver tunings

Another important set of performance tunings involved the network interface and its driver. Here, we focused on reducing the packet drops that typically occur under burst traffic. During large events, inbound traffic was so heavy that the NIC could not keep up, and we saw packet drops sooner than expected. As we dug into why, we found several parameters on the NIC itself that needed adjusting, including the number of queues, the queue size, and interrupt scheduling. To optimize these settings for our particular workload and hardware configuration, we concentrated on tuning Receive Side Scaling (RSS): making the inbound queues longer, reducing their overall number, and balancing the interrupts across NUMA nodes.
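
The sketch below illustrates this kind of tuning using standard Linux interfaces: ethtool -G for ring (queue) size, ethtool -L for the number of combined channels, and /proc/irq/*/smp_affinity_list for interrupt placement. The interface name, ring size, channel count, IRQ numbers, and NUMA layout are illustrative assumptions, not our production values.

```python
import subprocess

DEV = "eth0"  # illustrative interface name

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

# Lengthen the inbound (RX) ring so bursts are buffered rather than dropped.
run(["ethtool", "-G", DEV, "rx", "4096"])

# Reduce the overall number of combined RX/TX queues (RSS channels).
run(["ethtool", "-L", DEV, "combined", "8"])

# Pin each NIC IRQ to a CPU, alternating across NUMA nodes. Assumes the IRQ
# numbers were read from /proc/interrupts and that the CPU-to-node mapping
# below matches the actual hardware topology (both are illustrative).
nic_irqs = [45, 46, 47, 48]
cpus_by_numa = [[0, 2, 4, 6], [1, 3, 5, 7]]
for i, irq in enumerate(nic_irqs):
    cpu = cpus_by_numa[i % 2][i // 2]
    with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
        f.write(str(cpu))
```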

The graph above shows a test we ran in North America, in which each PoP was divided into a control (untuned) group and a test (tuned) group, with drops summed daily over one week. Following the tunings, our test group saw approximately 95% fewer packet drops than the control group, allowing significantly more requests to be processed. This also means less manual intervention is needed to manage the network’s health during surges, leaving our engineers free to focus on other areas.

CPU scheduling tunings

While the NIC and driver tunings concentrated on improving the total capacity we can deliver, the CPU scheduling tunings focused on how consistently we can deliver content.

Without these tunings, inbound and outbound messages have to compete for resources. When we investigated the root cause, we found that this contention stemmed from how the kernel was scheduling the handling of these messages: load wasn’t migrated away during peak traffic until the CPUs in question were already saturated. To fix this, we set the CPU affinity of our web server processes to exclude the CPUs dedicated to processing incoming network traffic.
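
On Linux, that exclusion can be expressed as a CPU affinity mask; the sketch below uses Python’s os.sched_setaffinity purely as an illustration. The CPU IDs reserved for network interrupts are an assumption and would in practice come from the IRQ placement described earlier.

```python
import os

NIC_CPUS = {0, 1, 2, 3}  # assumed: CPUs pinned to NIC interrupt handling
ALL_CPUS = set(range(os.cpu_count() or 1))
WORKER_CPUS = ALL_CPUS - NIC_CPUS

# Restrict this process (e.g., a web server worker) to the remaining CPUs.
# PID 0 means "the calling process"; children inherit the mask on fork.
os.sched_setaffinity(0, WORKER_CPUS)
```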

The graphs above show the impact of enabling the CPU scheduling tunings globally across the CDN on March 21–22. We assess the impact based on the 95th percentile and median values of a performance health check, a composite metric reflecting a server’s relative response time. As expected, low-traffic valleys were not significantly reduced; however, the peaks reveal significantly reduced contention between incoming and outgoing traffic. This translates to a major improvement in both the outliers and the medians, particularly during peak loads. We can now better handle surges in traffic and iron out problems tied to high outlier behavior, such as rebuffering in video streams, improving the overall responsiveness of a server for all users.

Kernel performance updates

Optimizing the upper layers of our tech stack is just as important as tuning the lower layers. As part of a recent OS upgrade, we also upgraded our Linux kernels to take advantage of upstream engineering work from the Linux kernel community. The new kernel incorporated roughly four years of development beyond the previously deployed version, including improvements to the memory management system that reduce blocking page allocations and improve load distribution and performance when using the epoll API and socket sharding.
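
Socket sharding here means each worker process binds its own listening socket to the same port with SO_REUSEPORT, and the kernel spreads new connections across those sockets. The sketch below is a generic, Linux-only illustration of that pattern, not our production server; the port, worker count, and toy HTTP response are assumptions.

```python
import os
import socket

PORT = 8080   # illustrative
WORKERS = 4   # illustrative

def serve() -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEPORT lets every worker bind the same port; the kernel then
    # load-balances incoming connections across the listeners.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("0.0.0.0", PORT))
    s.listen(128)
    while True:
        conn, _addr = s.accept()
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()

for _ in range(WORKERS):
    if os.fork() == 0:  # child: run one sharded listener, never returns
        serve()

for _ in range(WORKERS):  # parent: wait on the worker processes
    os.wait()
```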

In the graph above, you can see the effect of the upgrade, from late November to early January, as a decline in the 99th percentile of performance health checks. The underlying kernel improvements led to a more even load distribution across all our web server request processors, producing a substantial drop in these outliers and making requests more reliable for all our customers.

Performance tunings have a significant effect

Over the past two years, the far-reaching system tunings that our performance and kernel engineers have deployed have eliminated virtually all packet drops (over 98% removed) and halved our performance health checks on our edge servers. Our server capacity has increased by 10–40% (the exact amount varies with the customer profile and event), allowing us to deliver more traffic faster. Outlier behavior has improved significantly, making for a more consistent experience, and we have seen good improvement in the medians, particularly during peak load. In summary, performance tuning across the entire tech stack has allowed us to better handle unexpected traffic spikes (whether from a highly anticipated gaming console update or a popular live streaming event) and deliver more consistent performance for all our users.