By Frank Orozco, Chief CDN Technology & Products Officer
The web and mobile application space has become somewhat commoditized. A key differentiator between apps is user experience, and a critical component of a better user experience is speed of delivery. Enter QUIC (Quick UDP Internet Connections), a protocol that aims to make the web as fast as its name implies.
Under development by Google since 2012, QUIC is an alternative transport for data connections between a client and a server, designed primarily to reduce latency. Even though it's technically still in development and has yet to be standardized by the IETF, it's already in heavy use today. Google estimates that seven percent of all internet transactions are conducted over QUIC. Studies have shown that page load times can improve by up to 60 percent for mobile devices on 4G/LTE, and Google has realized a 30 percent reduction in rebuffers for videos streamed through YouTube. Read on to learn how CDNs can use this evolving protocol to speed up video delivery and playback while keeping connections secure at the same time.
The web was initially built on a different protocol: Transmission Control Protocol (TCP). TCP was designed to establish internet connections reliably at a time when few other protocols could. But TCP opens every connection with a three-step handshake. First, the client sends a connection request (SYN); second, the server acknowledges it (SYN-ACK); and third, the client sends a final acknowledgement (ACK). Only after this handshake can data be exchanged over TCP. Each exchange back and forth between client and server is what web engineers call a "round trip," and each additional round trip delays the content that users expect instantly.
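You can see this ordering in any TCP client: the connect call must complete the full handshake before a single byte of application data can be sent. Here is a minimal sketch using a localhost echo server (the server, port, and payload are illustrative, not part of any real service):

```python
import socket
import threading

# Minimal localhost TCP echo to show that connect() must finish the
# SYN / SYN-ACK / ACK handshake before any application data flows.
def tcp_echo_server(listener):
    conn, _ = listener.accept()   # handshake completes here on the server side
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=tcp_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # blocks for one full round trip: the handshake
client.sendall(b"GET /")             # only after the handshake can data be sent
reply = client.recv(1024)
client.close()
```

On a real network, that blocking `connect()` is where the handshake's round-trip latency is paid, before the request itself even leaves the machine.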
QUIC, on the other hand, is built for zero round-trip time (0-RTT) connection establishment: when reconnecting to a server it already knows, it can begin delivering data to the client while still negotiating the connection's parameters. It's able to do this by bypassing TCP altogether. Instead, QUIC is based on User Datagram Protocol (UDP), which doesn't require the client or server to confirm that data was sent or received the way TCP does. TCP was vital during the web's infancy, when the need for speed was far less pressing; today, QUIC's sidestepping of TCP's back-and-forth connection process is a welcome advance in web performance and security.
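The UDP foundation is what makes this possible: the very first UDP packet can already carry data, with no handshake beforehand. A minimal sketch (localhost addresses and payload are illustrative; QUIC itself adds its own negotiation on top of packets like these):

```python
import socket

# UDP has no connection handshake: the first packet sent can carry
# application data. QUIC builds on this, folding its crypto and transport
# negotiation into those first packets instead of a TCP-style handshake.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: any free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # no connect(), no round trip first

data, addr = receiver.recvfrom(1024)
```

Of course, raw UDP gives up TCP's reliability guarantees; QUIC reimplements retransmission and ordering itself, in user space, which is also what lets it evolve faster than TCP.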
Figure 1: QUIC’s multiple streams, demonstrating that packet loss on one stream does not impede the others
In addition to removing the three-way handshake delay created by TCP, QUIC delivers data efficiently once the connection is established. It does this by using multiple data streams (see Figure 1) within a single connection, so data can be spread across them. Delay or loss of data on one stream does not impact delivery on another. Bottom line: faster connections mean faster delivery for applications and faster start-up times for video, reducing rebuffering rates by up to 30 percent.
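The stream-independence idea can be sketched with a toy per-stream reassembly buffer. This is not QUIC's actual wire format or algorithm, just an illustration of the principle: a lost packet holds back only its own stream, while other streams keep delivering.

```python
from collections import defaultdict

# Toy model of per-stream in-order reassembly (illustrative, not QUIC's
# real mechanism): each stream tracks its own "next expected" sequence
# number, so a gap on one stream never stalls another.
class StreamReassembler:
    def __init__(self):
        self.buffers = defaultdict(dict)   # stream_id -> {seq: payload}
        self.next_seq = defaultdict(int)   # next in-order seq per stream

    def receive(self, stream_id, seq, payload):
        """Buffer a packet; return whatever is now deliverable in order."""
        self.buffers[stream_id][seq] = payload
        delivered = b""
        while self.next_seq[stream_id] in self.buffers[stream_id]:
            delivered += self.buffers[stream_id].pop(self.next_seq[stream_id])
            self.next_seq[stream_id] += 1
        return delivered

r = StreamReassembler()
# Stream 1's first packet is lost; its second arrives and is held back.
held = r.receive(1, 1, b"late")            # returns b"": gap on stream 1
# Stream 2 is unaffected and delivers immediately.
video = r.receive(2, 0, b"frame")
# The retransmission arrives and stream 1 catches up in order.
caught_up = r.receive(1, 0, b"early-")
```

Under TCP, by contrast, all streams share one ordered byte sequence, so that single lost packet would have blocked everything behind it (the head-of-line blocking problem QUIC's design avoids).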
QUIC moves fast even over secure connections. At the same time that it accelerates connection setup, it uses the latest encryption methods to keep the connection secure: QUIC negotiates an accelerated connection between sender and receiver while simultaneously establishing HTTPS-grade security.
Figure 2: QUIC maintains security while improving connection setup time
In a TCP connection, security requires additional round trips (see Figure 2): Transport Layer Security (TLS) is a separate layer on top of the connection that provides privacy and data integrity, with a handshake of its own. QUIC's predecessor, SPDY (not an acronym), also created by Google, didn't have its own built-in security; it depended on a security layer like TLS or SSL and ran over a TCP connection. In fact, SPDY wasn't so speedy when it had to wait in line for a secure connection to be established before it could deliver. The difference is that QUIC has a version of TLS built into it, which lets it speed up data delivery while maintaining encryption.
When video streaming services implement QUIC, they significantly reduce the time between when a user requests a video and when they can watch it. That's why Google has led the way by connecting users to YouTube over QUIC. Users who connect via QUIC experience 30 percent fewer rebuffers. Additionally, some reports show that QUIC traffic from YouTube now accounts for up to 20 percent of mobile network traffic.
At Verizon Digital Media Services, we've seen steady improvements in video delivery, especially on mobile, since we began implementing QUIC two years ago. Video, which combines motion, visuals and sound, is one of the largest and therefore trickiest media to deliver quickly over the web. But since implementing QUIC, we've seen a significant improvement in time-to-download for even our longest videos. On mobile networks, which are already saddled with higher base latency than desktop connections, we've observed improvements in video download speeds. We don't have exact numbers yet, because these metrics, especially for mobile, are difficult to measure statistically in a lab environment. We are following the results of Google's real-world A/B testing (over a billion samples) and, now that QUIC has been deployed across our entire network, will collect our own data along with our customers.
Without needing to know the meaning of words like latency, UDP or HTTPS, our users can still reap the benefits of our QUIC implementation and simply sit back and enjoy the quick-loading, high-quality viewing experience it enables.