Why QUIC does not replace TCP • The Register


Systems Approach Some would say that QUIC is starting to replace TCP. This week I argue that QUIC actually solves a different problem than the one TCP solves, and as such it should not be considered a substitute for TCP.

QUIC may become the default transport for some (or most) applications, but I believe that is because TCP has been pushed into a role it wasn’t originally intended for. Let’s see why I make this claim.

In 1995, Larry Peterson and I were writing the first edition of Computer Networks: A Systems Approach, and we had reached the point of writing the chapter on transport protocols, entitled “End-to-End Protocols”.

At that time there were only two transport protocols of significance on the Internet, UDP and TCP, so each had its own section. Since the purpose of our book was to teach networking principles rather than just the contents of RFCs, we organized those two sections around two different communication models: a simple demultiplexing service (exemplified by UDP) and a reliable byte stream (exemplified by TCP).

But Larry argued there was a third paradigm that we should cover, even though there was no well-known Internet protocol implementing it: remote procedure call (RPC). The examples we used to explain RPC in 1995 look dated now: SunRPC, and the RPC of the x-kernel, which Larry was working on at the time. These days there are plenty of options for RPC over IP, with gRPC being one of the best-known examples.

The examples we used to explain RPC in 1995 look dated now

Why did we feel the need for an entire section on RPC when most other networking books covered only TCP and UDP? RPC was one of the most active areas of research at the time; the 1984 paper by Birrell and Nelson had stimulated a generation of RPC-related projects. And in our view, a reliable byte stream is not a good abstraction for RPC.

At the core of RPC is the request/response model: the client sends a set of arguments to the server, the server performs a computation on those arguments, and it returns the result of that computation. Sure, a reliable byte stream can help get the arguments and results across the wire intact, but RPC needs much more than that.

Setting aside the issue of marshalling arguments for transmission over the wire (also covered later in our book), RPC is not really about transmitting a byte stream; the point is to send messages and get responses to them. This is somewhat like a datagram service (as provided by UDP or IP), but it requires more than unreliable datagram delivery.

RPC must handle lost, corrupted, and duplicated messages. It requires an ID space to match requests with responses. It should support segmentation and reassembly of messages, to name just a few requirements. Out-of-order delivery, which a reliable byte stream precludes, is also desirable for RPC. There is a reason so many RPC frameworks came into being in the 80s and 90s: people building distributed systems needed an RPC mechanism, and there was no suitable standard protocol in the TCP/IP suite. (RFC 1045 did define an experimental RPC-oriented transport, but it never caught on.) Nor was it clear at the time that TCP/IP would become as dominant as it is today, so some RPC frameworks (such as DCE) were designed to be independent of the underlying network protocol.
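To make those requirements concrete, here is a toy sketch (not any real RPC framework) of the two most basic ones listed above: a request ID to match responses to requests, and retransmission on timeout to cope with lost messages. It runs a trivial "sum the arguments" server over UDP on the loopback interface; message segmentation/reassembly is deliberately omitted.

```python
import json, socket, threading

def server(sock):
    seen = {}  # request-id -> cached reply, so duplicate requests are idempotent
    while True:
        data, addr = sock.recvfrom(4096)
        msg = json.loads(data)
        if msg["id"] not in seen:
            seen[msg["id"]] = {"id": msg["id"], "result": sum(msg["args"])}
        sock.sendto(json.dumps(seen[msg["id"]]).encode(), addr)

def call(sock, server_addr, req_id, args, timeout=0.5, retries=3):
    request = json.dumps({"id": req_id, "args": args}).encode()
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(request, server_addr)        # (re)transmit the request
        try:
            while True:
                reply = json.loads(sock.recvfrom(4096)[0])
                if reply["id"] == req_id:         # match response to request
                    return reply["result"]
        except socket.timeout:
            continue                              # presumed lost: retransmit
    raise TimeoutError("no response from server")

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(call(cli, srv.getsockname(), req_id=1, args=[2, 3]))  # -> 5
```

Every one of these concerns has to be reinvented by anyone building RPC directly on UDP, which is exactly why the frameworks of the 80s and 90s proliferated.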

The lack of RPC support in the TCP/IP stack laid the foundation for QUIC.

When HTTP appeared in the early 90s, it wasn’t trying to solve the RPC problem; it was trying to solve the information-sharing problem. But it did implement request/response semantics. The designers of HTTP chose to run it over TCP, apparently for lack of a better option. Early versions were notorious for poor performance, because a new TCP connection was established for each “GET”.

Various modifications were made to HTTP to improve performance, such as pipelining, persistent connections, and the use of parallel connections, but the reliable byte stream model of TCP never really suited HTTP.
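The persistent-connection fix is easy to see with Python’s standard library against a throwaway local server (the handler class below is illustrative scaffolding, not part of any real deployment): the HTTP/1.0 style pays a TCP connection setup for every GET, while the HTTP/1.1 keep-alive style amortizes one setup over all of them.

```python
import http.client, http.server, threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # enables keep-alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence per-request logging
        pass

srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
host, port = srv.server_address

# HTTP/1.0 style: connect, GET, close -- one TCP handshake per request.
for _ in range(3):
    conn = http.client.HTTPConnection(host, port)
    conn.request("GET", "/")
    assert conn.getresponse().read() == b"ok"
    conn.close()

# Persistent connection: one handshake, reused for every request.
conn = http.client.HTTPConnection(host, port)
for _ in range(3):
    conn.request("GET", "/")
    assert conn.getresponse().read() == b"ok"
conn.close()
```

Persistent connections remove the handshake cost, but both styles still funnel every object through TCP’s single ordered byte stream, which is the deeper mismatch.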

With the introduction of the Transport Layer Security (TLS) protocol, which added its own back-and-forth exchange of cryptographic information, the mismatch between what HTTP required and what TCP provided became increasingly apparent. This is well explained in the 2012 QUIC design document by Jim Roskind: head-of-line blocking, poor congestion response, and the additional RTTs introduced by TLS were all identified as inherent problems with running HTTP over TCP.
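Head-of-line blocking is worth spelling out. The toy model below (pure simulation, no real protocol code) sends five "packets" carrying data for three independent streams and drops packet 2. With TCP’s single ordered byte stream, the hole blocks everything sent after it; with per-stream ordering as in QUIC, only the stream that actually lost data has to wait.

```python
packets = [  # (sequence number, stream id, per-stream index, payload)
    (1, "a", 0, "a1"), (2, "b", 0, "b1"), (3, "a", 1, "a2"),
    (4, "c", 0, "c1"), (5, "b", 1, "b2"),
]
lost = {2}
arrived = [p for p in packets if p[0] not in lost]

# TCP-like: one ordered stream; a gap in sequence numbers stalls all later data.
tcp_delivered, expected = [], 1
for seq, stream, idx, payload in arrived:
    if seq != expected:
        break                      # hole at seq 2: everything behind it waits
    tcp_delivered.append(payload)
    expected += 1

# QUIC-like: each stream is ordered independently, so the loss on stream "b"
# stalls only stream "b"; streams "a" and "c" deliver immediately.
quic_delivered, expected_in_stream = [], {}
for seq, stream, idx, payload in arrived:
    if idx == expected_in_stream.get(stream, 0):
        quic_delivered.append(payload)
        expected_in_stream[stream] = idx + 1
    # else: buffered until this stream's own hole is filled

print(tcp_delivered)   # ['a1']
print(quic_delivered)  # ['a1', 'a2', 'c1']
```

One lost packet penalizes every object on a TCP connection, even objects that arrived intact; that is precisely the cost QUIC’s independent streams avoid.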

One way to frame what happened here: the Internet’s “narrow waist” was originally just the Internet Protocol, intended to support the various protocols above it. But somehow the waist grew to include TCP and UDP as well; they were the only transport options available. If you needed a datagram service, you could use UDP. If you needed some kind of reliable delivery, TCP was the answer. If you wanted something that didn’t map neatly onto either unreliable datagrams or reliable byte streams, you were out of luck. And so TCP was pressed into service by many higher-layer protocols whose needs it only poorly matched.

There is a lot going on in QUIC. Its definition spans three RFCs, covering the base protocol (RFC 9000), the use of TLS (RFC 9001), and the congestion control mechanisms (RFC 9002). But at its core, it is an implementation of that third, missing paradigm of Internet transport: RPC.

If you really do need a reliable byte stream, such as for downloading a multi-gigabyte OS update, TCP is well designed for the job. But HTTP(S) is much more like RPC than like a reliable byte stream, and one way to look at QUIC is that it finally brings the RPC model into the IP suite.

This certainly benefits the applications that run over HTTP(S), notably gRPC and all the RESTful APIs we depend on.

When I wrote about QUIC earlier, I said it was a good case study in how layered systems can be rethought as requirements become more explicit, and its congestion control algorithms continue to evolve to meet those requirements.

QUIC actually meets a diverse set of requirements. And since HTTP is so central to today’s Internet that it has been said to have become the new “narrow waist”, QUIC may well become the dominant transport protocol, because it meets the needs of the Internet’s most important applications. ®
