
From HTTP/1.1 to HTTP/3: How network protocols evolved

Network protocols are the rules by which devices communicate on a network. Thanks to them, data is sent and received without loss or distortion. For protocols to work, all participants in the process must accept and follow the agreed conditions. Communication standards can be implemented in device hardware, in software, or in both.

Protocols allow devices to successfully communicate with each other and exchange data regardless of differences in architecture and design.

In this article, you'll learn which network protocols are in use today and which came before them, how the HTTP/1.1 standard differs from later versions, how next-generation protocols are being adopted, and what their prospects are.

HTTP/1.1: The first standard

The foundation of the digital world is information, and HTTP is what makes it available to the user. HTTP (Hypertext Transfer Protocol) is the set of rules for transmitting hypertext on which web resources are built: it governs the interaction between browsers and servers when loading pages, images, videos and other data.

Over the lifetime of the Internet, network protocols have changed significantly – an evolution that has made page loading faster and data transfer more reliable.

Early history

The development of protocols began not with HTTP/1.1, but with earlier versions. It’s worth briefly reviewing the first steps along the way before talking about the technologies we use today. Let’s find out what major network protocols existed and how they evolved.

Back in 1991, when the concept of the modern Internet was taking shape, developers were already thinking about an efficient file transfer system. Models of network protocols existed by then; what was missing was a product at the application layer.

The first such protocol was HTTP/0.9, which used TCP (the Transmission Control Protocol) as its transport layer. HTTP/0.9 handled only the transmission of hypertext, which was the only concept of data representation that existed at the time.

HTTP/0.9 was a one-line protocol, as simple as possible to implement, and could even be driven from the command line. The principles on which this tool worked are still in use today.
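
To make the protocol's simplicity concrete, here is a minimal sketch of an HTTP/0.9-style exchange over a raw socket in Python. The host example.com is a placeholder, and real HTTP/0.9 servers are essentially extinct today, so treat this purely as an illustration of the wire format:

```python
import socket

with socket.create_connection(("example.com", 80)) as sock:
    # An HTTP/0.9 request is a single line: the method and the path.
    # No headers, no version string, and no status line in the response.
    sock.sendall(b"GET /index.html\r\n")
    # The server replies with raw HTML and then closes the connection.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode(errors="replace"))
```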

However, a few years after its launch, HTTP/0.9 no longer met users' needs. Beyond hypertext documents, people wanted to display images and play sound files, which the protocol clearly could not handle. There was also no support for working through proxy servers.

These problems were partially corrected in HTTP/1.0, introduced in 1995 after the first World Wide Web conference, where a working group of developers and IT industry representatives took up the development of network protocols. By that time the sites of MTV, Amazon, Microsoft and other large companies had already launched and needed the infrastructure to grow.

Resources ran HTTP/1.0 over TCP, and the new protocol was oriented toward graphical browsers. To transfer metadata, the client and server used headers specifying the type of data being transferred, so graphics, audio and other files could be carried by the protocol in addition to hypertext. The tool also supported caching, authentication, and transmission via Squid proxies and gateways.
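
As a sketch of what those headers look like on the wire (example.com is again a placeholder host):

```python
import socket

request = (
    b"GET /logo.png HTTP/1.0\r\n"
    b"Host: example.com\r\n"      # optional in HTTP/1.0, required in 1.1
    b"Accept: image/png\r\n"
    b"\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request)
    raw = b""
    while chunk := sock.recv(4096):
        raw += chunk

# The response begins with a status line and headers, for example:
#   HTTP/1.0 200 OK
#   Content-Type: image/png
headers, _, body = raw.partition(b"\r\n\r\n")
print(headers.decode(errors="replace"))
```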

Despite all the innovations, the protocol also had problems that stemmed from its design. The principle of “one request – one response” was less efficient than pipelining several requests at once. To work around this, clients opened several TCP connections instead of one, which sped up loading but increased resource consumption.

Launching HTTP/1.1

Toward the end of the last century, programmers began looking for alternatives. They faced quite specific tasks:

  • reduce the redundancy of TCP connections;
  • increase the efficiency and speed of data transfer;
  • expand caching capabilities;
  • reduce the consumption of IP addresses;
  • standardize methods of delivering web applications;
  • establish control over protocol extensions;
  • ensure interoperability of web applications.

All of these problems were solved to some extent in 1999 with the release of HTTP/1.1. This standard is known to every developer and is still partially in use today. It was a text protocol that extended the semantics of previous versions, running on top of TCP with an optional SSL/TLS encryption layer.

The algorithm of HTTP/1.1 is as follows (a socket-level sketch follows the list):

  1. The client (the party sending the request) establishes a TCP connection and, if needed, negotiates an SSL/TLS session.
  2. The client sends a request line with the method, the path to the document, and the protocol version.
  3. It then sends headers in the required format and can ask the server not to terminate the connection after the request completes (a persistent connection).
  4. In response, the server sends the protocol version, a status line with the status code, and, where applicable, the body of the response.
  5. If both parties support persistent connections, the TCP connection is not closed, which solves the redundancy problem of this networking model.
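
As a rough illustration of steps 1-5, here is a minimal Python socket sketch of two HTTP/1.1 requests reusing one TCP connection. example.com is a placeholder host, TLS is omitted for brevity, and the response reading is deliberately crude – a real client would parse Content-Length or chunked encoding to delimit responses:

```python
import socket

def http11_get(sock: socket.socket, path: str) -> bytes:
    """Send one HTTP/1.1 GET on an already-open connection."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        "Host: example.com\r\n"        # the Host header is mandatory in 1.1
        "Connection: keep-alive\r\n"   # ask the server to keep the connection open
        "\r\n"
    ).encode()
    sock.sendall(request)
    # Crude single read: enough to see the status line of the response.
    return sock.recv(65536)

with socket.create_connection(("example.com", 80)) as sock:
    first = http11_get(sock, "/")        # first request
    second = http11_get(sock, "/about")  # second request on the same connection

print(first.split(b"\r\n", 1)[0])    # e.g. b'HTTP/1.1 200 OK'
print(second.split(b"\r\n", 1)[0])
```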

The key point was the use of persistent connections, which allow a single TCP connection to be reused many times. For a long time this solution was considered optimal, although even then many developers were not satisfied with the new standard. Like its earlier versions, the HTTP/1.1 protocol suffered from delays and network blocking as the number of requests grew. The problem became especially noticeable after the arrival of wireless technologies.

Another difficulty was the lack of full support for parallelism: transferring many files simultaneously required additional resources.

Developers also drew attention to head-of-line (HoL) blocking, the situation in which one slow object delays the whole queue. The analogy of a supermarket checkout, where customers with single purchases wait for one shopper with a huge cart of goods to pay, captures the essence of the problem.
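
A toy illustration (a made-up timing model, not a network measurement) shows why this matters – on a single in-order connection, responses are delivered sequentially, so one slow response delays everything queued behind it:

```python
# Assumed per-response service times in seconds; the first one is slow.
response_times = [3.0, 0.1, 0.1, 0.1]

finish = 0.0
for i, t in enumerate(response_times, start=1):
    finish += t  # each response must wait for all previous ones
    print(f"response {i} completes at {finish:.1f}s")

# Output: the three fast responses finish at 3.1s, 3.2s and 3.3s.
# With true multiplexing they could finish at roughly 0.1s each,
# instead of waiting behind the 3-second response.
```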

The extensions used to work around blocking and delays were not effective enough, so from the mid-2000s onward, work was under way on alternative solutions.

HTTP/2: A big step forward

Web pages grew more complex over time, incorporating more and more visual elements. Standalone web applications appeared, scripts grew in size, and interactive features multiplied. The amount of data transferred via HTTP requests increased dramatically, which drove up the overhead of the existing protocol.

A new protocol was urgently needed. In early 2010, Google developers created an experimental alternative for data exchange called SPDY. The protocol increased response speed and reduced redundant data transmission, and it became the basis for HTTP/2.

Characteristics and advantages of HTTP/2 protocol

The key difference from its predecessors is that HTTP/2 is a binary protocol rather than a text one: data is framed and transferred in a binary format. Since information does not need to be serialized as text, the speed of interaction between client and server increases radically, and downloading files of any type, including audio and video, becomes simpler.
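
To show what “binary protocol” means in practice, here is a sketch that parses an HTTP/2 frame header. The 9-byte layout (24-bit payload length, 8-bit type, 8-bit flags, one reserved bit plus a 31-bit stream identifier) is defined in RFC 7540; the sample bytes below are fabricated for illustration:

```python
import struct

def parse_frame_header(data: bytes) -> dict:
    """Decode the fixed 9-byte header that precedes every HTTP/2 frame."""
    length_hi, length_lo, frame_type, flags, stream_id = struct.unpack(
        ">BHBBI", data[:9]
    )
    return {
        "length": (length_hi << 16) | length_lo,   # 24-bit payload length
        "type": frame_type,                        # e.g. 0x0 = DATA, 0x1 = HEADERS
        "flags": flags,
        "stream_id": stream_id & 0x7FFF_FFFF,      # clear the reserved bit
    }

# A fabricated HEADERS frame header: 16-byte payload, type 0x1,
# flags 0x4 (END_HEADERS), stream 1.
sample = bytes([0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(sample))
# {'length': 16, 'type': 1, 'flags': 4, 'stream_id': 1}
```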

The official HTTP/2 standard was introduced in 2015, and by 2022 almost half of all websites were already using it. High-traffic resources switched to the protocol almost entirely in order to save on data transfer.

The relatively rapid adoption of the new standard is easily explained: using HTTP/2 requires no changes to the code of sites and web applications. All that is needed is a modern server talking to a current version of the browser, so as network resources and storage are updated, the protocol comes into use naturally. Its main features are the following (a short client sketch follows the list):

  • Multiplexing. Concurrent requests can be executed over the same connection, which removes head-of-line blocking at the HTTP level – you no longer have to wait for the previous request to complete before sending the next one.
  • Header compression. Headers often coincide across requests, so compressing them eliminates duplication and saves traffic – unnecessary data is simply not transmitted.
  • Server Push. The server can predict which resources a client is likely to need in the near future and send them in advance.
  • Data security. The protocol does not make HTTPS encryption mandatory, but most browsers require it in practice, which raises the level of security when exchanging data.
  • Prioritization. The client can specify the priority of a request, and the server responds accordingly, sending more important resources first.

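Here is the promised client sketch: a minimal HTTP/2 request from Python using the third-party httpx library (installed with `pip install "httpx[http2]"`). The URL is a placeholder, and whether HTTP/2 is actually negotiated depends on the server:

```python
import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://example.com/")
    # http_version reports what was actually negotiated via TLS ALPN:
    # "HTTP/2" on success, or a fallback such as "HTTP/1.1".
    print(response.http_version, response.status_code)
```
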
HTTP/2 speeds up page loading, allows many requests to be sent at once over a single connection, reduces resource costs, and makes browsing the web more efficient and enjoyable. Sites load quickly even on mobile devices with an unstable Internet connection.

Support and prospects

The introduction of the second-generation protocol has also benefited developers: creating and delivering web projects has become simpler. In turn, improved user experience and SEO translate into better monetization and higher commercial returns from projects.

Specialists note that the SEO advantages of HTTP/2 follow from the technical features of the protocol. Site performance is among the important ranking factors, and a slow resource is unlikely to please visitors. In practice, adopting the new protocol increases loading speed by 50-80%, and visitors run into fewer problems when navigating and using interactive features.

Switching to HTTP/2 requires support on both the server and the browser side. Almost all modern browsers ship with automatic support for the standard, and the software of servers and content delivery networks is gradually being updated. The cost and time spent on upgrades are small compared to the gains in speed and performance that HTTP/2 brings.

HTTP/3: The next generation protocol

Development of the new protocol began in 2018, although Google had created the experimental QUIC standard on which HTTP/3 is based back in 2012. It is the QUIC transport protocol that gives the next-generation technology its advantages and unique qualities.

HTTP/3 is an improved version of the previous standards that will eventually replace them thanks to its even higher speed and reliability. Because HTTP/2 runs over TCP, it inherits certain limitations on the speed of data transfer.

The QUIC transport protocol is built on UDP (User Datagram Protocol) and implements its own, fundamentally different mechanisms for data management, flow control and error recovery.

The main advantages of the new-generation protocol are listed below (a back-of-the-envelope latency comparison follows the list):

  • Faster connection establishment. QUIC uses a shorter handshake than TCP with TLS, so data transfer begins sooner.
  • Reduced latency. The transport protocol is optimized for unstable Internet conditions, which has a positive impact on the user experience.
  • Higher throughput. QUIC utilizes network capacity more efficiently, even when packet loss occurs.
  • Increased security. TLS 1.3 encryption is built in, making data transmission more secure.
  • Improved quality of web resources. Adopting the protocol takes online services to a new level: fast downloads and seamless audio and video transmission speed up the creation of new projects and increase their efficiency.
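
Here is the promised back-of-the-envelope comparison – a simplified model with an assumed round-trip time, not a benchmark – of how many round trips pass before the first request byte can be sent on a fresh connection:

```python
RTT_MS = 50  # assumed network round-trip time in milliseconds

tcp_tls13_rtts = 1 + 1   # TCP three-way handshake, then TLS 1.3 handshake
quic_rtts = 1            # QUIC combines transport and TLS 1.3 setup
quic_resumed_rtts = 0    # 0-RTT resumption to a previously visited server

for name, rtts in [
    ("HTTP/2 over TCP + TLS 1.3", tcp_tls13_rtts),
    ("HTTP/3 over QUIC, new connection", quic_rtts),
    ("HTTP/3 over QUIC, 0-RTT resumption", quic_resumed_rtts),
]:
    print(f"{name}: {rtts} RTT = {rtts * RTT_MS} ms before the request is sent")
```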

Although HTTP/2 allows several resources to be downloaded at once, its TCP foundation is not particularly suited to multiplexing: when a single packet is lost, TCP holds up all the streams sharing the connection until the data is retransmitted. In HTTP/3 this problem is solved more rationally – only the affected stream waits for the retransmission, which does not significantly affect connection speed.

Why is HTTP/3 needed if there is a second-generation protocol? Experts say that the need for a new standard based on the QUIC transport protocol arose almost as soon as the Internet came into widespread use: TCP was already working at the limit of its capacity and carried significant limitations.

For years, developers tried to smooth over TCP's shortcomings with new features and extensions, but deploying them at the scale of the entire Internet proved extremely difficult. This is why a new-generation protocol, HTTP/3, was needed – and it is now more and more in demand.

HTTP/3 implementation and support

As of 2024, the HTTP/3 protocol is supported by many popular browsers. Chrome uses the standard for most server connections, and Firefox, Safari, Microsoft Edge and other browsers also use HTTP/3.

Currently, the protocol is supported by millions of popular online resources, and the number keeps growing. Besides Google, which took an active part in developing the new standard, the protocol is used, for example, by the Cloudflare platform, whose serverless offerings are compatible with HTTP/3. Akamai, Fastly, Amazon and many others are also active users of the protocol.
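
One simple way to check whether a given site advertises HTTP/3 is to look for the Alt-Svc response header, through which servers announce QUIC support (for example, 'h3=":443"; ma=86400'). Here is a minimal sketch using the third-party httpx library, with a placeholder URL:

```python
import httpx

def advertises_http3(url: str) -> bool:
    """Return True if the server's Alt-Svc header announces HTTP/3."""
    response = httpx.get(url)
    alt_svc = response.headers.get("alt-svc", "")
    return "h3=" in alt_svc  # final HTTP/3 versions are advertised as "h3"

print(advertises_http3("https://example.com/"))
```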

A significant advantage of HTTP/3 for further adoption is its full encryption. Network intermediaries (proxies and other middleboxes) cannot observe or interpret its traffic the way they can with TCP, so new versions of the protocol and new features will work correctly on any device as soon as the endpoints are updated.

Summary – the future of network protocols

The evolution of network protocols is a continuous process. As technology and web applications evolve, so do the requirements for data transfer. Since the number of devices connected to the network, including mobile gadgets, is constantly growing, it will be important to increase the throughput of protocols as well as the speed of data downloads. Nor can information security be forgotten, so this aspect will continue to develop as well.

Optimization for IoT – the Internet of Things – will also be required. There are more and more such devices, and the need for effective protocols for them is growing. Standards are being developed at a particularly intensive pace in this area.

The use of artificial intelligence to improve network protocols is also becoming more relevant: machine learning and AI can be used to optimize various layers of network communication for better performance and security.
