According to Chrome the difference is "waiting" for the server response (47 ms vs. 473 ms). You can tell that nginx is configured very differently between the two response types just by looking at the response headers.
I suspect the author set out to prove this and then made the environment fit their agenda. Nothing to see here. I'm sure SPDY is faster than HTTP/1.1, but nowhere near what this site claims.
The "Waiting" time listed in the Network panel of Chrome is actually the TTFB for each HTTP request/response. It is the latency from browser to server, server processing, and latency coming back. And because of the difference between latency and our huge bandwidth connections, your browser spends most of its HTTP networking time waiting.
This is exactly where the "SPDY vs HTTP" factor comes in. With HTTP, you request each item and wait for a response. This can only be done one item at a time[1] per HTTP/TCP connection, with a limit of 2-6 (depending on the browser) HTTP/TCP connections per unique hostname. As I mentioned in a comment below, this benchmark is using a single hostname, which is the worst possible situation for HTTP.
SPDY, or HTTP/2, instead uses a single TCP connection and multiplexes all the requests over it. In other words, it can be downloading pieces of many other responses while still waiting on other requests.
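A rough, latency-only sketch of why this matters. All the numbers here (350 requests, 50 ms round trip, 6 parallel connections) are assumptions for illustration, and the model ignores bandwidth, TCP slow start, and server processing time:

```python
# Back-of-envelope model of total wait time, latency only.
# Assumed numbers: 350 small images, 50 ms RTT, 6 parallel
# HTTP/1.1 connections -- all hypothetical, for illustration.

RTT_MS = 50
REQUESTS = 350
HTTP1_CONNECTIONS = 6

# HTTP/1.1: one request in flight per connection, so each connection
# serially works through its share of the requests, one RTT each.
http1_ms = (REQUESTS / HTTP1_CONNECTIONS) * RTT_MS

# SPDY / HTTP/2: all requests multiplexed over one connection; in this
# latency-only model they all complete within roughly one RTT.
spdy_ms = RTT_MS

print(f"HTTP/1.1: ~{http1_ms:.0f} ms, SPDY: ~{spdy_ms} ms")
```

The point isn't the exact numbers, just that serial request/response per connection makes total time scale with request count, while multiplexing makes it scale (roughly) with bytes.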
[1] - Yes, I am ignoring HTTP pipelining, because every other person who implemented a piece of HTTP software did as well. And no, neither I nor anyone else cares about older Opera supporting it :-)
I increased the NGINX/Ubuntu ulimit and worker_connections, so you should see a fast test now without errors. I'm not prioritizing one protocol over the other, I promise.
Even with SPDY disabled[1] HTTPS still appears faster (not by much: ~5% in my case). I do agree that enabling SPDY by default for the HTTPS case is unfair, but do not let it eclipse the takeaway: most users with most browsers will observe HTTPS being a bit faster (edit: I should rather say "comparable in speed") in this specific benchmark. How well this result translates to other sites is a different question. For example the benchmark does not measure TLS handshake latency:
<!-- "pre-load" HTTPS connection to remove TLS handshake latency when switching to HTTPS test. And set the detectio var -->
<script src="https://www.httpvshttps.com/detect-spdy.js"></script>
For some sites, handshake latency matters very little, so it makes sense to write the benchmark the way they did it. Anyway, the company who made the benchmark probably did it to prove a point: that an image-rich web page can load just as fast on HTTPS as on HTTP.
Also, contrary to what a commenter said below, the site does carefully avoid caching interference by downloading images with a random query string such as https://www.httpvshttps.com/check.jpg?123.179881
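A minimal sketch of that cache-busting trick: append a random query string so every image URL is unique and is never served from cache (the helper name and the exact random format are my own, made to resemble the URL above):

```python
# Cache busting: a random query string makes each URL unique,
# so the browser cannot satisfy the request from its cache.
import random

BASE = "https://www.httpvshttps.com/check.jpg"  # URL from the page

def cache_busted(url: str) -> str:
    # Produces e.g. .../check.jpg?123.179881 (format is a guess)
    return f"{url}?{random.randint(0, 999)}.{random.randint(0, 999999):06d}"

print(cache_busted(BASE))
```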
It's not even HTTP vs HTTPS. It's HTTP vs SPDY, when all content is coming from the same hostname. This is the best possible environment for SPDY, and the worst possible environment for HTTP.
You are looking at one TCP connection for SPDY, with everything multiplexed. With HTTP, you are looking at, best case, 4 TCP connections to the server, all starting cold, handling 350+ requests in parallel. That's not even considering SPDY's server push feature.
I love SPDY and all, but this is not even close to a real world scenario.
In my testing I've seen SPDY / HTTP/2 be anywhere from 35% faster to 5% slower. In my experience, the key factors in getting it to perform are the quality of the TLS config, whether the server supports prioritisation, and the TCP config.
Yes, not technically real-world, but hopefully enough to push decision makers to start encrypting. I also added a param to reduce images loaded: http://www.httpvshttps.com/?images=100
From my testing, SPDY was generally 60% faster, but as of now it's been slow. Could be my computer or the server.
Due to the way slow start works, the throughput of a TCP connection increases with use (assuming no packet loss) as the congestion window grows.
The HTTP test uses images of ~5KB, so after the first round trip for an image the congestion window grows, but none of the subsequent requests grow it further. In a real-world example, many of the files would be larger than 5KB, the window would grow further, and the number of round trips would be reduced.
The SPDY example can make use of the ever-growing congestion window because the multiplexing will fill it, i.e. we'll get the data from more than one image in a single round trip.
It's not that the test isn't 'technically real-world'; it's that its design (not intentionally, I think) highlights an area where HTTP performs really poorly due to the latency penalty, i.e. many very small requests.
A more real world test case would mirror a typical page construction with varying file sizes - HTTP Archive can give you some clues here.
I don't know what multiple XORs have to do with SPDY, but yeah, they should have done it with one site, a CDN for images, some JavaScript loaded from weird other CDNs, etc.
——
[1] Multiple XOR –> Multiplexor –> Multiplexer, an electronic circuit that takes multiple bundles of lines and returns the values of the bundle which has been selected via the control input.
Multiplexing matters if you make 300 requests: with HTTP/1, you have to submit a request and wait for each response to transfer completely before making the next one (HTTP pipelining was supposed to solve this, but has never been enabled by default because of compatibility problems).
With SPDY, you can issue all 300 requests up front and receive replies out of order, so the server can send each chunk of data as it's ready.
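A toy illustration of the framing that makes this possible: each response is cut into frames tagged with a stream id, so the connection can interleave chunks from many responses. The data, chunk size, and function names here are all made up for illustration; real SPDY/HTTP/2 framing is more involved:

```python
# Toy multiplexed framing: interleave chunks of several responses,
# each frame tagged with its stream id, round-robin style.
from itertools import zip_longest

responses = {1: b"AAAA", 2: b"BBBBBBBB", 3: b"CC"}  # 3 of the 300 streams

def frames(responses, chunk=2):
    chunks = {sid: [data[i:i + chunk] for i in range(0, len(data), chunk)]
              for sid, data in responses.items()}
    # Round-robin across streams, like one multiplexed connection
    for row in zip_longest(*chunks.values()):
        for sid, part in zip(chunks, row):
            if part is not None:
                yield (sid, part)

print(list(frames(responses)))
```

The receiver reassembles each response by stream id, so a long response (stream 2 here) no longer blocks the short ones behind it.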
Yeah, this isn't HTTP vs HTTPS, it's HTTP vs SPDY. It also doesn't seem to address whether the servers cache, and whether subsequent requests may be faster than the first.
Sorry, everyone. I'm the site's author, and I increased the ulimit and worker_connections to 4096, so you should see better performance and no more errors now. My apologies...I'm a web guy, not a networking guy. Rookie mistake.
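For anyone curious, that tuning would look roughly like this in nginx (a sketch: only `worker_connections 4096` is stated above, and the file-descriptor line and its value are assumptions):

```nginx
# Raise the per-worker open-file limit (assumed value; pairs with
# the OS-level ulimit the author mentions raising).
worker_rlimit_nofile 8192;

events {
    # Max simultaneous connections per worker -- the 4096 from above.
    worker_connections 4096;
}
```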
Of course SPDY keeps the same TCP connection alive; the encryption and even the TLS handshake time are insignificant compared to the latency of creating a new connection for each HTTP request.
Looks pointless to me, because the choice between serving your resource over HTTP or HTTPS is not about speed. There are other considerations that come first.
It does. That is because, if you look at the TLS handshake, they are using session resumption to avoid an extra RTT and the need to renegotiate ciphers and session keys.