Thursday, July 14, 2011

Firefox Pipelining: Be Afraid


Continuing my adventures with Firefox and HTTP/1.1 pipelining (a partial SPDY competitor), I begin analyzing the sessions that I captured over the past couple of nights.

Last night, I was finally able to get pipelining working in Firefox (or at least confirm that I had it working).
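
For reference, pipelining in Firefox lives behind a handful of about:config preferences. A minimal sketch of the equivalent user.js change follows; the profile path and the maxrequests value here are my assumptions, not necessarily what any given setup needs:

# a sketch, assuming a single default Firefox profile; adjust the path to taste
# (or flip the same prefs by hand in about:config)
cat >> ~/.mozilla/firefox/*.default/user.js <<'EOF'
user_pref("network.http.pipelining", true);
user_pref("network.http.pipelining.maxrequests", 8);
EOF

My test page contains: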
<p><img src="pipeline01.png"/></p>
<p><img src="pipeline02.png"/></p>
<p><img src="pipeline03.png"/></p>
<p><img src="pipeline04.png"/></p>
<p><img src="pipeline05.png"/></p>
<p><img src="pipeline06.png"/></p>
<p><img src="pipeline07.png"/></p>
<p><img src="pipeline08.png"/></p>
<p><img src="pipeline09.png"/></p>
<p><img src="pipeline10.png"/></p>
Each of those images is ~150 KB. That's larger than a typical web image ought to be, but it ought to really push pipelining and the TCP/IP congestion window (CWND) through their respective paces.
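
To put a number on why that size matters: at a typical 1460 byte MSS (an assumption on my part), each image spans roughly 105 segments, far more than an initial congestion window of 3 or 4 segments can carry, so slow start has real work to do. A quick sanity check:

# segments needed per 150 KB image at an assumed 1460-byte MSS
echo $((150 * 1024 / 1460))
# => 105, versus an initial CWND of only ~3-4 segments

So let's have a look-see.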

The tcptrace tool serves the dual purpose of identifying the most heavily used TCP/IP pipes (the busiest connection below is #4, g2h) and generating xplot graphs:
➜  14 git:(master) tcptrace -S -n -zx -zy  ../13/firefox_pipeline.pcap
1 arg remaining, starting with '../13/firefox_pipeline.pcap'
Ostermann's tcptrace -- version 6.6.7 -- Thu Nov 4, 2004

240 packets seen, 240 TCP packets traced
elapsed wallclock time: 0:00:00.033535, 7156 pkts/sec analyzed
trace file elapsed time: 0:01:29.060533
TCP connection info:
1: :0001:60110 - :0001:80 (a2b) 1> 1< (reset)
2: 127.0.0.1:48979 - 127.0.0.1:80 (c2d) 12> 8< (complete)
3: :0001:49647 - :0001:80 (e2f) 1> 1< (reset)
4: 127.0.0.1:55785 - 127.0.0.1:80 (g2h) 33> 36< (complete)
5: :0001:49649 - :0001:80 (i2j) 1> 1< (reset)
6: :0001:49650 - :0001:80 (k2l) 1> 1< (reset)
7: :0001:49651 - :0001:80 (m2n) 1> 1< (reset)
8: :0001:49652 - :0001:80 (o2p) 1> 1< (reset)
9: :0001:49653 - :0001:80 (q2r) 1> 1< (reset)
10: 127.0.0.1:55791 - 127.0.0.1:80 (s2t) 13> 13< (complete)
11: 127.0.0.1:55792 - 127.0.0.1:80 (u2v) 13> 13< (complete)
12: 127.0.0.1:55793 - 127.0.0.1:80 (w2x) 12> 13< (complete)
13: 127.0.0.1:55794 - 127.0.0.1:80 (y2z) 12> 13< (complete)
14: 127.0.0.1:55795 - 127.0.0.1:80 (aa2ab) 12> 13< (complete)
15: 127.0.0.1:55796 - 127.0.0.1:80 (ac2ad) 6> 4< (complete) (reset)
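
That -S switch is what writes the xplot time-sequence files alongside the analysis; each connection gets files named after its a2b-style label. A sketch of pulling up the busy pipe's graph, assuming the stock tcptrace naming scheme:

# tcptrace -S emits one time-sequence graph per direction (e.g. g2h_tsg.xpl)
xplot g2h_tsg.xpl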
The pipe with the most activity, designated g2h by tcptrace, is the one that pipelined the last four requests (for pipeline07.png, pipeline08.png, pipeline09.png, and pipeline10.png) over a single connection. Checking the plot of those packets, I find:

[xplot time-sequence graph of the g2h connection]

(the distance between the segments comes from a 100ms round trip time)

Looking at that graph, two things stick out for me: (1) it takes a long time to transfer all of that data (well over a second) and (2) the CWND (blue bar at the top) is quite variable. With regard to (2), I am used to seeing the CWND grow relatively steadily as the interweb tube warms (a.k.a. TCP/IP slow start kicks in) when using SPDY. That is clearly not happening here. At points, the CWND shrinks, limiting the amount of data that can be transferred before waiting for the next round trip. That ultimately adds time to the transfer.

But I don't think that explains the full delay. Looking at a similar diagram from a non-pipelined Firefox request (from two nights ago, prior to enabling pipelining), I find:

[xplot time-sequence graph of the non-pipelined session]

There is similar choppiness in the CWND, but... ah wait. I see that I do not have a direct comparison. The non-pipelined session ran through only a little more than 30,000 TCP/IP sequence numbers. The pipelined session saw 80,000. Zooming in on the first 30,000 sequences of the pipelined session, I see:

[xplot time-sequence graph of the first 30,000 sequences of the pipelined session]

That is almost identical to the non-pipelined version. Both have about the same choppiness and take ~600ms to run through the same number of sequences. That is to be expected; after all, they are both non-pipelined up to this point. So it seems that everything is more or less OK until pipelining kicks in.

When HTTP pipelining does start, it has a negative impact. By the time the pipelined session kicks in, the 6 TCP/IP tubes are still warming up and likely competing with each other for resources. The single tube that gets the pipeline is also still warming. Slamming it with four 150 KB images only helps the CWND so much (again, possibly due to the other 5 open sessions). The end result is that pipelining is a big loser here, almost doubling the time needed to serve up the page. Serving those four images over four different tubes would have been a better choice in this case.
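
For anyone who has not seen pipelining on the wire: it just means writing requests back-to-back on one connection without waiting for any responses. A rough way to reproduce what Firefox did on that g2h pipe, assuming the same local server (some netcat builds need a -q flag to linger for the responses):

# fire the last four requests down a single connection without waiting
for i in 07 08 09 10; do
  printf 'GET /pipeline%s.png HTTP/1.1\r\nHost: localhost\r\n\r\n' $i
done | nc localhost 80 > /dev/null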

I suppose edge cases like this (150 KB images) are why Firefox does not have pipelining on by default. I think tomorrow I will investigate a more realistic web case to see how the results change. Regardless, single-tube SPDY connections will certainly have to handle something like this. That's something to prove another day as well.

Day #73
