struct http_conn & HTTP Workshop
phk at phk.freebsd.dk
Fri Jul 31 12:12:39 CEST 2015
In message <CANTn4crN63L4HDECdoYN=ArVza6EpXn7LhPWZBLP=Y09L+bOLw at mail.gmail.com>
, Martin Blix Grydeland writes:
>Attached is a set of patches to fix up the connection handling in Varnish,
>making it more general for both client and backend connections.
I'm sort of torn about struct http_conn right now.
On one hand, it is a decent "pull stuff off a fd" handle, on the other
it seems to be a H1 dead end.
... Zooming out ...
I'm on my way back from the HTTP workshop and I think the short
summary of the workshop is that we are nowhere near done playing
with HTTP transport protocols and that we'll have "profiles" for
HTTP in the future.
The profiles will likely be along the lines of:
    Internet Of Shit - minimal set for small-CPU devices with battery.

    Streamlined "RPC" sort of thing. Main point: Requests are
    independent, and all the priority/push stuff does not apply.

    Big, Fat and Heavy. The full monty where requests depend
    on each other, have priorities, things get pushed etc. etc.
That probably doesn't affect us much directly.
However, transport protocols are a mess.
People have realized that putting H2 on a single TCP/TLS connection
was not the smart move, because everything stalls on packet loss.
Compare that to H1, where the other 5 TCP connections can carry on
while the sixth is stalled.
What is absolutely clear is that TCP isn't a given, and that the
relatively clean layering of H1 over TLS isn't going to carry over.
Googles QUIC is a very interesting take on a transport for HTTP,
but it is unlikely to be the final word.
UDP seems to be a given for the "real thing", even if there is also
a layer of SCTP in there somewhere. TCP is single-stream and the
"browser" profile just cannot be squeezed into that.
UDP means DNS work to do service-discovery and all the carriers who
shape UDP because it's "just P2P" will be upset, etc.
TLS1.3 is the key technology here, it's all in the pot and needs to
simmer for some months. Likely more once the cryptographers stick
their forks in it.
The good news is that TLS1.3 can be done in a lot less code than
earlier versions.
The bad news is that they have not "as such" paid attention to
small/compact platforms (ie: IOS) and don't seem to care much for
them, focusing entirely on the "browser" profile.
There was widespread support for a "blind-caching" model, from
telco-kit people over CDNs to browsers and sites.
The model is essentially that an object is split in two parts.
The body is encrypted and put into a new object which gets no
'semantic' headers, ie: a random URL, a C-L and that's about it.
This object can be cached in caches nobody trusts, like a
cell-tower cache or a corporate gateway cache.
The server can send an "indirect object" to a client, the headers
are the real thing, but it has "Content-Type: oob" or some such and
the body contains "fetch the real body using this URL and decrypt
it with this KEY". The client then picks up that URL via the cache.
This scheme saves bandwidth, but not latency, and it has a large
number of use-cases, including video-on-demand, DRM etc.
It won't exactly be our core-case, but VCL is a damn good place to
control the policies for such use.
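For concreteness, the blind-caching split described above can be sketched
roughly as follows. This is purely an illustrative toy under my own
assumptions: the function names, the "message/oob" content type, and the
XOR-keystream stand-in "cipher" are all made up for the sketch and are not
part of any proposed spec or Varnish API.

```python
# Toy sketch of the "blind caching" model: split an object into
# (1) an encrypted blob under a random, non-semantic URL that any
# untrusted cache may hold, and (2) an indirect object carrying the
# real headers plus the fetch-URL and decryption key.
import hashlib
import os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Stand-in stream cipher: XOR with a SHA-256-derived keystream.
    # (A real scheme would use a proper AEAD cipher.)
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def make_blind_pair(headers: dict, body: bytes):
    key = os.urandom(32)
    blind_url = "/blind/" + os.urandom(16).hex()  # random URL, no semantics
    # Cacheable blob: no 'semantic' headers, just a Content-Length.
    cache_obj = {
        "url": blind_url,
        "headers": {"Content-Length": str(len(body))},
        "body": keystream_xor(body, key),
    }
    # Indirect object: the real headers, body says where/how to fetch.
    indirect = {
        "headers": {**headers, "Content-Type": "message/oob"},
        "body": {"fetch": blind_url, "key": key.hex()},
    }
    return indirect, cache_obj

def resolve(indirect, untrusted_cache):
    # Client side: fetch the blob from whatever cache is nearby, decrypt.
    blob = untrusted_cache[indirect["body"]["fetch"]]
    return keystream_xor(blob["body"], bytes.fromhex(indirect["body"]["key"]))
```

Note that the untrusted cache sees only a random URL, a length and
ciphertext, which is why it saves bandwidth but not the round-trip to
the origin for the indirect object.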
Speaking of VCL: Akamai is working on a "VCL to our own config"
converter. If they succeed, VCL will de-facto become the CDN
language. I wished them good luck and told them to keep in touch.
There is some hope that we can cooperate with various other people
on H2 test-tools but the bad news is that none of them had seen
anything like varnishtest before, so they hinted an expectation
that we would drag the heavy load since we're "clearly so much
ahead of everybody else".
H2 plaintext is stuck in "nobody wants to be the first" mode, so
we may end up being the first to deal with H1->H2 upgrade.
I think I got my point about "proportional response" as a smarter
political strategy across and that TLS-everywhere might be counter
productive, but it's probably too late to get meaningful change.
There were lots of semi-heated discussions, but many of them ended
abruptly with "Ohh, didn't think of *THAT*" when the other party
got a chance to elaborate and explain.
One example was that many browser/site people clearly didn't grasp
the high rates at which load-balancers and varnish operate. Their
"A modern CPU can do $so_much crypto per core" arguments looked a
lot less impressive once they realized that $so_much was an order
of magnitude below our reality.
Overall it was a very good meeting, but 4 days of concentrated HTTP
was clearly an overdose for most of us. We will probably have
more of that kind of pow-wows in the future.
... Zooming back in ...
So it is not obvious to me that struct http_conn will be all that
general in the future. It is a useful abstraction for "pull bytes
off a socket", and is used as such in H1 and PROXY, but it seems
pretty clear that H2 and later probably won't use it at all.
What we're really looking for here is the non-existent primitives
between the H2 semantic and H2 transport parts :-/
I'll try to come up with some more coherent thoughts.
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.