Safety of setting "beresp.do_gzip" in vcl_backend_response

Nigel Peck np.lists at
Thu Apr 13 22:45:24 CEST 2017

Thanks for this, great to have the detailed info. So it looks like the 
most efficient solution is going to be to "do_gzip" uncacheable 
responses only if the client supports it, which also means copying (and 
modifying) the built-in code into my VCL rather than falling through to 
it. Got it, thanks.
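
For reference, a sketch of what that might look like (untested; the 
X-Client-AE header name is made up, and the cacheability checks are an 
abridged copy of the built-in vcl_backend_response, not the full set):

```vcl
sub vcl_recv {
    # Varnish normalizes Accept-Encoding before the backend fetch,
    # so keep a copy of what the client actually sent.
    # (X-Client-AE is a made-up header name.)
    set req.http.X-Client-AE = req.http.Accept-Encoding;
}

sub vcl_backend_response {
    # Abridged copy of the built-in cacheability checks.
    if (beresp.ttl <= 0s ||
        beresp.http.Set-Cookie ||
        beresp.http.Cache-Control ~ "no-cache|no-store|private") {
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        # Only now decide on gzip: compress an uncacheable response
        # only when the client can take it as-is and the backend
        # did not already compress it.
        if (bereq.http.X-Client-AE ~ "gzip" &&
            !beresp.http.Content-Encoding) {
            set beresp.do_gzip = true;
        }
    }
    # Do not fall through to the built-in vcl_backend_response.
    return (deliver);
}
```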


On 13/04/2017 04:27, Dridi Boukelmoune wrote:
> On Thu, Apr 13, 2017 at 8:44 AM, Guillaume Quintard
> <guillaume at> wrote:
>> You are right, subsequent requests will just be passed to the backend, so no
>> gzip manipulation/processing will occur.
> I had no idea [1] so I wrote a test case [2] to clear up my doubts:
>      varnishtest "uncacheable gzip"
>      server s1 {
>          rxreq
>          txresp -bodylen 100
>      } -start
>      varnish v1 -vcl+backend {
>          sub vcl_backend_response {
>              set beresp.do_gzip = true;
>              set beresp.uncacheable = true;
>              return (deliver);
>          }
>      } -start
>      client c1 {
>          txreq
>          rxresp
>      } -run
>      varnish v1 -expect n_gzip == 1
>      varnish v1 -expect n_gunzip == 1
> Despite the fact that the response is not cached, it is actually
> gzipped, because in all cases backend responses are buffered through
> storage (in this case Transient). It means that for clients that don't
> advertise gzip support like in this example, on passed transactions
> you will effectively waste cycles on doing both on-the-fly gzip and
> gunzip for a single client transaction.
> That being said, it might be worth it if you have a high rate of
> non-cacheable content that is nevertheless suitable for compression:
> you consume less transient storage. I'd say it's a trade-off between
> CPU and memory; depending on which you wish to conserve, you can
> decide how to go about it.
> You can even do on-the-fly gzip on passed transactions only if the
> client supports it and the backend doesn't, so that you save storage
> and bandwidth, at the expense of CPU time you'd have consumed on the
> client side if you wanted to save bandwidth anyway.
> The only caveat I see is the handling of the built-in VCL:
>> I am wondering if it is safe to do this even on responses that may
>> subsequently get set as uncacheable by later code?
> If you let your VCL flow through the built-in rules, then you have no
> way to cancel the do_gzip if the response is marked as uncacheable.
> Dridi
> [1] well I had an idea that turned out to be correct, but wasn't sure
> [2] tested only with 5.0, but I'm convinced it is stable behavior for 4.0+
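
Dridi's suggestion above -- gzip on the fly for passed transactions 
only when the client supports it and the backend didn't compress -- 
might look roughly like this in VCL (an untested sketch; since Varnish 
normalizes the Accept-Encoding header sent to the backend, the client's 
original header is assumed to have been saved in vcl_recv into a custom 
X-Client-AE header):

```vcl
sub vcl_backend_response {
    # Compress on the fly only for passed (uncacheable) transactions
    # where the client accepts gzip and the backend sent an
    # uncompressed body. (bereq.http.X-Client-AE is assumed to hold
    # the client's original Accept-Encoding, copied in vcl_recv.)
    if (bereq.uncacheable &&
        bereq.http.X-Client-AE ~ "gzip" &&
        !beresp.http.Content-Encoding) {
        set beresp.do_gzip = true;
    }
}
```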

More information about the varnish-misc mailing list