hit-for-pass vs. hit-for-miss
Geoffrey Simmons
geoff at uplex.de
Wed Sep 7 21:41:30 CEST 2016
This is worth responding to from vacation.
The "specific requirement we have" is a consequence of applying the HTTP protocol the way it was meant to be used -- responses specify their cacheability, without VCL having to intervene to classify which requests go to lookup or pass. Typically with a sequence of regex matched against URL patterns in vcl_recv.
That may indeed be unusual, but I see that a sad commentary on the state of web developers' knowledge about caching and HTTP. Not as somebody's peculiar requirement.
It would strike me as rather odd if a caching proxy has to treat it as a special case when backends actually do something the right way (like always set Cache-Control to determine TTLs).
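
To make the contrast concrete, here is a minimal VCL 4.0 sketch of this backend-driven setup -- vcl_recv does no URL classification, and vcl_backend_response trusts the TTL Varnish derives from Cache-Control, only recording that a response is uncacheable. The backend address and the exact Cache-Control test are illustrative assumptions, not anything prescribed in this thread:

    vcl 4.0;

    backend default {
        .host = "127.0.0.1";    # illustrative backend, not from the thread
        .port = "8080";
    }

    sub vcl_recv {
        # No regexes against req.url deciding lookup vs. pass; fall
        # through to the built-in vcl_recv, which goes to lookup for
        # plain GET/HEAD requests without cookies or authorization.
    }

    sub vcl_backend_response {
        # Varnish has already derived beresp.ttl from Cache-Control
        # (s-maxage/max-age). If the backend declared the response
        # uncacheable, remember that for a while instead of
        # classifying the request up front.
        if (beresp.ttl <= 0s ||
            beresp.http.Cache-Control ~ "(?i)no-store|private") {
            set beresp.uncacheable = true;
            set beresp.ttl = 120s;
        }
    }

Whether setting beresp.uncacheable here produces hit-for-pass or hit-for-miss behaviour on the next request is exactly the change under discussion below.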
Geoff
Sent from my iPhone
> On Sep 7, 2016, at 1:01 PM, Nils Goroll <slink at schokola.de> wrote:
>
> Hi,
>
> TL;DR please shout if you think you need the choice between hit-for-pass and
> hit-for-miss.
>
>
>
>> On 02/09/16 20:10, Nils Goroll wrote:
>> - previously: hit-for-pass
>> - now: hit-for-miss
>
> On IRC, phk has suggested that we could bring back hit-for-pass in a vmod *)
>
> I would like to understand whether bringing back hit-for-pass is a specific
> requirement we have (in which case a vmod incurring quite some overhead would be
> the right thing to do) or whether others have more cases that would justify a
> generic solution in varnish core like this one:
>
>> sub vcl_hit {
>>     if (obj.uncacheable) {
>>         if (obj.http.Criterium) {
>>             return (miss);
>>         } else {
>>             return (pass);
>>         }
>>     }
>> }
>
> Thank you, Nils
>
>
> *) using a secondary cache index (maybe as in the xkey vmod), mark objects for
> which we want to pass in vcl_backend_response, check in vcl_recv whether the
> object is marked and return(pass) if so (a rough sketch follows after this message).
>
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
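
The footnote's vmod idea, sketched purely hypothetically -- "passmark" and its functions mark() and is_marked() are invented names for illustration, not an existing vmod or API; a real implementation would need a secondary cache index along the lines of the xkey vmod, plus the overhead Nils mentions:

    vcl 4.0;

    import passmark;    # hypothetical vmod keeping a secondary index of cache keys

    sub vcl_backend_response {
        if (beresp.http.Cache-Control ~ "(?i)no-store|private") {
            # Remember this object's cache key as "pass" for a while,
            # alongside the usual uncacheable marker object.
            passmark.mark(120s);
            set beresp.uncacheable = true;
            set beresp.ttl = 120s;
        }
    }

    sub vcl_recv {
        # If an earlier response marked this key, bypass the cache
        # entirely -- i.e. restore hit-for-pass semantics.
        if (passmark.is_marked()) {
            return (pass);
        }
    }

(Backend definition omitted for brevity; the vmod itself is the part that would have to be written.)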