suggesting bugwash topic: hit-for-pass vs. hit-for-miss
Poul-Henning Kamp
phk at phk.freebsd.dk
Mon Oct 3 10:13:44 CEST 2016
--------
In message <af719b48-41a3-5317-9c9d-e926c3ca0f14 at schokola.de>, Nils Goroll writes:
This is agenda item #1 for bugwash today, please everybody prepare.
Poul-Henning
>Hi,
>
>Geoff and myself would like to move ahead with the design aspects of this issue,
>so we'd appreciate it if we could discuss it during the next bugwash.
>
>It would be great if anyone interested could review the background info - my
>initial email, quoted in full below, has the top-level overview.
>
>For the option of a vcl_hit based decision,
>https://github.com/varnishcache/varnish-cache/issues/1799 is relevant also.
>
>Thank you, Nils
>
>On 02/09/16 20:10, Nils Goroll wrote:
>> (quick brain dump before I need to rush out)
>>
>> Geoff discovered this interesting consequence of a recent important change of
>> phk's, and we just spent an hour discussing it:
>>
>> Before commit 9f272127c6fba76e6758d7ab7ba6527d9aad98b0, a hit-for-pass object
>> led to a pass; now it's a miss. IIUC the discussions we had on a trip to
>> Amsterdam, phk's main motivation was to eliminate the potentially deadly effect
>> unintentionally created hfps had on cache efficiency: no matter what, for the
>> lifetime of the hfp, all requests hitting that object became passes.
>>
>> so, in short:
>>
>> - previously: an uncacheable response wins and sticks for its TTL
>> - now: a cacheable response wins and sticks for its TTL
>>
>> or even shorter (see the VCL sketch below):
>>
>> - previously: hit-for-pass
>> - now: hit-for-miss
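>>
>> For reference, a minimal VCL sketch of where such a marker object comes
>> from (illustrative only, loosely modelled on the builtin VCL, not the
>> exact builtin code); the marking itself is unchanged, only what a later
>> lookup on the object does has changed:
>>
>> sub vcl_backend_response {
>>     if (beresp.ttl <= 0s ||
>>         beresp.http.Cache-Control ~ "(?i)no-store") {
>>         # Keep an uncacheable marker object around for two minutes.
>>         # Before 9f27212: later lookups on it turn into a pass (hit-for-pass).
>>         # After 9f27212:  later lookups on it turn into a miss (hit-for-miss).
>>         set beresp.uncacheable = true;
>>         set beresp.ttl = 120s;
>>     }
>>     return (deliver);
>> }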
>>
>> From the perspective of a cache, the "now" case seems clearly favorable, but
>> Geoff has discovered that the reverse is true for a case that is important to
>> one of our projects:
>>
>> - varnish is running in "do what the backend says" mode
>> - the backend devs know when to make responses uncacheable
>> - a huge (600MB) backend response is uncacheable, but client-validatable
>>
>> so this is the case for the previous semantics:
>>
>> - 1st request creates the hfp
>> - 2nd request from the client carries INM (If-None-Match)
>> - it gets passed to the backend with the INM intact
>> - the 304 from the backend goes to the client
>>
>> What we have now is:
>>
>> - 1st request creates the hfm (hit-for-miss)
>> - 2nd request is a miss
>> - the INM gets stripped before the fetch
>> - the backend sends the 600MB body unnecessarily
>>
>> We've thought about a couple of options which I want to write down before they
>> expire from my cache:
>>
>> * decide in vcl_hit
>>
>> sub vcl_hit {
>>     if (obj.uncacheable) {
>>         # obj.http.Criterium is a placeholder for whatever marks the
>>         # response as uncacheable but client-validatable.
>>         if (obj.http.Criterium) {
>>             # return(miss) from vcl_hit: see issue #1799
>>             return (miss);
>>         } else {
>>             return (pass);
>>         }
>>     }
>> }
>>
>> * Do not strip INM/IMS on a miss and add a bereq property telling whether it
>>   came from a hit-for-miss object (rough sketch below)
>>
>> - core code keeps INM/IMS
>> - builtin.vcl strips them in vcl_miss
>> - can check for hitpass in vcl_miss
>> - any 304 backend response forced as uncacheable
>> - interesting detail: can it still create a hfp object ?
>>
>> BUT: how would we know in vcl_miss whether we are seeing
>> *client* INM/IMS or Varnish-generated INM/IMS?
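>>
>> A rough VCL sketch of how this option could look, assuming a hypothetical
>> req.is_hitmiss flag (no such variable exists today; all names are purely
>> illustrative):
>>
>> sub vcl_miss {
>>     # Hypothetical flag: true when this miss was caused by a hit-for-miss
>>     # object (illustrative only, not an existing VCL variable).
>>     if (!req.is_hitmiss) {
>>         # Plain miss: drop the client's conditionals so the backend sends
>>         # a full object that can actually be inserted into the cache.
>>         unset req.http.If-None-Match;
>>         unset req.http.If-Modified-Since;
>>     }
>>     # For a hit-for-miss driven miss the client's INM/IMS stay in place,
>>     # so a backend 304 can be relayed to the client.
>>     return (fetch);
>> }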
>>
>> So at this point I only see the YAS option.
>>
>> Nils
>>
>
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.