Softpurge and nearly immediate refresh?
Danila Vershinin
ciapnz at gmail.com
Thu Apr 12 07:24:42 UTC 2018
Couldn’t really find anything useful besides obj.hits, which indirectly tells us it’s a background fetch. So:
sub vcl_deliver {
    if (req.http.grace == "normal(limited)" && obj.hits == 0 && req.http.Varied-Header == "value1") {
        set req.http.Varied-Header = "value2";
        unset req.http.grace;
        return (restart);
    }
}
? :)
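
If upgrading is an option: Varnish 5.2+ exposes bereq.is_bgfetch on the backend side, which says directly whether a fetch is running in the background. A sketch (the header name below is made up for illustration):

```vcl
sub vcl_backend_fetch {
    if (bereq.is_bgfetch) {
        # This fetch was triggered by a grace hit, not directly by a client.
        set bereq.http.X-Is-Bgfetch = "true";
    }
}
```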
Best Regards,
Danila
> On 12 Apr 2018, at 00:24, Guillaume Quintard <guillaume at varnish-software.com> wrote:
>
> There's a beresp attribute for that last one (I'm on mobile, so I'm going to point you to man vcl :-))
>
> --
> Guillaume Quintard
>
> On Wed, Apr 11, 2018, 22:39 Danila Vershinin <ciapnz at gmail.com <mailto:ciapnz at gmail.com>> wrote:
> So I guess I’m looking into having one background fetch trigger another background fetch (to sequentially refresh different object variants). In this fashion:
>
> 1. A client PURGE request will do softpurge.softpurge(), switch the method to GET, and return (restart), which lands in the limited-grace logic whose return (deliver) fires the background fetch.
>
> 2. Then in background fetch:
> → in its vcl_deliver() the current object variant has already entered the cache, so we set the varied header to value2, remove the limited-grace flag and return (restart). This should continue revalidation for the other object variant.
> → we land inside the limited-grace logic again (as it’s a different object variant) and return (deliver) again, firing off the second background fetch (which refreshes the second object variant).
>
> So the standard grace logic + something like this:
>
> sub vcl_deliver {
>     if (req.http.grace == "normal(limited)" && req.http.Varied-Header == "value1") {
>         set req.http.Varied-Header = "value2";
>         unset req.http.grace;
>         return (restart);
>     }
> }
>
> However, it won’t work, at least because the req.http.grace flag will be set for both the background fetch and the request that kicked it off (it will be present in vcl_deliver of both).
> The question is: how can we tell if we are inside a background fetch?
>
> Best Regards,
> Danila
>
>> On 11 Apr 2018, at 12:37, Guillaume Quintard <guillaume at varnish-software.com <mailto:guillaume at varnish-software.com>> wrote:
>>
>> Hi,
>>
>> That's indeed correct, a purge will kill all variations, and the restart only fetches one.
>>
>> The req.hash_always_miss trick, however, only kills/revalidates one variation.
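>>
>> A minimal sketch of that trick (untested; ACL checks omitted):
>>
>> ```vcl
>> sub vcl_recv {
>>     if (req.method == "PURGE") {
>>         # Force a miss: lookup skips the cached copy, and the fresh
>>         # fetch replaces the matching variation on success.
>>         set req.method = "GET";
>>         set req.hash_always_miss = true;
>>     }
>> }
>> ```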
>>
>> At this moment, we have no way to purge/revalidate all the objects under one hash key.
>>
>> --
>> Guillaume Quintard
>>
>> On Wed, Apr 11, 2018 at 11:26 AM, Danila Vershinin <ciapnz at gmail.com <mailto:ciapnz at gmail.com>> wrote:
>> Hi Guillaume,
>>
>> A bit puzzled by something: if we use Vary by some header, am I correct that we need multiple restarts to refresh each object variation?
>>
>> Since the background fetch would only refresh the variation that matched the initial purge request.
>>
>> Sent from my iPhone
>>
>> On 9 Apr 2018, at 12:18, Guillaume Quintard <guillaume at varnish-software.com <mailto:guillaume at varnish-software.com>> wrote:
>>
>>> Hi,
>>>
>>> You can purge then set the method to GET then restart. Would that work for you?
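>>>
>>> A sketch of that (purge, switch to GET, restart; ACL checks omitted):
>>>
>>> ```vcl
>>> sub vcl_recv {
>>>     if (req.method == "PURGE") {
>>>         return (purge);
>>>     }
>>> }
>>>
>>> sub vcl_purge {
>>>     # After the purge, replay the request as a GET so a fresh
>>>     # copy is fetched right away.
>>>     set req.method = "GET";
>>>     return (restart);
>>> }
>>> ```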
>>>
>>> The other way is to use req.hash_always_miss, which will only revalidate if we are able to fetch a new object.
>>>
>>> --
>>> Guillaume Quintard
>>>
>>> On Sat, Apr 7, 2018 at 12:10 PM, Danila Vershinin <ciapnz at gmail.com <mailto:ciapnz at gmail.com>> wrote:
>>> Hi,
>>>
>>> What I work with:
>>>
>>> * Grace mode configured to be 60 seconds when backend is healthy
>>> * Using softpurge module to adjust TTL to 0 upon PURGE.
>>>
>>> The whole idea is to increase the chances that visitors will get a cached page after the cache was PURGEd for that page.
>>>
>>> Standard piece:
>>> sub vcl_hit {
>>>     if (obj.ttl >= 0s) {
>>>         # normal hit
>>>         return (deliver);
>>>     }
>>>
>>>     if (std.healthy(req.backend_hint)) {
>>>         # Backend is healthy. Limit age to 60s.
>>>         if (obj.ttl + 60s > 0s) {
>>>             set req.http.grace = "normal(limited)";
>>>             return (deliver);
>>>         } else {
>>>             return (fetch);
>>>         }
>>>     } else {
>>>         # ...
>>>     }
>>> }
>>> And use of softpurge:
>>>
>>> sub vcl_miss {
>>>     if (req.method == "PURGE") {
>>>         softpurge.softpurge();
>>>         return (synth(200, "Successful softpurge"));
>>>     }
>>> }
>>>
>>> sub vcl_hit {
>>>     if (req.method == "PURGE") {
>>>         softpurge.softpurge();
>>>         return (synth(200, "Successful softpurge"));
>>>     }
>>> }
>>>
>>>
>>> Current behaviour:
>>>
>>> * Send a PURGE for a cached page
>>> * A visitor hits the page within 60 seconds and sees the stale cached page (triggering a background refresh)
>>> * Further visits to the page show the refreshed page
>>>
>>> What I’m looking for:
>>>
>>> Trigger the background refresh right after the PURGE while still leveraging grace mode :) That is, serve stale cache only for as long as it takes to actually generate the new page, rather than waiting for up to 60 seconds:
>>>
>>> * upon PURGE: set TTL to 0 (softpurge) + trigger background page request (possible?)
>>> * serve stale cache only while the page is generated
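>>>
>>> A hedged sketch of that idea (softpurge, then replay the PURGE as a GET so the now-stale hit triggers the background fetch immediately; untested):
>>>
>>> ```vcl
>>> sub vcl_hit {
>>>     if (req.method == "PURGE") {
>>>         softpurge.softpurge();
>>>         # Instead of answering with synth(200), replay as a GET:
>>>         # the restarted request hits the now-stale object within
>>>         # grace, and the grace logic fires the background fetch.
>>>         set req.method = "GET";
>>>         return (restart);
>>>     }
>>> }
>>> ```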
>>>
>>> I could have lowered the “healthy backend grace period” to less than 60s, but I’m basically checking whether it’s possible to refresh “nearly” immediately in this kind of setup.
>>>
>>> Hope that made sense :)
>>>
>>> Best Regards,
>>> Danila
>>>
>>>
>>> _______________________________________________
>>> varnish-misc mailing list
>>> varnish-misc at varnish-cache.org <mailto:varnish-misc at varnish-cache.org>
>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc <https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc>
>>>
>>
>