Stale-while-Validate for varnish

Richard rzuidhof at gmail.com
Mon Jun 4 16:57:05 CEST 2012


On Mon, Jun 4, 2012 at 2:49 PM, Arun Dobriyal
<arundobriyaliitkgp at gmail.com>wrote:

> Hi,
>
> 1. I want a stale-while-revalidate feature (available in some reverse
> proxies like Squid) for my Varnish server. In a normal setting with
> grace time, Varnish makes the first client wait while the other clients
> are served stale content. But I want all clients to be served stale
> content, while the cache is refreshed asynchronously in the background
> by Varnish (that way no one is kept waiting).
>
> Varnish doesn't support this as of now. Is there any tweak to achieve it?
>
No, this is not supported. I have heard it is a bit too complicated to
implement, and that the current functionality is usually good enough, so
there has not been much demand for it.

> 2. I am planning a tweak for the above functionality, which is as follows:
> for the first request, Varnish normally sends the request to the backend,
> where it is reported as a cache miss and then validated (I used
> varnishlog to see that it reports vcl_miss for the first request),
> whereas the other queued requests are treated as a normal cache hit and
> delivered immediately.
>
> Now, I want to stop the first request from waiting for the validation, so
> I am planning to somehow find out whether the current request is for a
> stale object within the grace time. If so, I will serve the stale
> response for it quickly, and since I want to revalidate the cache at the
> backend asynchronously, I will send this URL to an asynchronous queue
> which will refresh it whenever free.
>
> Now my question is: how can I find out whether the request is the first
> one for an object that is stale but within the grace time? If I am able
> to find this, I can probably create stale-while-revalidate for Varnish.
>
You can do a trick like this with three backend definitions. The first
backend should have a very low first_byte_timeout and
between_bytes_timeout. The second backend is a dummy (unhealthy) backend;
when it is selected, stale content is served from the cache if available.
The third backend is a normal backend that handles the refresh requests
generated by the backend server. The downside of this approach is that the
backend server receives more requests, because the first request to it is
cancelled and disregarded by Varnish. This approach also really needs a
mechanism to asynchronously refresh content that is served stale from the
cache; otherwise the stale content will become too old.
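Something along these lines might work as a starting point. This is an
untested sketch in Varnish 3-era VCL; the hostnames, ports, grace values
and the X-Refresh header are just placeholders for your own setup, not
anything Varnish defines:

    backend quickfail {
        .host = "127.0.0.1";
        .port = "8080";
        # Give up on the real fetch almost immediately so the first
        # client is not kept waiting for the backend.
        .first_byte_timeout = 1s;
        .between_bytes_timeout = 1s;
    }

    backend dummy {
        # Nothing listens on this port, so the probe keeps the backend
        # marked sick; with a long req.grace Varnish then serves the
        # stale object instead of trying to fetch.
        .host = "127.0.0.1";
        .port = "9999";
        .probe = {
            .url = "/";
            .interval = 30s;
            .timeout = 1s;
            .window = 1;
            .threshold = 1;
        }
    }

    backend normal {
        .host = "127.0.0.1";
        .port = "8080";
    }

    sub vcl_recv {
        if (req.http.X-Refresh) {
            # Hypothetical header set by your asynchronous refresher;
            # these requests use the normal backend and must not accept
            # stale objects, so they repopulate the cache.
            set req.backend = normal;
            set req.grace = 0s;
        } else if (req.restarts == 0) {
            set req.backend = quickfail;
            set req.grace = 1h;
        } else {
            # Restarted after the quick fetch failed: the sick dummy
            # backend lets grace mode deliver the stale object without
            # waiting for a new fetch.
            set req.backend = dummy;
            set req.grace = 1h;
        }
    }

    sub vcl_fetch {
        # Keep objects around after expiry so they can be served stale.
        set beresp.grace = 1h;
    }

    sub vcl_error {
        # The quick fetch timed out or failed; retry against the dummy
        # backend so the stale copy can be delivered.
        if (req.restarts == 0) {
            return (restart);
        }
    }

The refresher itself has to live outside Varnish, for example a queue
worker or cron job that re-requests the stale URLs with the X-Refresh
header so the cache is repopulated through the normal backend.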

Kind regards,

Richard Zuidhof

