Deliver on HIT, otherwise redirect using "503; Location: ..."

Guillaume Quintard guillaume at varnish-software.com
Sun Dec 18 20:59:26 CET 2016


I think Jason is right in asking "why?". What do you want to achieve
specifically with this behavior?

Varnish has streaming and request coalescing, meaning a request can be
served as soon as data starts being available AND the backend doesn't
suffer from simultaneous misses on the same object. I feel that should
cover almost all your needs, so I'm curious about the use-case.
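
For reference, a minimal sketch of what that looks like in VCL (the backend
definition is hypothetical; streaming is already the default in Varnish 4.x,
and request coalescing needs no VCL at all):

    vcl 4.0;

    backend default {
        .host = "origin.example.com";   # hypothetical overloaded origin
        .port = "80";
    }

    sub vcl_backend_response {
        # Stream the body to waiting clients as soon as bytes arrive from
        # the backend, instead of waiting for the whole object to be
        # fetched. (This is already the default in Varnish 4.x.)
        set beresp.do_stream = true;

        # Request coalescing needs nothing here: concurrent misses for the
        # same object are parked on a waiting list and only one backend
        # fetch is made.
    }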

On Dec 18, 2016 20:27, "Jason Price" <japrice at gmail.com> wrote:

> It would be possible to do this with Varnish... but I have to ask... why
> bother?
>
> If the purpose is to offload the I/O load, then Varnish is a good fit, but you
> need to prime the cache... TBH, what I'd do first is put one or a pair of
> Varnish boxes really close to the overloaded box, and force all traffic to
> that server through those nearby Varnish boxes... using the do_stream feature,
> you'll get content out there fairly quickly.
>
> Once that is working nicely, I'd layer in the further-out Varnish boxes,
> which fetch their data from the near-Varnish boxes.
>
> This works well at scale, since the local caches serve whatever is useful
> locally, and the 'near-Varnish' boxes handle the 'global caching'
> world.
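>
> A minimal sketch of that tiering, with hypothetical hostnames (each outer
> Varnish simply uses a near-Varnish box as its backend instead of the
> overloaded origin):
>
>     vcl 4.0;
>
>     backend near_varnish {
>         .host = "near-varnish-1.example.internal";  # hypothetical inner tier
>         .port = "6081";
>     }
>
>     sub vcl_recv {
>         # All edge traffic flows through the inner tier, which in turn
>         # talks to the origin and keeps the "global" cache warm.
>         set req.backend_hint = near_varnish;
>     }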
>
> This was how I arranged it at $PreviousGig: the outer CDN tier was getting an
> 85-90% cache hit ratio, and the inner tier was seeing a 60% cache hit
> ratio.  (The inner tier's ratio will depend heavily on how many outer
> tiers there are...)
>
> On Sat, Dec 17, 2016 at 8:09 PM, Anton Berezhkov <
> bubonic.pestilence at gmail.com> wrote:
>
>> This is how I semi-implemented it: http://pastebin.com/drDP8JxP
>> Now I need a script which will run "curl -I -X PUT
>> <url-to-put-into-cache>".
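>>
>> For illustration, a minimal sketch of one way such a PUT-priming hook could
>> look (the pastebin is not reproduced here; the ACL and the rewrite to GET
>> are assumptions, not necessarily what the linked VCL does):
>>
>>     acl primers {
>>         "127.0.0.1";   # hypothetical hosts allowed to prime the cache
>>     }
>>
>>     sub vcl_recv {
>>         # A "curl -I -X PUT <url>" from a priming host is turned into a
>>         # normal cacheable GET lookup, which populates the cache.
>>         if (req.method == "PUT" && client.ip ~ primers) {
>>             set req.method = "GET";
>>             return (hash);
>>         }
>>     }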
>>
>>
>> > On Dec 18, 2016, at 3:58 AM, Mark Staudinger <mark.staudinger at nyi.net>
>> wrote:
>> >
>> > Hi Anton,
>> >
>> > Have you looked into the "do_stream" feature of Varnish?  This will
>> begin serving the content to the visitor without waiting for the entire
>> object to be downloaded and stored in cache.  Set in vcl_backend_response.
>> >
>> > https://github.com/mattiasgeniar/varnish-4.0-configuration-templates/blob/master/default.vcl
>> >
>> > Cheers,
>> > Mark
>> >
>> > On Sat, 17 Dec 2016 19:05:48 -0500, Anton Berezhkov <
>> bubonic.pestilence at gmail.com> wrote:
>> >
>> >> Hello.
>> >>
>> >> I switched to Varnish from Nginx for additional functionality and better
>> control over request handling.
>> >> But I still can't implement what I want, which is a simple behaviour:
>> "redirect on MISS/PASS".
>> >> I want to use Varnish for deploying quick "CDN" servers for our
>> MP4 video servers (used for HTML5 players), without the need to store all files
>> on these quick boxes (SSD, up to 2x480 GB of space, while the full data set is
>> about 6 TB).
>> >>
>> >> Currently we have 6 servers with SATA HDDs, and they are hitting iowait like
>> trucks :)
>> >>
>> >> Examples:
>> >> - Request -> Varnish -> HIT: serve it using Varnish.
>> >> - Request -> Varnish -> MISS: start caching the data from the backend, and
>> instantly reply to the client with "Location: http://backend/$req.url".
>> >> - Request -> Varnish -> UPDATE: same as the MISS behaviour.
>> >>
>> >> From my perspective, I should do this "detach & reply with a redirect"
>> somewhere in `vcl_miss` OR `vcl_backend_fetch`, because, if I understood
>> https://www.varnish-cache.org/docs/4.1/reference/states.html correctly,
>> I need vcl_backend_response to keep running in the background (as an
>> additional thread) while doing return(synth(...)) to redirect the user.
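>> >>
>> >> For what it's worth, a minimal sketch of the redirect part (the backend
>> >> hostname is hypothetical). Note that returning synth() from vcl_miss skips
>> >> the backend fetch entirely, so the object is not cached by that request;
>> >> the cache would still have to be primed separately, e.g. via a priming
>> >> request such as the PUT hook sketched earlier in the thread:
>> >>
>> >>     sub vcl_miss {
>> >>         # Don't wait for the fetch; hand the client a redirect instead.
>> >>         return (synth(750, "Redirect to backend"));
>> >>     }
>> >>
>> >>     sub vcl_synth {
>> >>         if (resp.status == 750) {
>> >>             set resp.status = 302;
>> >>             set resp.http.Location = "http://backend.example.com" + req.url;
>> >>             return (deliver);
>> >>         }
>> >>     }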
>> >>
>> >> A similar thing is "serving stale content while the object is updating";
>> >> in my case, it is "replying with a redirect while the object is updating".
>> >>
>> >> Also, I hope to implement this without writing additional scripts.
>> Why? I could do an external PHP/Ruby checker/cache-pusher with Nginx etc.,
>> but I'm scared of a performance downgrade :(