<div dir="ltr">Thanks for the context. So, if I understand what you're writing, the goal is to redirect to a node that has the object in cache?<div><br></div><div>The question is, what's the time needed to:</div><div>- send a request to the server on the LAN and receive the object</div><div>- send the redirect across the web, and wait for the client to send a new request, again across the web.</div><div><br></div><div>If the former is not at least an order of magnitude larger than the latter, I wouldn't bother.<br></div><div><br></div><div>The issues I have with your redirection scheme are that:</div><div>- IIUC, you are basically telling people where the backend is, instead of shielding it with Varnish</div><div>- it doesn't lower the backend traffic</div><div>- as I said, I'm not even sure the user experience is better/faster</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>-- <br></div>Guillaume Quintard<br></div></div></div>
<br><div class="gmail_quote">On Sun, Dec 18, 2016 at 9:21 PM, Anton Berezhkov <span dir="ltr"><<a href="mailto:bubonic.pestilence@gmail.com" target="_blank">bubonic.pestilence@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">We have 6 servers with 6TB on each one (the same video files). Currently they're hitting the iowait limit of their SATA disks (240 ops total). At the same time, each server can provide 500Mbit of guaranteed bandwidth.<br>
<br>
With the HDD restriction, each server provides about 320Mbit. There is also a problem with fragmentation, caused by the nature of HTML5 video players and HTTP (which allow requesting partial data with the Range header).<br>
<br>
Until now, we were scaling horizontally by duplicating these servers.<br>
<br>
There is also an option to get the same server with 2x480GB SSDs. As I found from the nginx logs, 98% of daily traffic lies in ≈800GB of files.<br>
<br>
What I want to achieve: to build a Varnish server with 2x480GB SSDs (no RAID), with about 800GB of storage for Varnish, which will reliably fill all the available bandwidth of a server.<br>
<br>
Also, I built a simple load balancer, which monitors each server's current eth0 load (in Mbps) and decides which one to redirect to (using the HTTP Location header).<br>
<br>
Request for video -> LB server: find the least loaded node (1 of 6) & redirect to it -> LB node: serve the request<br>
<br>
To add a new HDD LB node, I need to set up the server, sync the videos, and set up some additional software.<br>
<br>
My wish: add a new SSD LB node, set up & sync the Varnish config, and have it build the cached pool itself.<br>
<br>
Why do I need the redirect?<br>
1. It offloads the SSD node's bandwidth; a pass-through would consume the bandwidth of both servers and still cause iowait problems on the HDD node.<br>
2. It "guarantees" that uncached video will be taken from an HDD node, which always has all the videos.<br>
<br>
Currently all LB nodes are hosted at OVH and we're happy with them, especially because of the low price :)<br>
<br>
If you have any suggestions, I'll be glad to hear them :)<br>
<div class="HOEnZb"><div class="h5"><br>
> On Dec 18, 2016, at 10:59 PM, Guillaume Quintard <<a href="mailto:guillaume@varnish-software.com">guillaume@varnish-software.com</a>> wrote:<br>
><br>
> I think Jason is right in asking "why?". What do you want to achieve specifically with this behavior?<br>
><br>
> Varnish has streaming and request coalescing, meaning a request can be served as soon as data starts being available AND the backend doesn't suffer from simultaneous misses on the same object. I feel that should cover almost all your needs, so I'm curious about the use-case.<br>
><br>
> On Dec 18, 2016 20:27, "Jason Price" <<a href="mailto:japrice@gmail.com">japrice@gmail.com</a>> wrote:<br>
> It would be possible to do this with varnish... but I have to ask... why bother?<br>
><br>
> If the purpose is to offload the IO load, then varnish is good, but you need to prime the cache... TBH, what I'd do first is put one or a pair of varnish boxes really close to the overloaded box, and force all traffic to that server through the close varnish boxes... using the do_stream feature, you'll get stuff out there fairly quickly.<br>
><br>
> After that is working nicely, I'd layer in the further out varnish boxes which interact with the near-varnish boxes to get their data.<br>
><br>
> This works well at scale since the local caches offer whatever's useful local to them, and the 'near-varnish' boxes handle the 'global caching' world.<br>
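The tiering described above boils down to the edge Varnish treating the near-Varnish box as its backend rather than the origin. A minimal sketch in VCL 4.0 (the address is a placeholder, not from the thread):

```vcl
vcl 4.0;

# Edge Varnish: its backend is the near-Varnish box sitting next
# to the overloaded origin, not the origin itself.
backend near_varnish {
    .host = "10.0.0.2";   # placeholder: address of the inner-tier Varnish
    .port = "80";
}
```

The inner tier then points its own backend at the real origin, so origin traffic is bounded by the inner tier's miss rate.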
><br>
> This was how I arranged it at $PreviousGig: the outer CDN was getting an 85-90% cache hit ratio, and the inner tier was seeing 60% cache hit ratios. (The inner tier's ratio will depend heavily on how many outer tiers there are...)<br>
><br>
> On Sat, Dec 17, 2016 at 8:09 PM, Anton Berezhkov <<a href="mailto:bubonic.pestilence@gmail.com">bubonic.pestilence@gmail.com</a>> wrote:<br>
> This is how I semi-implemented: <a href="http://pastebin.com/drDP8JxP" rel="noreferrer" target="_blank">http://pastebin.com/drDP8JxP</a><br>
> Now I need to use a script which will run "curl -I -X PUT <url-to-put-into-cache>".<br>
><br>
><br>
> > On Dec 18, 2016, at 3:58 AM, Mark Staudinger <<a href="mailto:mark.staudinger@nyi.net">mark.staudinger@nyi.net</a>> wrote:<br>
> ><br>
> > Hi Anton,<br>
> ><br>
> > Have you looked into the "do_stream" feature of Varnish? It will begin serving the content to the visitor without waiting for the entire object to be downloaded and stored in cache. Set it in vcl_backend_response.<br>
> ><br>
> > <a href="https://github.com/mattiasgeniar/varnish-4.0-configuration-templates/blob/master/default.vcl" rel="noreferrer" target="_blank">https://github.com/<wbr>mattiasgeniar/varnish-4.0-<wbr>configuration-templates/blob/<wbr>master/default.vcl</a><br>
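A minimal sketch of what that looks like in Varnish 4.x (everything else left at defaults):

```vcl
sub vcl_backend_response {
    # Start delivering to the client as bytes arrive from the
    # backend, instead of waiting for the full object to land in cache.
    set beresp.do_stream = true;
}
```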
> ><br>
> > Cheers,<br>
> > Mark<br>
> ><br>
> > On Sat, 17 Dec 2016 19:05:48 -0500, Anton Berezhkov <<a href="mailto:bubonic.pestilence@gmail.com">bubonic.pestilence@gmail.com</a>> wrote:<br>
> ><br>
> >> Hello.<br>
> >><br>
> >> I switched to Varnish from Nginx for additional functionality and better control over request handling.<br>
> >> But I still can't implement what I want, which is the simple behaviour "redirect on MISS/PASS".<br>
> >> I want to use Varnish to deploy quick "CDN" servers in front of our mp4 video servers (used for HTML5 players), without needing to store all the files on these quick ones (SSD, up to 2x480GB of space; the full database is about 6TB).<br>
> >><br>
> >> Currently we have 6 servers with SATA HDDs, and they're hitting iowait like trucks :)<br>
> >><br>
> >> Examples:<br>
> >> - Request -> Varnish -> HIT: serve it using Varnish.<br>
> >> - Request -> Varnish -> MISS: start caching data from the backend, and instantly reply to the client: "Location: <a href="http://backend/$req.url" rel="noreferrer" target="_blank">http://backend/$req.url</a>"<br>
> >> - Request -> Varnish -> UPDATE: see `-> MISS` behaviour.<br>
> >><br>
> >> From my perspective, I should do this "detach & reply with a redirect" somewhere in `vcl_miss` OR `vcl_backend_fetch`, because, if I understood <a href="https://www.varnish-cache.org/docs/4.1/reference/states.html" rel="noreferrer" target="_blank">https://www.varnish-cache.org/docs/4.1/reference/states.html</a> correctly, I need vcl_backend_response to keep running in the background (as an additional thread) while doing return(synth(...)) to redirect the user.<br>
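For what it's worth, the redirect part alone can be sketched like this in VCL 4.x (backend.example.com is a placeholder hostname). Note the caveat: returning synth from vcl_miss aborts the backend fetch entirely, so on its own this does not prime the cache in the background — that is exactly the gap being discussed here:

```vcl
sub vcl_miss {
    # Don't fetch here; hand the client off to the origin instead.
    return (synth(750, "Redirect to origin"));
}

sub vcl_synth {
    if (resp.status == 750) {
        set resp.status = 302;
        set resp.http.Location = "http://backend.example.com" + req.url;
        return (deliver);
    }
}
```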
> >><br>
> >> A similar thing is "serving stale content while the object is updating".<br>
> >> But in my case it's "replying with a redirect while the object is updating".<br>
> >><br>
> >> Also, I'd really like to implement this without writing additional scripts. Why? I could do an external php/ruby checker/cache-pusher with nginx etc., but I'm scared of the performance hit :(<br>
> >> ______________________________<wbr>_________________<br>
> >> varnish-misc mailing list<br>
> >> <a href="mailto:varnish-misc@varnish-cache.org">varnish-misc@varnish-cache.org</a><br>
> >> <a href="https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc" rel="noreferrer" target="_blank">https://www.varnish-cache.org/<wbr>lists/mailman/listinfo/<wbr>varnish-misc</a><br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>