Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

Ken Brownfield kb+varnish at slide.com
Mon Jan 18 21:34:05 CET 2010


On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:

> On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne <bheltne at gmail.com> wrote:
> 
> Our Varnish servers have ~120,000-150,000 objects cached in ~4GB of
> memory, and the backends have a much easier life than before Varnish.
> We are about to upgrade RAM on the Varnish boxes, and we can switch
> to disk cache later if needed.
> 
> If you receive more than 100 requests/sec per Varnish instance and you use a disk cache, you will die.  

I was surprised by this; it strikes me as grossly irresponsible guidance, given how large the installed base is that happily serves thousands of requests per second.

Perhaps there's missing background for this statement?  Do you mean swap, rather than Varnish's file/mmap storage?  "Disk" could just as easily mean SSD these days.  Even years ago, with Squid on crappy EIDE drives, you could manage 1,000-2,000 requests per second.
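
For anyone following along, the difference under discussion is just varnishd's -s storage argument.  A rough sketch (the listen/backend addresses, path, and sizes are placeholders of mine, not Bendik's actual config):

    # RAM-only cache, roughly the ~4GB malloc setup described above
    varnishd -a :80 -b localhost:8080 -s malloc,4G

    # Disk-backed (file/mmap) cache; /var/lib/varnish/cache.bin and 40G
    # are made-up example values -- put the file on a fast disk or SSD
    varnishd -a :80 -b localhost:8080 -s file,/var/lib/varnish/cache.bin,40G

The file backend is mmap'd, so the hot part of the cache is still served from the kernel page cache; throughput depends on how much of the working set fits in RAM and how fast the disk is, not on some fixed requests-per-second ceiling.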
-- 
Ken

