Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

Michael S. Fischer michael at dynamine.net
Mon Jan 18 21:47:57 CET 2010



On Jan 18, 2010, at 12:31 PM, Ken Brownfield <kb at slide.com> wrote:

> On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
>
>> On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne <bheltne at gmail.com>  
>> wrote:
>>
>> Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
>> memory and the backends have a much easier life than before Varnish.
>> We are about to upgrade RAM on the Varnish boxes, and eventually we
>> can switch to disk cache if needed.
>>
>> If you receive more than 100 requests/sec per Varnish instance and  
>> you use a disk cache, you will die.
>
> I was surprised by this; it seems like grossly irresponsible  
> guidance, given how large the installed base is that handles  
> thousands of requests per second quite happily.
>
> Perhaps there's missing background for this statement?  Do you mean  
> swap instead of Varnish file/mmap storage?  "Disk" could just as  
> easily mean SSD these days.  Even years ago, with Squid on crappy  
> EIDE drives, you could manage 1,000-2,000 requests per second.

I should have been clearer.  If you overcommit memory and use disk  
storage, you will die.  Even SSDs are a problem, as their write  
latencies are high.
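To illustrate the distinction being discussed: Varnish selects its storage backend with varnishd's -s flag. The sketch below is not from this thread; the sizes and paths are hypothetical examples, and the point is simply to size the cache so it fits in RAM rather than spilling to disk.

```shell
# Memory-only cache: keep the size comfortably below physical RAM
# so the working set never hits disk (avoids the overcommit problem).
varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,4G

# File-backed cache: the cache lives in a mmap'ed file, so any
# portion that exceeds available RAM incurs disk I/O under load.
# Path and size here are illustrative only.
varnishd -a :6081 -f /etc/varnish/default.vcl \
    -s file,/var/lib/varnish/storage.bin,50G
```

The failure mode described above is the second form with a cache far larger than RAM: once the hot object set no longer fits in memory, every miss and eviction turns into disk I/O, and throughput collapses.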

--Michael



More information about the varnish-misc mailing list