Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?
quasirob at googlemail.com
Tue Jan 19 14:25:38 CET 2010
2010/1/15 Rob S <rtshilston at gmail.com>
> John Norman wrote:
> > Folks,
> > A couple more questions:
> > (1) Are they any good strategies for splitting load across Varnish
> > front-ends? Or is the common practice to have just one Varnish server?
> > (2) How do people avoid single-point-of-failure for Varnish? Do people
> > run Varnish on two servers, amassing similar local caches, but put
> > something in front of the two Varnishes? Or round-robin-DNS?
> We're running with two instances and round-robin DNS. The varnish
> servers are massively underused, and splitting the traffic also means we
> get half the hit rate. But it avoids the SPOF.
> Is anyone running LVS or similar in front of Varnish and can share their experiences?
We run two Varnish servers behind a NetScaler load balancer to eliminate the
SPOF. It works fine; as the previous poster mentions, you do lower your hit
rate, but not by as much as I expected.
As far as load is concerned, we could easily use just one server and it
would probably still be 99% idle.
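For anyone without a NetScaler, the same two-backends-behind-a-balancer setup can be sketched with HAProxy. This fragment is not from the thread; the IP addresses, ports, and health-check URL are hypothetical placeholders:

```
frontend http-in
    bind *:80
    default_backend varnish_pool

backend varnish_pool
    balance roundrobin
    # hypothetical health-check URL; point it at something Varnish serves
    option httpchk GET /varnish-ping
    server varnish1 10.0.0.1:6081 check
    server varnish2 10.0.0.2:6081 check
```

With `check` enabled, a dead Varnish is taken out of rotation automatically, which is what removes the SPOF.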
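The selection logic a balancer (or a round-robin-DNS-aware client) applies can be sketched in a few lines: rotate through the backends, skipping any that fail a health check. This is a minimal illustration, not anything from the thread; the class and backend names are made up:

```python
class RoundRobinPool:
    """Round-robin backend selection with health-based failover."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.i = 0  # index of the next backend to try

    def pick(self, is_up=lambda backend: True):
        """Return the next healthy backend, skipping ones marked down."""
        n = len(self.backends)
        for _ in range(n):
            backend = self.backends[self.i % n]
            self.i += 1
            if is_up(backend):
                return backend
        raise RuntimeError("no healthy backends")
```

If one Varnish goes down, requests simply skip to the other, at the cost of the split cache the previous posters describe.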