<div class="gmail_quote">2010/1/15 Rob S <span dir="ltr"><<a href="mailto:rtshilston@gmail.com">rtshilston@gmail.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im">John Norman wrote:<br>
> Folks,<br>
><br>
> A couple more questions:<br>
><br>
> (1) Are they any good strategies for splitting load across Varnish<br>
> front-ends? Or is the common practice to have just one Varnish server?<br>
><br>
> (2) How do people avoid single-point-of-failure for Varnish? Do people<br>
> run Varnish on two servers, amassing similar local caches, but put<br>
> something in front of the two Varnishes? Or round-robin-DNS?<br>
><br>
> We're running two instances with round-robin DNS. The Varnish servers
> are massively underused, and splitting the traffic also means we get
> half the hit rate, but it avoids the SPOF.
>
> Is anyone running LVS or similar in front of Varnish who can share
> their experience?

We run two Varnish servers behind a NetScaler load balancer to eliminate
the SPOF. It works fine; as the previous poster mentions, you lower your
hit rate, but not by as much as I expected.
As far as load is concerned, we could easily use just one server and it
would probably still be 99% idle.
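
For anyone weighing the round-robin DNS option, here's a minimal sketch of
what it looks like from the client side. The hostname www.example.com, the
addresses it resolves to, and the HEAD-against-/ check are all hypothetical
placeholders, not anyone's real setup; the point is just that the name
carries one A record per Varnish front-end and each front-end answers on
its own:

import socket
import http.client

HOSTNAME = "www.example.com"  # hypothetical round-robin name

def frontend_addresses(hostname):
    """Return each IPv4 address the round-robin name resolves to."""
    infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({sockaddr[0] for *_rest, sockaddr in infos})

def check_frontend(address):
    """HEAD / against one Varnish front-end, sending the shared Host header."""
    conn = http.client.HTTPConnection(address, 80, timeout=5)
    try:
        conn.request("HEAD", "/", headers={"Host": HOSTNAME})
        return conn.getresponse().status
    finally:
        conn.close()

if __name__ == "__main__":
    for addr in frontend_addresses(HOSTNAME):
        print(addr, check_frontend(addr))

Each cache fills independently, which is where the halved hit rate the
earlier posts mention comes from: both front-ends end up fetching the same
objects from the backend.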