Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

David Birdsong david.birdsong at gmail.com
Fri Jan 15 22:19:33 CET 2010


On Fri, Jan 15, 2010 at 10:11 AM, Rodrigo Benzaquen
<rodrigo at mercadolibre.com> wrote:
> HAProxy is open source and works pretty well. You can also do load
> balancing based on a hash of the URL if you want.

aye, the development is pretty active also.  i asked for a consistent
hash option in haproxy and got one in less than 2 weeks (only
available in the dev version for now).

>
>
> On Fri, Jan 15, 2010 at 3:09 PM, Bendik Heltne <bheltne at gmail.com> wrote:
>>
>> > A couple more questions:
>> >
>> > (1) Are they any good strategies for splitting load across Varnish
>> > front-ends? Or is the common practice to have just one Varnish server?
>>
>> We have 3 servers. A bit overkill, but then we have redundancy even if
>> one fails. I guess 2 is the minimum if you have an important
>> site and a 99.5% uptime guarantee.
>>
>> > (2) How do people avoid single-point-of-failure for Varnish? Do people
>> > run Varnish on two servers, amassing similar local caches, but put
>> > something in front of the two Varnishes? Or round-robin-DNS?
>>
>> We use a load balancer from F5 called BigIP. It's not exactly free, but
>> there are free alternatives that will probably do much of the basic
>> stuff:
>> http://lcic.org/load_balancing.html
>>
>> - Bendik
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at projects.linpro.no
>> http://projects.linpro.no/mailman/listinfo/varnish-misc


