How to set up Varnish not to be a single point of failure

Bedis 9 bedis9 at gmail.com
Fri Jan 28 15:50:44 CET 2011


On Fri, Jan 28, 2011 at 3:01 PM, Gresens, August
<AGresens at scholastic.com> wrote:
> We have two varnish servers behind the load balancer (nginx). Each varnish server has an identical configuration and load balances the actual backends (web servers).
>
> Traffic for particular URL patterns is routed to one of the varnish servers by the load balancer. For each URL pattern the secondary source is the alternate varnish server. In this way we partition traffic between the two varnish servers and avoid redundant caching, while the second one acts as a failover if the primary goes down.
>
> Best,
>
> A
>
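A minimal nginx sketch of the partition-plus-failover idea described above might look something like this (host names, ports, and the URL split are placeholders, not the actual configuration):

    upstream varnish_a {
        server varnish1.example.com:6081;
        server varnish2.example.com:6081 backup;   # only used if varnish1 is down
    }

    upstream varnish_b {
        server varnish2.example.com:6081;
        server varnish1.example.com:6081 backup;   # only used if varnish2 is down
    }

    server {
        listen 80;
        # each URL pattern has its own primary cache, with the other as backup
        location /static/ { proxy_pass http://varnish_a; }
        location /        { proxy_pass http://varnish_b; }
    }
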
> -----Original Message-----
> From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Angelo Höngens
> Sent: Friday, January 28, 2011 8:42 AM
> To: varnish-misc at varnish-cache.org
> Subject: Re: How to set up Varnish not to be a single point of failure
>
> On 28-1-2011 14:38, Caunter, Stefan wrote:
>>
>>
>>
>> On 2011-01-28, at 6:26 AM, "Stewart Robinson" <stewsnooze at gmail.com> wrote:
>>
>>> Other people have configured two Varnish servers to be backends for
>>> each other. When you see the other Varnish cache as your remote IP you
>>> then point the request to the real backend. This duplicates your cache
>>> items in each cache.
>>>
>>> Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
>>>
>>> Stew
>>>
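A rough VCL sketch of the pattern Stewart describes, assuming Varnish 2.x-style syntax and placeholder addresses (this would be the VCL on the first cache, mirrored on the second; see the HashIgnoreBusy link above for the caveat that goes with this setup):

    # The real web server and the peer Varnish (addresses are placeholders)
    backend real_backend { .host = "192.168.0.10"; .port = "80"; }
    backend peer_varnish { .host = "192.168.0.2";  .port = "80"; }

    acl peer { "192.168.0.2"; }

    sub vcl_recv {
        if (client.ip ~ peer) {
            # Request came from the other Varnish: send it to the real backend,
            # otherwise the two caches would forward to each other forever.
            set req.backend = real_backend;
        } else {
            # Normal traffic goes to the peer cache, which duplicates the
            # cached objects on both machines.
            set req.backend = peer_varnish;
        }
    }
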
>>> On 28 January 2011 10:46, Siju George <sgeorge.ml at gmail.com> wrote:
>>>> Hi,
>>>>
>>>> I understand that varnish does not support cache peering like Squid.
>>>> My planned setup is something like
>>>>
>>>>
>>>>          ---- Webserver1 ---            ---- Cache ---            ---- API
>>>> LB ----|                    |---- LB --|              |---- LB --|
>>>>          ---- Webserver2 ---            ---- Cache ---            ---- API
>>>>
>>>> So if I am using Varnish as the cache, what is the best way to configure them so
>>>> that there is redundancy and the setup can continue even if one Cache fails?
>>>>
>>>> Thanks
>>>>
>>>> --Siju
>>
>>
>> Put two behind the LB. The caches will be cooler, but you get high
>> availability, and it's easy to do maintenance this way.
>
>
> We use Varnish on CentOS machines. We use Pacemaker for
> high availability (multiple virtual IPs) and DNS round robin for
> balancing end users across the caches.
>
> See
> http://blog.hongens.nl/guides/setting-up-a-pacemaker-cluster-on-centosrhel/
> for the Pacemaker part.
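
A rough crm-shell sketch of that kind of setup, with placeholder node names and addresses (DNS round robin would then publish both virtual IPs under the site's hostname):

    # Two floating IPs, one preferring each cache node; if a node fails,
    # Pacemaker moves its IP to the surviving node.
    primitive vip1 ocf:heartbeat:IPaddr2 \
        params ip="192.0.2.11" cidr_netmask="24" \
        op monitor interval="10s"
    primitive vip2 ocf:heartbeat:IPaddr2 \
        params ip="192.0.2.12" cidr_netmask="24" \
        op monitor interval="10s"
    location vip1-prefers-cache1 vip1 100: cache1
    location vip2-prefers-cache2 vip2 100: cache2
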
>
> --
>
>
> With kind regards,
>
>
> Angelo Höngens
> systems administrator
>
> MCSE on Windows 2003
> MCSE on Windows 2000
> MS Small Business Specialist
> ------------------------------------------
> NetMatch
> tourism internet software solutions
>
> Ringbaan Oost 2b
> 5013 CA Tilburg
> +31 (0)13 5811088
> +31 (0)13 5821239
>
> A.Hongens at netmatch.nl
> www.netmatch.nl
> ------------------------------------------
>
>
>


Hey,

You can use HAProxy for your LB.
It has hash-based load balancing, which is useful in front of caches (and much more functionality).
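
For instance, a minimal haproxy.cfg sketch (addresses, ports and timeouts are placeholders):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http_in
        bind *:80
        default_backend varnish_caches

    backend varnish_caches
        balance uri              # hash on the URI, so a given URL always hits the same cache
        hash-type consistent     # minimise remapping when one cache goes down
        server cache1 192.168.0.1:6081 check
        server cache2 192.168.0.2:6081 check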

cheers



