How to set up varnish not to be a single point of failure

Bret A. Barker bret at iwin.com
Fri Jan 28 16:34:55 CET 2011


For some of our clusters we use a slightly different approach with the F5s:

[1. F5] -> [2. Varnish pool] -> [3. F5] -> [4. Tomcat pool]

By going back through the F5, we keep all of our backend selection logic (not to mention other iRule goodness) together in the F5 configs instead of in VCL. We've been using this scheme for quite some time with good results - the extra hop adds negligible latency compared to the average backend response times for dynamic requests.
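
To illustrate, the VCL on each Varnish instance in this scheme can stay minimal - roughly something like the following (a sketch in Varnish 2.x syntax; the internal VIP address is made up):

backend f5_internal {
    # hypothetical VIP on the internal F5 that fronts the Tomcat pool
    .host = "10.0.0.10";
    .port = "80";
}

sub vcl_recv {
    # no backend selection logic here; the F5 picks the Tomcat
    set req.backend = f5_internal;
}

All of the pool membership and iRule logic then lives only in the F5 configuration.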

And we likewise don't have an issue with redundant cache data for our use-cases. The extra backend request per Varnish instance per TTL period is minor compared to the impact of losing a Varnish instance that is the sole cache for a large percentage of your URL space. I think hash-based balancing is generally better suited to static content.

-bret

On Fri, Jan 28, 2011 at 03:55:47PM +0100, jdzstz - gmail dot com wrote:
> In my opinion, the problem with splitting the cache by URL is that
> in case of problems, the secondary failover server has an empty cache
> for that portion of the URL space, which can affect throughput.
> 
> Our architecture is the following:
> 
> [1. F5 LB] => [2. Varnish]  => [3. Tomcat]
> 
> 1) F5 Big IP Hardware Load Balancer
> 2) Four Varnish caches on different machines
> 3) Four Tomcat servers on different machines
> 
> We don't mind having redundant caching because:
>   -  we don't have resource problems
>   -  in case of problems, all Varnish instances already have their caches populated
> 
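For comparison, if the quoted setup keeps backend selection in Varnish rather than going back through an F5, the four Tomcats could be expressed with a round-robin director roughly like this (a sketch in Varnish 2.x syntax; hostnames and ports are made up):

backend tomcat1 { .host = "tomcat1.example.com"; .port = "8080"; }
backend tomcat2 { .host = "tomcat2.example.com"; .port = "8080"; }
backend tomcat3 { .host = "tomcat3.example.com"; .port = "8080"; }
backend tomcat4 { .host = "tomcat4.example.com"; .port = "8080"; }

director tomcats round-robin {
    { .backend = tomcat1; }
    { .backend = tomcat2; }
    { .backend = tomcat3; }
    { .backend = tomcat4; }
}

sub vcl_recv {
    # each Varnish can reach every Tomcat, so losing one Varnish only
    # costs its share of traffic, not a slice of the cached URL space
    set req.backend = tomcats;
}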




