Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

pub crawler pubcrawler.com at gmail.com
Sat Jan 16 00:39:30 CET 2010


Have we considered adding pooling functionality to Varnish much like
what they have in memcached?

Run multiple Varnish instances, with load distributed across the
identified pool of Varnish servers....  So an object gets hashed, and
the hash identifies which server in the pool it lives on.  If that
server is down, or the object isn't there, we do a cold lookup to the
backend server and then store the object where it belongs.
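The selection step above could be sketched roughly like memcached
clients do it - a minimal sketch, assuming simple modulo hashing (the
pool names and hash choice here are illustrations, not anything
Varnish ships):

```python
import hashlib

# Hypothetical pool of Varnish instances (names/ports are assumptions).
POOL = ["varnish-a:6081", "varnish-b:6081", "varnish-c:6081"]

def node_for(key: str, pool=POOL) -> str:
    """Memcached-style selection: hash the key, map it to a pool slot."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return pool[int(digest, 16) % len(pool)]

# Every client that hashes the same way sends a given URL to the same
# instance, so each object is cached only once across the pool.
print(node_for("/images/logo.png"))
```

Note that plain modulo hashing reshuffles most keys when the pool
size changes; memcached clients usually prefer consistent hashing for
that reason, and the same trade-off would apply here.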

Seems like an obvious feature - unsure of the performance implications though.

The usual recommendation of putting load balancers in front of
Varnish to get this behaviour seems costly when we're talking about
F5 gear.  The open source alternatives require at least two servers
dedicated to the load-balancing function for sanity's sake (which is
also costly).  Finally, Varnish already offers load balancing
(although limited) to the backend servers - so let's do the same
within Varnish itself, to make sure Varnish scales horizontally and
doesn't require these other aids to be deemed totally reliable.
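For reference, the existing backend load balancing I mean is the VCL
director - a minimal sketch in Varnish 2.x syntax, with placeholder
hosts and ports:

```
backend web1 { .host = "192.168.0.10"; .port = "80"; }
backend web2 { .host = "192.168.0.11"; .port = "80"; }

# Round-robin across the two backends.
director webpool round-robin {
  { .backend = web1; }
  { .backend = web2; }
}

sub vcl_recv {
  set req.backend = webpool;
}
```

Something analogous for a pool of peer Varnish instances is what I'm
suggesting.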



More information about the varnish-misc mailing list