Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

Tollef Fog Heen tfheen at
Mon Jan 18 10:01:58 CET 2010

]] Ken Brownfield 

| 3) Hash/bucket URLs to cache pairs.
| Same as 2), but for every hash bucket you would send those hits to two
| machines (think RAID-10).  This provides redundancy from the effects
| of 2a), and gives essentially infinite scalability for the price of
| doubling your miss rate once (two machines per bucket caching the same
| data).  The caveat from 2b) still applies.
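The pair-per-bucket scheme quoted above could be sketched roughly as follows. This is a minimal illustration, not anything from Varnish itself; the host names and `PAIRS` table are made up, and a real deployment would do this in the load balancer:

```python
import hashlib
import random

# Hypothetical cache pairs, RAID-10 style: each hash bucket maps to
# two hosts, and both hosts end up caching that bucket's objects.
PAIRS = [("cache-1a", "cache-1b"), ("cache-2a", "cache-2b")]

def pick_backend(url):
    # Hash the URL to a bucket deterministically, then send the
    # request to either host of that bucket's pair.  Misses are
    # doubled once (each object is fetched by both hosts), but
    # either host can die without losing the bucket's working set.
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    bucket = digest % len(PAIRS)
    return random.choice(PAIRS[bucket])
```

Adding capacity means adding whole pairs (and rehashing, per the caveat from 2b).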

I've pondered having a semi-stable hash algorithm which would hash a
given URL to one host, say, 90% of the time and to a second host the
remaining 10%.  This would give you much more flexible scalability:
you would not need twice the number of servers, only as much extra
capacity as your redundancy requires, since the secondary's cache
stays warm.  And you could tune the extra cache-miss rate against how
much redundancy you need.
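One way to sketch this semi-stable hash is rendezvous (highest-random-weight) hashing to get a stable per-URL ranking of backends, then spilling a tunable fraction of requests to the runner-up. The names and the 10% `spill` default are illustrative assumptions, not an existing product feature:

```python
import hashlib
import random

def ranked_backends(url, backends):
    """Rendezvous (HRW) hashing: score every backend against this URL.
    The same URL always produces the same ranking, so the top two
    hosts for a URL are stable as long as the backend set is."""
    def score(backend):
        return int(hashlib.md5((url + backend).encode()).hexdigest(), 16)
    return sorted(backends, key=score, reverse=True)

def pick_semi_stable(url, backends, spill=0.10, rng=random.random):
    """Send ~90% of requests for a URL to its primary backend and
    ~10% to its secondary.  The secondary keeps a warm copy of the
    hot objects, so losing the primary only costs the cold tail,
    and 'spill' tunes extra misses against redundancy."""
    ranked = ranked_backends(url, backends)
    if len(ranked) > 1 and rng() < spill:
        return ranked[1]
    return ranked[0]
```

If the primary for a URL fails, the balancer can simply promote the secondary, which has already been serving (and caching) a tenth of that URL's traffic.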

I don't know of any products that have this out of the box.  I am
fairly sure you could do it on F5s using iRules, and I would not be
surprised if HAProxy or nginx could either do it already or be taught
to.

Tollef Fog Heen 
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

More information about the varnish-misc mailing list