Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?

Poul-Henning Kamp phk at
Sat Jan 16 10:52:45 CET 2010

In message <FF646D15-26B5-4843-877F-FB8D469D248C at>, Ken Brownfield writes:

It is important to be absolutely clear about what your objective is here:
availability, cache-hit ratio, or raw performance. The best solution will
depend on what you are after.

For a lot of purposes, you will get a lot of mileage out of a number of
parallel Varnish machines behind DNS round-robin; for all practical
purposes, a zero-cost solution.
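The round-robin setup is nothing more than multiple A records for the same
name; a hypothetical zone fragment (names and addresses here are made up
for illustration) could look like:

```
; three parallel Varnish boxes behind one name,
; handed out round-robin by the resolver
www.example.com.  300  IN  A  192.0.2.11   ; varnish1
www.example.com.  300  IN  A  192.0.2.12   ; varnish2
www.example.com.  300  IN  A  192.0.2.13   ; varnish3
```

Note that a short TTL limits how long clients keep hitting a dead box, but
DNS alone gives you no health checking.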

At the other end, you can put a load-balancer in front of your Varnishes,
which gives you all sorts of neat features at a pretty steep cost.

The spectrum in between is filled with things like pound, haproxy and other
open-source solutions, which may, or may not, run on their own hardware.
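As a sketch of the middle ground, a minimal haproxy configuration fronting
two Varnish instances might look like the following (server names, addresses
and the port are assumptions, not anything from this thread):

```
# haproxy sketch: round-robin across two varnish backends,
# with a basic HTTP health check so a dead box is taken out
frontend http-in
    bind *:80
    default_backend varnishes

backend varnishes
    balance roundrobin
    option httpchk GET /
    server varnish1 192.0.2.11:6081 check
    server varnish2 192.0.2.12:6081 check
```

Unlike DNS round-robin, this gives you health checks and fast failover, at
the price of the balancer itself becoming a component you must keep alive.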

There is no "perfect fit for all" solution in this space; you will
need to make your own choice.

>Squid has a peering feature; [...]

Squid's peering feature was created for hit-rate only; the working scenario
is two Squids, each behind a very slow line to the Internet, asking each
other before they pull down a file.

Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

More information about the varnish-misc mailing list