Strategies for splitting load across varnish instances? And avoiding single-point-of-failure?
Poul-Henning Kamp
phk at phk.freebsd.dk
Sat Jan 16 23:06:59 CET 2010
In message <4c3149fb1001161400n38a1ef1al18985bc3ad1ad41e at mail.gmail.com>, pub crawler writes:
>Just trying to figure out the implications of this because in our
>environment we regularly find ourselves pulling servers offline.
>Wondering if the return of a Varnish instance would operate like a
>cold-cache miss, or what magic in Varnish deals with the change in
>hashing per se.
There is no built-in magic for that[1].
One of the really powerful things Varnish can do is change VCL code
on-the-fly, instantly.
So it is possible to start your Varnish with one VCL program and have
a small script switch to another one some minutes later.
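Something along these lines would do it (untested; the config names,
file names and management port are just examples, adjust them to your
setup):

    # Load and activate a new VCL without restarting Varnish.
    # -T points at the management interface (example address/port).
    varnishadm -T localhost:6082 vcl.load warmup /etc/varnish/warmup.vcl
    varnishadm -T localhost:6082 vcl.use warmup

    # ...some minutes later, once the cache is warm...
    varnishadm -T localhost:6082 vcl.load normal /etc/varnish/normal.vcl
    varnishadm -T localhost:6082 vcl.use normal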
You can use that to start with a VCL where the instance only uses its
neighbors as backends, and then, some minutes later when the cache has
the most common objects loaded, switch to another VCL that goes
directly to the backend.
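A rough sketch of the two VCL programs, assuming Varnish 2.x syntax and
example host names (varnish2.example.com for a neighboring Varnish,
backend.example.com for the real backend):

    # --- warmup.vcl: send all requests to a neighboring Varnish ---
    backend neighbor {
        .host = "varnish2.example.com";
        .port = "80";
    }

    sub vcl_recv {
        set req.backend = neighbor;
    }

    # --- normal.vcl: go straight to the real backend ---
    backend origin {
        .host = "backend.example.com";
        .port = "8080";
    }

    sub vcl_recv {
        set req.backend = origin;
    }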
If you want to get fancy, you can use VCL restarts: ask the neighbors
first, and if they do not have the object, go directly to the backend
on the restart.
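Again only a sketch, with the same example backends as above. It
assumes the neighbor is set up to answer with an error (5xx) rather
than fetch from the backend itself when it does not have the object,
and uses the Varnish 2.0 names (obj.* in vcl_fetch; newer versions
call this beresp.*):

    backend neighbor {
        .host = "varnish2.example.com";
        .port = "80";
    }

    backend origin {
        .host = "backend.example.com";
        .port = "8080";
    }

    sub vcl_recv {
        # First pass: ask the neighbor.  After a restart: go to origin.
        if (req.restarts == 0) {
            set req.backend = neighbor;
        } else {
            set req.backend = origin;
        }
    }

    sub vcl_fetch {
        # Neighbor could not deliver the object; retry against origin.
        if (req.restarts == 0 && obj.status >= 500) {
            restart;
        }
    }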
Poul-Henning
[1] In general Varnish has no built-in magic; all the magic is your
responsibility to write in the VCL code :-)
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.