Fwd: How to duplicate cached data on all Varnish instance?

Xianzhe Wang wxz19861013 at gmail.com
Wed Feb 20 04:44:20 CET 2013


I have seen the same question asked before, and someone answered:
https://www.varnish-cache.org/lists/pipermail/varnish-misc/2012-April/021957.html


"

I don't think what you have in mind would work. Varnish requires an
explicit lock on the files it manages, so sharing a cache between Varnish
instances won't ever work.

What I would recommend instead is hashing incoming requests based on the
URL, so that each time the same URL is hit it is served by the same server.
That way you don't duplicate content between caches. Varnish can do this,
F5s can do it, and haproxy should be able to do it as well.

"

So I don't think what I have in mind would work.
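
If I go with the URL-hashing approach the answer recommends, I imagine the
nginx side would look roughly like this (just an untested sketch; it assumes
an nginx build where the upstream "hash" directive is available, and the
server addresses are placeholders):

upstream varnish_pool {
    # Hash on the request URI so every request for a given URL always
    # lands on the same Varnish instance; the two caches then hold
    # disjoint content instead of duplicating it.
    hash $request_uri;

    server 192.0.2.11:6081;   # placeholder: Varnish instance 1
    server 192.0.2.12:6081;   # placeholder: Varnish instance 2
}

server {
    listen 80;

    location / {
        proxy_pass http://varnish_pool;
    }
}

The trade-off is that each object then lives in only one cache, so if an
instance goes down, its share of the content has to be fetched from the
backend again.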


---------- Forwarded message ----------
From: Xianzhe Wang <wxz19861013 at gmail.com>
Date: 2013/2/20
Subject: Re: How to duplicate cached data on all Varnish instance?
To: Sascha Ottolski <ottolski at web.de>
Cc: Varnish misc <varnish-misc at varnish-cache.org>


Thank you for your help.
Everything you said is correct.

I think I wasn't clear. What I want exactly is this:
two Varnish instances sharing a single cache file (or cache memory), so
that for any given request the two instances share a single cache object.
Something like this:
varnish1--> cache file <--varnish2

Do you have any suggestions or experience with this?
Any help would be appreciated.

Regards,

Shawn



2013/2/20 Sascha Ottolski <ottolski at web.de>

> On Tuesday, 19 February 2013, at 19:44:53, Xianzhe Wang wrote:
> > Here I use nginx as a load balancer in front of two Varnish servers.
> > I want to share the cached objects between the two Varnish instances,
> > so that when one Varnish is down, the remaining one keeps working and
> > the cached objects are still usable.
> > Is there anything I can do to achieve this?
> >
> > I also saw an example like this:
> > https://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
> > But I think it would increase network latency, so I don't want to do
> > it that way.
> >
> > Can someone share their experience? Thanks a lot.
> >
> > Shawn
>
> I would say you already have your solution. If nginx sends the requests
> randomly to either of the two servers, each will obviously fill its own
> cache; so if one goes down, the other is still there. The two caches may
> not be completely identical, depending on the size of your cacheable
> content, but each should be "warm" enough to serve most requests from
> its cache.
>
> And you're not limited to two Varnish servers, of course. The more you
> put into your load-balanced cluster, the lower the impact if one fails.
>
> Cheers
>
> Sascha
>
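
For reference, what Sascha describes is what a plain round-robin upstream in
nginx gives out of the box; a minimal sketch (addresses, ports, and failure
thresholds are placeholders):

upstream varnish_pool {
    # Default round-robin: both Varnish caches fill independently,
    # so each one stays warm. max_fails/fail_timeout are placeholder
    # values for nginx's passive health checking.
    server 192.0.2.11:6081 max_fails=3 fail_timeout=10s;
    server 192.0.2.12:6081 max_fails=3 fail_timeout=10s;
}

server {
    listen 80;

    location / {
        proxy_pass http://varnish_pool;
        # If one Varnish errors out or times out, retry the other.
        proxy_next_upstream error timeout;
    }
}

If one backend fails, nginx stops sending it traffic for fail_timeout, and
the surviving Varnish keeps serving from its own warm cache.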