Fwd: How to let multiple clients get the response at the same time via streaming.

Xianzhe Wang wxz19861013 at gmail.com
Wed Jan 30 03:32:49 CET 2013


Hi,
Thanks a lot.

I tried the option
"set req.hash_ignore_busy = true;"
in vcl_recv.
I think it works, but there is a side effect: it increases backend load.
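To limit that extra backend load, I am thinking of setting the flag only for the URLs that actually need concurrent streaming; a rough sketch (the "^/media/" pattern is just a hypothetical example, not from my real configuration):

    sub vcl_recv {
        # Skip the wait on busy objects only for large streamed
        # downloads; all other requests keep normal request coalescing.
        if (req.url ~ "^/media/") {
            set req.hash_ignore_busy = true;
        }
    }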

I described an idea about this in my previous email. What do you think about it?

Another question: where can I find the "plus" branch of Varnish that
matches this issue?

Any suggestions will be appreciated.
Thanks again for your help.

Regards,
--
Shawn Wang


---------- Forwarded message ----------
From: Xianzhe Wang <wxz19861013 at gmail.com>
Date: 2013/1/30
Subject: Re: How to let multiple clients get the response at the same time
via streaming.
To: Jakub Słociński <kuba at ovh.net>


Hi Jakub S.
Thank you very much.
I tried it and ran a simple test: two clients requested the big file at the
same time, and both got the response stream immediately, so it works.
In that case, multiple requests go directly to "pass"; they do not need to
wait, but it increases backend load.
We need to balance the benefits against the drawbacks.

What I want is this:
    Client 1 requests url /foo
    Client 2..N request url /foo
    Varnish tasks a worker to fetch /foo for Client 1
    Client 2..N are now queued pending response from the worker
    The worker fetches the response headers (just the headers, not the
body) from the backend and finds the response non-cacheable, then makes the
remaining requests (Clients 2..N) go directly to "pass", creating the
hit_for_pass object synchronously in the first request (Client 1).
    Subsequent requests are then given the hit_for_pass object, instructing
them to go to the backend for as long as the hit_for_pass object exists.
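The hit_for_pass object in the steps above is what my existing vcl_fetch already creates once the backend headers arrive; a minimal sketch of that piece (Varnish 3 syntax, condensed from the configuration quoted below):

    sub vcl_fetch {
        if (beresp.http.Cache-Control ~ "no-cache") {
            # A short-lived hit_for_pass object lets Clients 2..N go
            # straight to the backend instead of queueing behind Client 1.
            set beresp.ttl = 120s;
            return (hit_for_pass);
        }
    }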

As described above, is it feasible? Or do you have any suggestions?

Thanks again for your help.

Regards,
--
Shawn Wang



2013/1/29 Jakub Słociński <kuba at ovh.net>

> Hi Xianzhe Wang,
> you should try the option
> "set req.hash_ignore_busy = true;"
> in vcl_recv.
>
> Regards,
> --
> Jakub S.
>
>
> Xianzhe Wang wrote:
> > Hello everyone,
> >     My varnish version is the 3.0.2-streaming release, and I set
> > "beresp.do_stream = true" in vcl_fetch in order to "Deliver the object to
> > the client directly without fetching the whole object into varnish";
> >
> > This is a part of my *.vcl file:
> >
> >  sub vcl_fetch {
> >     set beresp.grace = 30m;
> >
> >     set beresp.do_stream = true;
> >
> >     if (beresp.http.Content-Length && beresp.http.Content-Length ~
> > "[0-9]{8,}") {
> >        return (hit_for_pass);
> >     }
> >
> >      if (beresp.http.Pragma ~ "no-cache" || beresp.http.Cache-Control ~
> > "no-cache" || beresp.http.Cache-Control ~ "private") {
> >            return (hit_for_pass);
> >        }
> >
> >      if (beresp.ttl <= 0s ||
> >          beresp.http.Set-Cookie ||
> >          beresp.http.Vary == "*") {
> >
> >                 set beresp.ttl = 120s;
> >                 return (hit_for_pass);
> >      }
> >
> >     return (deliver);
> >  }
> >
> > Then I request a big file (about 100 MB+) like "xxx.zip" from clients.
> > Only one client can access the object, because "the object will be
> > marked as busy as it is delivered."
> >
> > But if the request goes directly to "pass", multiple clients can get the
> > response at the same time.
> >
> > Also, if I remove
> >   if (beresp.http.Content-Length && beresp.http.Content-Length ~
> > "[0-9]{8,}") {
> >        return (hit_for_pass);
> >     }
> > to make the file cacheable, multiple clients can get the response at the
> > same time.
> >
> > Now I want multiple clients to be able to get the response at the same
> > time in all situations ("pass", "hit", "hit_for_pass").
> >
> > What can I do to achieve this?
> > Any suggestions will be appreciated.
> > Thank you.
> >
> >  -Shawn Wang
>
> > _______________________________________________
> > varnish-misc mailing list
> > varnish-misc at varnish-cache.org
> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
>

