How to make multiple clients get the response at the same time via streaming.
Xianzhe Wang
wxz19861013 at gmail.com
Mon Feb 4 07:59:03 CET 2013
Thank you for your help.
I would like to explain my thinking in detail:
1. The client requests /foo.jpg, and the request carries an additional header
field called x-rewrite-level (x-rewrite-level has three possible values: 1, 2, 3,
indicating different rewrite levels). For example, a request header may
contain "x-rewrite-level: 1".
2. Varnish looks up /foo.jpg based on the "url + req.x-rewrite-level" hash
value; the first time, this will miss.
3. Varnish goes to the backend, asks for /foo.jpg, and the backend responds.
4. Varnish notices that the response should be rewritten, and rewrites it
according to the x-rewrite-level value using the rewrite module.
5. The rewritten image replaces the original one (fetched from the backend) in
Varnish storage.
6. Subsequent requests for /foo.jpg with x-rewrite-level = 1 will hit the
rewritten object in Varnish.
7. Subsequent requests for /foo.jpg with x-rewrite-level = 2 or 3 will miss,
based on the "url + req.x-rewrite-level" hash value, and continue at step 3.
Requests with the same URL but different x-rewrite-level values will thus
receive different images from Varnish: it should store three different
objects, one per x-rewrite-level value (1, 2, 3), as sketched below.
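A rough, untested sketch of the vcl_hash part (keeping the default url + host
hashing; the x-rewrite-level header name is from the scheme above):

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    # Include the rewrite level so each level gets its own cached object.
    if (req.http.x-rewrite-level) {
        hash_data(req.http.x-rewrite-level);
    }
    return (hash);
}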
Regards,
--
Shawn Wang
2013/2/2 Per Buer <perbu at varnish-software.com>
> That is an interesting use case. So, if I understand it correctly, what
> would happen is something like this:
>
> 1. Client requests /foo
> 2. Varnish goes to backend and asks for /foo
> 3. Backend responds and notifies that this would be accessed in the future
> as /bar
> 4. Varnish changes the object in memory or maybe copies it.
>
> I'm guessing that would not be possible, as the hashing happens quite early
> and you would have to alter the hash before looking it up. It might be
> easier to maintain a map of the various rewrites in memory, memcache or redis
> and look up the rewrite in vcl_recv.
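> For illustration, such a vcl_recv lookup might look roughly like this
> (untested; the inlined map stands in for a real memcache/redis lookup, which
> would need a vmod):
>
> sub vcl_recv {
>     # Hypothetical inline rewrite map; a real setup would consult an
>     # external store and rewrite req.url before the cache lookup.
>     if (req.url == "/foo") {
>         set req.url = "/bar";
>     }
> }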
>
>
>
>
> On Sat, Feb 2, 2013 at 3:35 AM, Xianzhe Wang <wxz19861013 at gmail.com> wrote:
>
>> Hi,
>> I'm sorry, I wasn't being clear. Thank you for your patience.
>> I'm trying to examine the response headers from the backend: if the
>> Content-Type is "image/*", we know that the response body is an image. Then
>> I'll rewrite the image and insert it into the Varnish cache, so subsequent
>> requests will hit the rewritten image.
>> That's what I want to do: fetch the backend response, rewrite the image,
>> and insert it into the Varnish cache.
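>> A rough sketch of the detection part in vcl_fetch (untested; plain VCL
>> cannot rewrite the body itself, so the X-Needs-Rewrite marker header is
>> hypothetical, something for the rewrite module to act on):
>>
>> sub vcl_fetch {
>>     # Mark image responses for the (separate) rewrite module.
>>     if (beresp.http.Content-Type ~ "^image/") {
>>         set beresp.http.X-Needs-Rewrite = "1";
>>     }
>>     return (deliver);
>> }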
>>
>> Regards,
>> --
>> Shawn Wang
>>
>>
>>
>>
>>
>> 2013/2/1 Per Buer <perbu at varnish-software.com>
>>
>>> Hi,
>>>
>>> I don't quite understand what you're trying to do. Varnish will store
>>> the jpg together with the response headers in memory. When you request
>>> the object, Varnish will deliver it verbatim along with the HTTP headers.
>>> What exactly are you trying to do?
>>>
>>>
>>> PS: I see we haven't built packages of 3.0.3-plus yet. These should pop
>>> up in the repo next week. Until then, 3.0.2s might suffice.
>>>
>>>
>>> On Fri, Feb 1, 2013 at 8:01 AM, Xianzhe Wang <wxz19861013 at gmail.com> wrote:
>>>
>>>> Hi,
>>>> Thanks for the clarification; what you say is very clear.
>>>> I am sorry for my poor English, but I have tried my best to communicate.
>>>> There is another question. For example, if we request a .jpg
>>>> file (cacheable), Varnish will encapsulate it as an object and insert it
>>>> into memory. How can we get the .jpg file back out of that object?
>>>>
>>>> Thank you for your help again.
>>>>
>>>> -Shawn Wang
>>>>
>>>>
>>>> 2013/1/30 Per Buer <perbu at varnish-software.com>
>>>>
>>>>> Hi,
>>>>>
>>>>> I was a bit quick and didn't read the whole email the first time.
>>>>> Sorry about that. You're actually using the streaming branch already, I
>>>>> see. What you're describing is really, really odd. There is a slight
>>>>> lock while the "first" object is being fetched, during which other
>>>>> requests will be put on the waiting list. However, when the hit-for-pass
>>>>> object is created, these should be released and pass'ed to the clients.
>>>>>
>>>>> If the backend takes forever coming back with the response headers, then
>>>>> the situation would be something like what you describe. However, that
>>>>> would be odd and wouldn't make much sense.
>>>>>
>>>>> PS: The streaming branch was renamed "plus" when it got other
>>>>> experimental features. You'll find the source at
>>>>> https://github.com/mbgrydeland/varnish-cache and packages at
>>>>> repo.varnish-cache.org/test, if I recall correctly.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jan 30, 2013 at 3:32 AM, Xianzhe Wang <wxz19861013 at gmail.com> wrote:
>>>>>
>>>>>>
>>>>>> Hi,
>>>>>> Thanks a lot.
>>>>>>
>>>>>> I tried the option
>>>>>> "set req.hash_ignore_busy = true;"
>>>>>> in vcl_recv.
>>>>>> I think it works, but there is a side effect: it increases the
>>>>>> backend load.
>>>>>>
>>>>>> I described an idea about this in my previous email. What do you think
>>>>>> about it?
>>>>>>
>>>>>> Another question: where can I find the "plus" branch of Varnish that
>>>>>> matches this issue?
>>>>>>
>>>>>> Any suggestions would be appreciated.
>>>>>> Thanks again for help.
>>>>>>
>>>>>> Regards,
>>>>>> --
>>>>>> Shawn Wang
>>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: Xianzhe Wang <wxz19861013 at gmail.com>
>>>>>> Date: 2013/1/30
>>>>>> Subject: Re: How to make multiple clients get the response at the same
>>>>>> time via streaming.
>>>>>> To: Jakub Słociński <kuba at ovh.net>
>>>>>>
>>>>>>
>>>>>> Hi Jakub S.,
>>>>>> Thank you very much.
>>>>>> I tried it and ran a simple test: two clients requested the big file at
>>>>>> the same time, and both got the response stream immediately, so it works.
>>>>>> In that case, multiple requests go directly to "pass" and do not need to
>>>>>> wait, but this increases the backend load.
>>>>>> We need to balance the benefits and the drawbacks.
>>>>>>
>>>>>> What I want is this:
>>>>>> Client 1 requests URL /foo.
>>>>>> Clients 2..N request URL /foo.
>>>>>> Varnish tasks a worker with fetching /foo for Client 1.
>>>>>> Clients 2..N are now queued, pending the response from the worker.
>>>>>> The worker fetches the response headers (just the headers, not the body)
>>>>>> from the backend, finds the response non-cacheable, and then lets the
>>>>>> remaining requests (Clients 2..N) go directly to "pass", creating the
>>>>>> hit_for_pass object synchronously during the first request (Client 1).
>>>>>> Subsequent requests are then given the hit_for_pass object, instructing
>>>>>> them to go to the backend for as long as the hit_for_pass object exists.
>>>>>>
>>>>>> As I mentioned below, is this feasible? Or do you have any suggestions?
>>>>>>
>>>>>> Thanks again for your help.
>>>>>>
>>>>>> Regards,
>>>>>> --
>>>>>> Shawn Wang
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2013/1/29 Jakub Słociński <kuba at ovh.net>
>>>>>>
>>>>>>> Hi Xianzhe Wang,
>>>>>>> you should try the option
>>>>>>> "set req.hash_ignore_busy = true;"
>>>>>>> in vcl_recv.
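>>>>>>> For example (untested sketch):
>>>>>>>
>>>>>>> sub vcl_recv {
>>>>>>>     # Look up the cache without waiting on a busy object;
>>>>>>>     # note that this can increase backend traffic.
>>>>>>>     set req.hash_ignore_busy = true;
>>>>>>> }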
>>>>>>>
>>>>>>> Regards,
>>>>>>> --
>>>>>>> Jakub S.
>>>>>>>
>>>>>>>
>>>>>>> Xianzhe Wang wrote:
>>>>>>> > Hello everyone,
>>>>>>> > My Varnish version is the 3.0.2-streaming release, and I set
>>>>>>> > "beresp.do_stream = true" in vcl_fetch in order to "deliver the
>>>>>>> > object to the client directly without fetching the whole object
>>>>>>> > into Varnish".
>>>>>>> >
>>>>>>> > This is a part of my *.vcl file:
>>>>>>> >
>>>>>>> > sub vcl_fetch {
>>>>>>> >     set beresp.grace = 30m;
>>>>>>> >
>>>>>>> >     set beresp.do_stream = true;
>>>>>>> >
>>>>>>> >     # Treat large responses (8+ digit Content-Length) as uncacheable.
>>>>>>> >     if (beresp.http.Content-Length &&
>>>>>>> >         beresp.http.Content-Length ~ "[0-9]{8,}") {
>>>>>>> >         return (hit_for_pass);
>>>>>>> >     }
>>>>>>> >
>>>>>>> >     if (beresp.http.Pragma ~ "no-cache" ||
>>>>>>> >         beresp.http.Cache-Control ~ "no-cache" ||
>>>>>>> >         beresp.http.Cache-Control ~ "private") {
>>>>>>> >         return (hit_for_pass);
>>>>>>> >     }
>>>>>>> >
>>>>>>> >     if (beresp.ttl <= 0s ||
>>>>>>> >         beresp.http.Set-Cookie ||
>>>>>>> >         beresp.http.Vary == "*") {
>>>>>>> >         set beresp.ttl = 120 s;
>>>>>>> >         return (hit_for_pass);
>>>>>>> >     }
>>>>>>> >
>>>>>>> >     return (deliver);
>>>>>>> > }
>>>>>>> >
>>>>>>> > Then I request a big file (about 100 MB+) like "xxx.zip" from
>>>>>>> > multiple clients. Only one client can access the object, because
>>>>>>> > "the object will be marked as busy as it is delivered."
>>>>>>> >
>>>>>>> > But if the request goes directly to "pass", multiple clients can
>>>>>>> > get the response at the same time.
>>>>>>> >
>>>>>>> > Also, if I remove
>>>>>>> >
>>>>>>> >     if (beresp.http.Content-Length &&
>>>>>>> >         beresp.http.Content-Length ~ "[0-9]{8,}") {
>>>>>>> >         return (hit_for_pass);
>>>>>>> >     }
>>>>>>> >
>>>>>>> > to make the file cacheable, multiple clients can get the response
>>>>>>> > at the same time.
>>>>>>> >
>>>>>>> > Now I want multiple clients to be able to get the response at the
>>>>>>> > same time in all situations ("pass", "hit", "hit_for_pass").
>>>>>>> >
>>>>>>> > What can I do to achieve this?
>>>>>>> > Any suggestions would be appreciated.
>>>>>>> > Thank you.
>>>>>>> >
>>>>>>> > -Shawn Wang
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> <http://www.varnish-software.com/> *Per Buer*
>>>>>
>>>>> CEO | Varnish Software AS
>>>>> Phone: +47 958 39 117 | Skype: per.buer
>>>>> We Make Websites Fly!
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>
>
>
>