<div class="gmail_quote">On Mon, Dec 20, 2010 at 9:25 AM, Kristian Lyngstol <span dir="ltr"><<a href="mailto:kristian@varnish-software.com">kristian@varnish-software.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">On Fri, Dec 17, 2010 at 03:40:35PM +0100, Stig Bakken wrote:<br>
> I have a script processing varnishlog output to give me the serving<br>
> latency based on the ReqEnd tag, and frequently see objects served from<br>
> cache, size around 25kB, that take 5+ seconds to serve.<br>
<br>
</div>If there are two requests for an object, request A and request B, the<br>
following will happen:<br>
<br>
> Time | Action
> 0s   | A asks for /foo
>      | Cache-miss - fetch from backend
> 0.5s | B asks for /foo
>      | Cache hit on busy object - wait for it
> 5s   | Web server is finished delivering /foo
> 5.1s | A gets /foo - cache miss
> 5.1s | B gets /foo - cache hit
>
> It's a bit difficult to spot this as it happens; it has been a matter of
> discussion recently (Arthur being the one who first became aware of the
> issue).
>
> It sounds like this might be what you are seeing.

Here are some more details. The attached file is an edited varnishlog dump,
with the elapsed time appended after each ReqEnd. Notice that one of the
requests takes 137 seconds, during which time all of the others in the dump
complete much faster. The Age: header is high, which rules out the cache
injection race condition, right?
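
For reference, the post-processing is roughly along these lines (a minimal
sketch, not the exact script; it assumes the two fields after the XID in a
ReqEnd line are the request start and end timestamps in epoch seconds):

    #!/usr/bin/env python
    # Sketch: append the elapsed serving time to each ReqEnd line.
    # Assumed varnishlog line layout:
    #   <fd> ReqEnd <c> <xid> <t_start> <t_end> <...>
    # (the field order after the tag is an assumption)
    import sys

    for line in sys.stdin:
        fields = line.split()
        if "ReqEnd" not in fields:
            print(line.rstrip())
            continue
        rest = fields[fields.index("ReqEnd") + 1:]
        if rest and rest[0] in ("c", "b"):  # skip the client/backend marker
            rest = rest[1:]
        try:
            elapsed = float(rest[2]) - float(rest[1])  # t_end - t_start
            print("%s  elapsed=%.3fs" % (line.rstrip(), elapsed))
        except (IndexError, ValueError):
            print(line.rstrip())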

This is with 2.1.4 installed from the official rpm on RHEL, btw.

 - Stig

--
Stig Bakken
CTO, Zedge.net - free your phone!