<div class="gmail_quote">On Mon, Dec 20, 2010 at 9:25 AM, Kristian Lyngstol <span dir="ltr"><<a href="mailto:kristian@varnish-software.com">kristian@varnish-software.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">On Fri, Dec 17, 2010 at 03:40:35PM +0100, Stig Bakken wrote:<br>
> I have a script processing varnishlog output to give me the serving<br>
> latency based on the ReqEnd tag, and frequently see objects served from<br>
> cache, size around 25kB, that take 5+ seconds to serve.<br>
<br>
</div>If there are two requests for an object, request A and request B, the<br>
following will happen:<br>
<br>
Time | Action<br>
0s | A asks for /foo<br>
| Cache-miss - fetch from backend<br>
0.5s | B asks for /foo<br>
| Cache hit on busy object - wait for it<br>
5s | Web server is finished delivering /foo<br>
5.1s | A gets /foo - cache miss<br>
5.1s | B gets /foo - cache hit<br>
<br>
It's a bit difficult to spot this as it happens; it has been a matter of<br>
discussion recently (Arthur was the first to become aware of the<br>
issue).<br>
<br>
It sounds like this might be what you are seeing.<br>
<br><font class="Apple-style-span" color="#888888">
- Kristian</font></blockquote><div><br></div><div>Yeah, that could be it: a race condition that is to be expected. I'll test whether this explains the issue by comparing 95th/99th-percentile latency with the cache hit rate over time.</div>
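<div>For the comparison, a small percentile helper along these lines might do (a minimal sketch using the nearest-rank method; the sample latencies are made up for illustration, not real varnishlog output):</div><div><br></div>

```python
# Minimal sketch: nearest-rank percentile over per-request latencies,
# e.g. serving times extracted from varnishlog ReqEnd records.
# The sample values below are hypothetical, not real varnishlog output.

def percentile(samples, p):
    """Nearest-rank p-th percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100.0 * len(ordered)) - 1))
    return ordered[k]

# Hypothetical latencies in seconds; the 5.2s outlier mimics a request
# that waited on a busy object while the backend fetch completed.
latencies = [0.012, 0.015, 0.020, 0.018, 5.2, 0.014, 0.016, 0.013, 0.017, 0.019]
print("p95:", percentile(latencies, 95))
print("p99:", percentile(latencies, 99))
```

<div>Plotting p95/p99 per time bucket next to the hit rate should show whether the latency spikes line up with bursts of coalesced hits.</div>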
<div><br></div><div> - Stig</div><div><br></div></div>-- <br>Stig Bakken<br>CTO, Zedge.net - free your phone!<br>