[Varnish] #951: varnish stalls connections on high traffic to non-cacheable urls

Varnish varnish-bugs at varnish-cache.org
Sat Jul 2 09:52:02 CEST 2011


#951: varnish stalls connections on high traffic to non-cacheable urls
---------------------------------+------------------------------------------
 Reporter:  tttt                 |        Type:  defect  
   Status:  new                  |    Priority:  normal  
Milestone:  Varnish 2.1 release  |   Component:  varnishd
  Version:  2.1.5                |    Severity:  major   
 Keywords:                       |  
---------------------------------+------------------------------------------

Comment(by tttt):

 Replying to [comment:2 kb]:
 > I believe this is actually expected behavior.  Varnish wants to download
 these objects and store them in cache before letting subsequent requests
 "in" to the object.  This is common in two situations I've seen:
 >
 > 1. Your web server takes longer to respond than your
 .first_byte_timeout, and thus never makes it into Varnish.  All requests
 pile up on a linear line of requests that each take .first_byte_timeout
 seconds.
 [[BR]]
 I have first_byte_timeout set to 51s, and it is highly unlikely that
 Apache routinely responds that slowly.
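
 For reference, the timeout lives in our backend declaration; a minimal
 sketch of what that looks like (host and port here are placeholders,
 not our real values):

     backend default {
         .host = "127.0.0.1";
         .port = "8080";
         # the backend fetch is considered failed if the first byte
         # of the response takes longer than this to arrive
         .first_byte_timeout = 51s;
     }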

 In fact, if I skip Varnish in the request path, the affected URL is
 handled by Apache just fine.

 Apache processes can get stuck under pressure, so it is certainly
 reasonable to assume that Varnish sees all sorts of timeouts and garbage
 responses from time to time. Varnish is expected to handle that. #942
 describes one case where Varnish may be failing to do so correctly.

 [[BR]]

 > 2. Your web server is taking a "long time" to reply, and the object is
 not cacheable.  A similar serialization takes place, orthogonal to
 .first_byte_timeout.
 >
 > Varnish doesn't know whether the object is cacheable or not until it
 receives the response, and I don't know of a way to tell Varnish whether
 an object is cacheable /before/ the request happens.
 >
 > My only suggestion for a "fix" would be to add something like this to
 your vcl_recv():
 >
 > if ( req.url ~ "/your/very/slow/URLs" ) {
 >     set req.hash_ignore_busy = true;
 > }
 >
 > That should allow incoming requests to open new requests to your backend
 (removing the serialization).
 [[BR]]
 I wasn't aware of this option, thanks. I have now set it globally as a
 test, and it seems to break the stall (I'm aware that this also disables
 the request-pileup protection); a sketch of that change is below. We'll
 see how this affects operation at peak.
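
 The global variant I'm testing is roughly the following, assuming
 req.hash_ignore_busy works in 2.1.5 as the suggestion implies (no URL
 filter, so every request skips the waiting list):

     sub vcl_recv {
         # start a new backend fetch instead of waiting on the busy
         # object for the same URL
         set req.hash_ignore_busy = true;
     }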

 [[BR]]



 >
 > But honestly, if you have painfully slow, non-cacheable resources, it
 might be better to route those directly to the backend(s) rather than
 clutter up Varnish.  Or perhaps separate those requests into different
 servers along functional lines.
 [[BR]]
 It's not that simple in our case. We have millions of user-generated
 files that may or may not be cacheable, depending on the file content and
 the request context (logged in or not, whether the account has forced
 ads); response times also depend on external ad sources, so it is not
 really predictable.


 The behaviour I would expect is that when Varnish gets an uncacheable
 response for a URL, it marks that URL as non-waitable until it receives a
 cacheable response for it again.
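
 If I read the vcl_fetch semantics correctly (I'm assuming that a
 return (pass) from vcl_fetch records the pass decision for beresp.ttl),
 something close to that could be approximated today by keeping a
 short-lived pass marker for URLs that came back uncacheable; a rough
 sketch, with the 120s lifetime picked arbitrarily:

     sub vcl_fetch {
         if (!beresp.cacheable || beresp.http.Set-Cookie) {
             # remember for a while that this URL is uncacheable, so
             # later requests get passed straight to the backend
             # instead of queueing behind the busy object
             set beresp.ttl = 120s;
             return (pass);
         }
         return (deliver);
     }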

-- 
Ticket URL: <http://www.varnish-cache.org/trac/ticket/951#comment:4>
Varnish <http://varnish-cache.org/>
The Varnish HTTP Accelerator



