Varnish with many TIME_WAIT sockets and traffic problems

John S. mailingoijv at
Tue Sep 27 15:41:20 CEST 2011

2011/9/27 Marinos Yannikos <mjy at>:
> How are you distributing the traffic between your servers? From the
> bandwidth vs. hits it seems that connections are handled differently, my
> guess was keepalives lasting longer with Varnish and therefore if your load
> balancer uses a "least connections" metric or similar, it will simply send
> more traffic to servers that close connections earlier. Which means that
> this says nothing about the actual performance of the servers and whether it
> would get better or worse if you switched all servers to Varnish.

The load balancer is an OpenBSD box running relayd with a round-robin
pool; all hosts have the same weight in the pool.
The timeout for an established session is fixed at 600 seconds, which
is longer than Varnish's default_ttl (120 seconds).
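For reference, the setup looks roughly like the following relayd.conf
fragment (addresses, table name and relay name are placeholders, and the
syntax is a sketch from relayd.conf(5), not our exact configuration):

```
# Placeholder backend pool; all hosts carry the same (default) weight.
table <webpool> { 10.0.0.11 10.0.0.12 10.0.0.13 }

relay "www" {
        listen on 192.0.2.1 port 80
        # Inactivity timeout for established sessions, in seconds.
        session timeout 600
        # Plain round-robin scheduling across the pool.
        forward to <webpool> port 80 mode roundrobin check tcp
}
```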

> How do the HTTP headers look when coming from Varnish vs. Squid? Perhaps
> they offer some hints.

For the same file, we have:

Squid:
  HTTP/1.0 200 OK
  Server: nginx/1.0.4
  Date: Tue, 27 Sep 2011 13:09:21 GMT
  Content-Type: image/jpeg
  Content-Length: 95888
  Last-Modified: Fri, 23 Sep 2011 08:17:19 GMT
  Expires: Wed, 28 Sep 2011 13:09:21 GMT
  Cache-Control: max-age=86400
  Accept-Ranges: bytes
  Age: 795
  X-Cache: HIT from
  X-Cache-Lookup: HIT from
  Connection: keep-alive

Varnish:
  HTTP/1.1 200 OK
  Content-Type: image/jpeg
  Last-Modified: Fri, 23 Sep 2011 08:17:19 GMT
  Expires: Wed, 28 Sep 2011 13:11:32 GMT
  Cache-Control: max-age=86400
  Content-Length: 95888
  Accept-Ranges: bytes
  Date: Tue, 27 Sep 2011 13:22:36 GMT
  Age: 664
  Connection: keep-alive

The size difference between the two header blocks is 108 bytes per
response, which is not enough to explain the traffic difference.
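The overhead comes from the three headers Squid adds (Server, X-Cache,
X-Cache-Lookup) plus their CRLF terminators. The hostnames were trimmed
from the quote above, so the sketch below uses a made-up hostname; with
a 19-byte hostname it lands at 106 bytes, so the real hostnames are
around 20 bytes:

```python
# Hypothetical hostname -- the real ones are truncated in the quoted headers.
host = "cache01.example.com"  # 19 bytes, illustration only

# Headers present in the Squid response but absent from the Varnish one.
extra = [
    "Server: nginx/1.0.4",
    f"X-Cache: HIT from {host}",
    f"X-Cache-Lookup: HIT from {host}",
]

# Each header line is terminated by CRLF (2 bytes) on the wire.
overhead = sum(len(line) + 2 for line in extra)
print(overhead)  # -> 106 with this placeholder hostname
```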

Thank you for your time.

More information about the varnish-misc mailing list