High read IOPS with file backend

Bryan Stillwell bstillwell at photobucket.com
Fri Aug 16 23:01:05 CEST 2013


Here are a couple more graphs showing the requests/sec and network
traffic for both varnish and nginx:


As for testing method, that involved having both servers equally
weighted on an F5 load balancer taking production traffic.  The F5 was
configured to re-use connections when possible with OneConnect, and
to use persistent hashing so the same requests always went to the same
caching server.

As for the configurations, I'll send those to you in a private email.


On Fri, Aug 16, 2013 at 12:10 PM, Crowder, Travis
<Travis.Crowder at penton.com> wrote:
> Can you publish your testing methods and configurations for both Varnish
> and Nginx?  Can you include the Network graphs and/or the Connections
> graphs for Varnish and Nginx?
> -Travis
> On 8/16/13 12:29 PM, "Bryan Stillwell" <bstillwell at photobucket.com> wrote:
>>I'm looking at converting all of our nginx caching servers over to
>>varnish, but I'm seeing some oddities with the 'file' storage backend
>>that I'm hoping someone could shed some light on.
>>Hardware specs for both machines:
>>Dell PowerEdge R610
>>  Dual Xeon E5620 @ 2.40GHz
>>  64GiB memory
>>  4x 240GB SSDs in a RAID5 attached to a PERC H700 RAID controller
>>The following image compares CPU, memory, and IOPS for both a machine
>>running varnish (left), and one running nginx (right):
>>At the most recent sample, each machine was handling ~350 requests/sec.
>>As you can see the varnish machine has a lot more CPU time dedicated to
>>I/O wait, which matches up with ~5x higher IOPS numbers.  However, the
>>biggest difference is that varnish is using ~25x more read IOPS than nginx.
>>As for the jumps in the IOPS graph, I believe they can be explained by:
>>Wednesday @ 10:00a: Started taking traffic
>>Wednesday @ 11:00a: Memory cache filled, started using SSDs
>>Wednesday @  4:00p: TTL of 6 hours was hit, objects start expiring
>>Wednesday @  7:15p: SSD cache filled
>>I pre-allocated the storage with fallocate (fallocate -l 450g
>>/cache/varnish_storage.bin) to make sure allocation overhead wasn't
>>contributing to the issue.
>>Any ideas on what could be tuned to reduce the number of read IOPS to be
>>more in line with nginx?
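
For reference, the pre-allocation step and the file storage backend
described in the quoted message can be sketched roughly like this (the
fallocate command and path are from the message itself; the varnishd
flags other than -s are illustrative defaults, not the poster's actual
configuration):

```shell
# Pre-allocate the storage file up front so that on-demand allocation
# can be ruled out as a source of extra I/O (command from the message):
fallocate -l 450g /cache/varnish_storage.bin

# Point varnishd's 'file' storage backend at the pre-allocated file.
# The -a/-f flags here are generic examples:
varnishd -a :80 -f /etc/varnish/default.vcl \
         -s file,/cache/varnish_storage.bin,450g
```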


Bryan Stillwell

E: bstillwell at photobucket.com
O: 303.228.5109
M: 970.310.6085

More information about the varnish-misc mailing list