High read IOPS with file backend
Travis.Crowder at penton.com
Fri Aug 16 20:10:34 CEST 2013
Can you publish your testing methods and configurations for both Varnish
and Nginx? Can you include the Network graphs and/or the Connections
graphs for Varnish and Nginx?
On 8/16/13 12:29 PM, "Bryan Stillwell" <bstillwell at photobucket.com> wrote:
>I'm looking at converting all of our nginx caching servers over to
>varnish, but I'm seeing some oddities with the 'file' storage backend
>that I'm hoping someone could shed some light on.
>Hardware specs for both machines:
>Dell PowerEdge R610
> Dual Xeon E5620 @ 2.40GHz
> 64GiB memory
> 4x 240GB SSDs in a RAID5 attached to a PERC H700 RAID controller
>The following image compares CPU, memory, and IOPS for both a machine
>running varnish (left), and one running nginx (right):
>At the most recent sample, each machine was handling ~350 requests/sec.
>As you can see, the varnish machine has a lot more CPU time dedicated to
>I/O wait, which matches up with ~5x higher IOPS numbers. However, the
>biggest difference is that varnish is using ~25x more read IOPS than
>nginx.
>As for the jumps in the IOPS graph, I believe they can be explained by:
>Wednesday @ 10:00a: Started taking traffic
>Wednesday @ 11:00a: Memory cache filled, started using SSDs
>Wednesday @ 4:00p: TTL of 6 hours was hit, objects start expiring
>Wednesday @ 7:15p: SSD cache filled
>I pre-allocated the storage with fallocate (fallocate -l 450g
>/cache/varnish_storage.bin) to rule out on-demand file allocation as a
>contributing factor.
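(A minimal sketch of the setup described above, assuming the stock
varnishd file-backend syntax; the listen address, admin port, and VCL
path are placeholders, not taken from the original post:)

```shell
# Pre-allocate the cache file so the filesystem isn't extending it on
# demand; path and size match the original post.
fallocate -l 450g /cache/varnish_storage.bin

# Point varnishd's file storage backend at the pre-allocated file.
# "-s file,<path>,<size>" is the standard file-backend form; the -a,
# -T, and -f values below are illustrative assumptions.
varnishd -a :80 -T localhost:6082 \
         -f /etc/varnish/default.vcl \
         -s file,/cache/varnish_storage.bin,450g
```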
>Any ideas on what could be tuned to bring the read IOPS more in line
>with nginx?
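(One way to narrow this down is to watch the storage and LRU counters
while the cache is full; a hedged sketch using varnishstat, with counter
names that may vary slightly between Varnish versions:)

```shell
# Dump the file-backend (SMF) storage counters once: bytes allocated,
# bytes free, allocation request counts.
varnishstat -1 | grep -E '^SMF'

# Watch hit/miss and LRU nuking; heavy nuking once the 450g file fills
# (Wednesday ~7:15p in the timeline above) can drive extra read IOPS
# as evicted objects are churned through the file backend.
varnishstat -1 | grep -E 'cache_hit|cache_miss|n_lru_nuked'
```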
>varnish-misc mailing list
>varnish-misc at varnish-cache.org