High read IOPS with file backend

Bryan Stillwell bstillwell at photobucket.com
Fri Aug 16 19:29:46 CEST 2013


I'm looking at converting all of our nginx caching servers over to
varnish, but I'm seeing some oddities with the 'file' storage backend
that I'm hoping someone can shed some light on.
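
For reference, the cache file is wired up through varnishd's -s
option in the usual way; roughly like this (other flags omitted, and
treat this as a sketch rather than the exact command line):

  varnishd -a :80 \
           -f /etc/varnish/default.vcl \
           -s file,/cache/varnish_storage.bin,450g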

Hardware specs for both machines:

Dell PowerEdge R610
  Dual Xeon E5620 @ 2.40GHz
  64GiB memory
  4x 240GB SSDs in a RAID5 attached to a PERC H700 RAID controller

The following image compares CPU, memory, and IOPS for a machine
running varnish (left) and one running nginx (right):

http://i1217.photobucket.com/albums/dd391/bstillwell_pb/Graphs/varnish-vs-nginx.png

At the most recent sample, each machine was handling ~350 requests/sec.

As you can see, the varnish machine spends far more CPU time in I/O
wait, which matches up with its ~5x higher total IOPS.  The biggest
difference, though, is that varnish is doing ~25x more read IOPS than
nginx.
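
The same read/write split is visible live in iostat's r/s and w/s
columns on both boxes, e.g.:

  iostat -x 1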

As for the jumps in the IOPS graph, I believe they can be explained by:

Wednesday @ 10:00a: Started taking traffic
Wednesday @ 11:00a: Memory cache filled, started using SSDs
Wednesday @  4:00p: TTL of 6 hours was hit, objects start expiring
Wednesday @  7:15p: SSD cache filled
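
Those transitions line up with the varnishstat counters (counter
names here are the Varnish 3 ones; n_lru_nuked starts climbing once
the SSD cache fills and eviction kicks in):

  varnishstat -1 | egrep 'n_object|n_expired|n_lru_nuked'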

I pre-allocated the storage with fallocate (fallocate -l 450g
/cache/varnish_storage.bin) to make sure sparse-file allocation
wasn't contributing to the issue.
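
To confirm the preallocation actually took and the file isn't
sparse, the apparent size and the allocated blocks should agree:

  ls -lh /cache/varnish_storage.bin   # apparent size
  du -sh /cache/varnish_storage.bin   # blocks actually allocated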

Any ideas on what could be tuned to bring the read IOPS more in line
with nginx?

Thanks,
Bryan


