Strategy for large cache sets
Anders Nordby
anders at fupp.net
Thu Jul 3 06:40:20 CEST 2008
Hi,
On Tue, Jul 01, 2008 at 11:16:15AM -0700, Skye Poier Nott wrote:
> I want to deploy Varnish with very large cache sizes (200GB or more)
> for large, long lived file sets. Is it more efficient to use large
> swap or large mmap in this scenario?
>
> According to the FreeBSD lists, even 20GB of swap requires 200MB of
> kern.maxswzone just to keep track of it, so it doesn't seem like that
> will scale too well. Is one method better than the other for many small
> files vs. fewer, larger files?
My experience with Varnish on FreeBSD with long lived (~1 week) large
data sets tells me that the file storage backend easily gives you
60-70 second hangs. The malloc backend works more smoothly. I've been
using a 256 MB maxswzone on a few servers with up to 80 GB of data in
swap and have not had any problems with maxswzone being too small.
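For reference, a minimal sketch of that kind of setup; the sizes, ports
and backend address below are placeholders, not my exact configuration:

    # /boot/loader.conf: raise the swap-metadata zone (value in bytes, ~256 MB)
    kern.maxswzone="268435456"

    # start varnishd with the malloc storage backend instead of the file backend
    varnishd -a :80 -b localhost:8080 -s malloc,60G -T localhost:6082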
That said, I do see large peaks in the number of threads and in VM
faults during high traffic, which makes it difficult to scale further.
I don't know whether this is due to bottlenecks in the VM subsystem, in
Varnish, or simply too little RAM, but I hope to find out more about
it. I suspect the developers still have more work to do in this area.
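If you want to watch the same symptoms yourself, something along these
lines should do; the exact counter names can vary between Varnish
versions:

    # dump Varnish stats once and look at the worker thread counters
    varnishstat -1 | grep n_wrk

    # system-wide paging and fault activity, sampled every 5 seconds
    vmstat 5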
PS: FreeBSD supports swap devices of only up to 32 GB each, so you may
need to split your disks/volumes into several partitions.
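As an illustration, splitting swap across several sub-32 GB partitions
in /etc/fstab could look like this (device names are made up):

    # /etc/fstab: two 30 GB swap partitions instead of one 60 GB device
    /dev/da1p1   none   swap   sw   0   0
    /dev/da1p2   none   swap   sw   0   0

    # activate all swap devices listed in fstab
    swapon -a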
Bye,
--
Anders.