This document is out of date and does not reflect recommended values for recent versions.
For further descriptions of these settings, see the output of param.show -l in the Varnish management interface.
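Parameters can be inspected and changed at runtime through the management interface; a quick sketch using varnishadm (the management address shown is a common default and may differ on your installation):

```shell
# List all parameters with long descriptions
varnishadm -T localhost:6082 param.show -l

# Change a parameter on a running instance (takes effect immediately,
# but is lost on restart unless also added to the startup flags)
varnishadm -T localhost:6082 param.set thread_pool_min 200
```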
- -p thread_pool_min=200 (default: 5)
Idle threads are relatively harmless. This number is multiplied by the number of thread pools you have available, and the total should be roughly the number of threads you need on a normal day.
- -p thread_pool_max=4000 (default: 1000)
The maximum number of threads is essentially limited by available file descriptors, but setting it too high does not increase performance. A number of idle threads is reasonably harmless, but do not increase this much above 5000 or you risk running into file-descriptor exhaustion, among other problems.
- -p thread_pool_add_delay=2 (default: 20ms, default in master: 2ms)
Reducing add_delay lets Varnish create threads faster, which is essential, especially at startup, to avoid filling up the queue and dropping requests.
- -p session_linger=100 OR MORE (default: 0ms in <= 2.0.4 and 50ms in > 2.0.4)
Letting each thread linger and wait for further requests avoids excessive context switching when the CPU is saturated (and helps in general). The right value depends on how long it takes you to deliver a typical object. Somewhat counter-intuitively, this also reduces the number of threads piling up.
- -s malloc,(YOURMEMORY-20%)G
Keep data in memory using -s malloc.
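Putting the flags above together, a varnishd invocation might look like this (the listen address, VCL path, and memory size are illustrative assumptions, not recommendations; this sketch assumes a machine with 16 GB of RAM, so -s malloc gets roughly 80% of it):

```shell
# Start varnishd with the tuned parameters from this document
varnishd \
    -a :80 \
    -f /etc/varnish/default.vcl \
    -p thread_pool_min=200 \
    -p thread_pool_max=4000 \
    -p thread_pool_add_delay=2 \
    -p session_linger=100 \
    -s malloc,13G
```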
Enable a grace period (Varnish serves stale, but cacheable, objects while retrieving the object from the backend):
  sub vcl_recv {
      set req.grace = 30s;
  }
  sub vcl_fetch {
      set obj.grace = 30s;
  }
FreeBSD
- If using FreeBSD 7.0 or newer, try using SCHED_ULE instead of SCHED_4BSD in your kernel config.
- Turn off soft-updates on the filesystems where you keep your Varnish data files. It will not help Varnish.
- sysctl.conf settings (see tuning(7) manpage and http://www.freebsd.org/doc/en/books/handbook/configtuning-kernel-limits.html):
kern.ipc.nmbclusters=65536
kern.ipc.somaxconn=16384
kern.maxfiles=131072
kern.maxfilesperproc=104856
kern.threads.max_threads_per_proc=4096
- loader.conf settings:
kern.ipc.maxsockets="131072"
kern.ipc.maxpipekva="104857600" (only needed if you see "kern.ipc.maxpipekva exceeded" messages in your logs; Varnish no longer uses pipes for worker-pool synchronization)
- If you run 32-bit FreeBSD and want to cache more than 512 MB of objects (the default limit), increase kern.maxdsiz (the maximum data segment size per process, in bytes) in loader.conf.
- If you use the malloc storage type, and your system hangs with "swap zone exhausted, increase kern.maxswzone" on the console, try increasing kern.maxswzone (default is 32 MB in FreeBSD 7.0) in loader.conf.
- Mount the working directory of Varnish (typically /usr/lib/varnish) on tmpfs. This reduces unnecessary disk access for the shmlog.
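A sketch of an /etc/fstab entry for this, assuming the working directory mentioned above:

```shell
# /etc/fstab: mount the Varnish working directory on tmpfs so the shmlog
# never touches disk (adjust the path to your installation)
tmpfs  /usr/lib/varnish  tmpfs  rw  0  0
```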
All UNIX platforms
- Set the noatime (and, on Linux, nodiratime) mount options on the filesystems where you keep your Varnish data files. There is no point in keeping track of how often they are accessed; it only wastes cycles and causes unnecessary disk activity.
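As a sketch, a Linux /etc/fstab line with access-time updates disabled (the device, mount point, and filesystem are placeholders):

```shell
# /etc/fstab: example data filesystem for Varnish with atime tracking off
/dev/sdb1  /var/lib/varnish-storage  ext4  defaults,noatime,nodiratime  0  2
```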
- Make sure you monitor your cache hit ratio, the percentage of requests actually served from cache. It should be high for Varnish to take load off the backends. Use varnishstat (see the hitrate avg figures), and if possible also monitor and graph it over time. Suitable tools include [Nagios http://www.nagios.org/] and [Munin http://munin.projects.linpro.no/] (see [Muninexchange http://muninexchange.projects.linpro.no/] and http://anders.fupp.net/plugins/ for plugins).
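The hit ratio can be derived from the cache_hit and cache_miss counters. A minimal sketch (the counter values below are made-up examples; on a live system read them from the output of varnishstat -1):

```shell
# Hypothetical counter values, as reported by `varnishstat -1`
cache_hit=95000
cache_miss=5000

# Hit ratio = hits / (hits + misses), as a percentage
hitrate=$(awk -v h="$cache_hit" -v m="$cache_miss" \
    'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "hit ratio: ${hitrate}%"
# prints: hit ratio: 95.0%
```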
- Monitor the number of Varnish threads. It should never be as high as the Varnish thread_pool_max setting.
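A quick way to eyeball the thread count (this assumes the Varnish 2.x counter name n_wrk for running worker threads; counter names differ in later versions):

```shell
# Print the worker-thread counter once and exit
varnishstat -1 -f n_wrk
```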