Varnish virtual memory usage
Henry Paulissen
h.paulissen at qbell.nl
Thu Nov 5 01:46:02 CET 2009
I know it is a really bad thing not to have keep-alive, but our load
balancer / failover software doesn't support it (http://haproxy.1wt.eu/).
Our traffic flows as follows:
Round-robin DNS sends requests to one of 2 dedicated haproxy servers.
In haproxy there are 6 varnish servers defined; haproxy chooses, round
robin and based on availability, the varnish server it sends the request
to.
So all our static traffic is distributed across 6 dedicated varnish servers.
As far as I can see from memory usage and CPU load, this setup isn't
stressing them.
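For reference, a haproxy backend along those lines might look like the
sketch below (server names and addresses are made up; on the haproxy
versions current in 2009, `option httpclose` is what turns keep-alive
connections into close, as described elsewhere in this thread):

```haproxy
backend varnish_static
    balance roundrobin        # distribute across the 6 varnish servers
    option httpclose          # force Connection: close on both sides
    server varnish1 10.0.0.1:6081 check
    server varnish2 10.0.0.2:6081 check
    server varnish3 10.0.0.3:6081 check
    server varnish4 10.0.0.4:6081 check
    server varnish5 10.0.0.5:6081 check
    server varnish6 10.0.0.6:6081 check
```

The `check` keyword enables health checks, which gives the "on
availability" behavior: an unreachable varnish server is taken out of the
rotation.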
-----Original message-----
From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On behalf of
Poul-Henning Kamp
Sent: Thursday 5 November 2009 1:16
To: Henry Paulissen
CC: 'Ken Brownfield'; varnish-misc at projects.linpro.no
Subject: Re: Varnish virtual memory usage
In message <003201ca5da9$57ae7e30$070b7a90$@paulissen at qbell.nl>, "Henry
Paulissen" writes:
>Our load balancer transforms all connections from keep-alive to close.
That is a really bad idea: it significantly increases the amount of work
varnish has to do.
>but 1,610 threads with your
>1MB stack limit will use 1.7GB of RAM.
It is very important to keep "virtual address space" and "RAM" apart.

The stacks will use 1.7G of VM space, but certainly not that much RAM,
since most of the stack pages are never touched.

The number you care about is the resident size: the _actual_ amount
of RAM used.

Only on 32-bit systems is there any reason to be concerned about the
VM space used.
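The distinction can be demonstrated directly. The sketch below (Linux-specific,
since it reads /proc/self/status) reserves 1 GiB of anonymous address space,
touches only one page, and shows that virtual size grows by the full gigabyte
while resident size barely moves, exactly as with mostly-unused thread stacks:

```python
import mmap

SIZE = 1 << 30  # reserve 1 GiB of address space

def status_kib(field):
    """Read a VmSize/VmRSS style field (in KiB) from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    return 0

before_vsz = status_kib("VmSize")

# Private anonymous mapping: pages are allocated lazily, on first write.
m = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
m[0:1] = b"\x01"  # touch only the first page

vsz_growth = status_kib("VmSize") - before_vsz  # ~1 GiB of VM space
rss = status_kib("VmRSS")                       # far below 1 GiB

m.close()
```

The same comparison for a running varnishd can be made with
`ps -o vsz,rss -p <pid>`: vsz is address space, rss is the number that
actually measures RAM pressure.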
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.