Varnish 2.1.5 eating memory, hit % decrease
Jean-Francois Laurens
jean-francois.laurens at rts.ch
Tue Apr 12 17:05:57 CEST 2011
Thanks for the advice, I'll try with 131072 and see if I can get better
behavior.
Jef
On 08/04/11 23:09, Ken Brownfield wrote:
> I forgot about your min_free_kbytes question:
>
> While I would personally recommend 131072 as a starting point, this value does
> not translate directly to what is actually retained as free RAM. In my
> experience, the kernel's behavior is non-linear, non-deterministic, and very
> delicate. Usually the kernel will keep much more free RAM than specified
> (2-3x), and modifying this value too often under load will cause permanent
> behavior problems in the kernel.
>
> Setting it to 10% is a terrible idea under any circumstance I can imagine.
> The goal with this setting in the context of a backing-store cache is to set
> it high enough that you have 5-15 seconds of read/write I/O throughput
> available for bursts. For example, if Varnish is committing 5MB/s to/from
> disk, make sure you have 25-75MB of RAM free at a minimum. This might only
> translate to a min_free_kbytes of 12000-30000.
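The sizing rule above can be sketched as simple arithmetic. All figures below (5 MB/s of I/O, a 5-15 second burst window, and the kernel keeping roughly 2-3x the configured value actually free) are the example values quoted in the post, not measurements:

```shell
# Sketch of the min_free_kbytes sizing rule described above.
io_rate_mb=5                  # sustained disk I/O in MB/s (example figure)
burst_low=5; burst_high=15    # seconds of burst headroom

free_low_kb=$((io_rate_mb * burst_low * 1024))     # 25600 kB ~= 25 MB
free_high_kb=$((io_rate_mb * burst_high * 1024))   # 76800 kB ~= 75 MB

# The kernel tends to keep 2-3x min_free_kbytes actually free, so the
# sysctl value can be roughly a half to a third of the target:
echo "min_free_kbytes ~= $((free_low_kb / 2))-$((free_high_kb / 3)) kB"
```

This lands in the same 12000-30000 ballpark Ken mentions.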
>
> I'd strongly suggest modifying the value slowly and carefully, ideally only
> once after a reboot via sysctl.conf. But once done, my 1TB -spersistent
> Varnish instances became very stable.
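A minimal sketch of applying the value once via sysctl.conf, as suggested; the 131072 figure is the recommended starting point from above, and the sysctl name and paths are standard Linux:

```shell
# /etc/sysctl.conf -- picked up automatically at boot:
#   vm.min_free_kbytes = 131072

# Apply once by hand (requires root); avoid re-tuning under load:
sysctl -w vm.min_free_kbytes=131072

# Verify what the kernel actually took:
cat /proc/sys/vm/min_free_kbytes
```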
> --
> kb
>
>
>
> On Fri, Apr 8, 2011 at 13:55, Ken Brownfield <kbrownfield at google.com> wrote:
>> This means the child process died and restarted (the reason for this should
>> appear earlier in the log; perhaps your cli_timeout is too low under a
>> heavily loaded system -- try 20s).
>>
>> "-sfile" is not persistent storage, so when the child process restarts it
>> uses a new, empty storage structure. You should have luck with
>> "-spersistent" on the latest Varnish or trunk, at least for child process
>> restarts.
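Taken together, the two suggestions might look like this on the command line. This is a hypothetical sketch using Varnish 2.x-era syntax; the listen address and storage path are placeholders, not from the thread:

```shell
# Hypothetical varnishd invocation: raise cli_timeout for a heavily
# loaded box, and use persistent storage so a child restart does not
# start from an empty cache. Address and path are illustrative only.
varnishd -a :80 \
         -p cli_timeout=20 \
         -s persistent,/var/lib/varnish/varnish_storage.bin,40G
```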
>>
>> FWIW,
>> --
>> kb
>>
>>
>>
>> On Fri, Apr 8, 2011 at 01:55, Jean-Francois Laurens
>> <jean-francois.laurens at rts.ch> wrote:
>>> Hi Ken,
>>>
>>> Thanks for the hint !
>>> You're allocating 128MB here; how did you arrive at that number? I read
>>> somewhere that this value can be set to 10% of the actual memory size,
>>> which would be 800MB in my case. Does that make sense to you?
>>> I also read that setting this value too high would crash the system
>>> immediately.
>>>
>>>
>>> Yesterday evening the system was under heavy load, but Varnish did not hang!
>>> Instead it dropped all its objects! Then the load went back to normal.
>>> It seems setting -sfile to 40GB suits this server's memory capacity better.
>>> One question remains, though: why were all the objects dropped?
>>> Attached is a plot from cacti regarding the number of objects.
>>>
>>> The only thing I could get from the messages log is this:
>>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (3733) died signal=3
>>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child cleanup complete
>>> Apr 7 19:00:29 server-01-39 varnishd[3732]: child (29359) Started
>>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said
>>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said Child starts
>>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said managed to
>>> mmap 42949672960 bytes of 42949672960
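As a side note, the mmap size in the log line above is exactly the 40GB (GiB) configured for -sfile, so the storage file itself was mapped successfully:

```shell
# Sanity check: 40 GiB in bytes matches the logged mmap size.
echo $((40 * 1024 * 1024 * 1024))   # prints 42949672960
```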
>>>
>>>
>>> How could I find out what is really happening that could explain this
>>> behaviour?
>>>
>>> Cheers,
>>> Jef
>
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> Jean-Francois Laurens
> Unix Systems Engineer
> Resources and Development
> Backend Sector
> RTS - Radio Télévision Suisse
> Quai Ernest-Ansermet 20
> Case postale 234
> CH - 1211 Genève 8
> T +41 (0)58 236 81 63
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://www.varnish-cache.org/lists/pipermail/varnish-misc/attachments/20110412/c5696c45/attachment-0003.html>