Worker thread stack size
Ken Brownfield
kb+varnish at slide.com
Thu Nov 19 03:08:07 CET 2009
I've attached a zip with the /proc/PID/maps output as well as the "pmap -x PID" output for a 2.0.5 process that's been running in production for about four days. It's similarly patched for worker /and/ backend thread stack sizing, and I'm specifying thread_pool_stacksize=256K. The stack for all other threads is the system default (8MB on my systems).
In my testing on x86_64, I wasn't able to get below 256K using "ulimit -s" for the entire process, and I wasn't able to get below a 128K stack size applied only to the worker/backend threads. A 256K worker/backend stack size has had no failures for me, all with the default 16k sess_workspace.
No i686 experience, sorry.
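
For anyone who wants to pull the same numbers straight out of /proc/PID/maps rather than pmap, here's a rough Linux-flavored sketch in C; the PID argument and the rw/private filter are just illustrative choices on my part, not anything from the Varnish tree. It prints the start/end addresses and the size (the difference between the first two columns Poul-Henning mentions below) for each private, writable mapping, which includes the thread stacks:

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Rough sketch: read /proc/PID/maps and print the size of every
     * private, writable mapping -- the worker-thread stacks will be
     * among them.  On FreeBSD the same data comes from /proc/PID/map
     * or procstat -v instead.
     */
    int
    main(int argc, char **argv)
    {
        char path[64], line[512], perms[8];
        unsigned long start, end;
        FILE *fp;

        if (argc != 2) {
            fprintf(stderr, "usage: %s PID\n", argv[0]);
            exit(1);
        }
        snprintf(path, sizeof path, "/proc/%s/maps", argv[1]);
        if ((fp = fopen(path, "r")) == NULL) {
            perror(path);
            exit(1);
        }
        while (fgets(line, sizeof line, fp) != NULL) {
            if (sscanf(line, "%lx-%lx %7s", &start, &end, perms) != 3)
                continue;
            if (perms[0] == 'r' && perms[1] == 'w' && perms[3] == 'p')
                printf("0x%lx 0x%lx %lu KB\n",
                    start, end, (end - start) / 1024);
        }
        fclose(fp);
        return (0);
    }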
Given the sometimes high ratio of backend threads to worker threads, I thought controlling the backend thread stack size was also a good idea. I'm curious why a similar tweak wasn't made to cache_backend_poll.c.
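
And just to make concrete what I mean by "stack sizing" the backend threads too: the sketch below is hypothetical, not the actual patch or the Varnish source, but it shows the general pthread mechanism of setting an explicit per-thread stack size before pthread_create(), which is, as far as I can tell, what the new thread_pool_stacksize parameter amounts to for worker threads. The 256K constant and the thread function name are placeholders.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder value, standing in for thread_pool_stacksize. */
    #define THREAD_STACK_SIZE (256 * 1024)

    static void *
    backend_poll_thread(void *arg)
    {
        (void)arg;
        /* ... the thread's real work would go here ... */
        return (NULL);
    }

    int
    main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;

        /*
         * Ask for a 256K stack instead of the system default
         * (8MB on my Linux boxes).
         */
        if (pthread_attr_init(&attr) != 0 ||
            pthread_attr_setstacksize(&attr, THREAD_STACK_SIZE) != 0) {
            fprintf(stderr, "pthread_attr setup failed\n");
            exit(1);
        }
        if (pthread_create(&tid, &attr, backend_poll_thread, NULL) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            exit(1);
        }
        pthread_attr_destroy(&attr);
        pthread_join(tid, NULL);
        return (0);
    }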
Thx,
--
Ken
-------------- next part --------------
A non-text attachment was scrubbed...
Name: varnish_maps.zip
Type: application/zip
Size: 113733 bytes
Desc: not available
URL: <https://www.varnish-cache.org/lists/pipermail/varnish-misc/attachments/20091118/bedc7fc6/attachment-0003.zip>
-------------- next part --------------
On Nov 18, 2009, at 4:43 AM, Poul-Henning Kamp wrote:
>
> I have added a parameter to set the worker thread stack size.
>
> I suspect we can get away with something as low as maybe 64k +
> $sess_workspace, but I have absolutely no data to confirm or
> deny this claim.
>
> If a couple of you could spend a few minutes to examine actual
> stack sizes and report back, that would be nice.
>
> The number I am interested in is the number of mapped and
> modified pages in the worker-thread stacks.
>
> On FreeBSD, mincore(2) could report this, but on Linux mincore(2)
> only reports mapped vs. unmapped pages, which may or may not be
> enough. In either case, it would require some hackish code in Varnish.
>
> A better way is to ask your system's VM system, for instance by
> looking at /proc/$pid/map.
>
> On a 64bit FreeBSD system, the entries you are looking for look like
> this:
> 0x7ffffddd0000 0x7ffffddf0000 3 0 0xffffff003d87e0d8 rw- 1 0 0x3100 NCOW NNC default - CH 488
> 0x7ffffdfd1000 0x7ffffdff1000 3 0 0xffffff0028845d80 rw- 1 0 0x3100 NCOW NNC default - CH 488
> 0x7ffffe1d2000 0x7ffffe1f2000 3 0 0xffffff00635eea20 rw- 1 0 0x3100 NCOW NNC default - CH 488
> 0x7ffffe3d3000 0x7ffffe3f3000 3 0 0xffffff0095d57870 rw- 1 0 0x3100 NCOW NNC default - CH 488
> 0x7ffffe5d4000 0x7ffffe5f4000 3 0 0xffffff00630ec0d8 rw- 1 0 0x3100 NCOW NNC default - CH 488
>
> And the number I need is the difference between the first two columns
> for all your worker threads (min, max, and average accepted as well),
> and the value of your sess_workspace parameter.
>
> In the example above, you would find:
> 0x7ffffddf0000 - 0x7ffffddd0000 = 128K
> 0x7ffffdff1000 - 0x7ffffdfd1000 = 128K
> ...
>
> Poul-Henning
>
>
> In message <20091118123439.6A18C38D0D at projects.linpro.no>, phk at projects.linpro.no writes:
>> Author: phk
>> Date: 2009-11-18 13:34:39 +0100 (Wed, 18 Nov 2009)
>> New Revision: 4352
>>
>> Modified:
>> trunk/varnish-cache/bin/varnishd/cache_pool.c
>> trunk/varnish-cache/bin/varnishd/heritage.h
>> trunk/varnish-cache/bin/varnishd/mgt_pool.c
>> Log:
>> Add a parameter to set the workerthread stacksize.
>>
>> On 32 bit systems, it may be necessary to tweak this down to get high
>> numbers of worker threads squeezed into the address-space.
>>
>> I have no idea how much stack-space a worker thread normally uses, so
>> no guidance is given, and we default to the system default.
>>
>> Fixes #572
>
> --
> Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG | TCP/IP since RFC 956
> FreeBSD committer | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc