Bottleneck on connection accept rates
James Mathiesen
jmathiesen at tripadvisor.com
Thu May 4 00:55:09 CEST 2017
Thank you Dridi. I will bump that tomorrow morning and test again.
Premature optimization on my part, apparently.
I had thought I would be fine with 2 thread pools of 1,000 threads each (2,000 workers in total). Do I need a thread for each backend connection as well as for each client connection?
james
________________________________
From: Dridi Boukelmoune <dridi at varni.sh>
Sent: Wednesday, May 3, 2017 6:38 PM
To: James Mathiesen
Cc: varnish-misc at varnish-cache.org
Subject: Re: Bottleneck on connection accept rates
> I believe I've ruled out the acceptor_sleep scenario (none of the debug messages that would accompany it are logged), but I'm going to try disabling it explicitly to see if that helps. I'm also going to try the accept-filter feature, although I'm not sure how well supported it is. And maybe try reducing timeout_linger.
>
> My goal is to have 1-2K simultaneous connections with an establish rate of 1-2K/second. The cache miss rate will be 100%, so there will be a lot of backend connection management going on. Is this a realistic goal?
I'm not sure what the current bottleneck is, but those 2,000 workers will
become one if you reach 2K concurrent connections.
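A rough back-of-the-envelope check against the settings quoted below (the
idea that a cache miss also ties up a worker for the backend fetch is an
assumption on my part, not something confirmed in this thread):

    thread_pools (2) x thread_pool_max (1000)    = 2000 workers total
    2K concurrent client connections             = all workers busy
    100% miss rate, if fetches need workers too  = demand above capacity

So at the stated goal there is no headroom left, even before the backend
side is counted.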
> thread_pool_add_delay 0.000 [seconds] (default)
> thread_pool_destroy_delay 1.000 [seconds] (default)
> thread_pool_fail_delay 0.200 [seconds] (default)
> thread_pool_max 1000 [threads]
> thread_pool_min 1000 [threads]
> thread_pool_reserve 0 [threads] (default)
> thread_pool_stack 48k [bytes] (default)
> thread_pool_timeout 300.000 [seconds] (default)
> thread_pools 2 [pools] (default)
Bump thread_pool_max back to 5000 (the default value) to get enough
room to handle the traffic you are expecting.
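For reference, a sketch of the two ways to apply that (parameter names are
from the listing above; the runtime change does not survive a restart):

    # runtime, via the management CLI
    varnishadm param.set thread_pool_max 5000

    # persistent, as a varnishd start-up flag
    varnishd ... -p thread_pool_max=5000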
> MAIN.uptime 7546 1.00 Child process uptime
> MAIN.sess_conn 277145 36.73 Sessions accepted
> MAIN.sess_drop 0 0.00 Sessions dropped
> MAIN.sess_fail 0 0.00 Session accept failures
No failures and no sessions dropped; looking good.
> MAIN.thread_queue_len 234 . Length of session queue
And here you are apparently running out of workers: 234 sessions are queued
waiting for a free thread.
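A minimal way to keep an eye on this while testing, assuming varnishstat is
available on the box (counter names as in the output above):

    # one-shot dump of the thread and queue counters
    varnishstat -1 -f MAIN.threads -f MAIN.threads_limited \
                -f MAIN.thread_queue_len -f MAIN.sess_queued

If MAIN.threads_limited keeps climbing, the pools are capped at
thread_pool_max.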
Dridi