<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr">
<div id="divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Constantia,Serif;" dir="ltr">
<p>Thank you, Dridi. I will bump that tomorrow morning and test again. <br>
</p>
<p><br>
</p>
<p>Premature optimization on my part, apparently.</p>
<p><br>
</p>
<p>I had thought I would be fine with 2 thread pools of 1,000 threads each. Do I need a thread for each backend connection as well as one for each client connection?</p>
<p><br>
</p>
<p>james<br>
</p>
<br>
<br>
<div style="color: rgb(0, 0, 0);">
<div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Dridi Boukelmoune <dridi@varni.sh><br>
<b>Sent:</b> Wednesday, May 3, 2017 6:38 PM<br>
<b>To:</b> James Mathiesen<br>
<b>Cc:</b> varnish-misc@varnish-cache.org<br>
<b>Subject:</b> Re: Bottleneck on connection accept rates</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText">> I believe I've ruled out the acceptor_sleep scenario (no debug messages that would accompany it are logged), but I'm going to try and disable it explicitly and see if that helps. I'm also going to try using the accept-filter feature, although I'm not sure how supported it is. And maybe try reducing timeout_linger.<br>
><br>
> My goal is to have 1-2K simultaneous connections with an establish rate of 1-2K/second. Cache miss rate will be 100% so there will be lots of backend connection management going on. Is this a realistic goal?<br>
<br>
Not sure about the bottleneck, but the 2,000 workers will become one if<br>
you reach 2K concurrent connections.<br>
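To make the capacity arithmetic explicit (a quick sketch; it assumes thread_pool_min/thread_pool_max are per-pool limits, which is how stock Varnish treats them):<br>

```shell
# Total worker capacity = number of pools x per-pool maximum.
# Values are the ones quoted from the param listing below.
thread_pools=2
thread_pool_max=1000
echo $((thread_pools * thread_pool_max))  # prints 2000
```

So at 2K concurrent connections every worker is busy, and any further session has to queue.<br>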
<br>
> thread_pool_add_delay 0.000 [seconds] (default)<br>
> thread_pool_destroy_delay 1.000 [seconds] (default)<br>
> thread_pool_fail_delay 0.200 [seconds] (default)<br>
> thread_pool_max 1000 [threads]<br>
> thread_pool_min 1000 [threads]<br>
> thread_pool_reserve 0 [threads] (default)<br>
> thread_pool_stack 48k [bytes] (default)<br>
> thread_pool_timeout 300.000 [seconds] (default)<br>
> thread_pools 2 [pools] (default)<br>
<br>
Bump thread_pool_max back to 5000 (the default value) to get enough<br>
room to handle the traffic you are expecting.<br>
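As an aside, the bump can also be applied at runtime through the management CLI rather than waiting for a restart (a config fragment, assuming a stock varnishadm/varnishstat install):<br>

```shell
# Raise the per-pool worker ceiling on the running instance:
varnishadm param.set thread_pool_max 5000

# Or persist it on the varnishd command line:
#   varnishd ... -p thread_pool_max=5000

# Then watch whether sessions are still queuing for a worker:
varnishstat -1 -f MAIN.thread_queue_len
```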
<br>
> MAIN.uptime 7546 1.00 Child process uptime<br>
> MAIN.sess_conn 277145 36.73 Sessions accepted<br>
> MAIN.sess_drop 0 0.00 Sessions dropped<br>
> MAIN.sess_fail 0 0.00 Session accept failures<br>
<br>
No failures and no sessions dropped; looking good.<br>
<br>
> MAIN.thread_queue_len 234 . Length of session queue<br>
<br>
And here you are apparently running out of workers.<br>
<br>
Dridi<br>
</div>
</span></font></div>
</div>
</body>
</html>