<div dir="ltr"><div>Hi Guillaume,</div><div><br></div><div><span style="background-color:rgb(255,255,0)">pz_list and recv_queue are not (to my knowledge) Varnish counters, where are you seeing them?</span><br></div><div>Yes, could not find them in varnish counters</div><div><br></div><div>for pz_list: we checked count of processes list during issue period [9th June]</div><div>Ex: sar -q -f /var/log/sa/sa09 -s 21:15:00 -e 21:30:00</div><div><br></div><div>for recv_queue, we get this metric using netstat command [But here, we cannot get old logs now]</div><div>Ex: sudo netstat -nltp</div><div><br></div><div><div><span style="background-color:rgb(255,255,0)">Could you share a "varnishstat -1" of the impacted machine?</span></div></div><div>I doubt, if we can get varnishstat of machine during the issue period - correct me if I'm wrong</div><div><br></div><div>Below are the stats taken now --> 21st June -  02:00</div><div>MGT.uptime             9960545         1.00 Management process uptime<br>MGT.child_start              1         0.00 Child process started<br>MGT.child_exit               0         0.00 Child process normal exit<br>MGT.child_stop               0         0.00 Child process unexpected exit<br>MGT.child_died               0         0.00 Child process died (signal)<br>MGT.child_dump               0         0.00 Child process core dumped<br>MGT.child_panic              0         0.00 Child process panic<br>MAIN.summs           999145669       100.31 stat summ operations<br>MAIN.uptime            9960546         1.00 Child process uptime<br>MAIN.sess_conn       230308639        23.12 Sessions accepted<br>MAIN.sess_drop               0         0.00 Sessions dropped<br>MAIN.sess_fail               0         0.00 Session accept failures<br>MAIN.client_req_400         8578         0.00 Client requests received, subject to 400 errors<br>MAIN.client_req_417            0         0.00 Client requests received, subject to 417 errors<br>MAIN.client_req        272378619        27.35 Good client requests received<br>MAIN.cache_hit          61997297         6.22 Cache hits<br>MAIN.cache_hitpass             0         0.00 Cache hits for pass.<br>MAIN.cache_hitmiss        147993         0.01 Cache hits for miss.<br>MAIN.cache_miss        210380895        21.12 Cache misses<br>MAIN.backend_conn        5153783         0.52 Backend conn. success<br>MAIN.backend_unhealthy            0         0.00 Backend conn. not attempted<br>MAIN.backend_busy                 0         0.00 Backend conn. too many<br>MAIN.backend_fail                42         0.00 Backend conn. failures<br>MAIN.backend_reuse        206881432        20.77 Backend conn. reuses<br>MAIN.backend_recycle      208365094        20.92 Backend conn. recycles<br>MAIN.backend_retry             3347         0.00 Backend conn. 
> Could you share a "varnishstat -1" of the impacted machine?

I doubt we can still get the varnishstat output from the issue period
itself - correct me if I'm wrong.

Below are the stats taken just now (21st June, 02:00):

MGT.uptime             9960545         1.00 Management process uptime
MGT.child_start              1         0.00 Child process started
MGT.child_exit               0         0.00 Child process normal exit
MGT.child_stop               0         0.00 Child process unexpected exit
MGT.child_died               0         0.00 Child process died (signal)
MGT.child_dump               0         0.00 Child process core dumped
MGT.child_panic              0         0.00 Child process panic
MAIN.summs           999145669       100.31 stat summ operations
MAIN.uptime            9960546         1.00 Child process uptime
MAIN.sess_conn       230308639        23.12 Sessions accepted
MAIN.sess_drop               0         0.00 Sessions dropped
MAIN.sess_fail               0         0.00 Session accept failures
MAIN.client_req_400       8578         0.00 Client requests received, subject to 400 errors
MAIN.client_req_417          0         0.00 Client requests received, subject to 417 errors
MAIN.client_req      272378619        27.35 Good client requests received
MAIN.cache_hit        61997297         6.22 Cache hits
MAIN.cache_hitpass           0         0.00 Cache hits for pass.
MAIN.cache_hitmiss      147993         0.01 Cache hits for miss.
MAIN.cache_miss      210380895        21.12 Cache misses
MAIN.backend_conn      5153783         0.52 Backend conn. success
MAIN.backend_unhealthy       0         0.00 Backend conn. not attempted
MAIN.backend_busy            0         0.00 Backend conn. too many
MAIN.backend_fail           42         0.00 Backend conn. failures
MAIN.backend_reuse   206881432        20.77 Backend conn. reuses
MAIN.backend_recycle 208365094        20.92 Backend conn. recycles
MAIN.backend_retry        3347         0.00 Backend conn. retry
MAIN.fetch_head              0         0.00 Fetch no body (HEAD)
MAIN.fetch_length      2073051         0.21 Fetch with Length
MAIN.fetch_chunked   209964885        21.08 Fetch chunked
MAIN.fetch_eof               9         0.00 Fetch EOF
MAIN.fetch_bad               0         0.00 Fetch bad T-E
MAIN.fetch_none             40         0.00 Fetch no body
MAIN.fetch_1xx               0         0.00 Fetch no body (1xx)
MAIN.fetch_204               0         0.00 Fetch no body (204)
MAIN.fetch_304               0         0.00 Fetch no body (304)
MAIN.fetch_failed       415941         0.04 Fetch failed (all causes)
MAIN.fetch_no_thread         0         0.00 Fetch failed (no thread)
MAIN.pools                   2          .   Number of thread pools
MAIN.threads                31          .   Total number of threads
MAIN.threads_limited        75         0.00 Threads hit max
MAIN.threads_created    290240         0.03 Threads created
MAIN.threads_destroyed  290209         0.03 Threads destroyed
MAIN.threads_failed          0         0.00 Thread creation failed
MAIN.thread_queue_len        0          .   Length of session queue
MAIN.busy_sleep        1000325         0.10 Number of requests sent to sleep on busy objhdr
MAIN.busy_wakeup       1000325         0.10 Number of requests woken after sleep on busy objhdr
MAIN.busy_killed             0         0.00 Number of requests killed after sleep on busy objhdr
MAIN.sess_queued        324645         0.03 Sessions queued for thread
MAIN.sess_dropped            0         0.00 Sessions dropped for thread
MAIN.req_dropped             0         0.00 Requests dropped
MAIN.n_object          1363649          .   object structs made
MAIN.n_vampireobject         0          .   unresurrected objects
MAIN.n_objectcore      1363658          .   objectcore structs made
MAIN.n_objecthead      1365235          .   objecthead structs made
MAIN.n_backend              18          .   Number of backends
MAIN.n_expired        18708716          .   Number of expired objects
MAIN.n_lru_nuked     190240122          .   Number of LRU nuked objects
MAIN.n_lru_moved      48881176          .   Number of LRU moved objects
MAIN.losthdr                 194         0.00 HTTP header overflows
MAIN.s_sess            230308639        23.12 Total sessions seen
MAIN.s_pipe                    0         0.00 Total pipe sessions seen
MAIN.s_pass                  221         0.00 Total pass-ed requests seen
MAIN.s_fetch           210381116        21.12 Total backend fetches initiated
MAIN.s_synth               15291         0.00 Total synthethic responses made
MAIN.s_req_hdrbytes    152588868591     15319.33 Request header bytes
MAIN.s_req_bodybytes    47945048076      4813.50 Request body bytes
MAIN.s_resp_hdrbytes    61429100826      6167.24 Response header bytes
MAIN.s_resp_bodybytes  31545695399316   3167064.88 Response body bytes
MAIN.s_pipe_hdrbytes           0         0.00 Pipe request header bytes
MAIN.s_pipe_in                 0         0.00 Piped bytes from client
MAIN.s_pipe_out                0         0.00 Piped bytes to client
MAIN.sess_closed       118139254        11.86 Session Closed
MAIN.sess_closed_err    11257806         1.13 Session Closed with error
MAIN.sess_readahead            0         0.00 Session Read Ahead
MAIN.sess_herd          65082138         6.53 Session herd
MAIN.sc_rem_close      100948777        10.13 Session OK  REM_CLOSE
MAIN.sc_req_close      116459448        11.69 Session OK  REQ_CLOSE
MAIN.sc_req_http10          1044         0.00 Session Err REQ_HTTP10
MAIN.sc_rx_bad                 0         0.00 Session Err RX_BAD
MAIN.sc_rx_body                0         0.00 Session Err RX_BODY
MAIN.sc_rx_junk             8578         0.00 Session Err RX_JUNK
MAIN.sc_rx_overflow            0         0.00 Session Err RX_OVERFLOW
MAIN.sc_rx_timeout      11248199         1.13 Session Err RX_TIMEOUT
MAIN.sc_tx_pipe                0         0.00 Session OK  TX_PIPE
MAIN.sc_tx_error               0         0.00 Session Err TX_ERROR
MAIN.sc_tx_eof           1617225         0.16 Session OK  TX_EOF
MAIN.sc_resp_close             0         0.00 Session OK  RESP_CLOSE
MAIN.sc_overload               0         0.00 Session Err OVERLOAD
MAIN.sc_pipe_overflow          0         0.00 Session Err PIPE_OVERFLOW
MAIN.sc_range_short            0         0.00 Session Err RANGE_SHORT
MAIN.sc_req_http20             0         0.00 Session Err REQ_HTTP20
MAIN.sc_vcl_failure            4         0.00 Session Err VCL_FAILURE
MAIN.shm_records       34703350941      3484.08 SHM records
MAIN.shm_writes         2492186358       250.21 SHM writes
MAIN.shm_flushes         413619708        41.53 SHM flushes due to overflow
MAIN.shm_cont           16129035         1.62 SHM MTX contention
MAIN.shm_cycles            31683         0.00 SHM cycles through buffer
MAIN.backend_req       212039555        21.29 Backend requests made
MAIN.n_vcl                     9          .   Number of loaded VCLs in total
MAIN.n_vcl_avail               9          .   Number of VCLs available
MAIN.n_vcl_discard             0          .   Number of discarded VCLs
MAIN.vcl_fail                  4         0.00 VCL failures
MAIN.bans                      1          .   Count of bans
MAIN.bans_completed            1          .   Number of bans marked 'completed'
MAIN.bans_obj                  0          .   Number of bans using obj.*
MAIN.bans_req                                 0          .   Number of bans using req.*
MAIN.bans_added                               1         0.00 Bans added
MAIN.bans_deleted                             0         0.00 Bans deleted
MAIN.bans_tested                              0         0.00 Bans tested against objects (lookup)
MAIN.bans_obj_killed                          0         0.00 Objects killed by bans (lookup)
MAIN.bans_lurker_tested                       0         0.00 Bans tested against objects (lurker)
MAIN.bans_tests_tested                        0         0.00 Ban tests tested against objects (lookup)
MAIN.bans_lurker_tests_tested                 0         0.00 Ban tests tested against objects (lurker)
MAIN.bans_lurker_obj_killed                   0         0.00 Objects killed by bans (lurker)
MAIN.bans_lurker_obj_killed_cutoff            0         0.00 Objects killed by bans for cutoff (lurker)
MAIN.bans_dups                                0         0.00 Bans superseded by other bans
MAIN.bans_lurker_contention                   0         0.00 Lurker gave way for lookup
MAIN.bans_persisted_bytes                    16          .   Bytes used by the persisted ban lists
MAIN.bans_persisted_fragmentation             0          .   Extra bytes in persisted ban lists due to fragmentation
MAIN.n_purges                                 0          .   Number of purge operations executed
MAIN.n_obj_purged                             0          .   Number of purged objects
MAIN.exp_mailed                       400657705        40.22 Number of objects mailed to expiry thread
MAIN.exp_received                     400655888        40.22 Number of objects received by expiry thread
MAIN.hcb_nolock                       272378417        27.35 HCB Lookups without lock
MAIN.hcb_lock                         210148272        21.10 HCB Lookups with lock
MAIN.hcb_insert                       210147129        21.10 HCB Inserts
MAIN.esi_errors                               0         0.00 ESI parse errors (unlock)
MAIN.esi_warnings                             0         0.00 ESI parse warnings (unlock)
MAIN.vmods                                    2          .   Loaded VMODs
MAIN.n_gzip                                   0         0.00 Gzip operations
MAIN.n_gunzip                                 0         0.00 Gunzip operations
MAIN.n_test_gunzip                    125951749        12.65 Test gunzip operations
LCK.backend.creat                            20         0.00 Created locks
LCK.backend.destroy                           0         0.00 Destroyed locks
LCK.backend.locks                     436028686        43.78 Lock Operations
LCK.backend_tcp.creat                         2         0.00 Created locks
LCK.backend_tcp.destroy                       0         0.00 Destroyed locks
LCK.backend_tcp.locks                 839324465        84.26 Lock Operations
LCK.ban.creat                                 1         0.00 Created locks
LCK.ban.destroy                               0         0.00 Destroyed locks
LCK.ban.locks                         485319875        48.72 Lock Operations
LCK.busyobj.creat                     275769226        27.69 Created locks
LCK.busyobj.destroy                   275772300        27.69 Destroyed locks
LCK.busyobj.locks                   45764668121      4594.59 Lock Operations
LCK.cli.creat                                 1         0.00 Created locks
LCK.cli.destroy                               0         0.00 Destroyed locks
LCK.cli.locks                           3319775         0.33 Lock Operations
LCK.exp.creat                                 1         0.00 Created locks
LCK.exp.destroy                               0         0.00 Destroyed locks
LCK.exp.locks                        1601937269       160.83 Lock Operations
LCK.hcb.creat                                 1         0.00 Created locks
LCK.hcb.destroy                               0         0.00 Destroyed locks
LCK.hcb.locks                         418989369        42.06 Lock Operations
LCK.lru.creat                                 2         0.00 Created locks
LCK.lru.destroy                               0         0.00 Destroyed locks
LCK.lru.locks                         658492325        66.11 Lock Operations
LCK.mempool.creat                             5         0.00 Created locks
LCK.mempool.destroy                           0         0.00 Destroyed locks
LCK.mempool.locks                    1539542397       154.56 Lock Operations
LCK.objhdr.creat                      210205399        21.10 Created locks
LCK.objhdr.destroy                    208843468        20.97 Destroyed locks
LCK.objhdr.locks                     4042638826       405.87 Lock Operations
LCK.pipestat.creat                            1         0.00 Created locks
LCK.pipestat.destroy                          0         0.00 Destroyed locks
LCK.pipestat.locks                            0         0.00 Lock Operations
LCK.sess.creat                        230305038        23.12 Created locks
LCK.sess.destroy                      230306744        23.12 Destroyed locks
LCK.sess.locks                        716377977        71.92 Lock Operations
LCK.vbe.creat                                 1         0.00 Created locks
LCK.vbe.destroy                               0         0.00 Destroyed locks
LCK.vbe.locks                           7305124         0.73 Lock Operations
LCK.vcapace.creat                             1         0.00 Created locks
LCK.vcapace.destroy                           0         0.00 Destroyed locks
LCK.vcapace.locks                             0         0.00 Lock Operations
LCK.vcl.creat                                 1         0.00 Created locks
LCK.vcl.destroy                               0         0.00 Destroyed locks
LCK.vcl.locks                         424850700        42.65 Lock Operations
LCK.vxid.creat                                1         0.00 Created locks
LCK.vxid.destroy                              0         0.00 Destroyed locks
LCK.vxid.locks                           265345         0.03 Lock Operations
LCK.waiter.creat                              2         0.00 Created locks
LCK.waiter.destroy                            0         0.00 Destroyed locks
LCK.waiter.locks                      820192788        82.34 Lock Operations
LCK.wq.creat                                  3         0.00 Created locks
LCK.wq.destroy                                0         0.00 Destroyed locks
LCK.wq.locks                         1514678352       152.07 Lock Operations
LCK.wstat.creat                               1         0.00 Created locks
LCK.wstat.destroy                             0         0.00 Destroyed locks
LCK.wstat.locks                       889992298        89.35 Lock Operations
MEMPOOL.busyobj.live                          4          .   In use
MEMPOOL.busyobj.pool                         10          .   In Pool
MEMPOOL.busyobj.sz_wanted                 65536          .   Size requested
MEMPOOL.busyobj.sz_actual                 65504          .   Size allocated
MEMPOOL.busyobj.allocs                210381121        21.12 Allocations
MEMPOOL.busyobj.frees                 210381117        21.12 Frees
MEMPOOL.busyobj.recycle               210069371        21.09 Recycled from pool
MEMPOOL.busyobj.timeout                 6923652         0.70 Timed out from pool
MEMPOOL.busyobj.toosmall                      0         0.00 Too small to recycle
MEMPOOL.busyobj.surplus                   11360         0.00 Too many for pool
MEMPOOL.busyobj.randry                   311750         0.03 Pool ran dry
MEMPOOL.req0.live                             3          .   In use
MEMPOOL.req0.pool                            11          .   In Pool
MEMPOOL.req0.sz_wanted                    65536          .   Size requested
MEMPOOL.req0.sz_actual                    65504          .   Size allocated
MEMPOOL.req0.allocs                   142052697        14.26 Allocations
MEMPOOL.req0.frees                    142052694        14.26 Frees
MEMPOOL.req0.recycle                  142028490        14.26 Recycled from pool
MEMPOOL.req0.timeout                    6143066         0.62 Timed out from pool
MEMPOOL.req0.toosmall                         0         0.00 Too small to recycle
MEMPOOL.req0.surplus                       1959         0.00 Too many for pool
MEMPOOL.req0.randry                       24207         0.00 Pool ran dry
MEMPOOL.sess0.live                            8          .   In use
MEMPOOL.sess0.pool                           10          .   In Pool
MEMPOOL.sess0.sz_wanted                     512          .   Size requested
MEMPOOL.sess0.sz_actual                     480          .   Size allocated
MEMPOOL.sess0.allocs                  115153369        11.56 Allocations
MEMPOOL.sess0.frees                   115153361        11.56 Frees
MEMPOOL.sess0.recycle                 115143094        11.56 Recycled from pool
MEMPOOL.sess0.timeout                   6113104         0.61 Timed out from pool
MEMPOOL.sess0.toosmall                        0         0.00 Too small to recycle
MEMPOOL.sess0.surplus                      2787         0.00 Too many for pool
MEMPOOL.sess0.randry                      10275         0.00 Pool ran dry
LCK.smf.creat                                 1         0.00 Created locks
LCK.smf.destroy                               0         0.00 Destroyed locks
LCK.smf.locks                        3920420601       393.59 Lock Operations
SMF.s0.c_req                         2064438640       207.26 Allocator requests
SMF.s0.c_fail                         195848371        19.66 Allocator failures
SMF.s0.c_bytes                     26626288971776   2673175.64 Bytes allocated
SMF.s0.c_freed                     26445825654784   2655057.83 Bytes freed
SMF.s0.g_alloc                         12608309          .   Allocations outstanding
SMF.s0.g_bytes                     180463316992          .   Bytes outstanding
SMF.s0.g_space                       2072793088          .   Bytes available
SMF.s0.g_smf                           12977499          .   N struct smf
SMF.s0.g_smf_frag                        369190          .   N small free smf
SMF.s0.g_smf_large                            0          .   N large free smf
LCK.sma.creat                                 1         0.00 Created locks
LCK.sma.destroy                               0         0.00 Destroyed locks
LCK.sma.locks                         263980168        26.50 Lock Operations
SMA.Transient.c_req                   131990084        13.25 Allocator requests
SMA.Transient.c_fail                          0         0.00 Allocator failures
SMA.Transient.c_bytes               59316019413      5955.10 Bytes allocated
SMA.Transient.c_freed               59316019413      5955.10 Bytes freed
SMA.Transient.g_alloc                         0          .   Allocations outstanding
SMA.Transient.g_bytes                         0          .   Bytes outstanding
SMA.Transient.g_space                         0          .   Bytes available
MEMPOOL.req1.live                             1          .   In use
MEMPOOL.req1.pool                            10          .   In Pool
MEMPOOL.req1.sz_wanted                    65536          .   Size requested
MEMPOOL.req1.sz_actual                    65504          .   Size allocated
MEMPOOL.req1.allocs                   142089847        14.27 Allocations
MEMPOOL.req1.frees                    142089846        14.27 Frees
MEMPOOL.req1.recycle                  142065460        14.26 Recycled from pool
MEMPOOL.req1.timeout                    6145611         0.62 Timed out from pool
MEMPOOL.req1.toosmall                         0         0.00 Too small to recycle
MEMPOOL.req1.surplus                       2350         0.00 Too many for pool
MEMPOOL.req1.randry                       24387         0.00 Pool ran dry
MEMPOOL.sess1.live                            7          .   In use
MEMPOOL.sess1.pool                           11          .   In Pool
MEMPOOL.sess1.sz_wanted                     512          .   Size requested
MEMPOOL.sess1.sz_actual                     480          .   Size allocated
MEMPOOL.sess1.allocs                  115155274        11.56 Allocations
MEMPOOL.sess1.frees                   115155267        11.56 Frees
MEMPOOL.sess1.recycle                 115144847        11.56 Recycled from pool
MEMPOOL.sess1.timeout                   6114536         0.61 Timed out from pool
MEMPOOL.sess1.toosmall                        0         0.00 Too small to recycle
MEMPOOL.sess1.surplus                      2953         0.00 Too many for pool
MEMPOOL.sess1.randry                      10427         0.00 Pool ran dry
VBE.reload_2024-06-20T181903.node66.happy            18446744073709551615          .   Happy health probes
VBE.reload_2024-06-20T181903.node66.bereq_hdrbytes      193871874        19.46 Request header bytes
VBE.reload_2024-06-20T181903.node66.bereq_bodybytes             0         0.00 Request body bytes
VBE.reload_2024-06-20T181903.node66.beresp_hdrbytes      65332553         6.56 Response header bytes
VBE.reload_2024-06-20T181903.node66.beresp_bodybytes  40260910590      4042.04 Response body bytes
VBE.reload_2024-06-20T181903.node66.pipe_hdrbytes               0         0.00 Pipe request header bytes
VBE.reload_2024-06-20T181903.node66.pipe_out                    0         0.00 Piped bytes to backend
VBE.reload_2024-06-20T181903.node66.pipe_in                     0         0.00 Piped bytes from backend
VBE.reload_2024-06-20T181903.node66.conn                        1          .   Concurrent connections to backend
VBE.reload_2024-06-20T181903.node66.req                    247959         0.02 Backend requests sent
VBE.reload_2024-06-20T181903.node67.happy            18446744073709551615          .   Happy health probes
VBE.reload_2024-06-20T181903.node67.bereq_hdrbytes      193960668        19.47 Request header bytes
VBE.reload_2024-06-20T181903.node67.bereq_bodybytes             0         0.00 Request body bytes
VBE.reload_2024-06-20T181903.node67.beresp_hdrbytes      65315238         6.56 Response header bytes
VBE.reload_2024-06-20T181903.node67.beresp_bodybytes  40142940116      4030.19 Response body bytes
VBE.reload_2024-06-20T181903.node67.pipe_hdrbytes               0         0.00 Pipe request header bytes
VBE.reload_2024-06-20T181903.node67.pipe_out                    0         0.00 Piped bytes to backend
VBE.reload_2024-06-20T181903.node67.pipe_in                     0         0.00 Piped bytes from backend
VBE.reload_2024-06-20T181903.node67.conn                        3          .   Concurrent connections to backend
VBE.reload_2024-06-20T181903.node67.req                    247956         0.02 Backend requests sent
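
On the same note, we could snapshot these counters once a minute so
that a dump from inside the next incident window exists. A rough
sketch, again assuming cron (the directory name and seven-day retention
are only examples):

    #!/bin/sh
    # snap-varnishstat.sh: keep one "varnishstat -1" dump per minute,
    # deleting dumps older than seven days.
    # Example cron entry:  * * * * * root /usr/local/bin/snap-varnishstat.sh
    DIR=/var/log/varnishstat-snapshots
    mkdir -p "$DIR"
    varnishstat -1 > "$DIR/$(date +%Y%m%d-%H%M).txt"
    find "$DIR" -name '*.txt' -mtime +7 -delete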

Thanks & Regards,
Uday Kumar


On Thu, Jun 20, 2024 at 11:36 PM Guillaume Quintard
<guillaume.quintard@gmail.com> wrote:
> Hi Uday,
>
> pz_list and recv_queue are not (to my knowledge) Varnish counters,
> where are you seeing them?
>
> I doubt Varnish is actually replying with 0, so that's probably your
> client faking a response code to have something to show. But that's a
> detail, as the unresponsiveness is real.
>
> Could you share a "varnishstat -1" of the impacted machine?
>
> --
> Guillaume Quintard
>
> On Thu, Jun 20, 2024 at 9:30 AM Uday Kumar <uday.polu@indiamart.com> wrote:
>> Hello all,
>>
>> We have been facing frequent spells of Varnish unresponsiveness on
>> our production server.
>>
>> During these spells, pz_list rises to ~3000 and recv_queue to ~130.
>> Varnish also answers with response code '0' for a while, i.e. it is
>> unresponsive. This causes multiple 5xx errors on the front ends.
>>
>> FYR: the user request count during this time is normal.
>>
>> Note: we have confirmed that our backend servers were healthy during
>> this period, without any issues.
>>
>> May I know what could be the reason for this behaviour in Varnish?
>>
>> Please point me in the right direction to debug this issue.
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc