From guillaume at varnish-software.com Tue Apr 2 22:32:17 2019
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 2 Apr 2019 15:32:17 -0700
Subject: Hit idle send timeout when I try to stream mp3
In-Reply-To: <2F2FC97B-E61A-4D01-ADBD-85E34F44A69D@bonsaimeme.com>
References: <2F2FC97B-E61A-4D01-ADBD-85E34F44A69D@bonsaimeme.com>
Message-ID:

Hello Daniele,

Looks like you are hitting the send_timeout. Try "varnishadm param.show | grep timeout", and see if you have something around 120s.

Cheers,

--
Guillaume Quintard


On Wed, Mar 20, 2019 at 3:32 AM Daniele Piaggesi < daniele.piaggesi at bonsaimeme.com> wrote:

> Dear all
>
> I have a problem with my Varnish installation when I try to stream mp3
> files through Varnish Cache 4. This is the scenario.
>
> I have a Drupal website in which editors can upload mp3s. These mp3s are
> listed in a section of the website, and the end user can listen to them
> using an HTML5 player.
> The stack is: Varnish 4 <-> Nginx 1.11 <-> Drupal 7 on GNU/Linux Debian
> Jessie. Varnish and Nginx are installed from Debian packages.
>
> My Varnish configuration is here: https://pastebin.com/8Kw1b2mL
>
> When I try to listen to an mp3 directly through Nginx, everything works:
> the player loads the file and I can play the mp3. If I try to do the same
> through Varnish, the player keeps loading and the mp3 doesn't start. I
> tried to download the mp3 file with curl and my request times out.
> > I had a look at varnishlog and this is the output: > > * << Request >> 1409738 > - Begin req 1246052 rxreq > - Timestamp Start: 1552587068.541997 0.000000 0.000000 > - Timestamp Req: 1552587068.541997 0.000000 0.000000 > - ReqStart 93.147.150.135 15330 > - ReqMethod GET > - ReqURL > /sites/default/files/audio/radio_interviews/20180927-rds-gr_rds_1700-170602593m_1.mp3 > - ReqProtocol HTTP/1.1 > - ReqHeader Host: www.xxx.it > - ReqHeader Connection: keep-alive > - ReqHeader Pragma: no-cache > - ReqHeader Cache-Control: no-cache > - ReqHeader Accept-Encoding: identity;q=1, *;q=0 > - ReqHeader User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X > 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 > Safari/537.36 > - ReqHeader chrome-proxy: frfr > - ReqHeader Accept: */* > - ReqHeader Referer: http://www.xxx.it/path/to/url > - ReqHeader Accept-Language: > en-US,en;q=0.9,es;q=0.8,fr;q=0.7,it;q=0.6 > - ReqHeader Cookie: SESScookieagreed=2; has_js=1; > _ga=GA1.2.950111086.1552579260; _gid=GA1.2.112403313.1552579260; > __atuvc=2%7C11; _gat_UA-57096474-1=1 > - ReqHeader Range: bytes=0- > - ReqHeader X-Forwarded-For: 93.147.150.135 > - VCL_call RECV > - VCL_acl NO_MATCH allowed_monitors > - ReqUnset X-Forwarded-For: 93.147.150.135 > - ReqHeader X-Forwarded-For: 93.147.150.135, 93.147.150.135 > - ReqURL > /sites/default/files/audio/radio_interviews/20180927-rds-gr_rds_1700-170602593m_1.mp3 > - ReqHeader x-range: bytes=0- > - ReqUnset Cookie: SESScookieagreed=2; has_js=1; > _ga=GA1.2.950111086.1552579260; _gid=GA1.2.112403313.1552579260; > __atuvc=2%7C11; _gat_UA-57096474-1=1 > - VCL_return hash > - ReqUnset Accept-Encoding: identity;q=1, *;q=0 > - VCL_call HASH > - ReqUnset Range: bytes=0- > - VCL_return lookup > - Hit 229378 > - VCL_call HIT > - VCL_return fetch > - VCL_Error change return(fetch) to return(miss) in vcl_hit{} > - VCL_call MISS > - VCL_return fetch > - Link bereq 1409739 fetch > - Timestamp Fetch: 1552587068.542520 0.000523 0.000523 > - 
RespProtocol HTTP/1.1
> - RespStatus 200
> - RespReason OK
> - RespHeader Server: nginx/1.11.5
> - RespHeader Date: Thu, 14 Mar 2019 18:11:08 GMT
> - RespHeader Last-Modified: Fri, 28 Sep 2018 08:26:56 GMT
> - RespHeader ETag: "5bade5d0-a0b94"
> - RespHeader Content-Type: audio/mpeg
> - RespHeader Content-Length: 658324
> - RespHeader X-Cacheable: YES
> - RespHeader X-Varnish: 1409738
> - RespHeader Age: 0
> - RespHeader Via: 1.1 varnish-v4
> - VCL_call DELIVER
> - RespHeader X-Cache: MISS
> - RespHeader X-Cookie:
> - RespHeader grace:
> - RespHeader X-Varnish-Server: www.xxx.it
> - VCL_return deliver
> - Timestamp Process: 1552587068.542536 0.000539 0.000016
> - RespHeader Accept-Ranges: bytes
> - Debug "RES_MODE 2"
> - RespHeader Connection: keep-alive
> - Debug "Hit idle send timeout, wrote = 247608/658701; retrying"
> - Debug "Write error, retval = -1, len = 411093, errno = Resource temporarily unavailable"
> - Timestamp Resp: 1552587188.539239 119.997242 119.996703
> - ReqAcct 733 0 733 377 658324 658701
> - End
>
> I'm not a Varnish "guru", but it seems that the error is:
>
> *- Debug "Hit idle send timeout, wrote = 247608/658701; retrying"*
> *- Debug "Write error, retval = -1, len = 411093, errno = Resource temporarily unavailable"*
>
> I searched a lot on Google but didn't find anything about this, except for
> a timeout problem that doesn't seem to apply here, because the timeout
> settings are set to 60s (first_byte), same as in Nginx.
>
> Any help is really appreciated. If you need any other info, let me know.
>
> Thanks in advance
> Daniele
> Daniele Piaggesi
>
> Mobile: +39 393 880 78 50
> Skype: g0blin79
> E-mail: daniele.piaggesi at bmeme.com
> ------------------------------------------
> Bonsaimeme S.r.l.
> Via del Porto Fluviale, 9
> 00154 Roma - Italy
>
> Phone: +39 06 98 26 04 39
> Fax: +39 06 94 81 02 03
> -------------------------------------------
> bmeme.com
>
>
> *** Prima di stampare, pensa all'ambiente!
***
> *** Before printing think about environment and costs ***
>
>
> Le informazioni, i dati e le notizie contenute nella presente
> comunicazione e i relativi allegati sono di natura privata e come tali
> possono essere riservate e sono, comunque, destinate esclusivamente ai
> destinatari indicati in epigrafe. La diffusione, distribuzione e/o la
> copiatura del documento trasmesso da parte di qualsiasi soggetto diverso
> dal destinatario è proibita, sia ai sensi dell'art. 616 c.p., sia ai sensi
> del D.Lgs. n. 196/2003. Se avete ricevuto questo messaggio per errore, vi
> preghiamo di distruggerlo e di darcene immediata comunicazione anche
> inviando un messaggio all'indirizzo email: info at bonsaimeme.com. Il testo
> della email potrebbe contenere opinioni personali e non necessariamente
> riconducibili a quelle di Bonsaimeme S.r.l.
>
> --- --- ---
>
> This e-mail (including attachments) is intended only for the recipient(s)
> named above. It may contain confidential or privileged information and
> should not be read, copied or otherwise used by any other person. If you
> are not the named recipient, please contact: info at bonsaimeme.com and
> delete the e-mail from your system. Rif. D.L. 196/2003.
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From daniele.piaggesi at bonsaimeme.com Wed Apr 3 10:03:53 2019
From: daniele.piaggesi at bonsaimeme.com (Daniele Piaggesi)
Date: Wed, 3 Apr 2019 12:03:53 +0200
Subject: Hit idle send timeout when I try to stream mp3
In-Reply-To: 
References: <2F2FC97B-E61A-4D01-ADBD-85E34F44A69D@bonsaimeme.com>
Message-ID: <0968E189-85F4-4CF4-A76E-B2B6D32608B0@bonsaimeme.com>

Hi Guillaume,

Thanks for your support.

This is the output of the "varnishadm param.show | grep timeout" command.
backend_idle_timeout 60.000 [seconds] (default)
between_bytes_timeout 60.000 [seconds] (default)
cli_timeout 60.000 [seconds] (default)
connect_timeout 3.500 [seconds] (default)
first_byte_timeout 60.000 [seconds] (default)
idle_send_timeout 60.000 [seconds] (default)
pipe_timeout 60.000 [seconds] (default)
send_timeout 600.000 [seconds] (default)
thread_pool_timeout 300.000 [seconds] (default)
timeout_idle 5.000 [seconds] (default)
timeout_linger 0.050 [seconds] (default)

send_timeout seems to be configured at 600s?

Let me know.

Thanks in advance
D

Daniele Piaggesi

> On 3 Apr 2019, at 00:32, Guillaume Quintard wrote:
>
> Hello Daniele,
>
> Looks like you are hitting the send_timeout. Try "varnishadm param.show |
> grep timeout", and see if you have something around 120s.
>
> Cheers,
>
> --
> Guillaume Quintard

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guillaume at varnish-software.com Wed Apr 3 18:47:29 2019
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Wed, 3 Apr 2019 11:47:29 -0700
Subject: Hit idle send timeout when I try to stream mp3
In-Reply-To: <0968E189-85F4-4CF4-A76E-B2B6D32608B0@bonsaimeme.com>
References: <2F2FC97B-E61A-4D01-ADBD-85E34F44A69D@bonsaimeme.com> <0968E189-85F4-4CF4-A76E-B2B6D32608B0@bonsaimeme.com>
Message-ID:

Hum, could just be the client then, dropping the connection after 2 minutes...
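The varnishlog excerpt earlier in the thread already quantifies the stall. A minimal sketch (plain Python, not a Varnish tool; the two records are copied verbatim from the log above) that pulls the numbers out and checks them against the two-minute theory:

```python
import re

# Two records copied from the varnishlog output earlier in the thread.
log_lines = [
    '- Debug "Hit idle send timeout, wrote = 247608/658701; retrying"',
    "- Timestamp Resp: 1552587188.539239 119.997242 119.996703",
]

# Bytes written before the stall, out of the total response size.
wrote, total = map(int, re.search(r"wrote = (\d+)/(\d+)", log_lines[0]).groups())

# The fields of a Timestamp:Resp record are: absolute time, time since the
# request started, time since the previous timestamp.
elapsed = float(log_lines[1].split()[4])

print(f"delivered {wrote}/{total} bytes ({wrote / total:.0%}) in {elapsed:.0f}s")
# -> delivered 247608/658701 bytes (38%) in 120s
```

With send_timeout at its 600s default (as the param.show output in this thread confirms), a transfer that dies at almost exactly 120 seconds with data still queued points away from Varnish's own timeouts and toward the client, or an intermediary, closing the connection.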
--
Guillaume Quintard


On Wed, Apr 3, 2019 at 3:03 AM Daniele Piaggesi < daniele.piaggesi at bonsaimeme.com> wrote:

> Hi Guillaume,
>
> Thanks for your support.
>
> This is the output of the "varnishadm param.show | grep timeout" command.
>
> backend_idle_timeout 60.000 [seconds] (default)
> between_bytes_timeout 60.000 [seconds] (default)
> cli_timeout 60.000 [seconds] (default)
> connect_timeout 3.500 [seconds] (default)
> first_byte_timeout 60.000 [seconds] (default)
> idle_send_timeout 60.000 [seconds] (default)
> pipe_timeout 60.000 [seconds] (default)
> send_timeout 600.000 [seconds] (default)
> thread_pool_timeout 300.000 [seconds] (default)
> timeout_idle 5.000 [seconds] (default)
> timeout_linger 0.050 [seconds] (default)
>
> send_timeout seems to be configured at 600s?
>
> Let me know.
>
> Thanks in advance
> D

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hetardik.p at gmail.com Thu Apr 4 05:54:30 2019
From: hetardik.p at gmail.com (Hardik)
Date: Thu, 4 Apr 2019 11:24:30 +0530
Subject: Reg: exact field for response time
In-Reply-To: <263b86ff-19c3-fa8e-55df-f63ea242ed1f@uplex.de>
References: <008d5d67-011f-a56a-ad38-e7d9a90aafa4@uplex.de> <263b86ff-19c3-fa8e-55df-f63ea242ed1f@uplex.de>
Message-ID:

Thanks a lot Geoff. I was ill and off work for a few days, so I did not reply.

I am reading the second field now. In future, to report only the response time from Varnish, I will sum the other timestamps as you said.

Thank you
Hardik

On Wed, 20 Mar 2019 at 17:15, Geoff Simmons wrote:

> On 3/20/19 07:14, Hardik wrote:
> >
> > What time should I give to the customer as the response time? Is it the
> > 2nd field (total round-trip time) or the 3rd field (only the response time
> > after the fetched content is processed)?
> > Timestamp Resp: 1501601912.806787* 0.048125* *0.000037*
> >
> > The reason for asking is that only the first request will go to the
> > origin. After that, all requests will be served from cache. Also, we
> > generally do not have control over the other network (the customer's
> > network) when a request goes to the origin. Based on this understanding,
> > I should give the customer the 3rd field as the response time. Please
> > correct me if I am wrong.
>
> The third field of Resp is not a useful measurement. It's the time taken
> for Varnish userland code to complete network send operations -- when
> the syscalls say they're done, there's essentially nothing left for
> Varnish to do with the response, so Varnish writes the final timestamp.
>
> But return from the syscalls for network send may mean nothing more
> than: the data has been placed on queues in the TCP stack. It doesn't
> tell you anything about the network send, or even whether the data has
> been sent on the network yet at all.
>
> As a practical matter, if you tell your customer that "response time"
> was 37 microseconds, they probably won't believe you. (I wouldn't.)
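Geoff's breakdown of the Timestamp fields can be made concrete with a small sketch (plain Python; the Resp record is the one quoted above, while the fetch-time value is an invented number for illustration, since the matching backend-side record is not shown in this thread):

```python
# Timestamp:Resp record quoted earlier in this thread.
record = "Timestamp Resp: 1501601912.806787 0.048125 0.000037"
_tag, _event, _absolute, since_start, since_last = record.split()

since_start = float(since_start)  # field 2: total request processing time
since_last = float(since_last)    # field 3: only the final userland send calls

print(f"total: {since_start * 1000:.1f} ms, final send step: {since_last * 1e6:.0f} us")
# -> total: 48.1 ms, final send step: 37 us

# Hypothetical fetch time (Timestamp:Beresp[3] + BerespBody[3] from the
# matching backend log -- invented here purely for illustration).
fetch_time = 0.031000
print(f"processing excluding fetch: {(since_start - fetch_time) * 1000:.1f} ms")
# -> processing excluding fetch: 17.1 ms
```

The second print is the arithmetic Geoff describes below: total request processing time minus the fetch time recovered from the corresponding backend transaction.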
>
> From what you've said, it sounds like you're looking for something like:
> the time taken to process the request, but not including the time for a
> fetch from the origin server. Is that about right?
>
> For that, you'd need to do more than read one field from one Timestamp
> entry -- you'll need to read at least two, maybe more, and then do some
> arithmetic.
>
> The best measure for the total time of request processing is the 2nd
> field in Timestamp:Resp, 48ms in your example above.
>
> The best measure for the fetch time is in the backend log, the 3rd field
> of Timestamp:Beresp, maybe added to the 3rd field of
> Timestamp:BerespBody. So you'd have to find the backend log
> corresponding to the client log. -g request can help you with that.
>
> Then (if this is what you're after), subtract the fetch time from the
> total request processing time.
>
> There are some other ways to go about it, but it depends on what exactly
> you want to measure as "response time". And since I may have
> misunderstood what you're trying to measure, I'll stop there.
>
>
> HTH,
> Geoff
> --
> ** * * UPLEX - Nils Goroll Systemoptimierung
>
> Scheffelstraße 32
> 22301 Hamburg
>
> Tel +49 40 2880 5731
> Mob +49 176 636 90917
> Fax +49 40 42949753
>
> http://uplex.de
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hetardik.p at gmail.com Thu Apr 4 06:03:07 2019
From: hetardik.p at gmail.com (Hardik)
Date: Thu, 4 Apr 2019 11:33:07 +0530
Subject: reg: not getting ReqAcct tag in varnishlog
In-Reply-To: References: Message-ID:

Thanks Dridi. I was ill and off work for a few days, so I did not reply.

Hey Dridi, I am already using varnishncsa, and from a use-case perspective I am quite familiar with it. I am using the configuration below for varnishncsa. These logs I call raw logs (Apache format) and provide to the customer directly, without any modification; no processing is done on them.
#the format of the log lines
FORMAT='-F "%h %v %u %t \"%m %U%q %H\" %s %O \"%{Referer}i\" \"%{User-agent}i\" %S [%{VCL_Log:x-product}x lg=\"%{Accept-Language}i\" fwd=\"%{X-Forwarded-For}i\" %{VCL_Log:namespace}x %{VCL_Log:x-cache-hit}x/%{VCL_Log:x-revalidate-cache}x %{VCL_Log:sc-substatus}x]

Note: I have added a few parameters to varnishncsa to get that information, e.g. %S.

I have written another utility because I want to parse all the fields based on billing requirements. I have tried removing the "-g session" option and the log loss is now greatly reduced (I would say by 95%).

If you can help with my previous reply/doubts, that would be really helpful (because it is really difficult to change the billing setup). I am only facing two issues here: 1. log loss, 2. missing ReqAcct (which is also the case with varnishncsa).

If I am doing something wrong other than the "-g session" option in the code I pasted earlier from the utility, please point it out.

Thank you
Hardik

On Wed, 20 Mar 2019 at 21:02, Dridi Boukelmoune wrote:

> On Wed, Mar 20, 2019 at 10:40 AM Hardik wrote:
> >
> > Hi Dridi,
> >
> > Do you need all timestamps or a specific metric?
> > Regarding timestamps, I want to read two tags:
> > Timestamp Start: 1516269224.184112 0.000000 0.000000
> > Timestamp Resp: 1516269224.184920 0.000808 0.000087
> >
> > Do you need the BereqAcct records for all transactions? Including cache hits?
> > Sorry, it is my mistake. I am not reading any of the back-end records, so
> > BereqAcct can be ignored.
> > I need fields from Req records only.
>
> Ok, in this case you can probably get away with just varnishncsa to
> collect all you need.
>
> No grouping (the default -g vxid), client mode (-c) only, with a
> custom -F format to grab only what you need.
>
> This should help reduce the churn to the point where you lose data.
> > If you struggle with this, I can help you later with that, but start > by reading the following manuals: > > - varnishncsa > - vsl > - vsl-query > > For example, the format for the timestamps you wish to collect would > look like this: > > > %{VSL:Timestamp:Start[1]}x %{VSL:Timestamp:Resp[2]}x > %{VSL:Timestamp:Resp[3]}x > > Rinse and repeat for all the things you need to capture for the logs, > put them in the order you prefer and off you go. No need to write your > own utility. > > > What does FD mean here? File descriptor? From ReqStart? > > Yes, Its file descriptor. And yes reading from ReqStart till ReqAcct. > Using switch case to read needed records. > > If you already work with VXIDs, the FD becomes redundant. > > > > Dridi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hetardik.p at gmail.com Thu Apr 11 16:57:22 2019 From: hetardik.p at gmail.com (Hardik) Date: Thu, 11 Apr 2019 22:27:22 +0530 Subject: reg: not getting ReqAcct tag in varnishlog In-Reply-To: References: Message-ID: Hi Dridi / Team, Gentle reminder.. Thank you Hardik On Thu, 4 Apr 2019 at 11:33, Hardik wrote: > Thanks Dridi. > > I was ill and off work for few days so did not reply. > > Hey Dridi, I am already using varnishncsa. And use case perspective I am > much familiar. I am using below configuration for varnishncsa. This logs I > am calling as raw-logs/Apache and providing to customer directly without > any modification.Means no processing on this. > > #the format of the log lines > FORMAT='-F "%h %v %u %t \"%m %U%q %H\" %s %O \"%{Referer}i\" > \"%{User-agent}i\" %S [%{VCL_Log:x-product}x lg=\"%{Accept-Language}i\" > fwd=\"%{X-Forwarded-For}i\" %{VCL_Log:namespace}x > %{VCL_Log:x-cache-hit}x/%{VCL_Log:x-revalidate-cache}x > %{VCL_Log:sc-substatus}x] > > Note- few parameter I have added in varnishncsa to get that information > ex:%S. > > I have written other utility because want to parse all the fields based on > billing requirements. 
I have tried removing "-g session" option and now log > loss reduced a lot(I can say 95%). > > If you can help on my previous reply/doubts then will be really helpful ( > because billing really difficult to change billing setup).I am only facing > two issues here, 1. log loss 2. missing of ReqAcct ( which is also case in > varnishncsa). > > If I am doing something wrong other then "-g session" option in the code I > have pasted before from utility please point out. > > Thank you > Hardik > > On Wed, 20 Mar 2019 at 21:02, Dridi Boukelmoune wrote: > >> On Wed, Mar 20, 2019 at 10:40 AM Hardik wrote: >> > >> > Hi Dridi, >> > >> > Do you need all timestamps or a specific metric? >> > Regarding timestamp, want to read two tags, >> > Timestamp Start: 1516269224.184112 0.000000 0.000000 >> > Timestamp Resp: 1516269224.184920 0.000808 0.000087 >> > >> > Do you need the BereqAcct records for all transactions? Including cache >> hits? >> > Sorry it is my mistake. I am not reading any of the beck-end records. >> So can ignore BereqAcct. >> > I need fields from Req records only. >> >> Ok, in this case you can probably get away with just varnishncsa to >> collect all you need. >> >> No grouping (the default -g vxid), client mode (-c) only, with a >> custom -F format to grab only what you need. >> >> This should help reduce the churn to the point where you lose data. >> >> If you struggle with this, I can help you later with that, but start >> by reading the following manuals: >> >> - varnishncsa >> - vsl >> - vsl-query >> >> For example, the format for the timestamps you wish to collect would >> look like this: >> >> > %{VSL:Timestamp:Start[1]}x %{VSL:Timestamp:Resp[2]}x >> %{VSL:Timestamp:Resp[3]}x >> >> Rinse and repeat for all the things you need to capture for the logs, >> put them in the order you prefer and off you go. No need to write your >> own utility. >> >> > What does FD mean here? File descriptor? From ReqStart? >> > Yes, Its file descriptor. 
And yes reading from ReqStart till ReqAcct. >> Using switch case to read needed records.
>>
>> If you already work with VXIDs, the FD becomes redundant.
>>
>> Dridi
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Alexandre.Thaveau at mister-auto.com Wed Apr 17 15:17:51 2019
From: Alexandre.Thaveau at mister-auto.com (Alexandre Thaveau)
Date: Wed, 17 Apr 2019 17:17:51 +0200
Subject: Varnish hit + pass + miss reaches less than 50% of all reqs
Message-ID:

Hello everybody,

I was trying to get some stats from my Varnish server using varnishstat. When using varnishstat, I see that:
- MAIN.client_req should make up 100% of my queries
- MAIN.cache_hit represents 10% of MAIN.client_req
- MAIN.cache_hitpass represents 7% of MAIN.client_req
- MAIN.cache_miss represents 24% of MAIN.client_req
- MAIN.cache_hit_grace is very low

So all these summed categories represent less than 50% of client_req; I think I'm missing something. The configuration is not maintained by me; here is a sample of what it returns, if this can help:
-------------------------------------
cat /etc/varnish/production.vcl | grep return
return(synth(900, "Moved Permanently"));
return(synth(901, "Moved Permanently"));
return(synth(902, "Moved Permanently"));
return(synth(903, "Moved Permanently"));
return(pipe);
return(pipe);
return(pass);
return(pass);
return(pass);
return(synth(410, "Gone"));
return(pass);
return(synth(850, "Moved Permanently"));
return(hash);
return(hash);
return(pass);
return(hash);
return(lookup);
return(retry);
return(deliver);
-------------------------------------

Thanks very much for your help,
Regards,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Alexandre.Thaveau at mister-auto.com Wed Apr 17 15:47:44 2019 From: Alexandre.Thaveau at mister-auto.com (Alexandre Thaveau) Date: Wed, 17 Apr 2019 17:47:44 +0200 Subject: Varnish hit + pass + miss reaches less than 50% of all reqs In-Reply-To: References: Message-ID: Hi Guillaume, thanks for helping ! Here it is : MAIN.uptime 1821762 1.00 Child process uptime [532/604] MAIN.sess_conn 152864621 83.91 Sessions accepted MAIN.sess_drop 0 0.00 Sessions dropped MAIN.sess_fail 0 0.00 Session accept failures MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors MAIN.client_req 290152364 159.27 Good client requests received MAIN.cache_hit 7433491 4.08 Cache hits MAIN.cache_hit_grace 36319 0.02 Cache grace hits MAIN.cache_hitpass 16003020 8.78 Cache hits for pass MAIN.cache_miss 89526521 49.14 Cache misses MAIN.backend_conn 5078542 2.79 Backend conn. success MAIN.backend_unhealthy 2 0.00 Backend conn. not attempted MAIN.backend_busy 0 0.00 Backend conn. too many MAIN.backend_fail 6245 0.00 Backend conn. failures MAIN.backend_reuse 266259369 146.15 Backend conn. reuses MAIN.backend_recycle 267274817 146.71 Backend conn. recycles MAIN.backend_retry 17429 0.01 Backend conn. retry MAIN.fetch_head 1623 0.00 Fetch no body (HEAD) MAIN.fetch_length 77119616 42.33 Fetch with Length MAIN.fetch_chunked 188334324 103.38 Fetch chunked MAIN.fetch_eof 295751 0.16 Fetch EOF MAIN.fetch_bad 0 0.00 Fetch bad T-E MAIN.fetch_none 18415 0.01 Fetch no body MAIN.fetch_1xx 0 0.00 Fetch no body (1xx) MAIN.fetch_204 5427973 2.98 Fetch no body (204) MAIN.fetch_304 130058 0.07 Fetch no body (304) MAIN.fetch_failed 8591 0.00 Fetch failed (all causes) MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread) MAIN.pools 2 . Number of thread pools MAIN.threads 400 . 
Total number of threads MAIN.threads_limited 15 0.00 Threads hit max MAIN.threads_created 35825 0.02 Threads created MAIN.threads_destroyed 35425 0.02 Threads destroyed MAIN.threads_failed 0 0.00 Thread creation failed MAIN.thread_queue_len 0 . Length of session queue MAIN.busy_sleep 160177 0.09 Number of requests sent to sleep on busy objhdr MAIN.busy_wakeup 160177 0.09 Number of requests woken after sleep on busy objhdr MAIN.busy_killed 0 0.00 Number of requests killed after sleep on busy objhdr MAIN.sess_queued 35718 0.02 Sessions queued for thread MAIN.sess_dropped 0 0.00 Sessions dropped for thread MAIN.n_object 370310 . object structs made MAIN.n_vampireobject 0 . unresurrected objects MAIN.n_objectcore 370485 . objectcore structs made MAIN.n_objecthead 376770 . objecthead structs made MAIN.n_waitinglist 388 . waitinglist structs made MAIN.n_backend 363 . Number of backends MAIN.n_expired 68570078 . Number of expired objects MAIN.n_lru_nuked 20474008 . Number of LRU nuked objects MAIN.n_lru_moved 6156256 . 
Number of LRU moved objects MAIN.n_lru_limited 3 0.00 Reached nuke_limit MAIN.losthdr 0 0.00 HTTP header overflows MAIN.s_sess 152864621 83.91 Total sessions seen MAIN.s_req 290152364 159.27 Total requests seen MAIN.s_pipe 216 0.00 Total pipe sessions seen MAIN.s_pass 181773529 99.78 Total pass-ed requests seen MAIN.s_fetch 271300050 148.92 Total backend fetches initiated MAIN.s_synth 11418599 6.27 Total synthethic responses made MAIN.s_req_hdrbytes 490884503465 269455.89 Request header bytes MAIN.s_req_bodybytes 6627058200 3637.72 Request body bytes MAIN.s_resp_hdrbytes 253365794945 139077.33 Response header bytes MAIN.s_resp_bodybytes 2259371209416 1240212.06 Response body bytes MAIN.s_pipe_hdrbytes 317932 0.17 Pipe request header bytes MAIN.s_pipe_in 0 0.00 Piped bytes from client MAIN.s_pipe_out 650171 0.36 Piped bytes to client MAIN.s_pipe_out 650171 0.36 Piped bytes to client MAIN.sess_closed 8450 0.00 Session Closed MAIN.sess_closed_err 51028839 28.01 Session Closed with error MAIN.sess_readahead 0 0.00 Session Read Ahead MAIN.sess_herd 208419766 114.41 Session herd MAIN.sc_rem_close 101825384 55.89 Session OK REM_CLOSE MAIN.sc_req_close 103 0.00 Session OK REQ_CLOSE MAIN.sc_req_http10 0 0.00 Session Err REQ_HTTP10 MAIN.sc_rx_bad 0 0.00 Session Err RX_BAD MAIN.sc_rx_body 0 0.00 Session Err RX_BODY MAIN.sc_rx_junk 0 0.00 Session Err RX_JUNK MAIN.sc_rx_overflow 0 0.00 Session Err RX_OVERFLOW MAIN.sc_rx_timeout 51028883 28.01 Session Err RX_TIMEOUT MAIN.sc_tx_pipe 216 0.00 Session OK TX_PIPE MAIN.sc_tx_error 0 0.00 Session Err TX_ERROR MAIN.sc_tx_eof 0 0.00 Session OK TX_EOF MAIN.sc_resp_close 6107 0.00 Session OK RESP_CLOSE MAIN.sc_overload 0 0.00 Session Err OVERLOAD MAIN.sc_pipe_overflow 0 0.00 Session Err PIPE_OVERFLOW MAIN.sc_range_short 0 0.00 Session Err RANGE_SHORT MAIN.shm_records 48096929466 26401.32 SHM records MAIN.shm_writes 1753278706 962.41 SHM writes MAIN.shm_flushes 111020519 60.94 SHM flushes due to overflow MAIN.shm_cont 6287333 3.45 SHM MTX 
contention MAIN.shm_cycles 24363 0.01 SHM cycles through buffer MAIN.backend_req 271344507 148.95 Backend requests made MAIN.n_vcl 10 . Number of loaded VCLs in total MAIN.n_vcl_avail 10 . Number of VCLs available MAIN.n_vcl_discard 0 . Number of discarded VCLs MAIN.bans 1 . Count of bans MAIN.bans_completed 0 . Number of bans marked 'completed' MAIN.bans_obj 0 . Number of bans using obj.* MAIN.bans_req 1 . Number of bans using req.* MAIN.bans_added 128 0.00 Bans added MAIN.bans_deleted 127 0.00 Bans deleted MAIN.bans_tested 630175 0.35 Bans tested against objects (lookup) MAIN.bans_obj_killed 112189 0.06 Objects killed by bans (lookup) MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker) MAIN.bans_tests_tested 1382078 0.76 Ban tests tested against objects (lookup) MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker) MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker) MAIN.bans_dups 14 0.00 Bans superseded by other bans MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup MAIN.bans_persisted_bytes 34768 . Bytes used by the persisted ban lists MAIN.bans_persisted_fragmentation 34468 . Extra bytes in persisted ban lists due to fragmentation MAIN.n_purges 0 . Number of purge operations executed MAIN.n_obj_purged 0 . Number of purged objects MAIN.exp_mailed 110185218 60.48 Number of objects mailed to expiry thread MAIN.exp_received 110185218 60.48 Number of objects received by expiry thread MAIN.hcb_nolock 112963041 62.01 HCB Lookups without lock MAIN.hcb_lock 86256017 47.35 HCB Lookups with lock MAIN.hcb_insert 86255995 47.35 HCB Inserts MAIN.esi_errors 0 0.00 ESI parse errors (unlock) MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock) MAIN.vmods 2 . Loaded VMODs MAIN.n_gzip 129789432 71.24 Gzip operations MAIN.n_gunzip 210646709 115.63 Gunzip operations MAIN.vsm_free 965520 . Free VSM space MAIN.vsm_used 83969088 . Used VSM space MAIN.vsm_cooling 0 . Cooling VSM space MAIN.vsm_overflow 0 . 
Overflow VSM space MAIN.vsm_overflowed 0 0.00 Overflowed VSM space MGT.uptime 1821761 1.00 Management process uptime MGT.child_start 1 0.00 Child process started MGT.child_exit 0 0.00 Child process normal exit MGT.child_stop 0 0.00 Child process unexpected exit MGT.child_died 0 0.00 Child process died (signal) MGT.child_dump 0 0.00 Child process core dumped MGT.child_panic 0 0.00 Child process panic MEMPOOL.busyobj.live 34 . In use MEMPOOL.busyobj.pool 32 . In Pool MEMPOOL.busyobj.sz_wanted 65536 . Size requested MEMPOOL.busyobj.sz_actual 65504 . Size allocated MEMPOOL.busyobj.allocs 271336588 148.94 Allocations MEMPOOL.busyobj.frees 271336554 148.94 Frees MEMPOOL.busyobj.recycle 270933647 148.72 Recycled from pool MEMPOOL.busyobj.timeout 988916 0.54 Timed out from pool MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle MEMPOOL.busyobj.surplus 88997 0.05 Too many for pool MEMPOOL.busyobj.randry 402941 0.22 Pool ran dry MEMPOOL.req0.live 23 . In use MEMPOOL.req0.pool 24 . In Pool MEMPOOL.req0.sz_wanted 131072 . Size requested MEMPOOL.req0.sz_actual 131040 . Size allocated MEMPOOL.req0.allocs 155090050 85.13 Allocations MEMPOOL.req0.frees 155090027 85.13 Frees MEMPOOL.req0.recycle 154983843 85.07 Recycled from pool MEMPOOL.req0.timeout 1025602 0.56 Timed out from pool MEMPOOL.req0.toosmall 0 0.00 Too small to recycle MEMPOOL.req0.surplus 2995 0.00 Too many for pool MEMPOOL.req0.randry 106207 0.06 Pool ran dry MEMPOOL.sess0.live 333 . In use MEMPOOL.sess0.pool 20 . In Pool MEMPOOL.sess0.sz_wanted 512 . Size requested MEMPOOL.sess0.sz_actual 480 . Size allocated MEMPOOL.sess0.allocs 76433149 41.96 Allocations MEMPOOL.sess0.frees 76432816 41.96 Frees MEMPOOL.sess0.recycle 76232233 41.85 Recycled from pool MEMPOOL.sess0.timeout 1809683 0.99 Timed out from pool MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle MEMPOOL.sess0.surplus 18419 0.01 Too many for pool MEMPOOL.sess0.randry 200916 0.11 Pool ran dry MEMPOOL.req1.live 23 . In use MEMPOOL.req1.pool 25 . 
In Pool MEMPOOL.req1.sz_wanted 131072 . Size requested MEMPOOL.req1.sz_actual 131040 . Size allocated MEMPOOL.req1.allocs 155148876 85.16 Allocations MEMPOOL.req1.frees 155148853 85.16 Frees MEMPOOL.req1.recycle 155041566 85.11 Recycled from pool MEMPOOL.req1.timeout 1025704 0.56 Timed out from pool MEMPOOL.req1.toosmall 0 0.00 Too small to recycle MEMPOOL.req1.surplus 2749 0.00 Too many for pool MEMPOOL.req1.randry 107310 0.06 Pool ran dry MEMPOOL.sess1.live 314 . In use MEMPOOL.sess1.pool 36 . In Pool MEMPOOL.sess1.sz_wanted 512 . Size requested MEMPOOL.sess1.sz_actual 480 . Size allocated MEMPOOL.sess1.allocs 76431491 41.95 Allocations MEMPOOL.sess1.frees 76431177 41.95 Frees MEMPOOL.sess1.recycle 76229789 41.84 Recycled from pool MEMPOOL.sess1.timeout 1811749 0.99 Timed out from pool MEMPOOL.sess1.toosmall 0 0.00 Too small to recycle MEMPOOL.sess1.surplus 18312 0.01 Too many for pool MEMPOOL.sess1.randry 201702 0.11 Pool ran dry SMA.default.c_req 104863620 57.56 Allocator requests SMA.default.c_fail 20841440 11.44 Allocator failures SMA.default.c_bytes 1035542691744 568429.19 Bytes allocated SMA.default.c_freed 1018362838340 558998.84 Bytes freed SMA.default.g_alloc 1393168 . Allocations outstanding SMA.default.g_bytes 17179853404 . Bytes outstanding SMA.default.g_space 15780 . Bytes available SMA.Transient.c_req 570410809 313.11 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 4076783572265 2237824.46 Bytes allocated SMA.Transient.c_freed 4076757189033 2237809.98 Bytes freed SMA.Transient.g_alloc 19133 . Allocations outstanding SMA.Transient.g_bytes 26383232 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available ... 
LCK.backend.creat 365 0.00 Created locks [0/604] LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 555822161 305.10 Lock Operations LCK.backend_tcp.creat 37 0.00 Created locks LCK.backend_tcp.destroy 0 0.00 Destroyed locks LCK.backend_tcp.locks 1076235922 590.77 Lock Operations LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 373355072 204.94 Lock Operations LCK.busyobj.creat 271334966 148.94 Created locks LCK.busyobj.destroy 271336345 148.94 Destroyed locks LCK.busyobj.locks 2741594616 1504.91 Lock Operations LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 607411 0.33 Lock Operations LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 563288168 309.20 Lock Operations LCK.hcb.creat 1 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 172158514 94.50 Lock Operations LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 296448067 162.73 Lock Operations LCK.mempool.creat 5 0.00 Created locks LCK.mempool.destroy 0 0.00 Destroyed locks LCK.mempool.locks 1487652815 816.60 Lock Operations LCK.objhdr.creat 86265523 47.35 Created locks LCK.objhdr.destroy 85889681 47.15 Destroyed locks LCK.objhdr.locks 3588940017 1970.04 Lock Operations LCK.pipestat.creat 1 0.00 Created locks LCK.pipestat.destroy 0 0.00 Destroyed locks LCK.pipestat.locks 216 0.00 Lock Operations LCK.sess.creat 152861317 83.91 Created locks LCK.sess.destroy 152862891 83.91 Destroyed locks LCK.sess.locks 37421700 20.54 Lock Operations LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 607832 0.33 Lock Operations LCK.vcapace.creat 1 0.00 Created locks LCK.vcapace.destroy 0 0.00 Destroyed locks LCK.vcapace.locks 0 0.00 Lock Operations LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 
0.00 Destroyed locks
LCK.vcl.locks 543937089 298.58 Lock Operations
LCK.vxid.creat 1 0.00 Created locks
LCK.vxid.destroy 0 0.00 Destroyed locks
LCK.vxid.locks 45733 0.03 Lock Operations
LCK.waiter.creat 2 0.00 Created locks
LCK.waiter.destroy 0 0.00 Destroyed locks
LCK.waiter.locks 1420757858 779.88 Lock Operations
LCK.wq.creat 3 0.00 Created locks
LCK.wq.destroy 0 0.00 Destroyed locks
LCK.wq.locks 1773506747 973.51 Lock Operations
LCK.wstat.creat 1 0.00 Created locks
LCK.wstat.destroy 0 0.00 Destroyed locks
LCK.wstat.locks 759559125 416.94 Lock Operations
LCK.sma.creat 2 0.00 Created locks
LCK.sma.destroy 0 0.00 Destroyed locks
LCK.sma.locks 1328287853 729.12 Lock Operations

I removed the vcl_root sections, I hope you won't need them.

Thanks again,

On Wed, 17 Apr 2019 at 17:28, Guillaume Quintard <guillaume at varnish-software.com> wrote:

> Can you share the "varnishstat -1" output?
>
> I'm pretty sure the answer is in the passes and synth responses you omitted
>
> On Wed, Apr 17, 2019, 16:19 Alexandre Thaveau < Alexandre.Thaveau at mister-auto.com> wrote:
>
>> Hello everybody,
>>
>> i was trying to get some stats from my varnish server using varnishstat.
>> When using varnish stat, i see that :
>> - MAIN.client_req should make 100% of my queries
>> - MAIN.cache_hit represents 10% of MAIN.client_req
>> - MAIN.cache_hitpass represents 7% of MAIN.client_req
>> - MAIN.cache_miss represents 24% of MAIN.client_req
>> - MAIN.cache_hit_grace is very low
>>
>> So all these sumed categories represent less than 50% of client_req, i
>> think i'm missing something.
The configuration is not maintained by me, >> here is a sample of what it returns if this can help :
>> -------------------------------------
>> cat /etc/varnish/production.vcl | grep return
>> return(synth(900, "Moved Permanently"));
>> return(synth(901, "Moved Permanently"));
>> return(synth(902, "Moved Permanently"));
>> return(synth(903, "Moved Permanently"));
>> return(pipe);
>> return(pipe);
>> return(pass);
>> return(pass);
>> return(pass);
>> return(synth(410, "Gone"));
>> return(pass);
>> return(synth(850, "Moved Permanently"));
>> return(hash);
>> return(hash);
>> return(pass);
>> return(hash);
>> return(lookup);
>> return(retry);
>> return(deliver);
>> -------------------------------------
>>
>> Thanks very much for your help,
>> Regards,
>> Alex
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guillaume at varnish-software.com Wed Apr 17 16:19:13 2019
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Wed, 17 Apr 2019 17:19:13 +0100
Subject: Varnish hit + pass + miss reaches less than 50% of all reqs
In-Reply-To:
References:
Message-ID:

Hi there,

So:

MAIN.client_req 290152364 (all the requests)

vs

MAIN.cache_hit 7433491
MAIN.cache_hit_grace 36319 (exclude these, as they are already accounted for in MAIN.cache_hit)
MAIN.cache_hitpass 16003020 (exclude these, as they are already accounted for in MAIN.s_pass)
MAIN.cache_miss 89526521
MAIN.s_synth 11418599
MAIN.s_pipe 216
MAIN.s_pass 181773529

The difference is now 8 requests, which is fairly reasonable (some requests may be in flight, and threads don't necessarily push their stats after every request).

Does this clarify things a bit?
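The tally above can be checked mechanically. A quick shell recomputation using the counters from the varnishstat output quoted earlier (cache_hit_grace and cache_hitpass deliberately left out, since they are already folded into cache_hit and s_pass respectively):

```shell
# Counters copied from the varnishstat -1 output above.
client_req=290152364
cache_hit=7433491      # cache_hit_grace is already included in this counter
cache_miss=89526521
s_synth=11418599
s_pipe=216
s_pass=181773529       # cache_hitpass is already included in this counter

accounted=$((cache_hit + cache_miss + s_synth + s_pipe + s_pass))
echo "accounted=$accounted diff=$((client_req - accounted))"
# prints: accounted=290152356 diff=8
```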
-- Guillaume Quintard On Wed, Apr 17, 2019 at 4:51 PM Alexandre Thaveau < Alexandre.Thaveau at mister-auto.com> wrote: > Hi Guillaume, > > thanks for helping ! > > Here it is : > MAIN.uptime 1821762 1.00 Child process uptime > > > [532/604] > MAIN.sess_conn 152864621 83.91 Sessions accepted > MAIN.sess_drop 0 0.00 Sessions dropped > MAIN.sess_fail 0 0.00 Session accept failures > MAIN.client_req_400 0 0.00 Client requests received, > subject to 400 errors > MAIN.client_req_417 0 0.00 Client requests received, > subject to 417 errors > MAIN.client_req 290152364 159.27 Good client requests received > MAIN.cache_hit 7433491 4.08 Cache hits > MAIN.cache_hit_grace 36319 0.02 Cache grace hits > MAIN.cache_hitpass 16003020 8.78 Cache hits for pass > MAIN.cache_miss 89526521 49.14 Cache misses > MAIN.backend_conn 5078542 2.79 Backend conn. success > MAIN.backend_unhealthy 2 0.00 Backend conn. not > attempted > MAIN.backend_busy 0 0.00 Backend conn. too many > MAIN.backend_fail 6245 0.00 Backend conn. failures > MAIN.backend_reuse 266259369 146.15 Backend conn. reuses > MAIN.backend_recycle 267274817 146.71 Backend conn. recycles > MAIN.backend_retry 17429 0.01 Backend conn. retry > MAIN.fetch_head 1623 0.00 Fetch no body (HEAD) > MAIN.fetch_length 77119616 42.33 Fetch with Length > MAIN.fetch_chunked 188334324 103.38 Fetch chunked > MAIN.fetch_eof 295751 0.16 Fetch EOF > MAIN.fetch_bad 0 0.00 Fetch bad T-E > MAIN.fetch_none 18415 0.01 Fetch no body > MAIN.fetch_1xx 0 0.00 Fetch no body (1xx) > MAIN.fetch_204 5427973 2.98 Fetch no body (204) > MAIN.fetch_304 130058 0.07 Fetch no body (304) > MAIN.fetch_failed 8591 0.00 Fetch failed (all causes) > MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread) > MAIN.pools 2 . Number of thread pools > MAIN.threads 400 . 
Total number of threads > MAIN.threads_limited 15 0.00 Threads hit max > MAIN.threads_created 35825 0.02 Threads created > MAIN.threads_destroyed 35425 0.02 Threads destroyed > MAIN.threads_failed 0 0.00 Thread creation failed > MAIN.thread_queue_len 0 . Length of session queue > MAIN.busy_sleep 160177 0.09 Number of requests sent > to sleep on busy objhdr > MAIN.busy_wakeup 160177 0.09 Number of requests woken > after sleep on busy objhdr > MAIN.busy_killed 0 0.00 Number of requests killed > after sleep on busy objhdr > MAIN.sess_queued 35718 0.02 Sessions queued for thread > MAIN.sess_dropped 0 0.00 Sessions dropped for > thread > MAIN.n_object 370310 . object structs made > MAIN.n_vampireobject 0 . unresurrected objects > MAIN.n_objectcore 370485 . objectcore structs made > MAIN.n_objecthead 376770 . objecthead structs made > MAIN.n_waitinglist 388 . waitinglist structs made > MAIN.n_backend 363 . Number of backends > MAIN.n_expired 68570078 . Number of expired objects > MAIN.n_lru_nuked 20474008 . Number of LRU nuked > objects > MAIN.n_lru_moved 6156256 . 
Number of LRU moved > objects > MAIN.n_lru_limited 3 0.00 Reached nuke_limit > MAIN.losthdr 0 0.00 HTTP header overflows > MAIN.s_sess 152864621 83.91 Total sessions seen > MAIN.s_req 290152364 159.27 Total requests seen > MAIN.s_pipe 216 0.00 Total pipe sessions seen > MAIN.s_pass 181773529 99.78 Total pass-ed requests > seen > MAIN.s_fetch 271300050 148.92 Total backend fetches > initiated > MAIN.s_synth 11418599 6.27 Total synthethic > responses made > MAIN.s_req_hdrbytes 490884503465 269455.89 Request header bytes > MAIN.s_req_bodybytes 6627058200 3637.72 Request body bytes > MAIN.s_resp_hdrbytes 253365794945 139077.33 Response header bytes > MAIN.s_resp_bodybytes 2259371209416 1240212.06 Response body bytes > MAIN.s_pipe_hdrbytes 317932 0.17 Pipe request header bytes > MAIN.s_pipe_in 0 0.00 Piped bytes from client > MAIN.s_pipe_out 650171 0.36 Piped bytes to client > MAIN.s_pipe_out 650171 0.36 Piped bytes to client > MAIN.sess_closed 8450 0.00 Session Closed > MAIN.sess_closed_err 51028839 28.01 Session Closed with error > MAIN.sess_readahead 0 0.00 Session Read Ahead > MAIN.sess_herd 208419766 114.41 Session herd > MAIN.sc_rem_close 101825384 55.89 Session OK REM_CLOSE > MAIN.sc_req_close 103 0.00 Session OK REQ_CLOSE > MAIN.sc_req_http10 0 0.00 Session Err REQ_HTTP10 > MAIN.sc_rx_bad 0 0.00 Session Err RX_BAD > MAIN.sc_rx_body 0 0.00 Session Err RX_BODY > MAIN.sc_rx_junk 0 0.00 Session Err RX_JUNK > MAIN.sc_rx_overflow 0 0.00 Session Err RX_OVERFLOW > MAIN.sc_rx_timeout 51028883 28.01 Session Err RX_TIMEOUT > MAIN.sc_tx_pipe 216 0.00 Session OK TX_PIPE > MAIN.sc_tx_error 0 0.00 Session Err TX_ERROR > MAIN.sc_tx_eof 0 0.00 Session OK TX_EOF > MAIN.sc_resp_close 6107 0.00 Session OK RESP_CLOSE > MAIN.sc_overload 0 0.00 Session Err OVERLOAD > MAIN.sc_pipe_overflow 0 0.00 Session Err PIPE_OVERFLOW > MAIN.sc_range_short 0 0.00 Session Err RANGE_SHORT > MAIN.shm_records 48096929466 26401.32 SHM records > MAIN.shm_writes 1753278706 962.41 SHM writes > 
MAIN.shm_flushes 111020519 60.94 SHM flushes due to > overflow > MAIN.shm_cont 6287333 3.45 SHM MTX contention > MAIN.shm_cycles 24363 0.01 SHM cycles through buffer > MAIN.backend_req 271344507 148.95 Backend requests made > MAIN.n_vcl 10 . Number of loaded VCLs in > total > MAIN.n_vcl_avail 10 . Number of VCLs available > MAIN.n_vcl_discard 0 . Number of discarded VCLs > MAIN.bans 1 . Count of bans > MAIN.bans_completed 0 . Number of bans marked > 'completed' > MAIN.bans_obj 0 . Number of bans using obj.* > MAIN.bans_req 1 . Number of bans using req.* > MAIN.bans_added 128 0.00 Bans added > MAIN.bans_deleted 127 0.00 Bans deleted > MAIN.bans_tested 630175 0.35 Bans tested against > objects (lookup) > MAIN.bans_obj_killed 112189 0.06 Objects killed by bans > (lookup) > MAIN.bans_lurker_tested 0 0.00 Bans tested against > objects (lurker) > MAIN.bans_tests_tested 1382078 0.76 Ban tests tested against > objects (lookup) > MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested > against objects (lurker) > MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by > bans (lurker) > MAIN.bans_dups 14 0.00 Bans superseded by > other bans > MAIN.bans_lurker_contention 0 0.00 Lurker gave way > for lookup > MAIN.bans_persisted_bytes 34768 . Bytes used by the > persisted ban lists > MAIN.bans_persisted_fragmentation 34468 . Extra bytes in > persisted ban lists due to fragmentation > MAIN.n_purges 0 . Number of > purge operations executed > MAIN.n_obj_purged 0 . Number of > purged objects > MAIN.exp_mailed 110185218 60.48 Number of > objects mailed to expiry thread > MAIN.exp_received 110185218 60.48 Number of > objects received by expiry thread > MAIN.hcb_nolock 112963041 62.01 HCB Lookups > without lock > MAIN.hcb_lock 86256017 47.35 HCB Lookups > with lock > MAIN.hcb_insert 86255995 47.35 HCB Inserts > MAIN.esi_errors 0 0.00 ESI parse > errors (unlock) > MAIN.esi_warnings 0 0.00 ESI parse > warnings (unlock) > MAIN.vmods 2 . 
Loaded VMODs > MAIN.n_gzip 129789432 71.24 Gzip operations > MAIN.n_gunzip 210646709 115.63 Gunzip > operations > MAIN.vsm_free 965520 . Free VSM space > MAIN.vsm_used 83969088 . Used VSM space > MAIN.vsm_cooling 0 . Cooling VSM > space > MAIN.vsm_overflow 0 . Overflow VSM > space > MAIN.vsm_overflowed 0 0.00 Overflowed VSM > space > MGT.uptime 1821761 1.00 Management > process uptime > MGT.child_start 1 0.00 Child process > started > MGT.child_exit 0 0.00 Child process > normal exit > MGT.child_stop 0 0.00 Child process > unexpected exit > MGT.child_died 0 0.00 Child process > died (signal) > MGT.child_dump 0 0.00 Child process > core dumped > MGT.child_panic 0 0.00 Child process > panic > MEMPOOL.busyobj.live 34 . In use > MEMPOOL.busyobj.pool 32 . In Pool > MEMPOOL.busyobj.sz_wanted 65536 . Size requested > MEMPOOL.busyobj.sz_actual 65504 . Size allocated > MEMPOOL.busyobj.allocs 271336588 148.94 Allocations > MEMPOOL.busyobj.frees 271336554 148.94 Frees > MEMPOOL.busyobj.recycle 270933647 148.72 Recycled from > pool > MEMPOOL.busyobj.timeout 988916 0.54 Timed out from > pool > MEMPOOL.busyobj.toosmall 0 0.00 Too small to > recycle > MEMPOOL.busyobj.surplus 88997 0.05 Too many for > pool > MEMPOOL.busyobj.randry 402941 0.22 Pool ran dry > MEMPOOL.req0.live 23 . In use > MEMPOOL.req0.pool 24 . In Pool > MEMPOOL.req0.sz_wanted 131072 . Size requested > MEMPOOL.req0.sz_actual 131040 . Size allocated > MEMPOOL.req0.allocs 155090050 85.13 Allocations > MEMPOOL.req0.frees 155090027 85.13 Frees > MEMPOOL.req0.recycle 154983843 85.07 Recycled from > pool > MEMPOOL.req0.timeout 1025602 0.56 Timed out from > pool > MEMPOOL.req0.toosmall 0 0.00 Too small to > recycle > MEMPOOL.req0.surplus 2995 0.00 Too many for > pool > MEMPOOL.req0.randry 106207 0.06 Pool ran dry > MEMPOOL.sess0.live 333 . In use > MEMPOOL.sess0.pool 20 . In Pool > MEMPOOL.sess0.sz_wanted 512 . Size requested > MEMPOOL.sess0.sz_actual 480 . 
Size allocated
> MEMPOOL.sess0.allocs        76433149        41.96  Allocations
> MEMPOOL.sess0.frees         76432816        41.96  Frees
> MEMPOOL.sess0.recycle       76232233        41.85  Recycled from pool
> MEMPOOL.sess0.timeout        1809683         0.99  Timed out from pool
> MEMPOOL.sess0.toosmall             0         0.00  Too small to recycle
> MEMPOOL.sess0.surplus          18419         0.01  Too many for pool
> MEMPOOL.sess0.randry          200916         0.11  Pool ran dry
> MEMPOOL.req1.live                 23            .  In use
> MEMPOOL.req1.pool                 25            .  In Pool
> MEMPOOL.req1.sz_wanted        131072            .  Size requested
> MEMPOOL.req1.sz_actual        131040            .  Size allocated
> MEMPOOL.req1.allocs        155148876        85.16  Allocations
> MEMPOOL.req1.frees         155148853        85.16  Frees
> MEMPOOL.req1.recycle       155041566        85.11  Recycled from pool
> MEMPOOL.req1.timeout         1025704         0.56  Timed out from pool
> MEMPOOL.req1.toosmall              0         0.00  Too small to recycle
> MEMPOOL.req1.surplus            2749         0.00  Too many for pool
> MEMPOOL.req1.randry           107310         0.06  Pool ran dry
> MEMPOOL.sess1.live               314            .  In use
> MEMPOOL.sess1.pool                36            .  In Pool
> MEMPOOL.sess1.sz_wanted          512            .  Size requested
> MEMPOOL.sess1.sz_actual          480            .  Size allocated
> MEMPOOL.sess1.allocs        76431491        41.95  Allocations
> MEMPOOL.sess1.frees         76431177        41.95  Frees
> MEMPOOL.sess1.recycle       76229789        41.84  Recycled from pool
> MEMPOOL.sess1.timeout        1811749         0.99  Timed out from pool
> MEMPOOL.sess1.toosmall             0         0.00  Too small to recycle
> MEMPOOL.sess1.surplus          18312         0.01  Too many for pool
> MEMPOOL.sess1.randry          201702         0.11  Pool ran dry
> SMA.default.c_req          104863620        57.56  Allocator requests
> SMA.default.c_fail          20841440        11.44  Allocator failures
> SMA.default.c_bytes    1035542691744    568429.19  Bytes allocated
> SMA.default.c_freed    1018362838340    558998.84  Bytes freed
> SMA.default.g_alloc          1393168            .  Allocations outstanding
> SMA.default.g_bytes      17179853404            .  Bytes outstanding
> SMA.default.g_space            15780            .  Bytes available
> SMA.Transient.c_req        570410809       313.11  Allocator requests
> SMA.Transient.c_fail               0         0.00  Allocator failures
> SMA.Transient.c_bytes  4076783572265   2237824.46  Bytes allocated
> SMA.Transient.c_freed  4076757189033   2237809.98  Bytes freed
> SMA.Transient.g_alloc          19133            .  Allocations outstanding
> SMA.Transient.g_bytes       26383232            .  Bytes outstanding
> SMA.Transient.g_space              0            .  Bytes available
> ...
> LCK.backend.creat                365         0.00  Created locks
> LCK.backend.destroy                0         0.00  Destroyed locks
> LCK.backend.locks          555822161       305.10  Lock Operations
> LCK.backend_tcp.creat             37         0.00  Created locks
> LCK.backend_tcp.destroy            0         0.00  Destroyed locks
> LCK.backend_tcp.locks     1076235922       590.77  Lock Operations
> LCK.ban.creat                      1         0.00  Created locks
> LCK.ban.destroy                    0         0.00  Destroyed locks
> LCK.ban.locks              373355072       204.94  Lock Operations
> LCK.busyobj.creat          271334966       148.94  Created locks
> LCK.busyobj.destroy        271336345       148.94  Destroyed locks
> LCK.busyobj.locks         2741594616      1504.91  Lock Operations
> LCK.cli.creat                      1         0.00  Created locks
> LCK.cli.destroy                    0         0.00  Destroyed locks
> LCK.cli.locks                 607411         0.33  Lock Operations
> LCK.exp.creat                      1         0.00  Created locks
> LCK.exp.destroy                    0         0.00  Destroyed locks
> LCK.exp.locks              563288168       309.20  Lock Operations
> LCK.hcb.creat                      1         0.00  Created locks
> LCK.hcb.destroy                    0         0.00  Destroyed locks
> LCK.hcb.locks              172158514        94.50  Lock Operations
> LCK.lru.creat                      2         0.00  Created locks
> LCK.lru.destroy                    0         0.00  Destroyed locks
> LCK.lru.locks              296448067       162.73  Lock Operations
> LCK.mempool.creat                  5         0.00  Created locks
> LCK.mempool.destroy                0         0.00  Destroyed locks
> LCK.mempool.locks         1487652815       816.60  Lock Operations
> LCK.objhdr.creat            86265523        47.35  Created locks
> LCK.objhdr.destroy          85889681        47.15  Destroyed locks
> LCK.objhdr.locks          3588940017      1970.04  Lock Operations
> LCK.pipestat.creat                 1         0.00  Created locks
> LCK.pipestat.destroy               0         0.00  Destroyed locks
> LCK.pipestat.locks               216         0.00  Lock Operations
> LCK.sess.creat             152861317        83.91  Created locks
> LCK.sess.destroy           152862891        83.91  Destroyed locks
> LCK.sess.locks              37421700        20.54  Lock Operations
> LCK.smp.creat                      0         0.00  Created locks
> LCK.smp.destroy                    0         0.00  Destroyed locks
> LCK.smp.locks                      0         0.00  Lock Operations
> LCK.vbe.creat                      1         0.00  Created locks
> LCK.vbe.destroy                    0         0.00  Destroyed locks
> LCK.vbe.locks                 607832         0.33  Lock Operations
> LCK.vcapace.creat                  1         0.00  Created locks
> LCK.vcapace.destroy                0         0.00  Destroyed locks
> LCK.vcapace.locks                  0         0.00  Lock Operations
> LCK.vcl.creat                      1         0.00  Created locks
> LCK.vcl.destroy                    0         0.00  Destroyed locks
> LCK.vcl.locks              543937089       298.58  Lock Operations
> LCK.vxid.creat                     1         0.00  Created locks
> LCK.vxid.destroy                   0         0.00  Destroyed locks
> LCK.vxid.locks                 45733         0.03  Lock Operations
> LCK.waiter.creat                   2         0.00  Created locks
> LCK.waiter.destroy                 0         0.00  Destroyed locks
> LCK.waiter.locks          1420757858       779.88  Lock Operations
> LCK.wq.creat                       3         0.00  Created locks
> LCK.wq.destroy                     0         0.00  Destroyed locks
> LCK.wq.locks              1773506747       973.51  Lock Operations
> LCK.wstat.creat                    1         0.00  Created locks
> LCK.wstat.destroy                  0         0.00  Destroyed locks
> LCK.wstat.locks            759559125       416.94  Lock Operations
> LCK.sma.creat                      2         0.00  Created locks
> LCK.sma.destroy                    0         0.00  Destroyed locks
> LCK.sma.locks             1328287853       729.12  Lock Operations
>
> I removed the vcl_root sections, I hope you won't need them.
>
> Thanks again,
>
> On Wed, Apr 17, 2019 at 17:28, Guillaume Quintard <
> guillaume at varnish-software.com> wrote:
>
>> Can you share the "varnishstat -1" output?
>>
>> I'm pretty sure the answer is in the passes and synth responses you
>> omitted.
>>
>>
>> On Wed, Apr 17, 2019, 16:19 Alexandre Thaveau <
>> Alexandre.Thaveau at mister-auto.com> wrote:
>>
>>> Hello everybody,
>>>
>>> I was trying to get some stats from my varnish server using varnishstat.
>>> When using varnishstat, I see that:
>>> - MAIN.client_req should make up 100% of my queries
>>> - MAIN.cache_hit represents 10% of MAIN.client_req
>>> - MAIN.cache_hitpass represents 7% of MAIN.client_req
>>> - MAIN.cache_miss represents 24% of MAIN.client_req
>>> - MAIN.cache_hit_grace is very low
>>>
>>> So all these summed categories represent less than 50% of client_req; I
>>> think I'm missing something. The configuration is not maintained by me,
>>> here is a sample of what it returns, if this can help:
>>> -------------------------------------
>>> cat /etc/varnish/production.vcl | grep return
>>> return(synth(900, "Moved Permanently"));
>>> return(synth(901, "Moved Permanently"));
>>> return(synth(902, "Moved Permanently"));
>>> return(synth(903, "Moved Permanently"));
>>> return(pipe);
>>> return(pipe);
>>> return(pass);
>>> return(pass);
>>> return(pass);
>>> return(synth(410, "Gone"));
>>> return(pass);
>>> return(synth(850, "Moved Permanently"));
>>> return(hash);
>>> return(hash);
>>> return(pass);
>>> return(hash);
>>> return(lookup);
>>> return(retry);
>>> return(deliver);
>>> -------------------------------------
>>>
>>> Thanks very much for your help,
>>> Regards,
>>> Alex
>>> _______________________________________________
>>> varnish-misc mailing list
>>> varnish-misc at varnish-cache.org
>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>>
>>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dridi at varni.sh  Thu Apr 18 06:23:27 2019
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 18 Apr 2019 08:23:27 +0200
Subject: Varnish hit + pass + miss reaches less than 50% of all reqs
In-Reply-To: 
References: 
Message-ID: 

On Wed, Apr 17, 2019 at 6:23 PM Guillaume Quintard wrote:
>
> Hi there,
>
> So:
>
> MAIN.client_req        290152364  (aaaaaaaaaaall the requests)
>
> vs
>
> MAIN.cache_hit           7433491
> MAIN.cache_hit_grace       36319  (exclude these, as they are already accounted for in MAIN.cache_hit)
> MAIN.cache_hitpass      16003020  (exclude these, as they are already accounted for in MAIN.s_pass)
> MAIN.cache_miss         89526521
> MAIN.s_synth            11418599
> MAIN.s_pipe                  216
> MAIN.s_pass            181773529
>
> the difference is now 8 requests, which is fairly reasonable (some
> requests may be in flight, and threads don't necessarily push their stats
> after every request)

Well, you can also return(synth) from almost anywhere, including after
a lookup where we bump one of the outcomes. This can create a bit of
double-accounting from the point of view of "summing the rest".

Dridi

From Alexandre.Thaveau at mister-auto.com  Thu Apr 18 07:36:04 2019
From: Alexandre.Thaveau at mister-auto.com (Alexandre Thaveau)
Date: Thu, 18 Apr 2019 09:36:04 +0200
Subject: Varnish hit + pass + miss reaches less than 50% of all reqs
In-Reply-To: 
References: 
Message-ID: 

Ok, thanks very much for your help, my Prometheus graph will be much more
useful with this information :)

Best regards,
Alex

On Thu, Apr 18, 2019 at
08:24, Dridi Boukelmoune wrote:

> On Wed, Apr 17, 2019 at 6:23 PM Guillaume Quintard
> wrote:
> >
> > Hi there,
> >
> > So:
> >
> > MAIN.client_req 290152364 (aaaaaaaaaaall the requests)
> >
> > vs
> >
> > MAIN.cache_hit 7433491
> > MAIN.cache_hit_grace 36319 (exclude these, as they are already
> accounted for in MAIN.cache_hit)
> > MAIN.cache_hitpass 16003020 (exclude these, as they are already
> accounted for in MAIN.s_pass)
> > MAIN.cache_miss 89526521
> > MAIN.s_synth 11418599
> > MAIN.s_pipe 216
> > MAIN.s_pass 181773529
> >
> > the difference is now 8 requests, which is fairly reasonable (some
> requests may be in flight, and threads don't necessarily push their stats
> after every request)
>
> Well, you can also return(synth) from almost anywhere, including after
> a lookup where we bump one of the outcomes. This can create a bit of
> double-accounting from the point of view of "summing the rest".
>
> Dridi
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Alexandre.Thaveau at mister-auto.com  Thu Apr 18 10:44:14 2019
From: Alexandre.Thaveau at mister-auto.com (Alexandre Thaveau)
Date: Thu, 18 Apr 2019 12:44:14 +0200
Subject: Monitoring of cached/uncached/pass bandwidth
Message-ID: 

Hi everybody,

I submitted another thread yesterday on the ML where I was working on
hit/miss/pass rate statistics for my varnish server. I would also like to
know if it is possible to monitor varnish bandwidth with the same
categorization:
- varnish bandwidth in the HIT category
- varnish bandwidth in the MISS category
- varnish bandwidth in the PASS category

I already had a look at varnishstat, my Prometheus exporter and Google,
but did not find anything about this.

Thanks again!
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
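[Editor's note: the per-outcome accounting Guillaume walks through in the thread above can be checked mechanically. A minimal shell sketch, using the exact counters quoted in the thread; as noted there, MAIN.cache_hit_grace is a subset of MAIN.cache_hit and MAIN.cache_hitpass requests are counted in MAIN.s_pass, so neither is added separately:]

```shell
# Verify that MAIN.client_req is (almost) fully explained by summing
# cache_hit + cache_miss + s_synth + s_pipe + s_pass.
# Counter values as quoted in Guillaume's message.
client_req=290152364
explained=$((7433491 + 89526521 + 11418599 + 216 + 181773529))
echo $((client_req - explained))   # -> 8 unaccounted (in-flight) requests
```

[On a live server, the same counters can be read from `varnishstat -1` and summed the same way; a small residue is expected, since worker threads do not flush their stats after every single request.]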
URL: 

From guillaume at varnish-software.com  Fri Apr 19 10:46:41 2019
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Fri, 19 Apr 2019 12:46:41 +0200
Subject: Monitoring of cached/uncached/pass bandwidth
In-Reply-To: 
References: 
Message-ID: 

Hi,

This is data that needs to be post-processed; I'm pretty sure you can do
this via Kibana and the like.

There's a Varnish Enterprise solution, Varnish Custom Statistics, that
allows you to see bandwidth per HIT/MISS/PASS, but also per url/domain and
the like.

In short: it's an external tool, and I don't know of any free one, sorry.

-- 
Guillaume Quintard

On Thu, Apr 18, 2019 at 12:46 PM Alexandre Thaveau <
Alexandre.Thaveau at mister-auto.com> wrote:

> Hi everybody,
>
> I submitted another thread yesterday on the ML where I was working on
> hit/miss/pass rate statistics for my varnish server. I would also like to
> know if it is possible to monitor varnish bandwidth with the same
> categorization:
> - varnish bandwidth in the HIT category
> - varnish bandwidth in the MISS category
> - varnish bandwidth in the PASS category
>
> I already had a look at varnishstat, my Prometheus exporter and Google,
> but did not find anything about this.
>
> Thanks again!
> Alex
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
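[Editor's note: for the archives, a rough free approximation of per-category bandwidth is to post-process the access log. `varnishncsa` can emit the request handling (`%{Varnish:handling}x`, one of hit/miss/pass/pipe/synth) alongside the body bytes sent (`%b`), and a short pipeline can aggregate bytes per category. A sketch, assuming a running varnishd with the shared-memory log enabled:]

```shell
# Sum response body bytes per Varnish handling category from the live log.
# %{Varnish:handling}x is hit/miss/pass/pipe/synth; %b is body bytes.
varnishncsa -F '%{Varnish:handling}x %b' |
  awk '{ bytes[$1] += $2 }
       END { for (c in bytes) printf "%s %d\n", c, bytes[c] }'
```

[The same awk program works offline against a saved `varnishncsa` log file; this is per-interval aggregation only, nothing like the per-url breakdown the commercial tool provides.]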