From guillaume.quintard at gmail.com Sun Jul 3 15:14:06 2022 From: guillaume.quintard at gmail.com (Guillaume Quintard) Date: Sun, 3 Jul 2022 17:14:06 +0200 Subject: Feedback needed: vmod_reqwest Message-ID: Hi all, In January, I wrote here about *vmod_reqwest* and today I'm coming back with a major update and a request for the community. Little refresher for those who don't know/remember what vmod_reqwest is about: https://github.com/gquintard/vmod_reqwest. In short it does *dynamic backends* and HTTP requests from VCL (*à la vmod_curl*). Some random buzzwords to make you click on the link: *HTTPS, HTTP/2, gzip, brotli, parallel requests, sync/async*, cryptocurrency. The main benefit of this release is the *probe support.* vmod_reqwest is now capable of handling probes the same way native backends do, but combined with dynamic backends, it allows one pretty neat trick: you can probe one backend to set the health of another. The API is fairly complete and ergonomic, I believe, but I would love to get more hands and eyes on this to break it/make it better. If some of you have opinions and/or want to take it for a spin, there are build explanations in the README, as well as a Dockerfile [1] that will build onto the official image without polluting it. Let me know what you think of it! [1]: thanks @thomersch for the help and push on the Docker front -- Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Wed Jul 6 07:06:47 2022 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 6 Jul 2022 07:06:47 +0000 Subject: Feedback needed: vmod_reqwest In-Reply-To: References: Message-ID: On Sun, Jul 3, 2022 at 3:16 PM Guillaume Quintard wrote: > > Hi all, > > In January, I wrote here about vmod_reqwest and today I'm coming back with a major update and a request for the community.
> > Little refresher for those who don't know/remember what vmod_reqwest is about: https://github.com/gquintard/vmod_reqwest. > In short it does dynamic backends and HTTP requests from VCL (à la vmod_curl). > Some random buzzwords to make you click on the link: HTTPS, HTTP/2, gzip, brotli, parallel requests, sync/async, cryptocurrency. I didn't find how to scam people with NFTs in the manual, should I open a github issue? > The main benefit of this release is the probe support. vmod_reqwest is now capable of handling probes the same way native backends do, but combined with dynamic backends, it allows you one pretty neat trick: you can probe one backend to set the health of another. > > The API is fairly complete and ergonomic I believe, but I would love to get more hands and eyes on this to break it/make it better. If some of you have opinions and/or want to take it for a spin, there are build explanations in the README, as well as a Dockerfile [1] that will build onto the official image without polluting it. > > Let me know what you think of it! I really like the idea of an optional path prefix being automatically prepended to the value of bereq.url directly at the backend layer :thumbsup: In general, I agree, the API looks rather well thought out, even though it does suffer from bloated constructor syndrome. Did you put only timeout and connect_timeout to lower the number of arguments or weren't you able to implement ftbo and bbto with reqwest? I suspect both :p Also it says this: > In practice, when contacting a backend, you will need to `unset bereq.http.accept-encoding;`, as Varnish sets it automatically. Probably a nice spot to mention https://varnish-cache.org/docs/7.0/reference/varnishd.html#http-gzip-support to explain why one would be set. On the other hand, if you disable gzip support you may also be forwarding the client's accept-encoding header if it survived all the way to the backend fetch.
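The accept-encoding advice quoted above can be sketched in VCL. This is a minimal illustration, not the vmod's documented API: the `reqwest.client()` constructor appears later in this thread, but the `.backend()` method name is an assumption here and may differ from the actual module:

```vcl
vcl 4.1;

import reqwest;

sub vcl_init {
    # Constructor arguments as shown later in this thread.
    new client = reqwest.client(base_url = "http://www.example.com");
}

sub vcl_backend_fetch {
    # With http_gzip_support enabled (the default), Varnish sets
    # Accept-Encoding itself; drop it so the HTTP client behind the
    # dynamic backend can negotiate gzip/brotli on its own.
    unset bereq.http.accept-encoding;
    set bereq.backend = client.backend();  # assumed method name
}
```

With http_gzip_support disabled instead, the client's own Accept-Encoding header may survive to the fetch, which is the caveat Dridi raises above.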
I may fork a vext_respounce [1] when extensions become capable of registering backend implementations ;) Cheers Dridi [1] I won't have time to actually do it From guillaume.quintard at gmail.com Wed Jul 6 15:54:23 2022 From: guillaume.quintard at gmail.com (Guillaume Quintard) Date: Wed, 6 Jul 2022 08:54:23 -0700 Subject: Feedback needed: vmod_reqwest In-Reply-To: References: Message-ID: On Wed, Jul 6, 2022 at 12:07 AM Dridi Boukelmoune wrote: > I didn't find how to scam people with NFTs in the manual, should I > open a github issue? > No no no, it's just to weed out the weak investors, send me 5 bitcoins and I'll show you where it is. > > In general, I agree, the API looks rather well thought out, even > though it does suffer bloated constructor syndrome. Yes, I realized later that I could use the event method of the backend to finalize the object; however, I'm not sure this: new client = reqwest.client(); client.set_base_url("http://www.example.com"); client.set_follow(5); client.set_brotli(true); client.set_probe(p1); client.set_connect_timeout(5s); is more readable, or practical, than: new client = reqwest.client( base_url = "http://www.example.com", follow = 5, auto_brotli = true, probe = p1, connect_timeout = 5s ); (consider this a question to you all, if you have an opinion, voice it!) Did you put only > timeout and connect_timeout to lower the number of arguments or > weren't you able to implement ftbo and bbto with reqwest? I suspect > both :p > Definitely the latter, once you pass the 6-7 arguments threshold, the sky's the limit. > > Also it says this: > > > In practice, when contacting a backend, you will need to `unset > bereq.http.accept-encoding;`, as Varnish sets it automatically. > > Probably a nice spot to mention > > https://varnish-cache.org/docs/7.0/reference/varnishd.html#http-gzip-support > to explain why one would be set.
> > On the other hand, if you disable gzip support you may also be > forwarding the client's accept-encoding header if it survived all the > way to the backend fetch > Good points, I can update the docs. I'm wondering though if it's better to special-case the AE header handling in the vmod and try to be smart, or just let the user do it in VCL... -- Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfelipe.sfilho at gmail.com Tue Jul 12 21:47:51 2022 From: jfelipe.sfilho at gmail.com (Felipe Santiago) Date: Tue, 12 Jul 2022 23:47:51 +0200 Subject: How to handle errors when using esi tags? Message-ID: Hi, I've been trying to use <esi:include> to execute a subrequest to /bar in case /foo fails, but I didn't manage to make it work. Do you support the alt attribute? If my backend returns a 4xx or 5xx, is that considered an error? I also found in the documentation some references on how to do that using <esi:remove>, but I didn't have success either. Any suggestions? Thanks in advance. Felipe Santiago -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Wed Jul 13 09:37:02 2022 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 13 Jul 2022 09:37:02 +0000 Subject: How to handle errors when using esi tags? In-Reply-To: References: Message-ID: On Tue, Jul 12, 2022 at 9:50 PM Felipe Santiago wrote: > > Hi, > > I've been trying to use <esi:include> to execute a subrequest to /bar in case /foo fails, but I didn't manage to make it work. Do you support the alt attribute? If my backend returns a 4xx or 5xx, is that considered an error? Hi, We don't support alternate URLs for ESI includes, and actually I'm wondering how we could do it. A 4xx or 5xx response is considered a response, so unless you abandon the backend fetch, such a backend response will be included in the client response.
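Dridi's suggestion to abandon the backend fetch can be sketched like this (standard VCL; the `/fragments/` path prefix is a made-up example for telling ESI subrequests apart):

```vcl
vcl 4.1;

sub vcl_backend_response {
    # If a fetch for an ESI fragment returns an error, abandon it
    # instead of letting the 4xx/5xx body be spliced into the parent
    # response.
    if (bereq.url ~ "^/fragments/" && beresp.status >= 400) {
        return (abandon);
    }
}
```

For a top-level request an abandoned fetch yields a 503, so a check like the URL match above keeps the rule scoped to fragment fetches.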
Since Varnish 7.1 you can abort ESI delivery on include errors: https://varnish-cache.org/docs/7.1/whats-new/changes-7.1.html#other-changes-in-varnishd > I also found in the documentation some references on how to do that using the esi:remove, but I didn't have success either. Any suggestions? The <esi:remove> tag serves a completely different purpose. Dridi From tom.anheyer at berlinonline.de Tue Jul 26 07:01:28 2022 From: tom.anheyer at berlinonline.de (Tom Anheyer | BerlinOnline) Date: Tue, 26 Jul 2022 07:01:28 +0000 Subject: Using varnish and vouch-proxy together Message-ID: Hello, I try to use vouch-proxy and varnish (v7) together to build an authorisation proxy. vouch-proxy is written to work with nginx ngx_http_auth_request_module https://github.com/vouch/vouch-proxy https://nginx.org/en/docs/http/ngx_http_auth_request_module.html Idea: inspired from https://web.archive.org/web/20121124064818/https://adayinthelifeof.nl/2012/07/06/using-varnish-to-offload-and-cache-your-oauth-requests/ - use varnish request restart feature - intercept original client request and make a GET request to vouch-proxy validate endpoint - when validated restore the original request and do a restart in detail: # vcl_recv # restarts == 0 # save req method, url, Content-Length, Content-Type in var # method := GET # url := /validate # backend := vouch-proxy # remove Content-Length, Content-Type # restarts > 0 # check vouch-proxy headers (roles, groups) # # vcl_deliver # resp == vouch-proxy,GET,/validate,200 # restore req method, url, Content-Length, Content-Type from var # forward vouch-proxy response headers to req # restart (original) req see attached common-vouch-proxy.vcl It works for client requests without request body (GET, HEAD, …) but not for POST, PUT, …. POST and PUT run into timeouts, so I think the request body is lost in the restarted request. Why is the body gone after restart? I think it should work with the curl vmod but this is not integrated yet.
Thank you very much in advance tom -- Tom Anheyer Senior Developer BerlinOnline Stadtportal GmbH & Co. KG Stefan-Heym-Platz 1 10365 Berlin Germany Tel.: +49 30 2327-5210 Fax: +49 30 5771180-95 E-Mail: tom.anheyer at berlinonline.de berlin.de | berlinonline.net Amtsgericht Berlin-Charlottenburg, HRA 31951 Sitz der Gesellschaft: Berlin, Deutschland USt-IdNr.: DE219483549 Persönlich haftender Gesellschafter: BerlinOnline Stadtportalbeteiligungsges. mbH Amtsgericht Berlin-Charlottenburg, HRB 79077 Sitz der Gesellschaft: Berlin, Deutschland Geschäftsführung: Olf Dziadek, Andreas Mängel Amtierender Vorsitzender des Aufsichtsrates: Lothar Sattler -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: common-vouch-proxy.vcl URL: From guillaume.quintard at gmail.com Tue Jul 26 07:50:21 2022 From: guillaume.quintard at gmail.com (Guillaume Quintard) Date: Tue, 26 Jul 2022 00:50:21 -0700 Subject: Using varnish and vouch-proxy together In-Reply-To: References: Message-ID: Hi! You are correct, the request body is, by default, not cached. To correct this, you need to use the cache_req_body() function from the std vmod: https://varnish-cache.org/docs/trunk/reference/vmod_std.html#std-cache-req-body I haven't looked at the vcl yet (or the article), but since you mention vmod_curl, maybe you can try with vmod_reqwest instead (https://github.com/gquintard/vmod_reqwest). It's a bit different, but hopefully more powerful, and I'd love to get some feedback on it. Hope that helps! On Tue, Jul 26, 2022, 00:03 Tom Anheyer | BerlinOnline < tom.anheyer at berlinonline.de> wrote: > Hello, > > I try to use vouch-proxy and varnish (v7) together to build an authorisation > proxy.
vouch-proxy is written to work with nginx > ngx_http_auth_request_module > > https://github.com/vouch/vouch-proxy > https://nginx.org/en/docs/http/ngx_http_auth_request_module.html > > Idea: > > inspired from > > https://web.archive.org/web/20121124064818/https://adayinthelifeof.nl/2012/07/06/using-varnish-to-offload-and-cache-your-oauth-requests/ > > - use varnish request restart feature > - intercept original client request and make a GET request to vouch-proxy > validate endpoint > - when validated restore the original request and do a restart > > in detail: > > # vcl_recv > # restarts == 0 > # save req method, url, Content-Length, Content-Type in var > # method := GET > # url := /validate > # backend := vouch-proxy > # remove Content-Length, Content-Type > # restarts > 0 > # check vouch-proxy headers (roles, groups) > # > # vcl_deliver > # resp == vouch-proxy,GET,/validate,200 > # restore req method, url, Content-Length, Content-Type from var > # forward vouch-proxy response headers to req > # restart (original) req > > see attached common-vouch-proxy.vcl > > It works for client requests without request body (GET, HEAD, …) but not > for > POST, PUT, …. POST and PUT run into timeouts, so I think the request body is > lost in > the restarted request. Why is the body gone after restart? > > I think it should work with the curl vmod but this is not integrated yet. > > Thank you very much in advance > tom > > -- > Tom Anheyer > Senior Developer > > BerlinOnline Stadtportal GmbH & Co. KG > Stefan-Heym-Platz 1 > 10365 Berlin > Germany > > Tel.: +49 30 2327-5210 > Fax: +49 30 5771180-95 > E-Mail: tom.anheyer at berlinonline.de > > berlin.de | berlinonline.net > > Amtsgericht Berlin-Charlottenburg, HRA 31951 > Sitz der Gesellschaft: Berlin, > Deutschland > USt-IdNr.: DE219483549 > > Persönlich haftender Gesellschafter: > BerlinOnline Stadtportalbeteiligungsges.
mbH > Amtsgericht Berlin-Charlottenburg, HRB 79077 > Sitz der Gesellschaft: Berlin, Deutschland > > Geschäftsführung: Olf Dziadek, Andreas Mängel > Amtierender Vorsitzender des Aufsichtsrates: Lothar Sattler > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.anheyer at berlinonline.de Tue Jul 26 12:13:35 2022 From: tom.anheyer at berlinonline.de (Tom Anheyer | BerlinOnline) Date: Tue, 26 Jul 2022 12:13:35 +0000 Subject: Using varnish and vouch-proxy together In-Reply-To: References: Message-ID: <3795877cc7c31f2b746134357d5efc90bbae0e66.camel@BerlinOnline.de> Hi Guillaume, Thumbs up for your cache_req_body() hint. It works perfectly now. Thank you very much for your help. TIL: POST, PUT, … requests are not restart-able by default. tom Am Dienstag, dem 26.07.2022 um 00:50 -0700 schrieb Guillaume Quintard: Hi! You are correct, the request body is, by default, not cached. To correct this, you need to use the cache_req_body() function from the std vmod: https://varnish-cache.org/docs/trunk/reference/vmod_std.html#std-cache-req-body I haven't looked at the vcl yet (or the article), but since you mention vmod_curl, maybe you can try with vmod_reqwest instead (https://github.com/gquintard/vmod_reqwest). It's a bit different, but hopefully more powerful, and I'd love to get some feedback on it. Hope that helps! On Tue, Jul 26, 2022, 00:03 Tom Anheyer | BerlinOnline > wrote: Hello, I try to use vouch-proxy and varnish (v7) together to build an authorisation proxy.
vouch-proxy is written to work with nginx ngx_http_auth_request_module https://github.com/vouch/vouch-proxy https://nginx.org/en/docs/http/ngx_http_auth_request_module.html Idea: inspired from https://web.archive.org/web/20121124064818/https://adayinthelifeof.nl/2012/07/06/using-varnish-to-offload-and-cache-your-oauth-requests/ - use varnish request restart feature - intercept original client request and make a GET request to vouch-proxy validate endpoint - when validated restore the original request and do a restart in detail: # vcl_recv # restarts == 0 # save req method, url, Content-Length, Content-Type in var # method := GET # url := /validate # backend := vouch-proxy # remove Content-Length, Content-Type # restarts > 0 # check vouch-proxy headers (roles, groups) # # vcl_deliver # resp == vouch-proxy,GET,/validate,200 # restore req method, url, Content-Length, Content-Type from var # forward vouch-proxy response headers to req # restart (original) req see attached common-vouch-proxy.vcl It works for client requests without request body (GET, HEAD, …) but not for POST, PUT, …. POST and PUT run into timeouts, so I think the request body is lost in the restarted request. Why is the body gone after restart? I think it should work with the curl vmod but this is not integrated yet. Thank you very much in advance tom -- Tom Anheyer Senior Developer BerlinOnline Stadtportal GmbH & Co. KG Stefan-Heym-Platz 1 10365 Berlin Germany Tel.: +49 30 2327-5210 Fax: +49 30 5771180-95 E-Mail: tom.anheyer at berlinonline.de berlin.de | berlinonline.net Amtsgericht Berlin-Charlottenburg, HRA 31951 Sitz der Gesellschaft: Berlin, Deutschland USt-IdNr.: DE219483549 Persönlich haftender Gesellschafter: BerlinOnline Stadtportalbeteiligungsges.
mbH Amtsgericht Berlin-Charlottenburg, HRB 79077 Sitz der Gesellschaft: Berlin, Deutschland Geschäftsführung: Olf Dziadek, Andreas Mängel Amtierender Vorsitzender des Aufsichtsrates: Lothar Sattler -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdtemccna at gmail.com Thu Jul 28 07:52:04 2022 From: tdtemccna at gmail.com (Turritopsis Dohrnii Teo En Ming) Date: Thu, 28 Jul 2022 15:52:04 +0800 Subject: How do I install and configure Varnish Cache web application accelerator/caching HTTP reverse proxy in front of Apache HTTP web server? Message-ID: Subject: How do I install and configure Varnish Cache web application accelerator/caching HTTP reverse proxy in front of Apache HTTP web server? Good day from Singapore, How do I install and configure Varnish Cache web application accelerator/caching HTTP reverse proxy in front of Apache HTTP web server? Though I have graduated with a Diploma in Computer Networking (3 Distinctions and 1 A) from Singapore Polytechnic in 2017, I am not that good and experienced with networking. The course curriculum is based on CCNA Routing and Switching. Hence I would appreciate any advice. If you can show me a network topology diagram of the Apache HTTP web server, Varnish Cache, firewall, router and network switch, it would greatly help me in my understanding. Thank you. Regards, Mr. Turritopsis Dohrnii Teo En Ming Targeted Individual in Singapore 28 July 2022 Thu Blogs: https://tdtemcerts.blogspot.com https://tdtemcerts.wordpress.com From afassl at progis.de Thu Jul 28 08:16:56 2022 From: afassl at progis.de (Andreas Fassl) Date: Thu, 28 Jul 2022 10:16:56 +0200 Subject: How do I install and configure Varnish Cache web application accelerator/caching HTTP reverse proxy in front of Apache HTTP web server? In-Reply-To: References: Message-ID: Hi, there are quite a lot of blogs and how-tos available, both on the varnish website and on other resources. Here is one with diagrams and some details.
https://plumrocket.com/learn/ssl-varnish-apache Best regards Andreas > Turritopsis Dohrnii Teo En Ming > 28. Juli 2022 um 09:52 > Subject: How do I install and configure Varnish Cache web application > accelerator/caching HTTP reverse proxy in front of Apache HTTP web > server? > > Good day from Singapore, > > How do I install and configure Varnish Cache web application > accelerator/caching HTTP reverse proxy in front of Apache HTTP web > server? > > Though I have graduated with a Diploma in Computer Networking (3 > Distinctions and 1 A) from Singapore Polytechnic in 2017, I am not > that good and experienced with networking. The course curriculum is > based on CCNA Routing and Switching. > > Hence I would appreciate any advice. If you can show me a network > topology diagram of the Apache HTTP web server, Varnish Cache, > firewall, router and network switch, it would greatly help me in my > understanding. > > Thank you. > > Regards, > > Mr. Turritopsis Dohrnii Teo En Ming > Targeted Individual in Singapore > 28 July 2022 Thu > Blogs: > https://tdtemcerts.blogspot.com > https://tdtemcerts.wordpress.com > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- proGIS Software & Beratung Inh. Dipl.-Ing. Andreas Fassl Hohenzollernring 88 50672 Köln Tel: 0221 - 8888 109 - 0 Fax: 0221 - 8888 109 - 99 web: http://www.progis.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdtemccna at gmail.com Thu Jul 28 09:09:57 2022 From: tdtemccna at gmail.com (Turritopsis Dohrnii Teo En Ming) Date: Thu, 28 Jul 2022 17:09:57 +0800 Subject: How do I install and configure Varnish Cache web application accelerator/caching HTTP reverse proxy in front of Apache HTTP web server?
In-Reply-To: References: Message-ID: Hi Andreas, Thanks for the link. Mr. Turritopsis Dohrnii Teo En Ming Targeted Individual in Singapore On Thu, 28 Jul 2022 at 16:16, Andreas Fassl wrote: > > Hi, > > there are quite a lot of blogs and how tos available, both on the varnish website and on other ressources. > > Here is one with diagrams and some details. > https://plumrocket.com/learn/ssl-varnish-apache > > Best regards > Andreas > > > Turritopsis Dohrnii Teo En Ming 28. Juli 2022 um 09:52 > Subject: How do I install and configure Varnish Cache web application > accelerator/caching HTTP reverse proxy in front of Apache HTTP web > server? > > Good day from Singapore, > > How do I install and configure Varnish Cache web application > accelerator/caching HTTP reverse proxy in front of Apache HTTP web > server? > > Though I have graduated with a Diploma in Computer Networking (3 > Distinctions and 1 A) from Singapore Polytechnic in 2017, I am not > that good and experienced with networking. The course curriculum is > based on CCNA Routing and Switching. > > Hence I would appreciate any advice. If you can show me a network > topology diagram of the Apache HTTP web server, Varnish Cache, > firewall, router and network switch, it would greatly help me in my > understanding. > > Thank you. > > Regards, > > Mr. Turritopsis Dohrnii Teo En Ming > Targeted Individual in Singapore > 28 July 2022 Thu > Blogs: > https://tdtemcerts.blogspot.com > https://tdtemcerts.wordpress.com > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -- > > proGIS Software & Beratung > Inh. Dipl.-Ing. 
Andreas Fassl > > Hohenzollernring 88 > 50672 Köln > > Tel: 0221 - 8888 109 - 0 > Fax: 0221 - 8888 109 - 99 > web: http://www.progis.de > > From lee.hambley at gmail.com Thu Jul 28 12:04:15 2022 From: lee.hambley at gmail.com (Lee Hambley) Date: Thu, 28 Jul 2022 14:04:15 +0200 Subject: Varnish attempting to serve disconnected clients Message-ID: Dear List, I'm debugging a problem with our Varnish, and our QA folks found an interesting case. Ultimately this breadcrumb trail was discovered looking into our varnishes having an enormous number of open connections in CLOSE_WAIT status, in a way that doesn't appear to be ideal connection reuse, but rather dead connections that we need to close out; nonetheless we're focused on a slightly more specific sub-issue of that issue right now: We have noticed that if a request is eligible to be coalesced, but the impatient client disconnects before the request is served, Varnish continues to try and serve that request by going to the backend even after the client is disconnected. I suspect in our case, we can disable request coalescing, but I didn't want to miss an opportunity to report a possible bug, or learn something about a corner of Varnish I don't know well... here's our setup: - With Varnish 6.6.1 - Given a toy python script backend which answers 200 OK to the health check, but answers after 10s to the other requests with an `HTTP 500` error; [source linked below] - Given the following config [below] - started with `/usr/local/sbin/varnishd -n /usr/local/var/varnish -F -f $PWD/foo.vcl -a test=:21601,HTTP` - When running `for i in $(seq 10); do curl -m 1 localhost:21601/ &; done` (ampersand for background is important) - Varnish makes 1 request to the backend, coalescing the others - Clients all disconnect thanks to `curl -m 1` (`--max-time`) (or `0.1`, no difference, naturally) - First request completed with `HTTP 500`, Varnish continues to retry requests for disconnected clients.
(logs without health checks below) In the real set-up Varnish actually only handles the next request, then the next, then the next, one every 10 seconds; I didn't take the time to reproduce that in this set-up as I believe it's a bug that Varnish continues to do work for disconnected clients. I guess in my toy we benefit from hit-for-pass, and in our real world setup that's not true. I can somehow imagine this as a feature (populate the cache even though the client went away) but queueing hundreds or thousands of requests, and nibbling away at them one-by-one even after clients have long since hung up, is causing resource exhaustion for us; we can tune the configs significantly now that we know the issue, but we'd love to get some opinionated feedback on what would be an idiomatic approach to this.
[config] vcl 4.0; backend foo { .host = "127.0.0.1"; .port = "3100"; .probe = { .url = "/healthz"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; } } sub vcl_recv { return (hash); } [toy python server logs] python3 ./server.py socket binded to port 3100 2022-07-28 13:45:05.589997 socket is listening 2022-07-28 13:45:17.336371 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 32771\r\n\r\n' ^^ first request 2022-07-28 13:45:27.425837 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 163841\r\n\r\n' 2022-07-28 13:45:27.437121 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 229377\r\n\r\n' 2022-07-28 13:45:27.437913 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 131075\r\n\r\n' 2022-07-28 13:45:27.437999 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 196609\r\n\r\n' 2022-07-28 13:45:27.438363 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 98311\r\n\r\n' 2022-07-28 13:45:27.437782 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 65545\r\n\r\n' 2022-07-28 13:45:27.438033 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 327681\r\n\r\n' 2022-07-28 13:45:27.439453 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 294913\r\n\r\n' 2022-07-28 13:45:27.438401 from client: b'GET / 
HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For : ::1\r\nX-Varnish: 262145\r\n\r\n' ^^ other requests [toy python server code] https://gist.github.com/leehambley/fa634f91936b1d30422e3af96ba2eec5 Lee Hambley http://lee.hambley.name/ +49 (0) 170 298 5667 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Thu Jul 28 14:21:12 2022 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 28 Jul 2022 14:21:12 +0000 Subject: Varnish attempting to serve disconnected clients In-Reply-To: References: Message-ID: On Thu, Jul 28, 2022 at 12:07 PM Lee Hambley wrote: > > Dear List, > > I'm debugging a problem with our Varnish, and our QA folks found an interesting case. > > Ultimately this breadcrumb trail was discovered looking into our varnishes having an enormous number of open connections in CLOSE_WAIT status, in a way that doesn't appear to be ideal connection reuse, rather dead connections that we need to close out; non the less we're focused on a slightly more specific sub-issue of that issue right now: > > We have noticed that if a request is eligible to be coalesced, but the impatient client disconnects before the request is served, Varnish continues to try and serve that request by going to the backend even after the client is disconnected. Hi Lee, Reading up to this point I'm convinced that you have a pretty good understanding of the problem (but I will read the rest, don't worry). This is something we fixed a while ago in Varnish Enterprise, but it took several painful attempts to get it right. While this problem may look straightforward, it comes with a flurry of details. The basic idea is to implement a way to "walk away" from the waiting list, because in this context "there is no rush". The problem is that your client could be stuck in a variety of places, like for example 3 levels down a parallel ESI tree over an HTTP/2 connection. 
Another problem is that you short-circuit the normal delivery path, so you need to make sure that the client task and session are accurately torn down. This is something we wanted to study more before submitting, in case we could come up with a less complicated solution, but either way we also need time to work on porting this nontrivial patch. > I suspect in our case, we can disable request coalescing, but I didn't want to miss an opportunity to report a possible bug, or learn something about a corner of Varnish I don't know well... here's our setup: Thanks a lot, very much appreciated! > - With Varnish 6.6.1 > - Given a toy python script backend which answers 200 OK to the health check, but answers after 10s to the other requests with an `HTTP 500` error; [source linked below] > - Given the following config [below] > - started with `/usr/local/sbin/varnishd -n /usr/local/var/varnish -F -f $PWD/foo.vcl -a test=:21601,HTTP` > - When running `for i in $(seq 10); do curl -m 1 localhost:21601/ &; done` (ampersand for background is important) > - Varnish makes 1 request to the backend, coalescing the others > - Clients all disconnect thanks to `curl -m 1` (`--max-time`) (or `0.1`, no difference, naturally) > - First request completed with `HTTP 500`, Varnish continues to retry requests for disconnected clients. (logs without health checks below) > > In the real set-up Varnish actually only handles the next request, then the next, then the next one each 10 seconds, I didn't take the time to reproduce that in this set-up as I believe it's a bug that Varnish continues to do work for disconnected clients. I guess in my toy we benefit from hit-for-pass, and in our real world setup that's not true. Yes, either hit-for-pass (return(pass)), hit-for-miss (beresp.uncacheable) or a very small TTL.
> I can somehow imagine this as a feature (populate the cache even though the client went away) but queueing hundreds or thousands of requests, and nibbling away at them one-by-one even after clients are long-since hung up is causing resource exhaustion for us; we can tune the configs significantly now that we know the issue, but we'd love to get some opinionated feedback on what would be an idiomatic approach to this. It's the lack of (hitpass, hitmiss or regular) object that causes waiting list serialization, we don't need to implement a new feature in that regard. > - Are we doing something wrong? Probably a zero TTL, otherwise running into the waiting list nepotism misbehavior which is not your fault. > - Should varnish still go to the backend to serve disconnected clients? No, that would be the walkaway feature. > - Is this a bug, should I report it somewhere more formally and attach the repro case a bit more diligently? We don't need a bug report, and I believe we have at least one reproducer in the enterprise test suite. > Warm regards everyone, thanks for Varnish, an active list, and active support on StackOverflow and such. 
> > [config] > vcl 4.0; > backend foo { > .host = "127.0.0.1"; > .port = "3100"; > .probe = { > .url = "/healthz"; > .interval = 5s; > .timeout = 1s; > .window = 5; > .threshold = 3; > } > } > sub vcl_recv { > return (hash); > } > > [toy python server logs] > python3 ./server.py > socket binded to port 3100 > 2022-07-28 13:45:05.589997 socket is listening > 2022-07-28 13:45:17.336371 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 32771\r\n\r\n' > ^^ first request > 2022-07-28 13:45:27.425837 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 163841\r\n\r\n' > 2022-07-28 13:45:27.437121 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 229377\r\n\r\n' > 2022-07-28 13:45:27.437913 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 131075\r\n\r\n' > 2022-07-28 13:45:27.437999 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 196609\r\n\r\n' > 2022-07-28 13:45:27.438363 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 98311\r\n\r\n' > 2022-07-28 13:45:27.437782 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 65545\r\n\r\n' > 2022-07-28 13:45:27.438033 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 327681\r\n\r\n' > 2022-07-28 13:45:27.439453 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 294913\r\n\r\n' > 
2022-07-28 13:45:27.438401 from client: b'GET / HTTP/1.1\r\nHost: localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: ::1\r\nX-Varnish: 262145\r\n\r\n' > ^^ other requests > [toy python server code] > https://gist.github.com/leehambley/fa634f91936b1d30422e3af96ba2eec5 > > Lee Hambley > http://lee.hambley.name/ > +49 (0) 170 298 5667 > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc