From scan-admin at coverity.com Sun Oct 6 11:36:21 2019
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 06 Oct 2019 11:36:21 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5d99d1b5715f1_6a302ae0c3490f4c8773f@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at https://u2389337.ct.sendgrid.net/wf/click?upn=08onrYu34A-2BWcWUl-2F-2BfV0V05UPxvVjWch-2Bd2MGckcRaZSCEJOPR4AEUn0hVASTtlJ23U2ffwbN1LtJbHcOCfQg-3D-3D_wrU9d1VlqIiuL6N0zVMze4Ep-2FR7u99vtLlE-2BlH3ENQAiPHWuFYNYnAzw1RXZM3TaGxaFN-2B3EJDYa-2FoiWUPIxVQpkkXSOvNdzy49UuMxY5jkSUVTHSXlxjvdGFD0oPGcmfr2Wqb5ROX1K7tH-2B3-2BS3aESDUEMmXKBesy7hr46YCH9bqD9tYfjMiM6IbKmExZ7UDjf6yJtQ8cSjkCQPfb5yJ49-2Fvc7fy5GVJDmeXdxPjEI-3D

Build ID: 275535

Analysis Summary:
  New defects found: 6
  Defects eliminated: 11

If you have difficulty understanding any defects, email us at scan-admin at coverity.com, or post your question to StackOverflow at https://u2389337.ct.sendgrid.net/wf/click?upn=OgIsEqWzmIl4S-2FzEUMxLXL-2BukuZt9UUdRZhgmgzAKchwAzH1nH3073xDEXNRgHN6zzUI-2FRfbrE6mNOeeukHUQw-3D-3D_wrU9d1VlqIiuL6N0zVMze4Ep-2FR7u99vtLlE-2BlH3ENQAiPHWuFYNYnAzw1RXZM3TaGxaFN-2B3EJDYa-2FoiWUPIxVYt3KG9To1YY4qKup-2FlXf3EWUebIY40ro-2B-2BHNXIR7EKf4y9YXOVNcEpRuSh2-2FKlJkfz6175uFbGA65Kulamz-2Fw1IeIk-2BORIYJegucwET1YlhAc1xT4qSuP0bvPhgCFXFNm1BmqIuK0sSOG1d0fSQ9jY-3D

From martin at uplex.de Thu Oct 10 10:19:22 2019
From: martin at uplex.de (Martin Gaitzsch)
Date: Thu, 10 Oct 2019 12:19:22 +0200
Subject: backend-304 magic header merge in backend_response
Message-ID:

Good morning!

Yesterday I spent quite a while trying to understand the following behavior
of varnish (simplified):

first request
-------------
hash, backend response 200, no cache-control, cacheable, ttl 2m, 6h grace,
and set "Cache-Control: no-cache" to make the client return to varnish

<...2m ttl passes...>

another request
---------------
triggers bg_fetch because of grace
backend response 304, again no cache-control from the backend
BUT: the "OBJ".http.Cache-Control is merged into the response! So in
backend_response beresp.http.Cache-Control is "no-cache" - which is not
intuitive and also triggers code like this in backend_response (also see
builtin.vcl):

if (!beresp.http.Surrogate-Control &&
    beresp.http.Cache-Control ~ "(?i:no-cache|no-store|private)") {
    # Mark as "Hit-For-Miss" for the next 2 minutes
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
    return (deliver);
}

...and that's not what I intended, because my very useful grace object gets
lost! (which we need during deployment 'downtimes')

Putting (! beresp.was_304) around the if-statement is not a solution because
this will fail in other situations. Valid workarounds are:

1. deleting Cache-Control in backend_response and only setting it in deliver
   downsides:
   - splits the "object preparation code" across backend_response and deliver,
     which makes the VCL more complex than needed
   - needs to be done on every single response again instead of once in
     backend_response

or

2. saving the original Cache-Control to an additional header and doing
   additional evaluations based on this header (a rough sketch follows below)
   downsides:
   - need to delete this header again in deliver on every response
   - the real backend response headers of the current response are still not
     available for decisions; the merged headers are not sufficient to
     reconstruct the real backend response headers
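A rough sketch of what workaround 2 could look like; the header name
X-Backend-Cache-Control and the exact placement are illustrative assumptions,
not something taken from builtin.vcl:

sub vcl_backend_response {
    if (!beresp.was_304) {
        # On a real (non-304) fetch, remember what the backend actually sent;
        # the stored copy is merged back into beresp on later 304 revalidations.
        set beresp.http.X-Backend-Cache-Control = beresp.http.Cache-Control;
    }
    # Base the hit-for-miss decision on the saved header instead of the
    # (possibly merged or VCL-modified) Cache-Control header.
    if (!beresp.http.Surrogate-Control &&
        beresp.http.X-Backend-Cache-Control ~ "(?i:no-cache|no-store|private)") {
        # Mark as "Hit-For-Miss" for the next 2 minutes
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
        return (deliver);
    }
}

sub vcl_deliver {
    # Downside: the helper header has to be removed again on every delivery.
    unset resp.http.X-Backend-Cache-Control;
}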
Both options are not very appealing to me. How about giving access to obj.*
in backend_response and putting the real backend response headers into
beresp.*, as the name indicates? This would be a clean, transparent and more
intuitive solution which does not need header merge-magic.

Best
Martin

From scan-admin at coverity.com Sun Oct 13 11:36:20 2019
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 13 Oct 2019 11:36:20 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5da30c3419b07_1f142ae0c3490f4c877c8@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at https://u2389337.ct.sendgrid.net/wf/click?upn=08onrYu34A-2BWcWUl-2F-2BfV0V05UPxvVjWch-2Bd2MGckcRaZSCEJOPR4AEUn0hVASTtlJ23U2ffwbN1LtJbHcOCfQg-3D-3D_wrU9d1VlqIiuL6N0zVMze4Ep-2FR7u99vtLlE-2BlH3ENQC29joy7X2nh239Fh0u1JCTXgbrWSf6tc2AV3tqWN0LAB-2FrdaHX-2BgXHjpm3CsTHx7B7JGAudAxlWjxedJbWZ9DEd6AMyHDyOWHSbPR6iyQnG86FUIrQxPyF4-2Fm4-2B21foSMiTDTRUpnaZMk3-2FJvF7Q-2BGQa05AX4vIllz99UUjIBd7aX4xN9xXskqTaCo2Wjb6yk-3D

Build ID: 276603

Analysis Summary:
  New defects found: 6
  Defects eliminated: 11

If you have difficulty understanding any defects, email us at scan-admin at coverity.com, or post your question to StackOverflow at https://u2389337.ct.sendgrid.net/wf/click?upn=OgIsEqWzmIl4S-2FzEUMxLXL-2BukuZt9UUdRZhgmgzAKchwAzH1nH3073xDEXNRgHN6zzUI-2FRfbrE6mNOeeukHUQw-3D-3D_wrU9d1VlqIiuL6N0zVMze4Ep-2FR7u99vtLlE-2BlH3ENQC29joy7X2nh239Fh0u1JCTXgbrWSf6tc2AV3tqWN0LAJ1w4xAEzTTzxMruAMia2rg4fEjYdXOg9RmDtaHJ-2BnfroCNEzd2D5a45fpot1EcgJYbYsn6YLOvdUkLpLuwio6dL8E1CiXzGA4f17DZqB0kkdc9x1MJnshJiqrmKexfvte7yDuA1-2FyYY2Xwib7SW8nw-3D

From santoshab000 at gmail.com Mon Oct 14 14:13:23 2019
From: santoshab000 at gmail.com (Santosh Abraham)
Date: Mon, 14 Oct 2019 07:13:23 -0700
Subject: BAN variant of varnish cache
Message-ID:

Using *BAN*, is it possible to invalidate only a particular variant of a
cached object? We use the locale in the HTTP Vary header to keep different
variants of the cache, but the *BAN* we have configured is invalidating all
of the cache.

*BAN configuration*

sub vcl_backend_response {
    set beresp.http.x-ban-locale = bereq.http.X-BOLT-SITE-LOCALE;
    set beresp.http.x-url = bereq.url;
}

sub vcl_deliver {
    unset resp.http.x-ban-locale;
    unset resp.http.x-url;
}

sub vcl_recv {
    if (req.method == "BAN") {
        if (req.http.x-ban-regex) {
            ban("obj.http.x-ban-locale == " + req.http.X-BOLT-SITE-LOCALE +
                " && obj.http.x-url ~ " + req.http.x-ban-regex);
        } else {
            ban("obj.http.x-ban-locale == " + req.http.X-BOLT-SITE-LOCALE +
                " && obj.http.x-url == " + req.url);
        }
        return (synth(200, "Ban Added"));
    }
}

*CURL command to add the BAN*

curl -v -XBAN -H 'x-ban-regex:^/boltapi/v1/ads*(\?.*)*$' -H 'X-BOLT-SITE-LOCALE:en_ZA' -H 'Vary:en_ZA' {hostname}:{port}
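For reference, a minimal illustration (not a fix) of what the string
concatenation above produces, assuming the header values from the example
curl command:

sub vcl_recv {
    if (req.method == "BAN" && req.http.x-ban-regex) {
        # With X-BOLT-SITE-LOCALE: en_ZA and
        # x-ban-regex: ^/boltapi/v1/ads*(\?.*)*$ from the curl command,
        # the expression handed to ban() above is equivalent to this literal
        # ban, which is then tested against the stored x-ban-locale/x-url
        # headers of every cached object:
        ban("obj.http.x-ban-locale == en_ZA && obj.http.x-url ~ ^/boltapi/v1/ads*(\?.*)*$");
        return (synth(200, "Ban Added"));
    }
}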
From scan-admin at coverity.com Sun Oct 20 11:37:11 2019
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 20 Oct 2019 11:37:11 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5dac46e7a4808_38c12ae0c3490f4c877d7@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at https://u2389337.ct.sendgrid.net/wf/click?upn=08onrYu34A-2BWcWUl-2F-2BfV0V05UPxvVjWch-2Bd2MGckcRaZSCEJOPR4AEUn0hVASTtlJ23U2ffwbN1LtJbHcOCfQg-3D-3D_wrU9d1VlqIiuL6N0zVMze4Ep-2FR7u99vtLlE-2BlH3ENQDfiKitf3Jjcc8pIaPz5HWCM2LcO53-2FTHpEYf6pmjmEkZrygLLE0Nwx3FFFtTYspHVbefRr9mFgV-2FHzLMdYWk9CwtxSc801-2Fe7POCSQLeHRY7bdX6MjWbhlWjOAwWgZhcBIKRNxupNDm-2BGRZzuqqlO7x2vzxifRtbvrxmy9StHSnS43a4viWxLc-2FHTdIu8yR5Y-3D

Build ID: 277624

Analysis Summary:
  New defects found: 6
  Defects eliminated: 11

If you have difficulty understanding any defects, email us at scan-admin at coverity.com, or post your question to StackOverflow at https://u2389337.ct.sendgrid.net/wf/click?upn=OgIsEqWzmIl4S-2FzEUMxLXL-2BukuZt9UUdRZhgmgzAKchwAzH1nH3073xDEXNRgHN6zzUI-2FRfbrE6mNOeeukHUQw-3D-3D_wrU9d1VlqIiuL6N0zVMze4Ep-2FR7u99vtLlE-2BlH3ENQDfiKitf3Jjcc8pIaPz5HWCM2LcO53-2FTHpEYf6pmjmEkezLBLZp1-2BW3AUBXkCGFjoNen24F-2FpfvBL3dKPA4JhCArwZN-2B-2FAZzQpjDiLrzDwAasLzx-2B-2FDYr8nB0A-2FieNIhUHwrKXvGV-2BrYYWcpfXlkwEt8qo2vDq2BytO6eMosOOD5Xs8Q91-2FC1-2B8pSkHH93rd5A-3D

From slink at schokola.de Tue Oct 22 16:24:11 2019
From: slink at schokola.de (Nils Goroll)
Date: Tue, 22 Oct 2019 18:24:11 +0200
Subject: backend-304 magic header merge in backend_response
In-Reply-To:
References:
Message-ID: <3835bbb0-f4c2-81be-ca1d-a2803d9e09df@schokola.de>

FTR, this is being continued in a github issue:

https://github.com/varnishcache/varnish-cache/issues/3102

On 10/10/2019 12:19, Martin Gaitzsch wrote:
> Good morning!
>
> Yesterday I spent quite a while trying to understand the following behavior
> of varnish (simplified):
>
> first request
> -------------
> hash, backend response 200, no cache-control, cacheable, ttl 2m, 6h grace,
> and set "Cache-Control: no-cache" to make the client return to varnish
>
> <...2m ttl passes...>
>
> another request
> ---------------
> triggers bg_fetch because of grace
> backend response 304, again no cache-control from the backend
> BUT: the "OBJ".http.Cache-Control is merged into the response! So in
> backend_response beresp.http.Cache-Control is "no-cache" - which is not
> intuitive and also triggers code like this in backend_response (also see
> builtin.vcl):
>
> if (!beresp.http.Surrogate-Control &&
>     beresp.http.Cache-Control ~ "(?i:no-cache|no-store|private)") {
>     # Mark as "Hit-For-Miss" for the next 2 minutes
>     set beresp.ttl = 120s;
>     set beresp.uncacheable = true;
>     return (deliver);
> }
>
> ...and that's not what I intended, because my very useful grace object gets
> lost! (which we need during deployment 'downtimes')
>
> Putting (! beresp.was_304) around the if-statement is not a solution
> because this will fail in other situations. Valid workarounds are:
>
> 1. deleting Cache-Control in backend_response and only setting it in deliver
>    downsides:
>    - splits the "object preparation code" across backend_response and
>      deliver, which makes the VCL more complex than needed
>    - needs to be done on every single response again instead of once in
>      backend_response
>
> or
>
> 2. saving the original Cache-Control to an additional header and doing
>    additional evaluations based on this header
>    downsides:
>    - need to delete this header again in deliver on every response
>    - the real backend response headers of the current response are still not
>      available for decisions; the merged headers are not sufficient to
>      reconstruct the real backend response headers
>
> Both options are not very appealing to me. How about giving access to
> obj.* in backend_response and putting the real backend response headers
> into beresp.*, as the name indicates?
> This would be a clean, transparent
> and more intuitive solution which does not need header merge-magic.
>
> Best
> Martin
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev

From phk at phk.freebsd.dk Thu Oct 24 09:10:06 2019
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 24 Oct 2019 09:10:06 +0000
Subject: #3093 proposed patch
Message-ID: <79306.1571908206@critter.freebsd.dk>

I'm on the train with shitty connectivity, so I will not attempt a pull
request, but this is a proposed DTRT patch for 3093.

(Note that the vtc in 3093 fails, because this DTRT)

Comments & Tests welcome.

Spotted along the way: Should we allow std.cache_req_body() also in
vcl_backend_fetch{} ?

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: r3093.patch
Type: text/x-diff
Size: 2650 bytes
Desc: r3093.patch
URL:

From martin at varnish-software.com Thu Oct 24 11:43:33 2019
From: martin at varnish-software.com (Martin Blix Grydeland)
Date: Thu, 24 Oct 2019 13:43:33 +0200
Subject: #3093 proposed patch
In-Reply-To: <79306.1571908206@critter.freebsd.dk>
References: <79306.1571908206@critter.freebsd.dk>
Message-ID:

That's cool, you managed to get that done in a lot fewer lines of code than
I would have thought would be needed.

But I do not think that this patch alone will be enough to fix #3093. The new
state that prevents the original problem is only reached if
req->req_body_status == REQ_BODY_CACHED, and that is only the case if
'std.cache_req_body()' was called. That is not a prerequisite for the bug
(even though it was used in the test case), and if it wasn't called, I believe
we could then again reach a state where we, on retry, send a bereq with a
C-L but no request body bytes, eventually timing out.

For this reason I think we need the code from the initial patch as well, that
is, keeping track of whether there was a request body in the first place, and
not allowing retries if it isn't available when we need it.

> Spotted along the way: Should we allow std.cache_req_body() also
> in vcl_backend_fetch{} ?

Wouldn't that be too late for a lot of cases? On a pure background fetch, the
client would've continued on its merry way by the time vcl_backend_fetch is
run.

-Martin

On Thu, 24 Oct 2019 at 11:11, Poul-Henning Kamp wrote:

> I'm on the train with shitty connectivity, so I will not attempt
> a pull request, but this is a proposed DTRT patch for 3093
>
> (Note that the vtc in 3093 fails, because this DTRT)
>
> Comments & Tests welcome.
>
> Spotted along the way: Should we allow std.cache_req_body() also
> in vcl_backend_fetch{} ?
>
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev

--
*Martin Blix Grydeland*
Senior Developer | Varnish Software
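For reference, a minimal sketch of how std.cache_req_body() is called from
vcl_recv today; the backend address, the POST check and the 100KB cap are
illustrative assumptions, not part of the proposed patch:

vcl 4.1;

import std;

backend default { .host = "127.0.0.1"; .port = "8080"; }  # illustrative backend

sub vcl_recv {
    # Buffer up to 100KB of the request body while the client is still
    # attached, so that a retried backend fetch can replay it later.
    if (req.method == "POST" && !std.cache_req_body(100KB)) {
        # The body was larger than the cap and could not be cached.
        return (synth(413, "Request body too large"));
    }
}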
From slink at schokola.de Thu Oct 24 11:47:20 2019
From: slink at schokola.de (Nils Goroll)
Date: Thu, 24 Oct 2019 13:47:20 +0200
Subject: #3093 proposed patch
In-Reply-To: <79306.1571908206@critter.freebsd.dk>
References: <79306.1571908206@critter.freebsd.dk>
Message-ID: <1218a323-07f6-7a37-ecc6-0caf3bccde32@schokola.de>

I am not only on a shitty train connection, but also otherwise occupied today.
I hope to have time to look closer tomorrow.

From phk at phk.freebsd.dk Thu Oct 24 11:54:13 2019
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 24 Oct 2019 11:54:13 +0000
Subject: #3093 proposed patch
In-Reply-To:
References: <79306.1571908206@critter.freebsd.dk>
Message-ID: <81854.1571918053@critter.freebsd.dk>

--------
In message , Martin Blix Grydeland writes:

>But I do not think that this patch alone will be enough to fix #3093.

My patch was only meant as a proof of concept for the "DTRT" part of the
topic/discussion.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From scan-admin at coverity.com Sun Oct 27 11:37:14 2019
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 27 Oct 2019 11:37:14 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5db58169e53f6_bd82ae0c3490f4c87730@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at https://u2389337.ct.sendgrid.net/wf/click?upn=08onrYu34A-2BWcWUl-2F-2BfV0V05UPxvVjWch-2Bd2MGckcRaZSCEJOPR4AEUn0hVASTtlJ23U2ffwbN1LtJbHcOCfQg-3D-3D_wrU9d1VlqIiuL6N0zVMze4Ep-2FR7u99vtLlE-2BlH3ENQDM97ACN7YfllfWHZ9PxDr3-2F4CGsUG2trKbd0U-2BAULbOtO8h35cDGYSHVJAytCWA9tT59oKhTi0TI6VYaR6vR7GIVEPST3RAaAbSh2mYlvf5IQk3ZqiyGmFT1XmMHPzMrEaNYPduMypwl254op-2FRwzWHxTiWasO6aPGFgC1pnqzzZvczPVsJ1EEcBjYfzV-2Bpjc-3D

Build ID: 278768

Analysis Summary:
  New defects found: 0
  Defects eliminated: 11

From phk at phk.freebsd.dk Wed Oct 30 08:14:42 2019
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 30 Oct 2019 08:14:42 +0000
Subject: apology...
Message-ID: <58236.1572423282@critter.freebsd.dk>

I want to apologize for not being at the bugwash on Monday. I had fully
planned to attend, but something amazing happened and stole my time.

Long story short: the Vice-President of the University of Haute-Alsace drove
1000 km to Denmark to help us get our Rational R1000/400 computer working,
and when I should have been bug-washing, the machine was booting up.

I promise not to make it a habit.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.