From scan-admin at coverity.com Mon May 5 12:25:11 2025
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Mon, 05 May 2025 12:25:11 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <6818ae2780536_4ebab2b75a4db599434291@prd-scan-dashboard-0.mail>

From phk at phk.freebsd.dk Tue May 6 13:13:22 2025
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 06 May 2025 13:13:22 +0000
Subject: vtest and varnish-cache repo relationship
In-Reply-To: <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de>
References: <84a4eb3c-6ec7-412d-bc66-897520ef7f1f@uplex.de> <202505051244.545CiRZ5008036@critter.freebsd.dk> <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de>
Message-ID: <202505061313.546DDM5G029159@critter.freebsd.dk>

This started as a private conversation, but I think it belongs on
varnish-dev now:

I'm not really sure I know what the "previous plan" was any more, but I
want this sorted out for 8.0.

My conversation with dridi@ yesterday revealed that we may have
misunderstood each other somewhat, as there are two different issues
involving shared libraries.

The first issue, let us call it "vtest+lib", is to make vtest a package
which installs a header, static and/or dynamic libraries, and a binary
which uses the shlib, so that the varnish-cache project, and possibly
haproxy, can link our specific test-binary against the library of
choice.

This only really makes sense if we go the full monty with the vtest
repo: publishing releases, making packages, pushing them into distros
etc.  The alternative is a simple source dependency, using some variant
of git submodules or shell-scripts to do the dirty deed.

The second issue is that vtest uses a lot of stuff copied from
varnish-cache's lib directory.

IMO that is just an annoyance.

To fix it "properly", the vtest repo could pull that stuff out of the
varnish-cache repo, but that would pretty much defeat the entire reason
why we created the vtest repo to begin with.

Alternatively, we could make a third repo containing "varnish-lib",
which both the varnish-cache and vtest repos could then depend on, as
well as any other repos which have copied stuff from there over the
years.

There is some good stuff in varnish-cache/lib, but personally I do not
feel it is enough for us to start yet another repository.

If we do create a varnish-lib repo, then the next question again
becomes "package or source dependency".

(I thought what I nixed at some previous occasion was the "varnish-lib"
idea, but dridi@ thought it was the "vtest+lib" idea.)

Finally there is an alternative where we do "vtest+lib", let the vtest
repo take over responsibility for the lib stuff and have the
varnish-cache repo get it from there.  That would add no further
dependencies, eliminate the duplication of the lib-stuff, and give any
third parties a reasonably sized package to depend on if they want to
use the lib-stuff.

So to me the core question seems to be: Are we willing/going to make
vtest+lib packages?

And of course, since HAProxy also uses vtest, it is not something we
alone decide... (Willy cc'ed)

Poul-Henning

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
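To make the "vtest+lib" option a little more concrete, the consumer side
could look roughly like the sketch below: varnish-cache's own test
driver linking against an installed libvtest instead of carrying a copy
of the sources.  Everything in it is hypothetical - the <vtest.h>
header, the vtest_run_file() entry point and the -lvtest link line are
invented for illustration; no such installed interface exists today.

    /*
     * Hypothetical consumer of an installed "vtest+lib" package.
     * Neither <vtest.h> nor vtest_run_file() exist today; they only
     * illustrate what "a binary which uses the shlib" could look like
     * from the varnish-cache side.
     *
     * Build sketch:  cc -o vc_test vc_test.c -lvtest
     */
    #include <stdio.h>

    #include <vtest.h>              /* hypothetical installed header */

    int
    main(int argc, char **argv)
    {
            int i, err = 0;

            if (argc < 2) {
                    fprintf(stderr, "usage: %s test.vtc ...\n", argv[0]);
                    return (1);
            }
            for (i = 1; i < argc; i++) {
                    /* hypothetical entry point: run one .vtc file */
                    if (vtest_run_file(argv[i]) != 0) {
                            fprintf(stderr, "FAIL %s\n", argv[i]);
                            err = 1;
                    }
            }
            return (err);
    }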
From jw at uplex.de Tue May 6 14:43:28 2025
From: jw at uplex.de (Julian Wiesener)
Date: Tue, 06 May 2025 17:43:28 +0300
Subject: vtest and varnish-cache repo relationship
In-Reply-To: <202505061313.546DDM5G029159@critter.freebsd.dk>
References: <84a4eb3c-6ec7-412d-bc66-897520ef7f1f@uplex.de> <202505051244.545CiRZ5008036@critter.freebsd.dk> <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de> <202505061313.546DDM5G029159@critter.freebsd.dk>
Message-ID: <5cd8771fe119bd40365f660314afe3ecba322d78.camel@uplex.de>

Hi,

as I'm working on VTest HTTP/3, this discussion is of particular
interest to me.  Still WIP, I have not shared any code, so I assume
most of you are not aware of my efforts; thanks for considering my
opinion.

I used ngtcp2/nghttp3 and implemented the needed VUDP* equivalents of
what is VTCP* in Varnish as well as its VTest copy (see lib/vtcp.h).
To me it would make a lot of sense to have it in a shared library,
basically the "varnish-lib" idea.  Of course it would mean that it is
OK to have (at least optional) dependencies on ngtcp2, nghttp3 and an
SSL library*.

IIRC HAProxy already has another HTTP/3 implementation, so I would
assume they would not be interested in using "varnish-lib" outside of
VTest, but there might still be room for collaboration in the future.

For Varnish, I think it would make sense to use the proposed lib for
HTTP/3.  I would keep the server init (TLS key reading etc.) out of the
shared lib, so Varnish can still use other means (keyless).

* ATM I use WolfSSL, as my OS comes with a LibreSSL without QUIC
support; however, new enough OpenSSL, its derivatives and GnuTLS can be
used with ngtcp2.

Julian
-- 
**  *  *  UPLEX - Nils Goroll Systemoptimierung
Scheffelstraße 32
22301 Hamburg
tel +49 40 60945064
http://uplex.de/

From w at 1wt.eu Sat May 10 17:37:34 2025
From: w at 1wt.eu (Willy Tarreau)
Date: Sat, 10 May 2025 19:37:34 +0200
Subject: vtest and varnish-cache repo relationship
In-Reply-To: <202505061313.546DDM5G029159@critter.freebsd.dk>
References: <84a4eb3c-6ec7-412d-bc66-897520ef7f1f@uplex.de> <202505051244.545CiRZ5008036@critter.freebsd.dk> <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de> <202505061313.546DDM5G029159@critter.freebsd.dk>
Message-ID: <20250510173734.GA31666@1wt.eu>

Hi Poul-Henning!

On Tue, May 06, 2025 at 01:13:22PM +0000, Poul-Henning Kamp wrote:
> This started as a private conversation, but I think it belongs
> on varnish-dev now:
> 
> I'm not really sure I know what the "previous plan" was any more, but
> I want this sorted out for 8.0.
> 
> My conversation with dridi@ yesterday revealed that we may have
> misunderstood each other somewhat, as there are two different issues
> involving shared libraries.
> 
> The first issue, let us call it "vtest+lib" is to make vtest a
> package which installs an include, static and/or dynamic libraries,
> and a binary which uses the shlib, so that the varnish-cache project,
> and possibly haproxy, can link our specific test-binary against the
> library of choice.
> 
> This only really makes sense if we go the full monty with the vtest
> repos: Publishing releases, making packages, pushing them into
> distros etc. The alternative is a simple source dependency, using
> some variant of git submodules or shell-scripts to do the dirty
> deed.
> 
> The second issue is that vtest uses a lot of stuff copied from
> varnish-cache's lib directory.
> 
> IMO that is just an annoyance.
> 
> To fix it "properly", the vtest repo could pull that stuff out of
> the varnish-cache repo, but that would pretty much defeat the entire
> reason why we created the vtest repo to begin with.
> 
> Alternatively, we could make a third repos containing "varnish-lib"
> which then both varnish-cache and vtest repos could depend on, as
> well as any other repos which have copied stuff from there over the
> years.
> 
> There is some good stuff in varnish-cache/lib, but personally I
> do not feel it is enough for us to start yet another repository.
> 
> If we do create a varnish-lib repos, then the next question again
> becomes "package or source dependency".
> 
> (I thought what I nixed at some previous occation was the "varnish-lib"
> idea, but dridi@ thought it was the "vtest+lib" idea.)
> 
> Finally there is an alternative where we do "vtest+lib", let the
> vtest repos take over responsibility for the lib stuff and have
> varnish-cache repos get it from there. That would add no further
> dependencies, eliminate the duplication of the lib-stuff, and
> give any third parties a reasonably sized package to depend on
> if they want to use the lib-stuff.
> 
> So to me the core question seems to be: Are we willing/going to
> make vtest+lib packages ?
> 
> And of course, since HAproxy also uses vtest, it is not something
> we alone decide... (Willy cc'ed)

Hmmm, I'm seeing that you're facing the same issues that everyone faces
when starting to reuse code.  IMHO, going the "varnish-lib" way is
going to be annoying.  I've experienced the same with some of my
personal projects that are used in haproxy (libslz, ebtree etc): in
practice, haproxy is almost the only user, which means that bugs are
most likely to be detected there, and missing features as well.  I
hesitated between adding an external dependency and integrating the
code.  I started with the external dep and it was too painful (even for
end users), so I ended up integrating a copy of the libs (they're tiny)
in the haproxy project.

So this means that haproxy contributors willing to patch certain areas
are sometimes confused about how to proceed: patch directly into
haproxy, or send the patch to the upstream lib and hope that it will
flow down.  But I generally have no problem supporting bidirectional
patching, because such code changes very slowly, so if I have two
patches to port or backport every year, that's no big deal.  Still it's
something you need to keep in mind.

In your case, I sense that this "varnish-lib" would be the library
providing about everything.  I think you'd face a huge overhead by
maintaining it out of the project, for little gain.  What *could*
possibly work would be to make it easy to extract such code parts from
varnish into an external varnish-lib project that is only updated when
needed.  Then vtest would depend on that.  This would mean that you do
have a copy, but not a copy buried into vtest, instead a copy that
constitutes an autonomous project.  It's much easier to keep up to
date, and its sole purpose is to permit other tools to continue to
work.  So basically a full export from time to time does the job *if
needed*.  And it's not *that* unusual.  For example in the Linux
kernel, one can do "make install-headers" and get a full directory of
all the system headers, that is consumed by whatever libc you use.

In any case I don't think that varnish using vtest as an external
dependency would be a good idea.
It would first create an extra maintenance burden for varnish, and
second, create difficulties when other projects start to adopt vtest,
because they will not realize that the changes they propose can have an
impact on varnish itself.

Just my two cents,
Willy

From w at 1wt.eu Sat May 10 17:45:51 2025
From: w at 1wt.eu (Willy Tarreau)
Date: Sat, 10 May 2025 19:45:51 +0200
Subject: vtest and varnish-cache repo relationship
In-Reply-To: <5cd8771fe119bd40365f660314afe3ecba322d78.camel@uplex.de>
References: <84a4eb3c-6ec7-412d-bc66-897520ef7f1f@uplex.de> <202505051244.545CiRZ5008036@critter.freebsd.dk> <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de> <202505061313.546DDM5G029159@critter.freebsd.dk> <5cd8771fe119bd40365f660314afe3ecba322d78.camel@uplex.de>
Message-ID: <20250510174551.GB31666@1wt.eu>

Hi Julian,

On Tue, May 06, 2025 at 05:43:28PM +0300, Julian Wiesener wrote:
> Hi,
> 
> as i'm working on VTest HTTP/3, this discussion is of particular intrest to me.

Oh that's cool!

> Still WIP, i have not shared any code, thus i assume most of you are not
> aware of my efforts, so thanks for considering my opinion.
> 
> I used ngtcp2/nghttp3 and implemented the needed VUDP* equalents for what is
> VTCP* in Varnish as well as its VTest copy (see lib/vtcp.h).
> To me it would make much sense to have it in a shared library, basically the
> "varnish-lib" idea.
> Of course it would mean, that it would be OK, to have (at least optional)
> dependencies on ngtcp2, nghttp3 and a SSL Library*.
> IIRC HAproxy already have an other HTTP/3 implementation, so i would assume
> they would not be intrested, in using "varnish-lib" outside of VTest,
> but there might still be room for collaboration in the future.

We do indeed have our own H3/QUIC implementation, but it's independent
of vtest.  For us, vtest is a totally standalone tool.  We simply
update it from time to time.  Also it totally makes sense to me to use
ngtcp2 and nghttp3 for vtest, because these libs are widely used and
generally considered a reference implementation, something that vtest
would definitely benefit from.

> For Varnish, i think it would make sense to use the poposed lib for HTTP/3, i
> would keep the server init (TLS Key reading etc.) out of the shared lib, so
> Varnish can still use other means (keyless).

I can't speak for that part :-)

> * ATM i use WolfSSL as my OS comes with a LibreSSL without Quic support,
> however, new enough OpenSSL, its derivates and GNUtls can be used with ngtcp2

You may want to have a look at aws-lc.  It's really cool.  It's a fork
of BoringSSL but with a stable API.  As such, it's compatible with the
OpenSSL API, supports QUIC via the de-facto standard API that all libs
now support, and is fast.  Plus it builds easily and relatively quickly
(not as fast as wolfssl though).  On the other hand, wolfssl is so
light and builds so fast that it can also be a source dependency for
the vtest project.  One just needs to make sure to properly configure
it for the local machine.
Hoping this helps,
Willy

From dridi at varni.sh Mon May 12 09:55:51 2025
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 12 May 2025 09:55:51 +0000
Subject: vtest and varnish-cache repo relationship
In-Reply-To: <20250510174551.GB31666@1wt.eu>
References: <84a4eb3c-6ec7-412d-bc66-897520ef7f1f@uplex.de> <202505051244.545CiRZ5008036@critter.freebsd.dk> <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de> <202505061313.546DDM5G029159@critter.freebsd.dk> <5cd8771fe119bd40365f660314afe3ecba322d78.camel@uplex.de> <20250510174551.GB31666@1wt.eu>
Message-ID:

Hello Willy,

> We do indeed have out own H3/QUIC implementation, but it's independent
> on vtest. For us, vtest is a totally standalone tool. We simply update
> it from time to time. Also it totally makes sense to me to use ngtcp2
> and nghttp3 for vtest, because these libs are widely used and generally
> considered as a reference implementation, something that vtest would
> definitely benefit from.

How bad would it be on the haproxy side to maintain haproxy support in
a shared library?

    varnishtest -m libvtc_haproxy.so ...

The idea being that your library gets to register its specific command
callbacks (haproxy, maybe others?) and rely otherwise on built-in
commands like client and server.

> In any case I don't think that varnish using vtest as an external
> dependency would be a good idea. It would first create an extra
> maintenance burden for varnish, and second, create difficulties when
> other projects start to adopt vtest because they will not realize that
> the changes they propose can have an impact on varnish itself.

I agree, and I worry about how much this would compound when LTS
branches are involved, in particular when we need to produce a fix fast
and we can't just reuse the test case without first dealing with vtest
back-ports one way or another.

Dridi

From w at 1wt.eu Mon May 12 12:01:57 2025
From: w at 1wt.eu (Willy Tarreau)
Date: Mon, 12 May 2025 14:01:57 +0200
Subject: vtest and varnish-cache repo relationship
In-Reply-To:
References: <84a4eb3c-6ec7-412d-bc66-897520ef7f1f@uplex.de> <202505051244.545CiRZ5008036@critter.freebsd.dk> <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de> <202505061313.546DDM5G029159@critter.freebsd.dk> <5cd8771fe119bd40365f660314afe3ecba322d78.camel@uplex.de> <20250510174551.GB31666@1wt.eu>
Message-ID: <20250512120157.GB11425@1wt.eu>

Hi Dridi!

On Mon, May 12, 2025 at 09:55:51AM +0000, Dridi Boukelmoune wrote:
> Hello Willy,
> 
> > We do indeed have out own H3/QUIC implementation, but it's independent
> > on vtest. For us, vtest is a totally standalone tool. We simply update
> > it from time to time. Also it totally makes sense to me to use ngtcp2
> > and nghttp3 for vtest, because these libs are widely used and generally
> > considered as a reference implementation, something that vtest would
> > definitely benefit from.
> 
> How bad would it be on the haproxy side to maintain haproxy support in
> a shared library?
> 
> varnishtest -m libvtc_haproxy.so ...
> 
> The idea being that your library gets to register its specific
> commands callbacks (haproxy, maybe others?) and rely otherwise on
> built-in commands like client and server.

I see.  Honestly I have no idea.  I should discuss this with other
devs.
It's possible that for certain things it would ease the implementation
of new features, but my concern is that while at the moment vtest is
just a rolling release with no version, as soon as you start to expose
some compatibility layer for external libs, you're necessarily forced
to be a bit more careful not to break what's exposed so that external
code continues to work.  On the other hand I don't have the feeling
that the current state of vtest makes it die under the pull requests,
so probably the overall effort can remain lower with the per-component
support merged into it.

Maybe we could have a mixed model: a mechanism to load external libs,
with an incentive for getting stable code merged into the project to
help it stay up to date with internal API evolutions.  This way, fast
moving projects could prefer to just rely on their external libs even
if it means regularly rebasing, while reasonably stable ones might
prefer to make sure that their support continues to work smoothly.
Because I'm really convinced that ultimately what matters for testing
is that it represents the least effort for *everyone*.

Willy

From scan-admin at coverity.com Mon May 12 12:36:41 2025
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Mon, 12 May 2025 12:36:41 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <6821eb58984b2_b93792b75a4db59943420@prd-scan-dashboard-0.mail>

From phk at phk.freebsd.dk Wed May 14 09:03:43 2025
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 14 May 2025 09:03:43 +0000
Subject: vtest and varnish-cache repo relationship
In-Reply-To: <20250512120157.GB11425@1wt.eu>
References: <84a4eb3c-6ec7-412d-bc66-897520ef7f1f@uplex.de> <202505051244.545CiRZ5008036@critter.freebsd.dk> <8eba56a5-1719-4f28-84dd-43e36456a8fb@uplex.de> <202505061313.546DDM5G029159@critter.freebsd.dk> <5cd8771fe119bd40365f660314afe3ecba322d78.camel@uplex.de> <20250510174551.GB31666@1wt.eu> <20250512120157.GB11425@1wt.eu>
Message-ID: <202505140903.54E93hLA030558@critter.freebsd.dk>

Thanks for your perspective Willy,

The reason this comes up now is that we currently maintain vtest
(badly) in both the dedicated vtest repository and in the V-C
repository, and that is not ideal.

Slightly complicating things even more, VTest is built on top of
library code which lives in the V-C repo, but is (also) copied to the
VT repo.

I'm trying to find a more sensible way to do things, preferably one
which is easier for everybody.

I don't think anybody is eager to turn VT into a "full project" with
releases, backwards compat and all that, and I don't think anybody
would be happier even if we did.

So the main question for us in V-C is if we continue as now, or use the
VT repo as a sub-repository, so we only maintain VT in one place.

There are secondary (and IMO minor) issues, such as whether we should
recreate the VT repo to retain the full history (the current VT repo
was created by checking in a snapshot).  The actual code would be the
same, but obviously commit IDs would change.

Any thoughts on this?

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
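Dridi's "varnishtest -m libvtc_haproxy.so" idea from the exchange above
could, on the module side, look roughly like the sketch below: the
driver would dlopen() the library and look up a registration entry
point that hands back the commands the module implements.  The
interface shown (struct vtc_cmd_reg, vtc_module_init, the callback
signature) is purely hypothetical; vtest has no such API today.

    /*
     * Hypothetical module side of "varnishtest -m libvtc_haproxy.so".
     * Nothing below exists in vtest; the registration interface is
     * invented for illustration only.
     */
    #include <stddef.h>

    struct vtclog;                  /* assumed opaque vtest types */
    struct vtc_cmd_ctx;

    typedef void vtc_cmd_f(struct vtc_cmd_ctx *ctx,
        const char * const *argv, struct vtclog *vl);

    /* what a loadable .so would hand back to the test driver */
    struct vtc_cmd_reg {
            const char      *name;
            vtc_cmd_f       *func;
    };

    static void
    cmd_haproxy(struct vtc_cmd_ctx *ctx, const char * const *argv,
        struct vtclog *vl)
    {
            (void)ctx; (void)argv; (void)vl;
            /* start/stop/configure an haproxy instance here */
    }

    static const struct vtc_cmd_reg cmds[] = {
            { "haproxy",    cmd_haproxy },
            { NULL,         NULL }
    };

    /* entry point the driver would resolve with dlsym() after dlopen() */
    const struct vtc_cmd_reg *
    vtc_module_init(void)
    {
            return (cmds);
    }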
From nils.goroll at uplex.de Wed May 14 13:43:09 2025
From: nils.goroll at uplex.de (Nils Goroll)
Date: Wed, 14 May 2025 15:43:09 +0200
Subject: Please put down your VDD topics
Message-ID: <134fc9d6-f0a3-4089-8d0b-1613e475793d@uplex.de>

https://etherpad.wikimedia.org/p/VDD25Q2

Thank you,

Nils

From scan-admin at coverity.com Mon May 19 10:40:58 2025
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Mon, 19 May 2025 10:40:58 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <682b0ab9c6efa_5f4c62d3baf4ed9b04269@prd-scan-dashboard-0.mail>

From scan-admin at coverity.com Mon May 26 12:46:04 2025
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Mon, 26 May 2025 12:46:04 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <6834628c4f111_d172e2d3baf4ed9b042b8@prd-scan-dashboard-0.mail>

From phk at phk.freebsd.dk Wed May 28 06:53:14 2025
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 28 May 2025 06:53:14 +0000
Subject: Fleshed out ideas from VDD25Q2
Message-ID: <202505280653.54S6rEaf022850@critter.freebsd.dk>

This is my personal attempt to flesh out some of the things we
discussed at VDD25Q2 in a bit more detail.

A) More modular VCL
-------------------

Points of pain:

"Having everything in one VCL file"
"Slow VCL compiles"
"Useless backend state/statistics reporting."

A) Diagnosis:

In complex setups, you either end up with a tangled VCL file that does
many different things in a lot of conditional clauses, or you end up
with a less tangled VCL file that tries to determine which of multiple
VCL files should handle this particular request.

If you do the latter, you have to repeat the backend declarations in
many of the VCL files, which causes fragmented backend statistics.

A) Concrete proposals:

A.1) Make it possible to import and export backends(=directors) between
VCLs.

To do this, we must discard one original dogma:

    "There is *exactly* one active VCL at any moment in time."

We did that for good reasons; I will argue that it allowed us to
deliver the very valuable and successful feature of "truly instant
reconfiguration", but on review, it is now a limiting factor.

Strictly speaking we already broke that dogma with return(vcl), but we
hid that so well that we did not have to change a single word in the
documentation.

Letting it go (more) has consequences.  Most obviously, we will need
some way to decide which of multiple active VCLs we throw the incoming
requests at, but as long as "the other active VCLs" do not contain a
vcl_recv{}, that is obvious.

Sharing backends/directors and ACLs across VCLs means we need some way
to make sure all threads from other VCLs are out of this one before we
can cool and unload it.  That is CS-101 multi-threading material, but
performance cannot be ignored.

But the immediately obvious follow-up question is: Why can't I also
export & import SUBs?
I won't go into the details (compatibility with the vcl_method they are
called from), but that runs into an equally old dogma:

    "If you can vcl.load a VCL, you can vcl.use that VCL."

This one already has a footnote attached to it, relating to VMODs being
able to veto going from cold to hot, but otherwise it still holds.

This originated in a desire to have a preloaded, ever-ready "emergency
VCL", so that when the newspaper backend monster keeled over, there
would be a single /reliable/ switch to throw.

Is that "killer-feature" or "really, I didn't know that..."?

Right now I truly don't know the answer, so for that reason alone,
sharing SUBs is "desirable, but for further study" at this time.

So for now: I think we should implement export/import of backends and
ACLs, since I think they "come for free", but not commit to sharing
SUBs.

(See A.2 for CLI implications.)

A.1) Thoughts about implementation

Exporting things must be explicit; we do not want VCLs to be able to
grab random stuff from other VCLs, both as a matter of sanity and to
keep the list of exported objects small.

For the same reasons as for return(vcl), the imports have to go through
labels, otherwise "the other" VCL cannot be replaced.

Exporting the backends from a single VCL, instead of replicating their
definition in new versions of the active VCL or in multi-app/tenant
VCLs, means that the statistics and state will not be fragmented.

We may want more (see below), but it will be a step in the right
direction.

A.1) Summary:

Low to medium complexity, good and concrete benefits which would be a
selling point for 8.0.

A.2) Add a central switchboard.

I think the final version of the idea we came up with was something
like this mock-up:

    vcl 4.2;

    vcl_scope {
        req.http.host suffix "example.com";
        req.url prefix "/hello_world";
        return(mine(100));
    }

These "selectors" will be merged into a single decision data structure
which a central dispatcher uses to decide where each request goes.

I think we also had consensus for adding an escape mechanism along the
lines of:

    vcl 4.2;

    sub vcl_match {
        if (client.ip ~ inhouse_acl && req.url ~ "editor") {
            return (mine(100));
        }
        return (notme);
    }

Such functions cannot be merged, but must be executed serially, which
rules them out as the only method, but there seem to be solid use-cases
for having a few, for instance purges, inhouse vs. outside, log4j
detection etc.

So far, so good.

We need CLI commands to do this, including a "vcl.unuse" which we never
had before, and a "vcl.substitute" to atomically do a vcl.unuse +
vcl.use.

If we're adding two new CLI commands, we gain nothing from overloading
"vcl.use" as the third, so we should add three all-new CLI commands,
something like:

    vcl.insert   - add a vcl to the switchboard
    vcl.remove   - remove a vcl from the switchboard
    vcl.replace  - atomic add+remove

That eliminates the need for a setting to enable this new "switchboard
mode": we power up the switchboard on the first vcl.insert and power it
down on the last vcl.remove.

That again means that even people who do not use the switchboard would
be able to "vcl.insert log4j_mitigator.vcl" without editing their VCL.
(killer-feature?)

But that only works if the switchboard defaults to their usual VCL when
none of the vcl.insert'ed VCLs match.

So I think the final result looks like:

There is *exactly* one active VCL at any moment in time; requests go
there, unless the switchboard dispatches them.  (But "active" now means
something slightly different.)
There can be any number of "library VCLs" loaded with "vcl.library",
containing only backends/directors and ACLs (for now).

There can be any number of "subscriber VCLs" loaded with "vcl.insert",
which go through the switchboard.

A.2) Thoughts about implementation

How are conflicting selectors resolved?  In the above examples I put
"mine(100)" as a way to assign priorities.  Better ideas?

I'm slightly concerned about the rebuilding/reconfiguration of the
merged decision data structure when there are many VCLs.

Nobody argued for using regular expressions, which I suspect was partly
a healthy respect for implementing the merge, and partly because those
fields are not just strings (%xx, case-insensitivity, I18N DNS etc.)

It seems obvious to allow multiple selectors on each of the two fields,
and to give them "or" semantics, so that a single vcl_scope{} can match
multiple domains and/or multiple URLs.

But assuming the two fields (host+url) inside the selector have "and"
semantics, I think we should also allow multiple vcl_scope{} per VCL,
so that a single VCL can handle:

    vcl_scope {
        req.http.host suffix "example.com";
        req.url prefix "/hello_world";
        return(mine(100));
    }

    vcl_scope {
        req.http.host suffix "exampl?.fr";
        req.url prefix "/bonjour_monde";
        return(mine(100));
    }

    vcl_scope {
        req.http.host suffix "example.de";
        req.url prefix "/guten_heute_leute";
        return(mine(100));
    }

and if that still cannot do what people want, there is the vcl_match{}
escape-mechanism.

I wonder if host+url is too restrictive?  I can imagine, but don't know
the relevance of, also selecting on user-agent and particular cookies
being present or absent, but with the escape-mechanism we can collect
real-world experience before we decide that.

A.2) Summary:

This one goes all over the place: VCC, CLI, locking, and using
somebody's exam results in CS data structures in real life.

I can't imagine this is realistic for 8.0, and I don't see any way to
be "a little bit pregnant" with it.

But if my outline holds up to scrutiny, it is additive and will not
have to wait for 9.0.

B) "Plain backends are too plain"
---------------------------------

Points of pain:

"DNS answers with multiple IPs"
"DNS response frozen at vcl.load time"
"Probing backends with rapidly changing IPs"
"Fragmented (connection pool) statistics"

B) Diagnosis:

In 2006 backends were real backends and Kubernetes was not a real word.

Until we have a "discover" service which checks if the DNS response has
changed, we are stuck with freezing the DNS response at vcl.load time.

But we could stop being anal about DNS responses with multiple IP
numbers, which would at least allow people to work around that
limitation by reloading their VCL every N minutes.

B) Concrete proposals:

Have VCC accept DNS responses with multiple IPs, and use them
round-robin.

B) Thoughts about implementation

I'm not sure "use them round-robin" cuts it, for instance if we get
both IPv4 and IPv6 addresses but have no IPv6 connectivity.  A better
default policy may be "once you find one that works, stick with it
until it stops working".

Do we probe all the IPs?

Should we compile it into a round-robin director, to avoid code
duplication?

B) Summary:

Once the questions are answered, this should be pretty straightforward,
and not be difficult to complete before 8.0.  (Famous Last Words?)
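The "first one that works" policy from the B) implementation thoughts
above is essentially the classic getaddrinfo() loop, sketched below.
The host name and port are placeholders, and the sketch ignores
Varnish's own connection handling; it only illustrates why a mixed
IPv4/IPv6 answer is harmless under this policy, because an unroutable
address family simply fails connect() and the loop moves on.

    /* Minimal sketch: connect to the first usable address among all
     * the IPs a backend name resolves to (IPv4 and IPv6 alike). */
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int
    connect_first_working(const char *host, const char *port)
    {
            struct addrinfo hints, *res, *ai;
            int s = -1;

            memset(&hints, 0, sizeof hints);
            hints.ai_family = AF_UNSPEC;        /* both IPv4 and IPv6 */
            hints.ai_socktype = SOCK_STREAM;

            if (getaddrinfo(host, port, &hints, &res) != 0)
                    return (-1);

            /* Try every address in the DNS answer, keep the first one
             * that connects; an unreachable IPv6 entry is skipped. */
            for (ai = res; ai != NULL; ai = ai->ai_next) {
                    s = socket(ai->ai_family, ai->ai_socktype,
                        ai->ai_protocol);
                    if (s < 0)
                            continue;
                    if (connect(s, ai->ai_addr, ai->ai_addrlen) == 0)
                            break;              /* stick with this one */
                    close(s);
                    s = -1;
            }
            freeaddrinfo(res);
            return (s);
    }

    int
    main(void)
    {
            int fd = connect_first_working("backend.example.com", "80");

            printf("%s\n", fd >= 0 ? "connected" : "no usable address");
            if (fd >= 0)
                    close(fd);
            return (0);
    }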
C) VSL roll-overs
-----------------

Points of pain:

"Extra memory copies in clients to 'evacuate' requests in danger of
being overwritten"
"Complexity in clients to monitor danger of overwrites."

C) Diagnosis

In 2006 wire-speed was 100 Mbit/sec, and if your VSL clients were not
fast enough, that was not our problem.

C) Concrete proposal

Instead of one big SHM segment, varnishd creates N distinct files which
occupy the same amount of space, and announces them in the VSM.

Varnishd picks an available segment and updates its open and "do not
use past" timestamps in the VSM.  When that segment is full, repeat the
process.

VSL clients monitor the index and process the files in timestamp
sequence.

When a client opens a segment, it links a unique filename to that file,
so the inode link-count increases, and it removes that filename again
when it no longer needs any data in that segment.  Clients should arm
atexit(3) handlers to nuke the unique filenames when they end.

Varnishd considers a segment available if its previous "do not use
past" timestamp has expired and the inode link count is one.

C) Thoughts about implementation

This proposal eliminates VSL overwrites entirely, but adds some new
failure modes:

A VSL client dies without removing the unique filename which holds the
inode link, leaving segment(s) locked until those stray files are
removed.  If the clients' unique names are predictable from their PID,
varnishd could patrol such files with kill(0).

When clients are too slow or get stuck, varnishd may run out of
available segments, and varnishd will serve traffic without logging it.
Counters should record how many transactions and VSL records were not
written, and the VSM needs to communicate to clients that there is a
hole in the VSL stream, otherwise the clients may never release the
prior segments they hold on to.  A parameter can change the default, so
that varnishd instead stops serving traffic if it cannot be logged.

Here the "do not use past" timestamp can be used as a configurable
minimum duration of VSL "look-back".

The inode link-count trick is neat, but it involves the filesystem, and
that may be too expensive.

Once VIPC is in, we can use that and eliminate the "stray files"
problem.  In light of the "make all CLI JSON" discussion, maybe this
should be the first customer of VIPC?

C) Summary

Very limited amount of code involved; this might make it into 8.0.

Feedback kindly requested...

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From nils.goroll at uplex.de Sat May 31 12:33:55 2025
From: nils.goroll at uplex.de (Nils Goroll)
Date: Sat, 31 May 2025 14:33:55 +0200
Subject: VDD2025Q2 notes
Message-ID:

... are now available at
https://github.com/varnishcache/varnish-cache/wiki/VDD25Q2 - this is a
slightly edited version of the etherpad.

Thank you for the productive two days to everyone participating, and
hello from the ferry back to Germany.

Nils
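The inode link-count mechanism from section C above can be demonstrated
with nothing but link(2), fstat(2) and unlink(2), as in the
self-contained sketch below.  The file names are illustrative only; the
real segment and claim naming would be whatever varnishd and the VSL
clients agree on via the VSM.

    /* Demonstrates segment "claiming" via the inode link count:
     * a client links a unique name to the segment file, and the
     * writer treats the segment as busy while st_nlink > 1. */
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            const char *seg = "vsl.seg.0";      /* illustrative name */
            char claim[64];
            struct stat st;
            int fd;

            /* varnishd creates the segment file */
            fd = open(seg, O_CREAT | O_RDWR, 0644);
            if (fd < 0) { perror("open"); return (1); }

            /* a VSL client claims it by linking a unique name to it */
            snprintf(claim, sizeof claim, "%s.client.%ld",
                seg, (long)getpid());
            if (link(seg, claim) < 0) { perror("link"); return (1); }

            /* varnishd decides availability from the link count */
            if (fstat(fd, &st) < 0) { perror("fstat"); return (1); }
            printf("link count %ld: %s\n", (long)st.st_nlink,
                st.st_nlink > 1 ? "claimed by a client" : "available");

            /* the client releases the segment by removing its name */
            if (unlink(claim) < 0) perror("unlink");
            if (fstat(fd, &st) < 0) { perror("fstat"); return (1); }
            printf("link count %ld after release\n", (long)st.st_nlink);

            close(fd);
            unlink(seg);
            return (0);
    }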