From phk at phk.freebsd.dk Wed Dec 6 20:50:40 2017
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 06 Dec 2017 20:50:40 +0000
Subject: HELP Wanted: Redirecting repo-traffic to packagecloud.io
Message-ID: <69931.1512593440@critter.freebsd.dk>

It seems that no response codes will deter Linux package-management tools, and the amount of traffic we had beating up on the project server that way was totally swamping any other traffic. That's not a problem of course, we run Varnish. But it is a LOT of users not getting good service.

I spent some hours trying to find out where to send them on packagecloud.io and came up with this:

    if (req.url ~ "repomd.xml") {
        set req.http.urlx = regsub(req.url,
            ".*varnish-([345])[.](.)/el(.*)",
            "/varnishcache/varnish\1\2/el/\3");
        return (synth(301, "packagecloud"));
    }
    if (req.url ~ "^/(ubuntu|debian)/dists/[^/]*/I*n*Release$") {
        /*
         * We don't seem to have files on packagecloud which list all
         * releases, so point these at release 5.2
         */
        set req.http.urlx = regsub(req.url,
            "/(ubuntu|debian)/dists/(.*)/(I*n*Release)",
            "/varnishcache/varnish52/\1/dists/\2/\3");
        return (synth(301, "packagecloud"));
    }

I'm still seeing a lot of "deep" URLs being requested (see below), but I'm hoping these will migrate to packagecloud now that the top-level URLs point them there.

I would appreciate it if somebody with more Linux package clue than me (i.e. any non-zero amount of clue will be helpful) could sanity-check what I've done.
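For anyone wanting to sanity-check the rewrites offline, the two regsub() patterns can be approximated with Python's re module (a rough stand-in for Varnish's regex engine; the rewrite() helper below is mine, not VCL):

```python
import re

# Offline check of the two regsub() rewrites, using Python's re module
# as a stand-in for Varnish's regex engine. rewrite() mirrors the two
# VCL branches above; deep URLs fall through unchanged.

def rewrite(url):
    if re.search(r"repomd.xml", url):
        return re.sub(r".*varnish-([345])[.](.)/el(.*)",
                      r"/varnishcache/varnish\1\2/el/\3", url)
    if re.search(r"^/(ubuntu|debian)/dists/[^/]*/I*n*Release$", url):
        return re.sub(r"/(ubuntu|debian)/dists/(.*)/(I*n*Release)",
                      r"/varnishcache/varnish52/\1/dists/\2/\3", url)
    return url

print(rewrite("/redhat/varnish-4.1/el6/x86_64/repodata/repomd.xml"))
# -> /varnishcache/varnish41/el/6/x86_64/repodata/repomd.xml
```

Note that a "deep" URL like /debian/dists/wheezy/varnish-3.0/binary-amd64/Packages matches neither branch and comes back untouched, which is consistent with the deep requests still showing up below.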
Poul-Henning

    12.47 ReqURL /ubuntu/dists/trusty/InRelease
    10.23 ReqURL /redhat/varnish-3.0/el5/x86_64/repodata/repomd.xml
     8.45 ReqURL /redhat/varnish-4.0/el6/repodata/repomd.xml
     6.69 ReqURL /redhat/varnish-4.1/el6/x86_64/repodata/0db82826c0003726ea45ca2ff822454afb55e209-primary.xml.gz
     6.57 ReqURL /redhat/varnish-3.0/el6/x86_64/repodata/repomd.xml
     6.54 ReqURL /
     6.07 ReqURL /redhat/varnish-3.0/el6/repodata/repomd.xml
     5.51 ReqURL /redhat/varnish-3.0/el5/x86_64/repodata/46939bc1312d4686f78d539762f83d0d3bb5f5cf-primary.xml.gz
     5.15 ReqURL /redhat/varnish-3.0/el6/x86_64/repodata/0e86f304b9f8c0e214d431f95100ad4668e0ad9e-primary.xml.gz
     4.63 ReqURL /ubuntu/dists/lucid/Release
     4.60 ReqURL /ubuntu/dists/lucid/Release.gpg
     4.45 ReqURL /redhat/varnish-4.0/el6/x86_64/repodata/repomd.xml
     4.30 ReqURL /debian/dists/wheezy/Release
     4.30 ReqURL /debian/dists/wheezy/Release.gpg
     3.84 ReqURL /redhat/varnish-4.1/el6/x86_64/repodata/repomd.xml
     3.78 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-i386/Packages.gz
     3.77 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-i386/Packages.lzma
     3.74 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-i386/Packages.bz2
     3.48 ReqURL /redhat/varnish-4.0/el6/x86_64/repodata/b3cc34523cf82349fbdce79b0588b471b842a408-primary.xml.gz
     3.47 ReqURL /ubuntu/dists/precise/Release
     3.46 ReqURL /ubuntu/dists/precise/Release.gpg
     3.18 ReqURL /ubuntu/dists/lucid/varnish-3.0/i18n/Translation-en
     3.17 ReqURL /ubuntu/dists/lucid/varnish-3.0/i18n/Translation-en.gz
     3.17 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-i386/Packages
     3.16 ReqURL /ubuntu/dists/lucid/varnish-3.0/i18n/Translation-en.lzma
     3.14 ReqURL /ubuntu/dists/lucid/varnish-3.0/i18n/Translation-en.xz
     3.13 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-i386/Packages.xz
     3.12 ReqURL /ubuntu/dists/lucid/varnish-3.0/i18n/Translation-en.bz2
     3.10 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-i386/Packages.diff/Index
     3.01 ReqURL /debian/dists/jessie/InRelease
     2.97 ReqURL /debian/dists/wheezy/varnish-3.0/binary-amd64/Packages
     2.96 ReqURL /debian/dists/wheezy/varnish-3.0/i18n/Translation-en
     2.96 ReqURL /debian/dists/wheezy/varnish-3.0/binary-amd64/Packages.gz
     2.96 ReqURL /debian/dists/wheezy/varnish-3.0/i18n/Translation-en.gz
     2.92 ReqURL /debian/dists/wheezy/varnish-3.0/i18n/Translation-en.lzma
     2.92 ReqURL /debian/dists/wheezy/varnish-3.0/binary-amd64/Packages.lzma
     2.92 ReqURL /debian/dists/wheezy/varnish-3.0/binary-amd64/Packages.xz
     2.92 ReqURL /debian/dists/wheezy/varnish-3.0/i18n/Translation-en.xz
     2.90 ReqURL /ubuntu/GPG-key.txt
     2.89 ReqURL /debian/dists/wheezy/varnish-3.0/binary-amd64/Packages.bz2
     2.89 ReqURL /debian/dists/wheezy/varnish-3.0/i18n/Translation-en.bz2
     2.87 ReqURL /debian/GPG-key.txt
     2.78 ReqURL /debian/dists/wheezy/varnish-3.0/binary-amd64/Packages.diff/Index
     2.53 ReqURL /redhat/varnish-4.0/el7/repodata/repomd.xml
     2.44 ReqURL /redhat/varnish-4.0/el7/x86_64/repodata/repomd.xml
     2.40 ReqURL /redhat/varnish-4.0/el7/x86_64/repodata/0ffac79c8b21f896830f3e6a81b4e9f8430199d3-primary.xml.gz
     2.37 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-amd64/Packages.gz
     2.36 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-amd64/Packages.lzma
     2.36 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-amd64/Packages
     2.34 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-amd64/Packages.bz2
     2.33 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-amd64/Packages.xz
     2.31 ReqURL /ubuntu/dists/lucid/varnish-3.0/binary-amd64/Packages.diff/Index

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From reza at varnish-software.com Sun Dec 10 17:36:06 2017
From: reza at varnish-software.com (Reza Naghibi)
Date: Sun, 10 Dec 2017 12:36:06 -0500
Subject: VFP and VDP configurations
Message-ID:

Was thinking about how configurable processors could work, so just throwing out some ideas. Going to focus on VFPs.
Basically, the user adds VFPs via VCL as usual, with an optional position. These VFPs are put on a candidate list. Then:

- Each VFP defines a string which states its input and output formats.

- When Varnish constructs the VFP chain, it starts at beresp and uses that as the first output, and then it chains together VFPs, matching inputs with outputs. It uses the candidate list and priorities to guide this construction, but it will move things around to get a best fit.

- Varnish has access to builtin VFPs. These VFPs are always available and are used to fill in any gaps when it cannot find a way to match an output to an input while constructing the chain.

So from VCL, here is how we add VFPs:

    VOID add_vfp(VFP init, ENUM position = DEFAULT);

VFP is "struct vfp", and any VMOD can return one, thus registering itself as a VFP. It contains all the callbacks and the VFP's input and output requirements.

position is one of: DEFAULT, FRONT, MIDDLE, LAST, FETCH, STEVEDORE

DEFAULT lets the VMOD recommend a position, otherwise it falls back to LAST. FETCH and STEVEDORE are special positions which tell Varnish to put the VFP first or last, regardless of the actual FRONT and LAST.

So this would be our current list of VFPs, in the format (input)name(output):

    (text,plain,none)esi(esitext)
    (text,plain,none)esi_gzip(gzip)
    (text,plain,none)gzip(gzip,gz)
    (gzip,gz)gunzip(text,plain,none)

gzip and gunzip have a preferred position of STEVEDORE. This means they will behave the same as beresp.do_gzip and beresp.do_gunzip when added by the user. Also, gzip and gunzip are builtin, so they never need to be explicitly added if they are needed by other VFPs. (From here on out I will simplify text, plain, and none to just text.)

Also, when a VFP is successfully moved from the candidate list to the actual chain, it is initialized. During that initialization, it can see beresp, all the VFPs in front of it, and the other candidates.
It can then add new VFPs to the candidate list, remove other VFPs, or remove itself. Orphaned VFPs get put back on the candidate list.

So, for example, any time the builtin gunzip VFP is added, it will add gzip as a STEVEDORE VFP candidate (unless a gzip VFP is already there). This means content will always maintain its encoding going to storage, but the user can override that. Example:

    import myvfp;

    sub vcl_backend_response {
        add_vfp(myvfp.init());
        add_vfp(esi);
    }

We start at beresp.http.Content-Encoding to figure out the output of beresp. We can also optionally look at Content-Type. So in this example, we have a gzip response:

    VFP chain:      beresp(gzip)
    VFP candidates: (text)myvfp(text), (text)esi(esitext)
    VFP builtin:    (gzip)gunzip(text), (text)gzip(gzip)

The algorithm for building the chain attempts to move candidates, in order, from the candidate list to the actual chain by matching output to input. There is some flexibility in that it can reorder the candidates if that allows a match. FETCH and STEVEDORE always need to be first and last, if possible. Finally, if it cannot match any more candidates, it starts considering the builtins, and the process repeats until it is not possible to add any more VFPs. This means it is possible that some VFPs cannot be added, if their input cannot be generated from the beresp.

So, in the above example:

    VFP chain:      beresp(gzip)
    VFP candidates: (text)myvfp(text), (text)esi(esitext)
    VFP builtin:    (gzip)gunzip(text), (text)gzip(gzip)

Neither myvfp nor esi can be placed, since they do not match gzip.
Varnish then goes through the builtins, finds that gunzip will allow a match to happen, and adds it:

    VFP chain:      beresp(gzip) > (gzip)gunzip(text)
    VFP candidates: (text)myvfp(text), (text)esi(esitext)
    VFP builtin:    (gzip)gunzip(text), (text)gzip(gzip)

When gunzip gets initialized, it will add gzip at the STEVEDORE position:

    VFP chain:      beresp(gzip) > (gzip)gunzip(text)
    VFP candidates: (text)myvfp(text), (text)esi(esitext), STEVEDORE:(text)gzip(gzip)
    VFP builtin:    (gzip)gunzip(text), (text)gzip(gzip)

Next, all the remaining VFPs are added, since their outputs and inputs match up, giving us the final configuration:

    VFP chain:      beresp(gzip) > (gzip)gunzip(text) > (text)myvfp(text) > (text)esi(esitext)
    VFP candidates: STEVEDORE:(text)gzip(gzip)
    VFP builtin:    (gzip)gunzip(text), (text)gzip(gzip)

gzip cannot be used, since esi outputs a special text format, esitext, which prevents any further processing. ESI could have had a little bit of intelligence here, as it knows it has a gzip counterpart. It could have seen that a gzip-output VFP is in the candidate list, deleted itself, and added esi_gzip back to the candidates. This would have given us:

    VFP chain: beresp(gzip) > (gzip)gunzip(text) > (text)myvfp(text) > (text)esi_gzip(gzip)

Brotli example:

Let's say we have a vmod brotli and it has these VFPs:

    (text)brotli(brotli,br)
    (brotli,br)unbrotli(text)

Also, during init, these 2 VFPs are added to the builtins. So now Varnish has these builtins:

    VFP builtin: (gzip)gunzip(text), (text)gzip(gzip), (text)brotli(brotli,br), (brotli,br)unbrotli(text)

Varnish can use these anywhere to make the VFP chain work. So in the previous example (minus esi), we could still get our VFPs working when the beresp is brotli.
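For what it's worth, the output-to-input matching walked through above can be modelled in a few lines of Python. This is a toy sketch of my reading of the proposal: tuples instead of vfp structs, and no priorities, init hooks, or STEVEDORE pinning.

```python
def build_chain(start_fmt, candidates, builtins):
    """Greedy sketch of the proposed VFP chain construction. Each
    processor is a (name, input_fmt, output_fmt) tuple; the real
    proposal uses vfp structs, priorities and position hints."""
    chain, fmt = [], start_fmt
    candidates = list(candidates)
    while True:
        placed = False
        for p in candidates:            # user candidates first, in order
            if p[1] == fmt:
                chain.append(p)
                fmt = p[2]
                candidates.remove(p)
                placed = True
                break
        if placed:
            continue
        # Fall back to builtins, but only when the builtin would
        # unlock a candidate that is still waiting to be placed.
        for p in builtins:
            if p[1] == fmt and any(c[1] == p[2] for c in candidates):
                chain.append(p)
                fmt = p[2]
                placed = True
                break
        if not placed:
            return chain, candidates    # leftovers could not be placed

# The gzip walkthrough: beresp is gzip, the user added myvfp and esi.
chain, leftover = build_chain(
    "gzip",
    [("myvfp", "text", "text"), ("esi", "text", "esitext")],
    [("gunzip", "gzip", "text"), ("gzip", "text", "gzip")])
# chain is now gunzip > myvfp > esi, with nothing left over
```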
unbrotli will queue brotli at the STEVEDORE position, so content will go into cache as brotli and our VFPs still get text:

    VFP chain: beresp(br) > (br)unbrotli(text) > (text)myvfp(text) > (text)brotli(br)

Transient buffer example:

We could build a theoretical transient VFP vmod which buffers the VFP input and passes it on, as transient storage, in 1 large contiguous buffer. It would look like:

    (text)buffer(buffertext)

And this would be added as a builtin. We could then have a regex substitution vmod like this:

    (buffertext)regex(text)

And our VCL would look like:

    sub vcl_backend_response {
        add_vfp(regex.vfp());
        regex.add(".*", "new title");
        regex.add("host", "newhost");
    }

This will give us:

    VFP chain:      beresp(gzip)
    VFP candidates: (buffertext)regex(text)
    VFP builtin:    (gzip)gunzip(text), (text)gzip(gzip), (text)brotli(br), (br)unbrotli(text), (text)buffer(buffertext)

Since regex cannot be placed on gzip, we find that the gunzip > buffer combination gives us what we need. gunzip adds gzip, and we end up with this:

    VFP chain: beresp(gzip) > (gzip)gunzip(text) > (text)buffer(buffertext) > (buffertext)regex(text) > (text)gzip(gzip)

Anyway, I could go on with all kinds of other cool examples, but hopefully I got my idea across. Thank you for reading through this long email!

From phk at phk.freebsd.dk Thu Dec 14 12:38:47 2017
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 14 Dec 2017 12:38:47 +0000
Subject: VCL_STRANDS
Message-ID: <96709.1513255127@critter.freebsd.dk>

I have gone over the VCC and added an alternate way of passing an uncomposed string to functions, as an alternative to STRING_LIST. STRANDS is basically a STRING_LIST which gets stuffed into an on-stack struct, so that more than one STRANDS argument can be passed to a (VMOD-)function, something which is not possible with STRING_LIST because it uses the var-args mechanism.
One place where this is now used is in string comparisons in VCL; this may save significant workspace for some users. While at it, I have also added support for <, <=, >= and > string comparisons.

In the process I have done major surgery on string handling in VCC, cleaning it up along the way, and therefore I kindly ask everybody to be on the lookout for things which have changed or now fail.

Feedback from VMOD writers welcome...

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From slink at schokola.de Sun Dec 17 18:42:34 2017
From: slink at schokola.de (Nils Goroll)
Date: Sun, 17 Dec 2017 19:42:34 +0100
Subject: VFP and VDP configurations
In-Reply-To:
References:
Message-ID: <981619d7-bd55-66d6-a728-901291156595@schokola.de>

Hi,

At first, I found Reza's concept appealing, and there are some aspects which I think we should take from it:

- take the protocol VFPs (v1f_*, h2_body) out of the game for VCL

- format specifiers:

  - have: (gzip), (plain) *1), (esi)
  - ideas: (br), (buffertext) *2)

  esi being a format which can contain gzip segments, but that would be opaque to other VFPs

- the notion of format conversion(s) that a VFP can handle, e.g.

  - have:

        esi:    (plain)->(esi), (gzip)->(esi)
        gzip:   (plain)->(gzip)
        ungzip: (gzip)->(plain)

  - ideas:

        br:   (plain)->(br)
        unbr: (br)->(plain)
        re:   (plain)->(plain)

But reflecting on it, I am not so sure about runtime resolution, and about these aspects in particular:

- "algorithm (...) can reorder the candidates if that allows a match."

- "(A VFP) can (...) add new VFPs to the candidate list, remove itself, remove other VFPs, or delete itself or other VFPs"

I wonder how we would even guarantee that this algorithm ever terminates.
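To make the termination concern concrete, here is a toy model (hypothetical Python, not Varnish code) of a construction loop where each placed VFP's init hook may mutate the candidate list; two VFPs that keep re-adding each other never terminate without an explicit round cap:

```python
def build_with_init_hooks(candidates, max_rounds=16):
    """Toy model of chain construction where a placed VFP's init hook
    may mutate the candidate list. The round cap is the only thing
    guaranteeing termination here."""
    chain = []
    rounds = 0
    candidates = list(candidates)
    while candidates:
        rounds += 1
        if rounds > max_rounds:
            raise RuntimeError("chain construction did not terminate")
        vfp = candidates.pop(0)
        chain.append(vfp["name"])
        if "init" in vfp:
            vfp["init"](chain, candidates)  # may add/remove candidates
    return chain

# Two VFPs whose init hooks keep re-adding each other: without the
# max_rounds cap this construction would loop forever.
a = {"name": "a"}
b = {"name": "b", "init": lambda chain, cand: cand.append(a)}
a["init"] = lambda chain, cand: cand.append(b)
```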
So I think we really need to have VCL compile-time checking of all possible outcomes:

- either by keeping track of all possible filter chain states at each point during VCL compilation,

- or by restricting ourselves to setting all of the filter chain at once.

The latter will probably lead to largish decision trees in VCL for advanced cases, but I think we should start with this simple and safe solution, plus the format/conversion check.

Nils

*1) "(text)" in Reza's concept
*2) not sure if this is a good idea, maybe multi-segment regexen are the better idea

From dridi at varni.sh Mon Dec 18 10:07:40 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 18 Dec 2017 11:07:40 +0100
Subject: VFP and VDP configurations
In-Reply-To:
References:
Message-ID:

> So from VCL, here is how we add VFPs:
>
>     VOID add_vfp(VFP init, ENUM position = DEFAULT);
>
> VFP is "struct vfp" and any VMOD can return that, thus registering
> itself as a VFP. This contains all the callbacks and its input and
> output requirements.
>
> position is: DEFAULT, FRONT, MIDDLE, LAST, FETCH, STEVEDORE
>
> DEFAULT lets the VMOD recommend a position, otherwise it falls back to
> LAST. FETCH and STEVEDORE are special positions which tell Varnish to
> put the VFP in front or last, regardless of actual FRONT and LAST.

I think the position should be mapped closer to HTTP semantics:

    $Enum {
        content,
        assembly,
        encoding,
        transfer,
    };

The `content` value would map to Accept/Content-Type headers, working on the original body. The order shouldn't matter (otherwise you are changing the content type) and you could for example chain operations:

- js-minification
- js-obfuscation

You should expect the same results regardless of the order; of course the simplest would be to keep the order set in VCL. The `content` step would feed from storage, where the body is buffered.

The `assembly` value would map to ESI-like features, and would feed from the content, with built-in support for Varnish's subset of ESI.
The `encoding` value would map to Accept-Encoding/Content-Encoding headers, with built-in support for gzip and opening support for other encodings. It would feed from the contents, after an optional assembly.

The `transfer` value would map to Transfer-Encoding headers, with built-in support for chunked encoding. ZeGermans could implement trailers this way. Would this step make sense in h2? If not, should Varnish just ignore them?

Now, problems arise if you have an `encoding` step in a VFP (e.g. gzip'd in storage) and use `content` or `assembly` steps in a VDP for that same object, or a different encoding altogether. But in your proposal you don't seem bothered by this prospect. Neither am I, because that's only a classic memory vs. CPU trade-off. But it might be hard to implement the current ESI+gzip optimization if we go this route (or a good reason to go back to upstream zlib).

Dridi

From dridi at varni.sh Mon Dec 18 13:58:48 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 18 Dec 2017 14:58:48 +0100
Subject: VFP and VDP configurations
In-Reply-To: <981619d7-bd55-66d6-a728-901291156595@schokola.de>
References: <981619d7-bd55-66d6-a728-901291156595@schokola.de>
Message-ID:

> - format specifiers:
>
>   - have: (gzip), (plain) *1), (esi)
>   - ideas: (br), (buffertext) *2)
>
>   esi being a format which can contain gzip segments, but that would
>   be opaque to other vfps
[...]
> *1) "(text)" in reza's concept

Or "identity", to match HTTP vocabulary.

> *2) not sure if this is a good idea, maybe multi segment regexen are
> the better idea

For lack of a better place to comment: in my previous message I put `content` before `assembly`. On second thought, it should be the other way around, otherwise tags break the content type.
Dridi

From reza at varnish-software.com Mon Dec 18 16:22:41 2017
From: reza at varnish-software.com (Reza Naghibi)
Date: Mon, 18 Dec 2017 11:22:41 -0500
Subject: VFP and VDP configurations
In-Reply-To:
References: <981619d7-bd55-66d6-a728-901291156595@schokola.de>
Message-ID:

> take the protocol-vpfs v1f_* h2_body out of the game for vcl

Those will be builtin on the delivery side. I didn't really dive into VDPs, but they work similarly to VFPs in that the client expects a certain kind of response, so it's up to the VDP chain to produce a matching output. So if the client wants an H2 range response gzipped, then that chain needs to be put together starting at resp in the stevedore and ending at the client. So it's different, but the same structure and rules apply.

> I wonder how we would even guarantee that this algorithm ever terminates.

Right, since processors can modify the chain as it's being built and change things mid-flight, this could definitely happen. So the only thing to do here is to have a loop counter and break out after a certain number of attempts at creating the best-fit chain. It's kind of like a graph search where, when you hit a node, the node can change the graph ahead of you or optionally move you back some positions. So in this case, it's very possible to get stuck in an unavoidable loop.

> I think the position should be mapped closer to HTTP semantics

I think this makes too many assumptions. For example, where would security processors go? Knowing what I know about what's possible with these things, I think the processor universe might be bigger than the 4 categories you listed out.

I think this brings up an important point, which is that for us to be successful here, we really need to bring forward some new processors to be our seeds for building this new framework. This will drive the requirements that we need. I think there will be a lot of uncertainty if we build this based on theoretical processors.
I think it's alright if these new processors are simple, and if our new framework starts off simple as well. It can then evolve as we learn more. For me, I have written a handful of processors already, so a lot of what I am proposing here comes from past experience.

--
Reza Naghibi
Varnish Software

On Mon, Dec 18, 2017 at 8:58 AM, Dridi Boukelmoune wrote:

> > - format specifiers:
> >
> >   - have: (gzip), (plain) *1), (esi)
> >   - ideas: (br), (buffertext) *2)
> >
> >   esi being a format which can contain gzip segments, but that would
> >   be opaque to other vfps
> [...]
> > *1) "(text)" in reza's concept
>
> Or "identity" to match HTTP vocabulary.
>
> > *2) not sure if this is a good idea, maybe multi segment regexen are
> > the better idea
>
> For lack of a better place to comment: in my previous message I put
> `content` before `assembly`. On second thought it should be the other
> way around, otherwise tags break the content type.
>
> Dridi

From dridi at varni.sh Mon Dec 18 17:06:42 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 18 Dec 2017 18:06:42 +0100
Subject: VFP and VDP configurations
In-Reply-To:
References: <981619d7-bd55-66d6-a728-901291156595@schokola.de>
Message-ID:

>> I think the position should be mapped closer to HTTP semantics
>
> I think this makes too many assumptions? For example, where would
> security processors go? Knowing what I know about whats possible with
> these things, I think the processor universe might be bigger than the
> 4 categories you listed out.

I'm a bit perplexed regarding theoretical security processors...

> I think this brings up an important point, which is that for us to be
> successful here, we really need to bring forward some new processors
> to be our seeds for building this new framework. This will drive the
> requirements that we need.
> I think there will be a lot of uncertainty if we build this based on
> theoretical processors.

...since you explicitly advise against designing for theory.

With the 4 categories I listed, I can fit real-life processors in all of them:

- assembly: esi, edgestash, probably other kinds of include-able templates
- content: minification, obfuscation, regsub, exif cleanup, resizing, watermarking
- encoding: gzip, br
- transfer: identity, chunked, trailers

My examples were VDP-oriented (from storage to proto) but would work the other way around too (except assembly, which I can't picture in a VFP). You can map encoding and transfer processors to headers: imagining that both gzip and brotli processors are registered, core code could pick one or none based on good old content negotiation.

Now, where would I put security processors? The only place where it would make sense to me is content. But then again, please define security (I see two cases off the top of my head; both would run on content).

> I think its alright if these new processors are simple and our new
> framework starts off simple as well. This can then evolve as we learn
> more. For me, I have written a handful of processors already, so a lot
> of what I am proposing here comes from past experience.

Sure, with the ongoing work to clarify vmod ABIs, this one should definitely start as "strict" until we get to something stable. However, on the VCL side it is not that simple, because we don't want to break "vcl x.y" if we can avoid it. We could mimic the feature/debug parameters:

    set beresp.deliver = "[+-]value(,...)*";

A + would append a processor to the right step (depending on where it was registered), a - would remove it from the pipeline, and a lack of prefix would replace the list altogether.
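A toy parse of that proposed syntax, sketched in Python (everything here is hypothetical, including the processor names; mixed prefixed/unprefixed lists are left undefined, as in the proposal):

```python
def apply_deliver(pipeline, spec):
    """Toy model of the proposed beresp.deliver syntax: "+name" appends
    a processor, "-name" removes one, and a list with no prefixes
    replaces the pipeline wholesale."""
    items = [s.strip() for s in spec.split(",") if s.strip()]
    if not any(i[0] in "+-" for i in items):
        return items                      # no prefixes: replace outright
    out = list(pipeline)
    for i in items:
        name = i[1:] if i[0] in "+-" else i
        if i[0] == "+" and name not in out:
            out.append(name)
        elif i[0] == "-" and name in out:
            out.remove(name)
    return out
```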
That would create an equivalent for the `do_*` properties, or even better, the `do_*` properties could be syntactic sugar:

    set beresp.do_esi = true;
    set beresp.do_br = true;
    # same as
    set beresp.deliver = "+esi,br";

Dridi

From geoff at uplex.de Tue Dec 19 12:29:49 2017
From: geoff at uplex.de (Geoff Simmons)
Date: Tue, 19 Dec 2017 13:29:49 +0100
Subject: VFP and VDP configurations
In-Reply-To:
References:
Message-ID:

On 12/10/2017 06:36 PM, Reza Naghibi wrote:
> Basically, the user adds VFP processors via VCL as usual with an
> optional position.

What did you mean here by "as usual"? A user doesn't have a means to add VFPs or VDPs via VCL -- I thought that this discussion is about how that would work.

> - Varnish has access to builtin VFPs. These VFPs are always available
>   and are used to fill in any gaps when it cannot find a way to match
>   an output and input when constructing the chain.

Are we considering ways for a VFP/VDP defined in a VMOD to replace one of the builtins?

... assuming that the same thoughts apply to VDPs, and that esi and gzip/gunzip are among the builtin VDPs ...

As we've talked about before, I'd like to take a shot at a VDP for parallel ESI. It seems to me that the pesi VDP wouldn't be worked into the chain, but would rather substitute the builtin esi VDP in the chain. So it would be something along the lines of:

    sub vcl_deliver {
        replace_vdp(esi, pesi.vdp());
        # ...
    }

Best,
Geoff
--
** * * UPLEX - Nils Goroll Systemoptimierung

Scheffelstraße 32
22301 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de
From reza at varnish-software.com Tue Dec 19 16:18:21 2017
From: reza at varnish-software.com (Reza Naghibi)
Date: Tue, 19 Dec 2017 11:18:21 -0500
Subject: VFP and VDP configurations
In-Reply-To:
References:
Message-ID:

> A user doesn't have a means to add VFPs or VDPs via VCL

Well, I guess I meant like this:

    beresp.do_esi = true

In effect, the above statement adds the ESI VDP to the beresp. ESI would not be part of the "builtin" VDPs in this new scheme; rather, it's just a plain old user VDP. Builtins, as I have defined them, are VDPs which are always available to be used, transparently to the user. So in your case, switching out ESI would be done like this:

    import my_parallel_esi;

    sub vcl_backend_response {
        add_vfp(my_parallel_esi.init());
        // Do not use beresp.do_esi
    }

Because no other ESI VDP was added, my_parallel_esi will be the only one to run.

> Are we considering ways for a VFP/VDP defined in a VMOD to replace one
> of the builtins?

I see no reason why not.

--
Reza Naghibi
Varnish Software

On Tue, Dec 19, 2017 at 7:29 AM, Geoff Simmons wrote:

> On 12/10/2017 06:36 PM, Reza Naghibi wrote:
> > Basically, the user adds VFP processors via VCL as usual with an
> > optional position.
>
> What did you mean here by "as usual"? A user doesn't have a means to
> add VFPs or VDPs via VCL -- I thought that this discussion is about how
> that would work.
>
> > - Varnish has access to builtin VFPs. These VFPs are always available
> >   and are used to fill in any gaps when it cannot find a way to match
> >   an output and input when constructing the chain.
>
> Are we considering ways for a VFP/VDP defined in a VMOD to replace one
> of the builtins?
>
> ... assuming that the same thoughts apply to VDPs, and that esi and
> gzip/gunzip are among the builtin VDPs ...
>
> As we've talked about before, I'd like to take a shot at a VDP for
> parallel ESIs.
> It seems to me that the pesi VDP wouldn't be worked into the chain, but
> would rather substitute the builtin esi VDP in the chain. So it would
> be something along the lines of:
>
>     sub vcl_deliver {
>         replace_vdp(esi, pesi.vdp());
>         # ...
>     }
>
> Best,
> Geoff
> --
> ** * * UPLEX - Nils Goroll Systemoptimierung
>
> Scheffelstraße 32
> 22301 Hamburg
>
> Tel +49 40 2880 5731
> Mob +49 176 636 90917
> Fax +49 40 42949753
>
> http://uplex.de
>
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev