From n.premkumar.me at gmail.com Wed Jan 3 14:37:10 2018 From: n.premkumar.me at gmail.com (Prem Kumar) Date: Wed, 3 Jan 2018 20:07:10 +0530 Subject: Varnish subroutines return type different from 3.x Message-ID: Hi All, Is there is any reason why subroutines return type changes from "static int" to "void" in varnish. During compilation of VCL to C code, all the function return type as void instead of static int before. Can you please help if there a way I can return status code from subroutine?. lib/libvcc/vcc_parse.c vcc_ParseFunction(struct vcc *tl) { Fh(tl, 0, "void %s(VRT_CTX);\n", sym->rname); Fc(tl, 1, "\nvoid __match_proto__(vcl_func_t)\n"); Fc(tl, 1, "%s(VRT_CTX)\n", sym->rname); -Prem -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Wed Jan 3 14:42:04 2018 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 3 Jan 2018 15:42:04 +0100 Subject: Varnish subroutines return type different from 3.x In-Reply-To: References: Message-ID: Hi, Pretty sure there was a reason, possibly paving the way to have subroutines return VCL types, some time in the future. Can I ask you what's the goal behind this? I feel like you're headed toward an overkill solution, and we can probably avoid that. Cheers, -- Guillaume Quintard On Wed, Jan 3, 2018 at 3:37 PM, Prem Kumar wrote: > Hi All, > > Is there is any reason why subroutines return type changes from "static > int" to "void" in varnish. > During compilation of VCL to C code, all the function return type as void > instead of static int before. > > Can you please help if there a way I can return status code from > subroutine?. > > lib/libvcc/vcc_parse.c > vcc_ParseFunction(struct vcc *tl) > > { > > Fh(tl, 0, "void %s(VRT_CTX);\n", sym->rname); > Fc(tl, 1, "\nvoid __match_proto__(vcl_func_t)\n"); > Fc(tl, 1, "%s(VRT_CTX)\n", sym->rname); > > > > > -Prem > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.premkumar.me at gmail.com Wed Jan 3 16:47:19 2018 From: n.premkumar.me at gmail.com (Prem Kumar) Date: Wed, 03 Jan 2018 16:47:19 +0000 Subject: Varnish subroutines return type different from 3.x In-Reply-To: References: Message-ID: Ok.the scenario is if separate vcl recv/hash/.. subroutines configured for 2 domain URLs,first domain url will be always called and call 2nd domain. The first domain will serve only specific url and return 0 if not. If zero is returned I will use 2nd domain?s recv with header with index 2. Otherwise header to 1. i would like to get return error from respective recv/hash/.. subroutine to main handler which will set header . -prem On Wed, 3 Jan 2018 at 8:12 PM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > Hi, > > Pretty sure there was a reason, possibly paving the way to have > subroutines return VCL types, some time in the future. > > Can I ask you what's the goal behind this? I feel like you're headed > toward an overkill solution, and we can probably avoid that. > > Cheers, > > -- > Guillaume Quintard > > On Wed, Jan 3, 2018 at 3:37 PM, Prem Kumar > wrote: > >> Hi All, >> >> Is there is any reason why subroutines return type changes from "static >> int" to "void" in varnish. >> During compilation of VCL to C code, all the function return type as >> void instead of static int before. 
>> >> Can you please help if there a way I can return status code from >> subroutine?. >> >> lib/libvcc/vcc_parse.c >> vcc_ParseFunction(struct vcc *tl) >> >> { >> >> Fh(tl, 0, "void %s(VRT_CTX);\n", sym->rname); >> Fc(tl, 1, "\nvoid __match_proto__(vcl_func_t)\n"); >> Fc(tl, 1, "%s(VRT_CTX)\n", sym->rname); >> >> >> >> >> -Prem >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Wed Jan 3 16:59:26 2018 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 3 Jan 2018 17:59:26 +0100 Subject: Varnish subroutines return type different from 3.x In-Reply-To: References: Message-ID: Then I would use vmod_var and store a value in the subroutines and check it once they've returned, that way you won't fuss with the C code. -- Guillaume Quintard On Wed, Jan 3, 2018 at 5:47 PM, Prem Kumar wrote: > Ok.the scenario is if separate vcl recv/hash/.. subroutines configured for > 2 domain URLs,first domain url will be always called and call 2nd domain. > The first domain will serve only specific url and return 0 if not. If zero > is returned I will use 2nd domain?s recv with header with index 2. > Otherwise header to 1. i would like to get return error from respective > recv/hash/.. subroutine to main handler which will set header . > > -prem > > On Wed, 3 Jan 2018 at 8:12 PM, Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> Hi, >> >> Pretty sure there was a reason, possibly paving the way to have >> subroutines return VCL types, some time in the future. >> >> Can I ask you what's the goal behind this? I feel like you're headed >> toward an overkill solution, and we can probably avoid that. >> >> Cheers, >> >> -- >> Guillaume Quintard >> >> On Wed, Jan 3, 2018 at 3:37 PM, Prem Kumar >> wrote: >> >>> Hi All, >>> >>> Is there is any reason why subroutines return type changes from "static >>> int" to "void" in varnish. >>> During compilation of VCL to C code, all the function return type as >>> void instead of static int before. >>> >>> Can you please help if there a way I can return status code from >>> subroutine?. >>> >>> lib/libvcc/vcc_parse.c >>> vcc_ParseFunction(struct vcc *tl) >>> >>> { >>> >>> Fh(tl, 0, "void %s(VRT_CTX);\n", sym->rname); >>> Fc(tl, 1, "\nvoid __match_proto__(vcl_func_t)\n"); >>> Fc(tl, 1, "%s(VRT_CTX)\n", sym->rname); >>> >>> >>> >>> >>> -Prem >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From havardf at met.no Fri Jan 12 09:13:21 2018 From: havardf at met.no (=?UTF-8?Q?H=C3=A5vard_Alsaker_Futs=C3=A6ter?=) Date: Fri, 12 Jan 2018 10:13:21 +0100 Subject: Strange issue with probes In-Reply-To: References: Message-ID: Hi! I have a problem with backend probes not beeing sent, that seems very similar to what Luca Gervasi has reported(see below). Luca: Did you ever figure out a fix? I run varnish version 5.1.3-1~xenial on ubuntu/xenial with kernel 4.4.0-109-generic. The behaviour I see is that I start up varnish, varnish sends one backend probe for each backend, bringing them up to Healthy status. After that, it seems like there are no more backend probes sent. 
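(One way to double-check this on the wire, independent of varnishlog: run a
packet capture against the backend and watch for the probe requests. The host
name and port below are placeholders and have to match the backend definition
further down, e.g.:

    tcpdump -A -i any 'host foo1 and port 8080'

With a 6 second probe interval there should be a "GET /healthcheck" roughly
every six seconds per backend if the probes are actually being sent.)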
The backend replies correctly when I use curl, and anyway there is not indication that a probe fails. They are simply not sent in the first place, as far as I can tell. Here is an example of my backend configuration: .host = "foo1"; .port = "8080"; .connect_timeout = 0.4s; .first_byte_timeout = 12s; .between_bytes_timeout = 1s; .max_connections = 40; .probe = { .url = "/healthcheck"; .timeout = 1s; .interval = 6s; .window = 5; .threshold = 3; } } My backend.list looks like this: boot.foo1 probe Healthy 4/5 Fri, 12 Jan 2018 08:50:38 GMT boot.foo2 probe Healthy 4/5 Fri, 12 Jan 2018 08:50:38 GMT and here is an extract from backend.list -p: boot.foo1 probe Healthy 4/5 Current states good: 4 threshold: 3 window: 5 Average response time of good probes: 0.002105 Oldest ================================================== Newest --------------------------------------------------------------44 Good IPv4 --------------------------------------------------------------XX Good Xmit --------------------------------------------------------------RR Good Recv ------------------------------------------------------------HHHH Happy And running "varnishlog -g raw -i Backend_health" gives me no results back. I also tested the exact same config on 5.2,1~xenial, but I get the same problem there. I don't really understand what is going on here, or how I should proceed. Any help would be greatly appreciated! Best regards, H?vard Futs?ter 2017-10-19 8:44 GMT+02:00 Luca Gervasi : > Hi, > i have a strange issue where varnish suddenly stops sending probes thus > declaring a backend healthy or sick till a next restart and i'm unable to > determine why. Please note that my backend is able to receive my probes > (and actually receives it), and i'm able to get a response every time i go > with a curl -H "Host: healthcheck" 10.32.161.89/balance_me, so i'll > consider my backend ultimately "good" and "able to respond". > > Thanks a lot for every hint! 
> Luca > > This is my backend configuration: > > probe backend_check { > .request = "GET /balance_me HTTP/1.1" > "Host: healthcheck" > "Connection: close"; > .timeout = 1s; > .interval = 2s; > .window = 5; > .threshold = 2; > } > backend othaph { > .host = "10.32.161.89"; > .port = "80"; > .connect_timeout = 1s; > .first_byte_timeout = 20s; > .between_bytes_timeout = 20s; > .probe = backend_check; > } > > This is my "varnishadm backend.list" > boot.othaph probe Healthy 3/5 > > This is the total log of 20 minutes of "varnishlog -g raw -i > Backend_health" (please note that above it shows 3/5 while i have only 2 > probes sent, apparently) > 0 Backend_health - boot.othaph Back healthy 4--X-RH 2 2 5 > 0.067021 0.033510 HTTP/1.1 200 OK > 0 Backend_health - boot.othaph Still healthy 4--X-RH 3 2 5 > 0.015176 0.027399 HTTP/1.1 200 OK > > And this is my "varnishadm backend.list -p" > Backend name Admin Probe > boot.othaph probe Healthy 3/5 > Current states good: 3 threshold: 2 window: 5 > Average response time of good probes: 0.027399 > Oldest ================================================== Newest > --------------------------------------------------------------44 Good > IPv4 > --------------------------------------------------------------XX Good > Xmit > --------------------------------------------------------------RR Good > Recv > -------------------------------------------------------------HHH Happy > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Fri Jan 12 09:42:01 2018 From: dridi at varni.sh (Dridi Boukelmoune) Date: Fri, 12 Jan 2018 10:42:01 +0100 Subject: Strange issue with probes In-Reply-To: References: Message-ID: Hello H?vard, Luca, On Fri, Jan 12, 2018 at 10:13 AM, H?vard Alsaker Futs?ter wrote: > Hi! I have a problem with backend probes not beeing sent, that seems very > similar to what Luca Gervasi has reported(see below). > Luca: Did you ever figure out a fix? Guillaume brought this to my attention, but I haven't looked at it yet. What's surprising is that I don't recall changes in this area until _after_ the 5.2 release (so no changes for the whole 5.x series). > I run varnish version 5.1.3-1~xenial on ubuntu/xenial with kernel > 4.4.0-109-generic. > > The behaviour I see is that I start up varnish, varnish sends one backend > probe for each backend, bringing them up to Healthy status. After that, it > seems like there are no more backend probes sent. > > The backend replies correctly when I use curl, and anyway there is not > indication that a probe fails. They are simply not sent in the first place, > as far as I can tell. 
> > Here is an example of my backend configuration: > .host = "foo1"; > .port = "8080"; > .connect_timeout = 0.4s; > .first_byte_timeout = 12s; > .between_bytes_timeout = 1s; > .max_connections = 40; > .probe = { > .url = "/healthcheck"; > .timeout = 1s; > .interval = 6s; > .window = 5; > .threshold = 3; > } > } > > My backend.list looks like this: > boot.foo1 probe Healthy 4/5 Fri, 12 Jan 2018 > 08:50:38 GMT > boot.foo2 probe Healthy 4/5 Fri, 12 Jan 2018 > 08:50:38 GMT > > and here is an extract from backend.list -p: > boot.foo1 probe Healthy 4/5 > Current states good: 4 threshold: 3 window: 5 > Average response time of good probes: 0.002105 > Oldest ================================================== Newest > --------------------------------------------------------------44 Good IPv4 > --------------------------------------------------------------XX Good Xmit > --------------------------------------------------------------RR Good Recv > ------------------------------------------------------------HHHH Happy > > > And running "varnishlog -g raw -i Backend_health" gives me no results back. > > I also tested the exact same config on 5.2,1~xenial, but I get the same > problem there. > > I don't really understand what is going on here, or how I should proceed. > Any help would be greatly appreciated! The workaround is to override the probe's status with the `backend.set_health` command if you can't rely on probes (or in general if you wish to rely on external monitoring to change the status of a backend). If any of you two has a github account, please open an issue. Otherwise let me know and I will open one myself. Dridi From havardf at met.no Tue Jan 16 11:00:25 2018 From: havardf at met.no (=?UTF-8?Q?H=C3=A5vard_Alsaker_Futs=C3=A6ter?=) Date: Tue, 16 Jan 2018 12:00:25 +0100 Subject: Strange issue with probes In-Reply-To: References: Message-ID: Hi Dridi! 2018-01-12 10:42 GMT+01:00 Dridi Boukelmoune : > Hello H?vard, Luca, > > On Fri, Jan 12, 2018 at 10:13 AM, H?vard Alsaker Futs?ter > wrote: > > Hi! I have a problem with backend probes not beeing sent, that seems very > > similar to what Luca Gervasi has reported(see below). > > Luca: Did you ever figure out a fix? > > Guillaume brought this to my attention, but I haven't looked at it > yet. What's surprising is that I don't recall changes in this area > until _after_ the 5.2 release (so no changes for the whole 5.x > series). > > > I run varnish version 5.1.3-1~xenial on ubuntu/xenial with kernel > > 4.4.0-109-generic. > > > > The behaviour I see is that I start up varnish, varnish sends one backend > > probe for each backend, bringing them up to Healthy status. After that, > it > > seems like there are no more backend probes sent. > > > > The backend replies correctly when I use curl, and anyway there is not > > indication that a probe fails. They are simply not sent in the first > place, > > as far as I can tell. 
> > > > Here is an example of my backend configuration: > > .host = "foo1"; > > .port = "8080"; > > .connect_timeout = 0.4s; > > .first_byte_timeout = 12s; > > .between_bytes_timeout = 1s; > > .max_connections = 40; > > .probe = { > > .url = "/healthcheck"; > > .timeout = 1s; > > .interval = 6s; > > .window = 5; > > .threshold = 3; > > } > > } > > > > My backend.list looks like this: > > boot.foo1 probe Healthy 4/5 Fri, 12 Jan 2018 > > 08:50:38 GMT > > boot.foo2 probe Healthy 4/5 Fri, 12 Jan 2018 > > 08:50:38 GMT > > > > and here is an extract from backend.list -p: > > boot.foo1 probe Healthy 4/5 > > Current states good: 4 threshold: 3 window: 5 > > Average response time of good probes: 0.002105 > > Oldest ================================================== Newest > > --------------------------------------------------------------44 Good > IPv4 > > --------------------------------------------------------------XX Good > Xmit > > --------------------------------------------------------------RR Good > Recv > > ------------------------------------------------------------HHHH Happy > > > > > > And running "varnishlog -g raw -i Backend_health" gives me no results > back. > > > > I also tested the exact same config on 5.2,1~xenial, but I get the same > > problem there. > > > > I don't really understand what is going on here, or how I should proceed. > > Any help would be greatly appreciated! > > The workaround is to override the probe's status with the > `backend.set_health` command if you can't rely on probes (or in > general if you wish to rely on external monitoring to change the > status of a backend). > > If any of you two has a github account, please open an issue. > Otherwise let me know and I will open one myself. > > Thanks for the response! I have opened an issue about this now. Best regards, H?vard > Dridi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandros.kechagias at gmail.com Tue Jan 23 16:12:07 2018 From: alexandros.kechagias at gmail.com (Alexandros Kechagias) Date: Tue, 23 Jan 2018 17:12:07 +0100 Subject: sqlish queries for cache invalidation? Message-ID: Hi Experts, I am looking for a way to invalidate a cache based on a set of different "keys" that can be combined with each other with operators like "AND", "OR" or "NOT". So I am looking for something that basically behaves like the xkey mod from varnish-modules[1] with the only difference that it can invalidate cache objects in an SQLish way. For example: xkey.NewPurge("userfoo OR projectbar AND NOT instancebaz") Do you know a way i could do that in varnish? I am capable of programming and writing VCL. [1] https://github.com/varnish/varnish-modules/blob/master/src/vmod_xkey.vcc Thanks for your time Alexandros -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Tue Jan 23 17:04:27 2018 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 23 Jan 2018 18:04:27 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: On Tue, Jan 23, 2018 at 5:12 PM, Alexandros Kechagias wrote: > Hi Experts, > I am looking for a way to invalidate a cache based on a set of different > "keys" that can be combined with each other with operators like "AND", "OR" > or "NOT". > > So I am looking for something that basically behaves like the xkey mod from > varnish-modules[1] with the only difference that it can invalidate cache > objects in an SQLish way. 
> > For example: > xkey.NewPurge("userfoo OR projectbar AND NOT instancebaz") > > Do you know a way i could do that in varnish? I am capable of programming > and writing VCL. Hi, Not really an expert, but I'll share my two cents anyway. I would recommend bans if it had all the boolean operations you are looking for, but I think as of today only AND aka && is supported (and NOT semantics by using negative operators like != or !~). You could emulate an OR operator to some extent with regular|expressions but that's limited to one header at a time. That's the closest I'm aware of. Dridi From slink at schokola.de Tue Jan 23 18:43:48 2018 From: slink at schokola.de (Nils Goroll) Date: Tue, 23 Jan 2018 19:43:48 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: Regarding bans: On 23/01/18 18:04, Dridi Boukelmoune wrote: > "keys" that can be combined with each other with operators like "AND", "OR" > or "NOT". OR is just the same as several bans. As any boolean expression can be converted to a disjunction (= OR semantics), the existing '&&' operator with the existing comparison operators * ``==``: ** and ** are equal strings (case sensitive) * ``!=``: ** and ** are unequal strings (case sensitive) * ``~``: ** matches the regular expression ** * ``!~``:** does not match the regular expression ** should suffice to implement arbitrary logic based on * ``req.url``: The request url * ``req.http.*``: Any request header * ``obj.status``: The cache object status * ``obj.http.*``: Any cache object header Nils P.S. On a related issue, I got an open PR to add obj.ttl, obj.age, obj.grace and obj.keep at https://github.com/varnishcache/varnish-cache/pull/2462 From alexandros.kechagias at gmail.com Fri Jan 26 15:12:53 2018 From: alexandros.kechagias at gmail.com (Alexandros Kechagias) Date: Fri, 26 Jan 2018 16:12:53 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: Hi there, thanks for the replies. I see, I didn't give you enough details for you to be able to help me. Sorry for that, I had a little bit of tunnel vision and also the title is not optimal. The reason I am using xkey, is that I am tagging content that comes from the Backend, with tags/names that I use afterwards to invalidate everything that has to do with the tags/names. The problem is that in my scenario the URLs can't always reliably reflect their content. So I decided to use tags, so the backend can tell varnish how to group the content. For Example: 1. User wants the pages called : - mysite.mars/foo - mysite.mars/bar - mysite.mars/baz 2. Varnish asks the backend for the sites and tags them with xkey accordingly to data that comes from the backend through beresp.http.xkey Let's say beresp.http.xkey gives me back the following keys/tags for each site (I will visualize them with brackets) - mysite.mars/foo [project1] [inst6] - mysite.mars/bar [project2] [inst6] - mysite.mars/baz [project2] [inst6] 3. Now I know that the purpose of xkey is that i can say: xkey.purge("inst6") That would delete all the caches. xkey.purge("project2") Would delete the last two. But I have a different problem. I want to be able to delete "inst6" only from "project2". So something like: xkey.purge("project2" && "inst6") would be nice. Has someone an idea how I can solve this problem with already existing varnish modules or with a VCL algorithm of a reasonable complexity? If I can't find anything I would have to write a module myself or maybe look for some different caching technology. 
If you could also drop me a hint there, that would also be nice. Alexandros -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Fri Jan 26 16:07:03 2018 From: dridi at varni.sh (Dridi Boukelmoune) Date: Fri, 26 Jan 2018 17:07:03 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: On Fri, Jan 26, 2018 at 4:12 PM, Alexandros Kechagias wrote: > Hi there, > thanks for the replies. I see, I didn't give you enough details for you to > be able to help me. Sorry for that, I had a little bit of tunnel vision and > also the title is not optimal. > > The reason I am using xkey, is that I am tagging content that comes from the > Backend, with tags/names that I use afterwards to invalidate everything that > has to do with the tags/names. The problem is that in my scenario the URLs > can't always reliably reflect their content. So I decided to use tags, so > the backend can tell varnish how to group the content. > > For Example: > 1. User wants the pages called : > - mysite.mars/foo > - mysite.mars/bar > - mysite.mars/baz > > 2. Varnish asks the backend for the sites and tags them with xkey > accordingly to data that comes from the backend through beresp.http.xkey > Let's say beresp.http.xkey gives me back the following keys/tags for each > site (I will visualize them with brackets) > - mysite.mars/foo [project1] [inst6] > - mysite.mars/bar [project2] [inst6] > - mysite.mars/baz [project2] [inst6] > > 3. Now I know that the purpose of xkey is that i can say: > xkey.purge("inst6") > That would delete all the caches. > xkey.purge("project2") > Would delete the last two. > > But I have a different problem. > I want to be able to delete "inst6" only from "project2". > So something like: > > xkey.purge("project2" && "inst6") > > would be nice. > > Has someone an idea how I can solve this problem with already existing > varnish modules or with a VCL algorithm of a reasonable complexity? > If I can't find anything I would have to write a module myself or maybe look > for some different caching technology. If you could also drop me a hint > there, that would also be nice. > > Alexandros Hi, I actually understood you well, and basically vmod-xkey sits in-between native purge and ban. vmod-xkey invalidates based on criteria (like bans) with purge-like behavior/performance. So if you need complex expressions and are ready to give up the real-time nature of xkey purges, you can reuse whatever headers (xkey supports a couple) that contain your invalidation keys and issue bans instead: ban obj.http.xkey ~ project2 && obj.http.xkey ~ inst6 Like I said, xkey gives up some of the ban flexibility and moves the cursor to real-time processing while bans are deferred by design. What I'm trying to say here is if you need complex invalidation schemes combinating multiple criteria, then you have access to whatever was present when beresp was cached. Today xkey accepts multiple keys at once, but invalidates their union, the opposite of what you want. Dridi From slink at schokola.de Fri Jan 26 17:15:42 2018 From: slink at schokola.de (Nils Goroll) Date: Fri, 26 Jan 2018 18:15:42 +0100 Subject: sqlish queries for cache invalidation? 
In-Reply-To: References: Message-ID: Hi, On 26/01/18 17:07, Dridi Boukelmoune wrote: > So if you need complex expressions and are ready to give up the > real-time nature of xkey purges, you can reuse whatever headers (xkey > supports a couple) that contain your invalidation keys and issue bans > instead: > > ban obj.http.xkey ~ project2 && obj.http.xkey ~ inst6 > > Like I said, xkey gives up some of the ban flexibility and moves the > cursor to real-time processing while bans are deferred by design. Dridi, I completely agree with your response, except for one thing: IMHO, bans are in no way less real time than purges: While the ban *lurker* processing is deferred, actual ban checks at lookup time happen as immediately as whatever purges. Other than that, purges act on objectheads ("cache keys") affect all variants under that cache key. bans are checked per object (variant). Nils From miguel_3_gonzalez at yahoo.es Sat Jan 27 19:37:47 2018 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Sat, 27 Jan 2018 20:37:47 +0100 Subject: meltdown cache encryption Message-ID: <63dbc1a1-0504-bf2a-6e0e-0f21a5957ff2@yahoo.es> Dear all, I received recently an invitation for a webinar from Varnish about cache encryption in Varnish Total Encryption. I am concerned about how Varnish Cache is going to deal with this. Any plan to implement this in the open source version? Are we covered if we use any kind of SSL termination with a SSL proxy? Regards, Miguel --- This email has been checked for viruses by AVG. http://www.avg.com From dridi at varni.sh Mon Jan 29 10:06:35 2018 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 29 Jan 2018 11:06:35 +0100 Subject: meltdown cache encryption In-Reply-To: <63dbc1a1-0504-bf2a-6e0e-0f21a5957ff2@yahoo.es> References: <63dbc1a1-0504-bf2a-6e0e-0f21a5957ff2@yahoo.es> Message-ID: On Sat, Jan 27, 2018 at 8:37 PM, Miguel Gonz?lez wrote: > Dear all, > > I received recently an invitation for a webinar from Varnish about > cache encryption in Varnish Total Encryption. > > I am concerned about how Varnish Cache is going to deal with this. Any > plan to implement this in the open source version? Are we covered if we > use any kind of SSL termination with a SSL proxy? Hi Miguel, There are no plans to open source Varnish Total Encryption, and using HTTPS by the means of a proxy on the same server as Varnish won't help either. To mitigate Meltdown and Spectre, you need an updated kernel and Linux doesn't completely mitigate Spectre yet (a recent GCC release address the second Spectre variant with the "retpoline" patches). You should mostly be worried about Meltdown and Spectre if you are running Varnish on shared machines provided by a hosting company (aka cloud provider). In this case Varnish Total Encryption would make it very hard to read the contents of your cache, but wouldn't protect the rest of your system (any other service running on your virtual machine). If you are caching more than just "public" resources with Varnish, that's a pretty good protection. Dridi From dridi at varni.sh Mon Jan 29 10:17:38 2018 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 29 Jan 2018 11:17:38 +0100 Subject: sqlish queries for cache invalidation? 
In-Reply-To: References: Message-ID: On Fri, Jan 26, 2018 at 6:15 PM, Nils Goroll wrote: > Dridi, I completely agree with your response, except for one thing: IMHO, bans > are in no way less real time than purges: While the ban *lurker* processing is > deferred, actual ban checks at lookup time happen as immediately as whatever purges. What I meant by deferred is that objects don't become candidates for eviction right away, so when storage runs low you may LRU-nuke objects still alive instead of banned objects that haven't been processed yet (lurk-able or not). I think, but like I said I'm no expert in this area. > Other than that, purges act on objectheads ("cache keys") affect all variants > under that cache key. bans are checked per object (variant). Good point, and to clarify xkey operates on objects like bans, so not all variants may be xkey-purged at once. Dridi From alexandros.kechagias at gmail.com Mon Jan 29 10:29:51 2018 From: alexandros.kechagias at gmail.com (Alexandros Kechagias) Date: Mon, 29 Jan 2018 11:29:51 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: > So if you need complex expressions and are ready to give up the > real-time nature of xkey purges, you can reuse whatever headers (xkey > supports a couple) that contain your invalidation keys and issue bans > instead: > > ban obj.http.xkey ~ project2 && obj.http.xkey ~ inst6 That's what i am going to do. > * ``==``: ** and ** are equal strings (case sensitive) > * ``!=``: ** and ** are unequal strings (case sensitive) > * ``~``: ** matches the regular expression ** > * ``!~``:** does not match the regular expression ** > >should suffice to implement arbitrary logic based on > > * ``req.url``: The request url > * ``req.http.*``: Any request header > * ``obj.status``: The cache object status > * ``obj.http.*``: Any cache object header Ok, now i get that too. I think I have to improve my understanding of varnish. I didn't knew that I can access the all the data that xkey uses to tag the caches. Thanks a bunch for your time guys, this really helped. Alexandros -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel_3_gonzalez at yahoo.es Mon Jan 29 17:53:21 2018 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Mon, 29 Jan 2018 18:53:21 +0100 Subject: meltdown cache encryption In-Reply-To: References: <63dbc1a1-0504-bf2a-6e0e-0f21a5957ff2@yahoo.es> Message-ID: > There are no plans to open source Varnish Total Encryption, and using > HTTPS by the means of a proxy on the same server as Varnish won't help > either. To mitigate Meltdown and Spectre, you need an updated kernel > and Linux doesn't completely mitigate Spectre yet (a recent GCC > release address the second Spectre variant with the "retpoline" patches). when is expected those issues are solved? With OS issues mitigated, Varnish would be safe? > > You should mostly be worried about Meltdown and Spectre if you are > running Varnish on shared machines provided by a hosting company (aka > cloud provider). I do myself host several sites, should I be worried then? Thanks for answering! Miguel --- This email has been checked for viruses by AVG. 
http://www.avg.com From dridi at varni.sh Mon Jan 29 18:49:34 2018 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 29 Jan 2018 19:49:34 +0100 Subject: meltdown cache encryption In-Reply-To: References: <63dbc1a1-0504-bf2a-6e0e-0f21a5957ff2@yahoo.es> Message-ID: On Mon, Jan 29, 2018 at 6:53 PM, Miguel Gonz?lez wrote: > >> There are no plans to open source Varnish Total Encryption, and using >> HTTPS by the means of a proxy on the same server as Varnish won't help >> either. To mitigate Meltdown and Spectre, you need an updated kernel >> and Linux doesn't completely mitigate Spectre yet (a recent GCC >> release address the second Spectre variant with the "retpoline" patches). > > when is expected those issues are solved? With OS issues mitigated, > Varnish would be safe? I'm loosely and remotely following what's happening on the Linux side so I may not be up to date but I believe that Meltdown and Spectre variant 1 are fixed/mitigated in latest releases. You should check what your Linux distribution has done in this area, but I believe all major vendors have "kernel" and "microcode" updates ready at this point. In that case I believe Varnish would be safe, except for Spectre variant 2 that I think is almost ready but not there yet. Varnish Total Encryption not only helps mitigate Meltdown and Spectre that could happen on a "neighbor's VM", but goes the extra mile too. >> You should mostly be worried about Meltdown and Spectre if you are >> running Varnish on shared machines provided by a hosting company (aka >> cloud provider). > > I do myself host several sites, should I be worried then? Get in touch with the hosting company, they'll know better than me about their business ;) Dridi From miguel_3_gonzalez at yahoo.es Mon Jan 29 18:59:45 2018 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Mon, 29 Jan 2018 19:59:45 +0100 Subject: meltdown cache encryption In-Reply-To: References: <63dbc1a1-0504-bf2a-6e0e-0f21a5957ff2@yahoo.es> Message-ID: <5e3cc2ff-72e1-879a-a376-b6395481735f@yahoo.es> > > I'm loosely and remotely following what's happening on the Linux side > so I may not be up to date but I believe that Meltdown and Spectre > variant 1 are fixed/mitigated in latest releases. You should check > what your Linux distribution has done in this area, but I believe all > major vendors have "kernel" and "microcode" updates ready at this > point. > > In that case I believe Varnish would be safe, except for Spectre > variant 2 that I think is almost ready but not there yet. Varnish > Total Encryption not only helps mitigate Meltdown and Spectre that > could happen on a "neighbor's VM", but goes the extra mile too. Thanks for the info. > >>> You should mostly be worried about Meltdown and Spectre if you are >>> running Varnish on shared machines provided by a hosting company (aka >>> cloud provider). >> >> I do myself host several sites, should I be worried then? > > Get in touch with the hosting company, they'll know better than me > about their business ;) I mean I have my own VPS running Varnish on a dedicated server I own :) Where you meaning that someone could get information on cloud instances where Varnish is run for several cloud instances? I am not quite grasping what you mean with "neighbor?s VM". Thanks! Miguel --- This email has been checked for viruses by AVG. 
http://www.avg.com From dridi at varni.sh Mon Jan 29 19:22:28 2018 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 29 Jan 2018 20:22:28 +0100 Subject: meltdown cache encryption In-Reply-To: <5e3cc2ff-72e1-879a-a376-b6395481735f@yahoo.es> References: <63dbc1a1-0504-bf2a-6e0e-0f21a5957ff2@yahoo.es> <5e3cc2ff-72e1-879a-a376-b6395481735f@yahoo.es> Message-ID: > I mean I have my own VPS running Varnish on a dedicated server I own :) > Where you meaning that someone could get information on cloud instances > where Varnish is run for several cloud instances? I am not quite > grasping what you mean with "neighbor?s VM". If my understanding is correct, you could read the host memory from a guest VM, so on shared hardware a rogue VM could dump memory from other guests. If you have VPSs, I suppose you should be fine. Your hosting company would know better. Dridi From alexandros.kechagias at gmail.com Tue Jan 30 16:06:44 2018 From: alexandros.kechagias at gmail.com (Alexandros Kechagias) Date: Tue, 30 Jan 2018 17:06:44 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: Hi there, I have a question about your recommendation of banning caches with obj.http.xkey. I am also getting the following error from the compiler: 'obj.http.xkey': Not available in method 'vcl_recv' Also according to the documentation [1] I can only access "obj.*" inside vcl_hit[2] or vcl_deliver[3] and I feel like this is the wrong place to use ban to ban objects. I wanted to add this functionality into vcl_recv like: if (req.method == "BAN") { # ACL purgers go here ban(obj.http.xkey ~ req.http.banrgx); return (synth(200, "Banned !")); } Any suggestions? I use varnish-cache 5.2.1 [1] variables in vcl subroutines : https://book.varnish-software.com/4.0/chapters/VCL_Basics.html#variables-in-vcl-subroutines [2] vcl_hit : https://book.varnish-software.com/4.0/chapters/VCL_Subroutines.html#vcl-vcl-hit [3] vcl_deliver : https://book.varnish-software.com/4.0/chapters/VCL_Subroutines.html#vcl-vcl-deliver Thanks Alexandros From guillaume at varnish-software.com Tue Jan 30 16:10:21 2018 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Tue, 30 Jan 2018 17:10:21 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: ban() takes a string :-) ban("obj.http.xkey ~ " + req.http.banrgx); -- Guillaume Quintard On Tue, Jan 30, 2018 at 5:06 PM, Alexandros Kechagias < alexandros.kechagias at gmail.com> wrote: > Hi there, > I have a question about your recommendation of banning caches with > obj.http.xkey. > > I am also getting the following error from the compiler: > 'obj.http.xkey': Not available in method 'vcl_recv' > > Also according to the documentation [1] I can only access "obj.*" > inside vcl_hit[2] or vcl_deliver[3] > and I feel like this is the wrong place to use ban to ban objects. > > I wanted to add this functionality into vcl_recv like: > > if (req.method == "BAN") { > # ACL purgers go here > ban(obj.http.xkey ~ req.http.banrgx); > return (synth(200, "Banned !")); > } > > Any suggestions? > > I use varnish-cache 5.2.1 > > [1] variables in vcl subroutines : > https://book.varnish-software.com/4.0/chapters/VCL_Basics. 
> html#variables-in-vcl-subroutines > [2] vcl_hit : https://book.varnish-software.com/4.0/chapters/VCL_ > Subroutines.html#vcl-vcl-hit > [3] vcl_deliver : > https://book.varnish-software.com/4.0/chapters/VCL_ > Subroutines.html#vcl-vcl-deliver > > Thanks > Alexandros > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandros.kechagias at gmail.com Tue Jan 30 17:49:03 2018 From: alexandros.kechagias at gmail.com (Alexandros Kechagias) Date: Tue, 30 Jan 2018 18:49:03 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: > ban() takes a string :-) D'oh! Thanks Guillaume, it works now. :-) The documentation says that: "Bans are checked when we hit an object in the cache, but before we deliver it." [1] I guess that's why obj.* is only available in vcl_hit and vcl_deliver, right? [1] http://varnish-cache.org/docs/4.0/users-guide/purging.html#bans From guillaume at varnish-software.com Tue Jan 30 18:08:57 2018 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Tue, 30 Jan 2018 19:08:57 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: Technically, bans are evaluated outside of vcl, after vcl_hash. We grab a tentative obj, then test it, if it matches, we destroy the objcontinue searching, otherwise we go to vcl_hit -- Guillaume Quintard On Jan 30, 2018 18:49, "Alexandros Kechagias" < alexandros.kechagias at gmail.com> wrote: > > ban() takes a string :-) > > D'oh! Thanks Guillaume, it works now. :-) > > The documentation says that: > "Bans are checked when we hit an object in the cache, but before we > deliver it." [1] > I guess that's why obj.* is only available in vcl_hit and vcl_deliver, > right? > > [1] http://varnish-cache.org/docs/4.0/users-guide/purging.html#bans > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandros.kechagias at gmail.com Wed Jan 31 11:53:48 2018 From: alexandros.kechagias at gmail.com (Alexandros Kechagias) Date: Wed, 31 Jan 2018 12:53:48 +0100 Subject: sqlish queries for cache invalidation? In-Reply-To: References: Message-ID: Ok, I see, there's still a lot to learn for me. :-) Thanks for your time Guillaume! 2018-01-30 19:08 GMT+01:00 Guillaume Quintard : > Technically, bans are evaluated outside of vcl, after vcl_hash. We grab a > tentative obj, then test it, if it matches, we destroy the objcontinue > searching, otherwise we go to vcl_hit > > -- > Guillaume Quintard > > On Jan 30, 2018 18:49, "Alexandros Kechagias" > wrote: >> >> > ban() takes a string :-) >> >> D'oh! Thanks Guillaume, it works now. :-) >> >> The documentation says that: >> "Bans are checked when we hit an object in the cache, but before we >> deliver it." [1] >> I guess that's why obj.* is only available in vcl_hit and vcl_deliver, >> right? >> >> [1] http://varnish-cache.org/docs/4.0/users-guide/purging.html#bans From arvind at cs.umn.edu Wed Jan 31 19:46:42 2018 From: arvind at cs.umn.edu (Arvind Narayanan) Date: Wed, 31 Jan 2018 13:46:42 -0600 Subject: Can I separately run VCC Compiler to peek into its output? Message-ID: Hi, I am trying to understand your source code, more specifically in understanding what kind of files are generated by the VCC compiler. 
If I understood correctly, /lib/libvcc/generate.py is used to generate vcl.h
/ vcc_obj.c, etc. Can I separately run the VCC compiler to convert, say, a
sample.vcl to its corresponding varnish-based *.c files?

If not, are there any alternatives for me to understand the internals?

Thanks,
Arvind
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From phk at phk.freebsd.dk  Wed Jan 31 19:55:27 2018
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 31 Jan 2018 19:55:27 +0000
Subject: Can I separately run VCC Compiler to peek into its output?
In-Reply-To:
References:
Message-ID: <88732.1517428527@critter.freebsd.dk>

--------
In message , Arvind Narayanan writes:

>I am trying to understand your source code, more specifically in
>understanding what kind of files are generated by the VCC compiler.

Use the -C argument, and varnishd emits the C source for you to
look at.

It's not so much a compiler as a translator.

Structure-wise it's very simple:

First it converts the source file into a list of tokens.

There's a "half-pass" where any "include filename" constructs
in the token-list get expanded.

And then it walks the list from one end to the other, spitting out
"dot-h" and "dot-c" streams, which are then concatenated and
sent to the C-compiler.

Comments, observations, suggestions and wisdom are most welcome :-)

Poul-Henning

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From arvind at cs.umn.edu  Wed Jan 31 20:48:56 2018
From: arvind at cs.umn.edu (Arvind Narayanan)
Date: Wed, 31 Jan 2018 14:48:56 -0600
Subject: Can I separately run VCC Compiler to peek into its output?
In-Reply-To:
References:
Message-ID:

Thanks Poul & Dridi - works like a charm!

*-----------------*
Arvind Narayanan
Graduate Student & Research Assistant,
Department of Computer Science and Engineering (CS&E)
University of Minnesota, Twin Cities (UMN)
*w:* cs.umn.edu/~arvind/

On Wed, Jan 31, 2018 at 2:10 PM, Dridi Boukelmoune <
dridi at varnish-software.com> wrote:

> varnishd -C
>
> On Jan 31, 2018 20:47, "Arvind Narayanan"  wrote:
>
>> Hi,
>>
>> I am trying to understand your source code, more specifically in
>> understanding what kind of files are generated by the VCC compiler.
>>
>> If I understood correctly, /lib/libvcc/generate.py is used to generate
>> vcl.h / vcc_obj.c, etc. Can I separately run the VCC compiler to convert,
>> say, a sample.vcl to its corresponding varnish-based *.c files?
>>
>> If not, are there any alternatives for me to understand the internals?
>>
>> Thanks,
>> Arvind
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
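As a footnote to this last thread, a minimal sketch of the varnishd -C
workflow described above; the file name, backend address and VCL contents are
only placeholders:

    # contents of /tmp/example.vcl: the version marker plus one placeholder backend
    vcl 4.0;

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

Then translate it without starting the daemon and keep the generated C for
inspection:

    # -C prints the generated C source instead of running varnishd;
    # stdout and stderr are both captured so the output lands in one
    # file regardless of where the version at hand writes it
    varnishd -C -f /tmp/example.vcl > /tmp/example_vgc.c 2>&1

    # one C function per VCL subroutine; the void ...(VRT_CTX) signatures
    # discussed in the first thread of this archive show up here
    grep -n 'VRT_CTX' /tmp/example_vgc.c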