From asbjorn.taugbol at investtech.com Wed Oct 4 10:03:00 2017 From: asbjorn.taugbol at investtech.com (=?utf-8?q?Asbj=c3=b8rn=20Taugb=c3=b8l?=) Date: Wed, 04 Oct 2017 10:03:00 +0000 Subject: Varnish 5 and munin plugin In-Reply-To: References: Message-ID: Has anyone a working setup of the munin varnish plugin for Varnish 5? What steps did you do? Since upgrading to Varnish 5 (running 5.0 atm) the munin plugin stopped working. Server is Ubuntu 16.04. munin-run aborts with an error message. Here is relevant output: root at www8:/etc/munin/plugins# munin-run varnish4_uptime Can't exec "/etc/munin/plugins/varnish4_uptime": No such file or directory at /usr/share/perl5/Munin/Node/Service.pm line 263. # FATAL: Failed to exec. Munin plugin: https://github.com/munin-monitoring/contrib/tree/master/plugins/varnish munin-node config file has this section: [varnish4_*] group varnish env.varnishstat varnishstat root at www8:/etc/munin/plugins# ls -l varnish4_uptime lrwxrwxrwx 1 root root 34 Oct 4 10:12 varnish4_uptime -> /usr/share/munin/plugins/varnish4_ root at www8:/etc/munin/plugins# perl /usr/share/munin/plugins/varnish4_ autoconf yes root at www8:/etc/munin/plugins# perl /usr/share/munin/plugins/varnish4_ config No such aspect Known arguments: suggest, config, autoconf. Run with suggest to get a list of known aspects. root at www8:/etc/munin/plugins# perl /usr/share/munin/plugins/varnish4_ suggest expunge bad request_rate transfer_rates threads backend_traffic uptime memory_usage objects hit_rate Thank you. -Asbjorn -------------- next part -------------- An HTML attachment was scrubbed... URL: From noelle at uni-wuppertal.de Wed Oct 4 10:49:39 2017 From: noelle at uni-wuppertal.de (=?UTF-8?Q?Christian_N=c3=b6lle?=) Date: Wed, 4 Oct 2017 12:49:39 +0200 Subject: Varnish 5 and munin plugin In-Reply-To: References: Message-ID: Am 04.10.2017 um 12:03 schrieb Asbj?rn Taugb?l: > Has anyone a working setup of the munin varnish plugin for Varnish 5? > What steps did you do? See https://github.com/munin-monitoring/contrib/issues/876 and my post to this list in late September. -- -c -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5397 bytes Desc: S/MIME Cryptographic Signature URL: From asbjorn.taugbol at investtech.com Wed Oct 4 11:30:02 2017 From: asbjorn.taugbol at investtech.com (=?utf-8?q?Asbj=c3=b8rn=20Taugb=c3=b8l?=) Date: Wed, 04 Oct 2017 11:30:02 +0000 Subject: Varnish 5 and munin plugin In-Reply-To: References: Message-ID: From: "Christian N?lle" > >Am 04.10.2017 um 12:03 schrieb Asbj?rn Taugb?l: >>Has anyone a working setup of the munin varnish plugin for Varnish 5? >>What steps did you do? > >See https://github.com/munin-monitoring/contrib/issues/876 and my post >to this list in late September. > >-- -c My bad. Spurious char encoding of plugin file. root at www8:/usr/share/munin/plugins# file varnish4_ varnish4_: a /usr/bin/perl script, ASCII text executable, with CRLF line terminators Got rid of the CRLF and it works fine. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Tue Oct 10 21:26:01 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 10 Oct 2017 23:26:01 +0200 Subject: VCDK Message-ID: Dear Varnish community, This is rather for developers, so if you are only using Varnish it will probably not be very interesting... 
So, developers then: today I'm sharing a very recent project to get Varnish projects started differently than how projects usually start. Instead of forking libvmod-example, you can now run a script to generate a working project. With Varnish 5.2.0 the VUT (Varnish UTility) API became public, it allows you to create a log processor similar to varnishlog or varnishncsa but removes most of the boilerplate. Even if you aren't making a log processor, this API can still be useful: it is used in varnishstat too. Now instead of introducing a new vutexample repo you could fork to start your project, I wrote VCDK. Based on my experience that such a utility can often be used in conjunction with a VMOD, VCDK makes it easy to start a project with both VMODs and VUTs (yes, plural): vcdk autotools --vmod=foo,bar --vut=baz myproject It is available on Github, currently super alpha. See the quick tutorial to get a feel of how easy it is to get a project started: https://github.com/dridi/vcdk#varnish-cache-development-kit VCDK is modular, although currently only one plug-in exists. It only supports Varnish 5.2 because a) I don't know when 4.1 will go EOL and b) I started it less than a week ago and went straight from idea to working code. I hope this will convince people to look at the VUT framework and consider writing more log processors instead of doing blocking operations in VCL, and help the ecosystem grow like I believe libvmod-example helped people get started with VMODs. This is how I started myself. About log processors or VUTs, some idiot [1] tried to create a project ready-to-fork similar to libvmod-example, but for all ways to extend Varnish. I can report at least one successful VUT is running in production thanks to that, but can't give any details. That idiot is me, and I was pleased to see this project help others solve a problem using a log processor, but I very much prefer the change of direction I made in VCDK (probably because it brings some shell scripting fun too). If you don't like autotools (or autocrap as PHK calls them;) why not help build a cmake plugin? Not just C projects, there are bindings in other languages like Go or Pythton. If you read this far, please consider running it and see whether it really is portable or only works on my machine. Cheers, Dridi [1] https://github.com/dridi/varnish-template/ From cosimo at streppone.it Wed Oct 11 21:59:26 2017 From: cosimo at streppone.it (Cosimo Streppone) Date: Wed, 11 Oct 2017 23:59:26 +0200 Subject: VCDK In-Reply-To: References: Message-ID: <1507759166.103944.1135799016.4E658FB6@webmail.messagingengine.com> On Tue, Oct 10, 2017, at 23:26, Dridi Boukelmoune wrote: > Dear Varnish community, > > [...] > Instead of forking libvmod-example, you can now run a script to > generate a working project. > > VCDK makes it > easy to start a project with both VMODs and VUTs (yes, plural): > > vcdk autotools --vmod=foo,bar --vut=baz myproject > > It is available on Github, currently super alpha. This is a great step forward for vmod (and vut) development. Well done, Dridi! Admittedly, I always found the whole forking and modifying vmod-example too messy, so much that I tried to build a "vmod-bootstrap" years ago. Lack of time/energy and other priorities made me abandon that particular effort. 
-- Cosimo Streppone cosimo at streppone.it From dridi at varni.sh Thu Oct 12 08:08:15 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 12 Oct 2017 10:08:15 +0200 Subject: VCDK In-Reply-To: <1507759166.103944.1135799016.4E658FB6@webmail.messagingengine.com> References: <1507759166.103944.1135799016.4E658FB6@webmail.messagingengine.com> Message-ID: On Wed, Oct 11, 2017 at 11:59 PM, Cosimo Streppone wrote: [...] > This is a great step forward for vmod (and vut) development. Well done, > Dridi! Thanks! > Admittedly, I always found the whole forking and modifying vmod-example > too messy, so much that I tried to build a "vmod-bootstrap" years ago. Yes, and having a mix of libvmod-example and libvmod-whatever history in my git repo makes me uneasy too. > Lack of time/energy and other priorities made me abandon that particular > effort. I hope you will find some time and energy to give it a try ;) Cheers From pinakee at waltzz.com Thu Oct 12 11:56:09 2017 From: pinakee at waltzz.com (Pinakee BIswas) Date: Thu, 12 Oct 2017 17:26:09 +0530 Subject: Ignore utm_* values with varnish? Message-ID: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> Hi, We are using varnish 4.1.2 for our website caching. We use a bunch of standard query parameters (like utm*) to track the channels for our website visits - this is quite standard in the web world. Can I 'ignore' query string variables before pulling matching objects from the cache, but not actually remove them from the URL to the end-user? For example, all the marketing utm_source, utm_campaign, utm_* values don't change the content of the page, they just vary a lot from campaign to campaign and are used by all of our client-side tracking. So this also means that the URL can't change on the client side, but it should somehow be 'normalized' in the cache. Essentially I want all of these... http://example.com/page/?utm_source=google http://example.com/page/?utm_source=facebook&utm_content=123 http://example.com/page/?utm_campaign=usa ... to all HIT the cache for http://example.com/page/ However, this URL would cause a MISS (because the param is not a utm_* param): http://example.com/page/?utm_source=google&variation=5 Would trigger the cache for http://example.com/page/?variation=5 Also, keeping in mind that the URL the user sees must remain the same, I can't redirect to something without params or any kind of solution like that. Would appreciate if you could help me with the above to increase the performance of our site. Thanks, Pinakee -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Thu Oct 12 12:02:04 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 12 Oct 2017 14:02:04 +0200 Subject: Ignore utm_* values with varnish? In-Reply-To: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> References: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> Message-ID: On Thu, Oct 12, 2017 at 1:56 PM, Pinakee BIswas wrote: > Hi, > > We are using varnish 4.1.2 for our website caching. We use a bunch of standard > query parameters (like utm*) to track the channels for our website visits - > this is quite standard in the web world. You should upgrade right away to 4.1.8: https://varnish-cache.org/security/VSV00001.html > Can I 'ignore' query string variables before pulling matching objects from > the cache, but not actually remove them from the URL to the end-user?
> > For example, all the marketing utm_source, utm_campaign, utm_* values don't > change the content of the page, they just vary a lot from campaign to > campaign and are used by all of our client-side tracking. > > So this also means that the URL can't change on the client side, but it > should somehow be 'normalized' in the cache. > > Essentially I want all of these... > > http://example.com/page/?utm_source=google > > http://example.com/page/?utm_source=facebook&utm_content=123 > > http://example.com/page/?utm_campaign=usa > > ... to all HIT the cache for http://example.com/page/ > > However, this URL would cause a MISS (because the param is not a utm_* > param) > > http://example.com/page/?utm_source=google&variation=5 > > Would trigger the cache for > > http://example.com/page/?variation=5 > > Also, keeping in mind that the URL the user sees must remain the same, I > can't redirect to something without params or any kind of solution like > that. > > Would appreciate if you could help me with the above to increase the > performance of our site. May I suggest vmod-querystring? https://github.com/Dridi/libvmod-querystring#vmod-querystring Dridi From mattias at nucleus.be Thu Oct 12 12:10:49 2017 From: mattias at nucleus.be (Mattias Geniar) Date: Thu, 12 Oct 2017 12:10:49 +0000 Subject: Ignore utm_* values with varnish? In-Reply-To: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> References: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> Message-ID: <1E22D520-5892-4DE5-A5C8-30D78A3622C6@nucleus.be> > Can I 'ignore' query string variables before pulling matching objects from the cache, but not actually remove them from the URL to the end-user? The quickest 'hack' is to strip those parameters from the req.url; for a copy/paste-able example, please see here: https://github.com/mattiasgeniar/varnish-4.0-configuration-templates/blob/master/default.vcl#L111-L115 Mattias From dridi at varni.sh Thu Oct 12 12:21:18 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 12 Oct 2017 14:21:18 +0200 Subject: Ignore utm_* values with varnish? In-Reply-To: <1E22D520-5892-4DE5-A5C8-30D78A3622C6@nucleus.be> References: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> <1E22D520-5892-4DE5-A5C8-30D78A3622C6@nucleus.be> Message-ID: On Thu, Oct 12, 2017 at 2:10 PM, Mattias Geniar wrote: >> Can I 'ignore' query string variables before pulling matching objects from the cache, but not actually remove them from the URL to the end-user? > > The quickest 'hack' is to strip those parameters from the req.url; for a copy/paste-able example, please see here: https://github.com/mattiasgeniar/varnish-4.0-configuration-templates/blob/master/default.vcl#L111-L115 You can indeed do it in pure VCL, but for long URLs it also means a lot more workspace consumption. If you want to increase your performance even further, vmod-querystring can sort too (if appropriate). The difference with std.querysort is that a vmod-querystring filter will both sanitize your URL and do the sorting with the same memory footprint: no extra cost (except CPU time obviously) comes from the sort operation. Another interesting feature is the ability to whitelist query-params instead: this way you only retain what your application needs and don't have to care when the next campaign doesn't use Google Analytics' utm_* parameters; they will be filtered out already.
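Spelled out, the pure-VCL stripping approach boils down to something like the sketch below. It is only a rough illustration, not a copy of the linked template: the utm_* pattern is a simplification, and the template enumerates the tracking parameters explicitly, so adapt it before using it. Rewriting req.url in vcl_recv only changes Varnish's own copy of the URL (which the backend also sees on a miss); the URL in the visitor's browser stays untouched.

sub vcl_recv {
    # Collapse all utm_* variants onto the same cache key by removing
    # the tracking parameters before the lookup. Order matters: strip
    # "&utm_..." first, then "?utm_..." followed by more parameters,
    # then a lone "?utm_..." at the end of the URL.
    if (req.url ~ "[?&]utm_") {
        set req.url = regsuball(req.url, "&utm_[^&]*", "");
        set req.url = regsuball(req.url, "\?utm_[^&]*&", "?");
        set req.url = regsuball(req.url, "\?utm_[^&]*$", "");
    }
}

With that in place, /page/?utm_source=google and /page/?utm_campaign=usa both look up /page/, while /page/?utm_source=google&variation=5 looks up /page/?variation=5.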
Cheers From mattias at nucleus.be Thu Oct 12 12:31:28 2017 From: mattias at nucleus.be (Mattias Geniar) Date: Thu, 12 Oct 2017 12:31:28 +0000 Subject: Ignore utm_* values with varnish? In-Reply-To: References: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> <1E22D520-5892-4DE5-A5C8-30D78A3622C6@nucleus.be> Message-ID: > You can indeed do it in pure VCL, but for long URLs it also means a > lot more workspace consumption. Oh absolutely, long-term vmod?s are the way to go, but depending on the server setup, those can be cumbersome to install & get going since they get compiled from source. Not always convenient on servers. If Pinakee is looking for a stable, supported solution, vmod?s should definitely be on top of his list. Mattias From pinakee at waltzz.com Thu Oct 12 12:41:06 2017 From: pinakee at waltzz.com (Pinakee BIswas) Date: Thu, 12 Oct 2017 18:11:06 +0530 Subject: Ignore utm_* values with varnish? In-Reply-To: References: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> <1E22D520-5892-4DE5-A5C8-30D78A3622C6@nucleus.be> Message-ID: <94ac537a-4076-91a6-13b2-ea87bc9c974e@waltzz.com> Thanks for all the insights. I am not familiar with complexity of the setup of vmod. Would have to look into it and then take a call in terms of our project planning. Would have to gaze into all the features provided by vmod and take a call based on effort vs the need/what we could get out of vmod. But certainly stable and supported solution would be the way to go for the long term. Thanks, Pinakee On 12/10/17 6:01 pm, Mattias Geniar wrote: >> You can indeed do it in pure VCL, but for long URLs it also means a >> lot more workspace consumption. > Oh absolutely, long-term vmod?s are the way to go, but depending > on the server setup, those can be cumbersome to install & get going > since they get compiled from source. Not always convenient on > servers. > > If Pinakee is looking for a stable, supported solution, vmod?s should > definitely be on top of his list. > > Mattias > From dridi at varni.sh Thu Oct 12 12:54:06 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 12 Oct 2017 14:54:06 +0200 Subject: Ignore utm_* values with varnish? In-Reply-To: References: <5164e138-21eb-df3d-4121-1171f2ee2342@waltzz.com> <1E22D520-5892-4DE5-A5C8-30D78A3622C6@nucleus.be> Message-ID: On Thu, Oct 12, 2017 at 2:31 PM, Mattias Geniar wrote: >> You can indeed do it in pure VCL, but for long URLs it also means a >> lot more workspace consumption. > > Oh absolutely, long-term vmod?s are the way to go, but depending > on the server setup, those can be cumbersome to install & get going > since they get compiled from source. Not always convenient on > servers. Yeah, if it's not already available in repositories, some will give up. I added rpm and dpkg (experimental) packaging to vmod-querystring to help a bit, but you still have to build it from source... I'm not hosting apt or yum repositories myself (and we still have an open question regarding vmod packaging and varnish upgrades). > If Pinakee is looking for a stable, supported solution, vmod?s should > definitely be on top of his list. > > Mattias Cheers From pinakee at waltzz.com Fri Oct 13 12:19:58 2017 From: pinakee at waltzz.com (Pinakee BIswas) Date: Fri, 13 Oct 2017 17:49:58 +0530 Subject: Optimal way to handle Vary on User-Agent Message-ID: Hi, We are using Varnish 4.1.8 to cache and speed up our web delivery. Our page content varies based on following 3 device types: 1. Desktop 2. Mobile (doesn't matter whether android, iOS or windows) 3. 
Tablet As universally known, varying on User-Agent is quite bad for caching as there would be thousands of UAs. Hence, we would like to know a solution based on the following, if possible in Varnish: 1. Normalize the User-Agent header to only the 3 types mentioned above OR 2. Add a new normalized HTTP header for the above 3 types. Our backend would then Vary on the new HTTP header. Would appreciate it if a solution could be provided for the above. Thanks, Pinakee -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Fri Oct 13 12:31:19 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 13 Oct 2017 14:31:19 +0200 Subject: Optimal way to handle Vary on User-Agent In-Reply-To: References: Message-ID: The user-agent is an abomination and needs to die. Now, with that being said, I would rewrite the UA header, just to make sure the backend doesn't use the original information. You can have a look at https://github.com/varnishcache/varnish-devicedetect for a vcl solution. Other solutions exist, obviously. -- Guillaume Quintard On Fri, Oct 13, 2017 at 2:19 PM, Pinakee BIswas wrote: > Hi, > > We are using Varnish 4.1.8 to cache and speed up our web delivery. Our > page content varies based on following 3 device types: > > 1. Desktop > 2. Mobile (doesn't matter whether android, iOS or windows) > 3. Tablet > > As universally known, varying on User-Agent is quite bad for caching as > there would be thousands of UAs. Hence, we would like to know a solution based > on the following, if possible in Varnish: > > 1. Normalize the User-Agent header to only the 3 types mentioned above OR > 2. Add a new normalized HTTP header for the above 3 types. Our backend > would then Vary on the new HTTP header. > > Would appreciate it if a solution could be provided for the above. > > Thanks, > > Pinakee > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugues at betabrand.com Wed Oct 18 00:27:33 2017 From: hugues at betabrand.com (Hugues Alary) Date: Tue, 17 Oct 2017 17:27:33 -0700 Subject: Gracefully stopping varnish Message-ID: Hi there, I've been looking around and I can't find a documented way of gracefully shutting down varnishd, and by gracefully I mean tell varnish "stop accepting connections, but finish what you were doing, then shutdown". I did find something in the "first varnish design notes" ( https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which seemed to indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" but KILL doesn't seem to work, and TERM, well... terminates but not gracefully. I also tried using "varnishadm stop", which also doesn't gracefully stop connections. Is there any way to achieve this? Thanks! -Hugues -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark.staudinger at nyi.net Wed Oct 18 02:17:45 2017 From: mark.staudinger at nyi.net (Mark Staudinger) Date: Tue, 17 Oct 2017 22:17:45 -0400 Subject: Repeated panic in obj_getmethods() Message-ID: Hi Folks, I've seen this panic recently, twice, on two companion servers running Varnish-4.1.8 on FreeBSD-11.0 % varnishd -V varnishd (varnish-4.1.8 revision d266ac5c6) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS % uname -a FreeBSD hostname 11.0-RELEASE-p2 FreeBSD 11.0-RELEASE-p2 #0: Mon Oct 24 06:55:27 UTC 2016 root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 Unfortunately I do not have the full backtrace, but here's what I do have. Oct 16 12:24:47 hostname varnishd[50931]: Child (50932) Last panic at: Mon, 16 Oct 2017 12:24:47 GMT "Assert error in obj_getmethods(), cache/cache_obj.c line 55: Condition((oc->stobj->stevedore) != NULL) not true. thread = (cache-worker) version = varnish-4.1.8 revision d266ac5c6 ident = FreeBSD,11.0-RELEASE-p2,amd64,-junix,-sfile,-smalloc,-sfile,-hcritbit,kqueue now = 3794380.754560 (mono), 1508156686.857677 (real) Backtrace: 0x433a38: varnishd 0x431821: varnishd 0x431f62: varnishd 0x425f9d: varnishd 0x41eb0c: varnishd 0x420d51: varnishd 0x41e8db: varnishd 0x41e36a: varnishd 0x426155: varnishd busyobj = 0xbf88dbbb60 { ws = 0xbf88dbbbf8 { id = \"bo\", {s,f,r,e} = {0xbf88dbdab0,+4712,0x0,+57480}, }, refcnt = 2, retries = 0, failed = 1, state = 1, flags = {do_esi, is_gzip}, http_conn = 0xbf88dbde30 { fd = 153, doclose = RX_BODY, ws = 0xbf88dbbbf8, {rxbuf_b, rxbuf_e} = {0xbf88dbdee0, 0xbf88dbe134}, {pipeline_b, pipeline_e} = {0xbf88dbe134, 0xbf88dbea65}, Oct 16 12:24:47 hostname kernel: xbf88dbea65}, Varnishd process uptime was near-identical on both servers, and the panics occurred at around the same time on both machines, which could potentially indicate that the panic was caused either by a particular request, and/or some resource-related issue. Time between panics was approximately 19 days. I would welcome any advice about known possible causes for this particular assertion failing! Best Regards, Mark Staudinger From guillaume at varnish-software.com Wed Oct 18 06:51:29 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 18 Oct 2017 08:51:29 +0200 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: Hi, That's not possible. However, what you really want, I think, is not sending new requests to Varnish. That's usually done at the loa-bbalancing level. If your LB use probes, you can tell Varnish to stop honoring them, drain the connections, then kill it. -- Guillaume Quintard On Oct 18, 2017 02:28, "Hugues Alary" wrote: > Hi there, > > I've been looking around and I can't find a documented way of gracefully > shutting down varnishd, and by gracefully I mean tell varnish "stop > accepting connections, but finish what you were doing, then shutdown". > > I did find something in the "first varnish design notes" ( > https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which seemed to > indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" but KILL > doesn't seem to work, and TERM, well... terminates but not gracefully. > > I also tried using "varnishadm stop", which also doesn't gracefully stops > connection. > > Is there anyway to achieve this? > > Thanks! 
> -Hugues > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hermunn at varnish-software.com Wed Oct 18 09:34:40 2017 From: hermunn at varnish-software.com (=?UTF-8?Q?P=C3=A5l_Hermunn_Johansen?=) Date: Wed, 18 Oct 2017 11:34:40 +0200 Subject: Repeated panic in obj_getmethods() In-Reply-To: References: Message-ID: Hello Mark, Can you include a list of VMODs you are using? Also, did you change any of the parameters from the default? The last question can be answered by running varnishadm param.show Best, P?l 2017-10-18 4:17 GMT+02:00 Mark Staudinger : > Hi Folks, > > I've seen this panic recently, twice, on two companion servers running > Varnish-4.1.8 on FreeBSD-11.0 > > % varnishd -V > varnishd (varnish-4.1.8 revision d266ac5c6) > Copyright (c) 2006 Verdens Gang AS > Copyright (c) 2006-2015 Varnish Software AS > > % uname -a > FreeBSD hostname 11.0-RELEASE-p2 FreeBSD 11.0-RELEASE-p2 #0: Mon Oct 24 > 06:55:27 UTC 2016 > root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 > > Unfortunately I do not have the full backtrace, but here's what I do have. > > Oct 16 12:24:47 hostname varnishd[50931]: Child (50932) Last panic at: Mon, > 16 Oct 2017 12:24:47 GMT "Assert error in obj_getmethods(), > cache/cache_obj.c line 55: Condition((oc->stobj->stevedore) != NULL) not > true. thread = (cache-worker) version = varnish-4.1.8 revision d266ac5c6 > ident = > FreeBSD,11.0-RELEASE-p2,amd64,-junix,-sfile,-smalloc,-sfile,-hcritbit,kqueue > now = 3794380.754560 (mono), 1508156686.857677 (real) Backtrace: 0x433a38: > varnishd 0x431821: varnishd 0x431f62: varnishd 0x425f9d: varnishd > 0x41eb0c: varnishd 0x420d51: varnishd 0x41e8db: varnishd 0x41e36a: > varnishd 0x426155: varnishd busyobj = 0xbf88dbbb60 { ws = 0xbf88dbbbf8 { > id = \"bo\", {s,f,r,e} = {0xbf88dbdab0,+4712,0x0,+57480}, }, refcnt > = 2, retries = 0, failed = 1, state = 1, flags = {do_esi, is_gzip}, > http_conn = 0xbf88dbde30 { fd = 153, doclose = RX_BODY, ws = > 0xbf88dbbbf8, {rxbuf_b, rxbuf_e} = {0xbf88dbdee0, 0xbf88dbe134}, > {pipeline_b, pipeline_e} = {0xbf88dbe134, 0xbf88dbea65}, > Oct 16 12:24:47 hostname kernel: xbf88dbea65}, > > Varnishd process uptime was near-identical on both servers, and the panics > occurred at around the same time on both machines, which could potentially > indicate that the panic was caused either by a particular request, and/or > some resource-related issue. Time between panics was approximately 19 days. > > I would welcome any advice about known possible causes for this particular > assertion failing! > > Best Regards, > Mark Staudinger > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From admin at beckspaced.com Wed Oct 18 09:59:24 2017 From: admin at beckspaced.com (Admin Beckspaced) Date: Wed, 18 Oct 2017 11:59:24 +0200 Subject: Hitch SSL chain issues with Google Chrome Message-ID: <565873ef-1dd4-3181-4140-bfab0d9905a0@beckspaced.com> Hello there, I use hitch as an SSL terminator in front of varnish. 
I get my SSL certificates via letsencrypt this is what i get via the letsencrypt ACME client cert-1504079018.csr cert-1504079018.pem cert.csr -> cert-1504079018.csr cert-1504079018.pem chain-1504079018.pem chain.pem -> chain-1504079018.pem fullchain-1504079018.pem fullchain.pem -> fullchain-1504079018.pem privkey-1504079018.pem privkey.pem -> privkey-1504079018.pem to prepare the certificates for hitch I run a small script which merges the certificates into 1 file:

#!/bin/bash
for d in /etc/dehydrated/certs/*; do
  if [ -d "$d" ]; then
    # echo "$d"
    cat "$d"/cert.pem "$d"/privkey.pem "$d"/chain.pem "$d"/fullchain.pem > /etc/hitch/certs/$(basename "$d").pem
  fi
done

then in hitch config I reference the .pem file pem-file = "/etc/hitch/certs/physiotherapie-neustadt-aisch.de.pem" so ... if i open the website in firefox all is fine https://physiotherapie-neustadt-aisch.de/ if I open in Google Chrome it's not working. So i did a bit of search on google and found out it's a chain issue and chrome seems to be a bit more sensitive than firefox https://www.ssllabs.com/ssltest/analyze.html?d=physiotherapie-neustadt-aisch.de on ssllabs.com it also states chain issues, incorrect order, extra certs ... how would i fix this? I assume it has something to do with the way I merge the certificates into 1 .pem file any help would be awesome ;) thanks & greetings becki -- Beckspaced - Server Administration ------------------------------------------------ Ralf Flederer Marienplatz 9 97353 Wiesentheid Tel.: 09383-9033825 Mobil: 01577-7258912 Internet: www.beckspaced.com ------------------------------------------------ From A.Hongens at netmatch.nl Wed Oct 18 10:54:21 2017 From: A.Hongens at netmatch.nl (=?utf-8?B?QW5nZWxvIEjDtm5nZW5z?=) Date: Wed, 18 Oct 2017 10:54:21 +0000 Subject: Hitch SSL chain issues with Google Chrome In-Reply-To: <565873ef-1dd4-3181-4140-bfab0d9905a0@beckspaced.com> References: <565873ef-1dd4-3181-4140-bfab0d9905a0@beckspaced.com> Message-ID: <186336c0af274a8e8c9d235c5ca4e86b@netmatch.nl> Just do cert + chain + privkey, in that order. -- With kind regards, Angelo Höngens Systems Administrator ------------------------------------------ NetMatch travel technology solutions Professor Donderstraat 46 5017 HL Tilburg T: +31 (0)13 5811088 F: +31 (0)13 5821239 mailto:A.Hongens at netmatch.nl http://www.netmatch.nl ------------------------------------------ Disclaimer Deze e-mail is vertrouwelijk en uitsluitend bedoeld voor geadresseerde(n) en de organisatie van geadresseerde(n) en mag niet openbaar worden gemaakt aan derde partijen This e-mail is confidential and may not be disclosed to third parties since this e-mail is only intended for the addressee and the organization the addressee represents. -----Original Message----- From: varnish-misc [mailto:varnish-misc-bounces+a.hongens=netmatch.nl at varnish-cache.org] On Behalf Of Admin Beckspaced Sent: Wednesday, 18 October, 2017 11:59 To: varnish-misc at varnish-cache.org Subject: Hitch SSL chain issues with Google Chrome Hello there, I use hitch as an SSL terminator in front of varnish. 
I get my SSL certificates via letsencrypt this is what i get via the letsencrypt ACME client cert-1504079018.csr cert-1504079018.pem cert.csr -> cert-1504079018.csr cert-1504079018.pem chain-1504079018.pem chain.pem -> chain-1504079018.pem fullchain-1504079018.pem fullchain.pem -> fullchain-1504079018.pem privkey-1504079018.pem privkey.pem -> privkey-1504079018.pem to prepare the certificates for hitch I run a small script which merges the certificates into 1 file #!/bin/bash for d in /etc/dehydrated/certs/*; do ? if [ -d "$d" ]; then ??? # echo "$d" ??? cat "$d"/cert.pem "$d"/privkey.pem "$d"/chain.pem "$d"/fullchain.pem > /etc/hitch/certs/$(basename "$d").pem ? fi done then in hitch config I reference the .pem file pem-file = "/etc/hitch/certs/physiotherapie-neustadt-aisch.de.pem" so ... if i open the website in firefox all is fine https://physiotherapie-neustadt-aisch.de/ if I open in Google Chrome it's not working. So i did a bit of search on google and found out it's a chain issue and chrome seems to be a bit more sensitive than firefox https://www.ssllabs.com/ssltest/analyze.html?d=physiotherapie-neustadt-aisch.de on ssllabs.com it also states chain issues, incorrect order, extra certs ... how would i fix this? I assume it has something to do with the way I merge the certificates into 1 .pem file any help would be awesome ;) thanks & greetings becki -- Beckspaced - Server Administration ------------------------------------------------ Ralf Flederer Marienplatz 9 97353 Wiesentheid Tel.: 09383-9033825 Mobil: 01577-7258912 Internet: www.beckspaced.com ------------------------------------------------ _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From mark.staudinger at nyi.net Wed Oct 18 14:06:05 2017 From: mark.staudinger at nyi.net (Mark Staudinger) Date: Wed, 18 Oct 2017 10:06:05 -0400 Subject: Repeated panic in obj_getmethods() In-Reply-To: References: Message-ID: Hi P?l, Sure - the non-standard parameters here: % echo 'param.show' | varnishadm|grep -v '(default)' 200 accept_filter off [bool] gzip_level 8 gzip_memlevel 6 max_restarts 2 [restarts] max_retries 0 [retries] thread_pool_max 350 [threads] thread_pool_min 225 [threads] thread_pools 12 [pools] vsl_space 250M [bytes] vsm_space 4M [bytes] VMODs in use are all sourced from varnish-modules-0.9.1_1: import std; import directors; import softpurge; I will have to scrutinize the paths, but I'm 99% certain that softpurge is not being called. Cheers, -Mark On Wed, 18 Oct 2017 05:34:40 -0400, P?l Hermunn Johansen wrote: > Hello Mark, > > Can you include a list of VMODs you are using? Also, did you change > any of the parameters from the default? The last question can be > answered by running > > varnishadm param.show > > Best, > P?l > > > 2017-10-18 4:17 GMT+02:00 Mark Staudinger : >> Hi Folks, >> >> I've seen this panic recently, twice, on two companion servers running >> Varnish-4.1.8 on FreeBSD-11.0 >> >> % varnishd -V >> varnishd (varnish-4.1.8 revision d266ac5c6) >> Copyright (c) 2006 Verdens Gang AS >> Copyright (c) 2006-2015 Varnish Software AS >> >> % uname -a >> FreeBSD hostname 11.0-RELEASE-p2 FreeBSD 11.0-RELEASE-p2 #0: Mon Oct 24 >> 06:55:27 UTC 2016 >> root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 >> >> Unfortunately I do not have the full backtrace, but here's what I do >> have. 
>> >> Oct 16 12:24:47 hostname varnishd[50931]: Child (50932) Last panic at: >> Mon, >> 16 Oct 2017 12:24:47 GMT "Assert error in obj_getmethods(), >> cache/cache_obj.c line 55: Condition((oc->stobj->stevedore) != NULL) >> not >> true. thread = (cache-worker) version = varnish-4.1.8 revision d266ac5c6 >> ident = >> FreeBSD,11.0-RELEASE-p2,amd64,-junix,-sfile,-smalloc,-sfile,-hcritbit,kqueue >> now = 3794380.754560 (mono), 1508156686.857677 (real) Backtrace: >> 0x433a38: >> varnishd 0x431821: varnishd 0x431f62: varnishd 0x425f9d: varnishd >> 0x41eb0c: varnishd 0x420d51: varnishd 0x41e8db: varnishd 0x41e36a: >> varnishd 0x426155: varnishd busyobj = 0xbf88dbbb60 { ws = >> 0xbf88dbbbf8 { >> id = \"bo\", {s,f,r,e} = {0xbf88dbdab0,+4712,0x0,+57480}, }, >> refcnt >> = 2, retries = 0, failed = 1, state = 1, flags = {do_esi, is_gzip}, >> http_conn = 0xbf88dbde30 { fd = 153, doclose = RX_BODY, ws = >> 0xbf88dbbbf8, {rxbuf_b, rxbuf_e} = {0xbf88dbdee0, 0xbf88dbe134}, >> {pipeline_b, pipeline_e} = {0xbf88dbe134, 0xbf88dbea65}, >> Oct 16 12:24:47 hostname kernel: xbf88dbea65}, >> >> Varnishd process uptime was near-identical on both servers, and the >> panics >> occurred at around the same time on both machines, which could >> potentially >> indicate that the panic was caused either by a particular request, >> and/or >> some resource-related issue. Time between panics was approximately 19 >> days. >> >> I would welcome any advice about known possible causes for this >> particular >> assertion failing! >> >> Best Regards, >> Mark Staudinger >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From admin at beckspaced.com Wed Oct 18 15:13:00 2017 From: admin at beckspaced.com (Admin Beckspaced) Date: Wed, 18 Oct 2017 17:13:00 +0200 Subject: Hitch SSL chain issues with Google Chrome In-Reply-To: <186336c0af274a8e8c9d235c5ca4e86b@netmatch.nl> References: <565873ef-1dd4-3181-4140-bfab0d9905a0@beckspaced.com> <186336c0af274a8e8c9d235c5ca4e86b@netmatch.nl> Message-ID: <92e17dc7-c654-d4cf-4a99-9f11304c9a78@beckspaced.com> On 18.10.2017 12:54, Angelo H?ngens wrote: > Just do cert + chain + privkey, in that order. > Thanks ;) re-merging the certs in that order solved the issue. Greetings Becki From colas.delmas at gmail.com Wed Oct 18 16:15:54 2017 From: colas.delmas at gmail.com (Nicolas Delmas) Date: Wed, 18 Oct 2017 18:15:54 +0200 Subject: Hitch SSL chain issues with Google Chrome In-Reply-To: <92e17dc7-c654-d4cf-4a99-9f11304c9a78@beckspaced.com> References: <565873ef-1dd4-3181-4140-bfab0d9905a0@beckspaced.com> <186336c0af274a8e8c9d235c5ca4e86b@netmatch.nl> <92e17dc7-c654-d4cf-4a99-9f11304c9a78@beckspaced.com> Message-ID: Hello, I'm surprising, that we need to keep an order to merge all files. In my case I contact like this and never get a problem : cat /etc/letsencrypt/live/example.org/privkey.pem \ /etc/letsencrypt/live/example.org/fullchain.pem \ /etc/ssl/certs/dhparam.pem \ > /etc/hitch/example.org.pem chmod 0600 /etc/hitch/example.org.pem I think it was because you tried to merge the chain and fullchain *Nicolas Delmas* http://tutoandco.colas-delmas.fr/ 2017-10-18 17:13 GMT+02:00 Admin Beckspaced : > > On 18.10.2017 12:54, Angelo H?ngens wrote: > >> Just do cert + chain + privkey, in that order. >> >> Thanks ;) > > re-merging the certs in that order solved the issue. 
> > Greetings > Becki > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugues at betabrand.com Wed Oct 18 18:22:48 2017 From: hugues at betabrand.com (Hugues Alary) Date: Wed, 18 Oct 2017 11:22:48 -0700 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: Hi, That is indeed what I want to do. Draining connections at the LB level just involves more work ;). For context: my entire stack runs on a kubernetes cluster. I sometime need to replace a running instance of a server process [be it nginx apache php-fpm varnish whatever] with a new instance. Taking Apache as an example, I simply create the new apache instance (a new pod in kubernetes speak), which immediately starts getting some traffic (without changing the load balancer configuration at all, a kubernetes "service" automatically detects new pods), then I gracefully shutdown the old instance (kubernetes actually automatically tells the pod to shutdown), by issuing a "apachectl -k graceful-stop" (kubernetes is configured to issue this command for me), which instructs apache to stop accepting connections, finish, then shutdown. It's really great because instead of having to push a new config refusing probes and reload it, I (/kubernetes) simply gracefully stops apache and the traffic flows to the new instance. nginx and php-fpm also handle things this way. At any rate, thanks for the advice, I will start using probes! Cheers, -Hugues On Tue, Oct 17, 2017 at 11:51 PM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > Hi, > > That's not possible. However, what you really want, I think, is not > sending new requests to Varnish. That's usually done at the loa-bbalancing > level. If your LB use probes, you can tell Varnish to stop honoring them, > drain the connections, then kill it. > > -- > Guillaume Quintard > > On Oct 18, 2017 02:28, "Hugues Alary" wrote: > >> Hi there, >> >> I've been looking around and I can't find a documented way of gracefully >> shutting down varnishd, and by gracefully I mean tell varnish "stop >> accepting connections, but finish what you were doing, then shutdown". >> >> I did find something in the "first varnish design notes" ( >> https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which seemed to >> indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" but KILL >> doesn't seem to work, and TERM, well... terminates but not gracefully. >> >> I also tried using "varnishadm stop", which also doesn't gracefully stops >> connection. >> >> Is there anyway to achieve this? >> >> Thanks! >> -Hugues >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Wed Oct 18 19:01:33 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 18 Oct 2017 21:01:33 +0200 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: Hello, So, there must be a misconception on my, because to me, just refusing new connections isn't graceful. If the LB is sending you new connections, you should honor them. 
So, from what I get, k8s stops sending new connections to Varnish, and what you want is is for Varnish to shutdown once there's no more active connections, which is, I would argue, kinda different from refusing new connections. But, let's stop being a pompous jerk obsessed with semantics and let's try to work on a solution. >From what I get, we can kill varnish as soon as the number of active connections drops to 0, so your apachectl command would be equal to this in Varnish language: if [ `varnishstat -1 | awk '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END {print a}'` != 0 ]; then sleep 1 fi killall varnishd There's probably a shorter version ,but that's the gist of it. Would that do? -- Guillaume Quintard On Wed, Oct 18, 2017 at 8:22 PM, Hugues Alary wrote: > Hi, > > That is indeed what I want to do. Draining connections at the LB level > just involves more work ;). > > For context: my entire stack runs on a kubernetes cluster. > > I sometime need to replace a running instance of a server process [be it > nginx apache php-fpm varnish whatever] with a new instance. > > Taking Apache as an example, I simply create the new apache instance (a > new pod in kubernetes speak), which immediately starts getting some traffic > (without changing the load balancer configuration at all, a kubernetes > "service" automatically detects new pods), then I gracefully shutdown the > old instance (kubernetes actually automatically tells the pod to shutdown), > by issuing a "apachectl -k graceful-stop" (kubernetes is configured to > issue this command for me), which instructs apache to stop accepting > connections, finish, then shutdown. > > It's really great because instead of having to push a new config refusing > probes and reload it, I (/kubernetes) simply gracefully stops apache and > the traffic flows to the new instance. nginx and php-fpm also handle things > this way. > > At any rate, thanks for the advice, I will start using probes! > > Cheers, > -Hugues > > > > On Tue, Oct 17, 2017 at 11:51 PM, Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> Hi, >> >> That's not possible. However, what you really want, I think, is not >> sending new requests to Varnish. That's usually done at the loa-bbalancing >> level. If your LB use probes, you can tell Varnish to stop honoring them, >> drain the connections, then kill it. >> >> -- >> Guillaume Quintard >> >> On Oct 18, 2017 02:28, "Hugues Alary" wrote: >> >>> Hi there, >>> >>> I've been looking around and I can't find a documented way of gracefully >>> shutting down varnishd, and by gracefully I mean tell varnish "stop >>> accepting connections, but finish what you were doing, then shutdown". >>> >>> I did find something in the "first varnish design notes" ( >>> https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which seemed >>> to indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" but KILL >>> doesn't seem to work, and TERM, well... terminates but not gracefully. >>> >>> I also tried using "varnishadm stop", which also doesn't gracefully >>> stops connection. >>> >>> Is there anyway to achieve this? >>> >>> Thanks! >>> -Hugues >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hugues at betabrand.com Wed Oct 18 19:21:25 2017 From: hugues at betabrand.com (Hugues Alary) Date: Wed, 18 Oct 2017 12:21:25 -0700 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: > So, there must be a misconception on my, because to me, just refusing new connections isn't graceful. If the LB is sending you new connections, you should honor them. You are right. I took some shortcuts in my explanation. > So, from what I get, k8s stops sending new connections to Varnish, and what you want is is for Varnish to shutdown once there's no more active connections. That is exactly what happens, indeed. How it happens is irrelevant to our conversation, but, for the sake of being on the same page it's important to assume: k8s stopped sending new connections to the old instance and started sending them to the new instance, and I want old varnish to shutdown once there's no more active connections. > But, let's stop being a pompous jerk obsessed with semantics and let's try to work on a solution. All good, it's actually important ;) I will try your solution right away and let you know, in theory it seems like it should work. Thanks for trying to find a solution! -Hugues On Wed, Oct 18, 2017 at 12:01 PM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > Hello, > > So, there must be a misconception on my, because to me, just refusing new > connections isn't graceful. If the LB is sending you new connections, you > should honor them. > > So, from what I get, k8s stops sending new connections to Varnish, and > what you want is is for Varnish to shutdown once there's no more active > connections, which is, I would argue, kinda different from refusing new > connections. But, let's stop being a pompous jerk obsessed with semantics > and let's try to work on a solution. > > From what I get, we can kill varnish as soon as the number of active > connections drops to 0, so your apachectl command would be equal to this in > Varnish language: > > if [ `varnishstat -1 | awk '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END {print > a}'` != 0 ]; then > sleep 1 > fi > killall varnishd > > > There's probably a shorter version ,but that's the gist of it. > > Would that do? > > -- > Guillaume Quintard > > On Wed, Oct 18, 2017 at 8:22 PM, Hugues Alary > wrote: > >> Hi, >> >> That is indeed what I want to do. Draining connections at the LB level >> just involves more work ;). >> >> For context: my entire stack runs on a kubernetes cluster. >> >> I sometime need to replace a running instance of a server process [be it >> nginx apache php-fpm varnish whatever] with a new instance. >> >> Taking Apache as an example, I simply create the new apache instance (a >> new pod in kubernetes speak), which immediately starts getting some traffic >> (without changing the load balancer configuration at all, a kubernetes >> "service" automatically detects new pods), then I gracefully shutdown the >> old instance (kubernetes actually automatically tells the pod to shutdown), >> by issuing a "apachectl -k graceful-stop" (kubernetes is configured to >> issue this command for me), which instructs apache to stop accepting >> connections, finish, then shutdown. >> >> It's really great because instead of having to push a new config refusing >> probes and reload it, I (/kubernetes) simply gracefully stops apache and >> the traffic flows to the new instance. nginx and php-fpm also handle things >> this way. >> >> At any rate, thanks for the advice, I will start using probes! 
>> >> Cheers, >> -Hugues >> >> >> >> On Tue, Oct 17, 2017 at 11:51 PM, Guillaume Quintard < >> guillaume at varnish-software.com> wrote: >> >>> Hi, >>> >>> That's not possible. However, what you really want, I think, is not >>> sending new requests to Varnish. That's usually done at the loa-bbalancing >>> level. If your LB use probes, you can tell Varnish to stop honoring them, >>> drain the connections, then kill it. >>> >>> -- >>> Guillaume Quintard >>> >>> On Oct 18, 2017 02:28, "Hugues Alary" wrote: >>> >>>> Hi there, >>>> >>>> I've been looking around and I can't find a documented way of >>>> gracefully shutting down varnishd, and by gracefully I mean tell varnish >>>> "stop accepting connections, but finish what you were doing, then shutdown". >>>> >>>> I did find something in the "first varnish design notes" ( >>>> https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which seemed >>>> to indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" but KILL >>>> doesn't seem to work, and TERM, well... terminates but not gracefully. >>>> >>>> I also tried using "varnishadm stop", which also doesn't gracefully >>>> stops connection. >>>> >>>> Is there anyway to achieve this? >>>> >>>> Thanks! >>>> -Hugues >>>> >>>> _______________________________________________ >>>> varnish-misc mailing list >>>> varnish-misc at varnish-cache.org >>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugues at betabrand.com Wed Oct 18 19:47:32 2017 From: hugues at betabrand.com (Hugues Alary) Date: Wed, 18 Oct 2017 12:47:32 -0700 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: Your solution works, thanks much. Cheers, -Hugues On Wed, Oct 18, 2017 at 12:21 PM, Hugues Alary wrote: > > So, there must be a misconception on my, because to me, just refusing > new connections isn't graceful. If the LB is sending you new connections, > you should honor them. > > You are right. I took some shortcuts in my explanation. > > > So, from what I get, k8s stops sending new connections to Varnish, and > what you want is is for Varnish to shutdown once there's no more active > connections. > > That is exactly what happens, indeed. How it happens is irrelevant to our > conversation, but, for the sake of being on the same page it's important to > assume: k8s stopped sending new connections to the old instance and started > sending them to the new instance, and I want old varnish to shutdown once > there's no more active connections. > > > But, let's stop being a pompous jerk obsessed with semantics and let's > try to work on a solution. > > All good, it's actually important ;) > > I will try your solution right away and let you know, in theory it seems > like it should work. Thanks for trying to find a solution! > > -Hugues > > > On Wed, Oct 18, 2017 at 12:01 PM, Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> Hello, >> >> So, there must be a misconception on my, because to me, just refusing new >> connections isn't graceful. If the LB is sending you new connections, you >> should honor them. >> >> So, from what I get, k8s stops sending new connections to Varnish, and >> what you want is is for Varnish to shutdown once there's no more active >> connections, which is, I would argue, kinda different from refusing new >> connections. But, let's stop being a pompous jerk obsessed with semantics >> and let's try to work on a solution. 
>> >> From what I get, we can kill varnish as soon as the number of active >> connections drops to 0, so your apachectl command would be equal to this in >> Varnish language: >> >> if [ `varnishstat -1 | awk '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END {print >> a}'` != 0 ]; then >> sleep 1 >> fi >> killall varnishd >> >> >> There's probably a shorter version ,but that's the gist of it. >> >> Would that do? >> >> -- >> Guillaume Quintard >> >> On Wed, Oct 18, 2017 at 8:22 PM, Hugues Alary >> wrote: >> >>> Hi, >>> >>> That is indeed what I want to do. Draining connections at the LB level >>> just involves more work ;). >>> >>> For context: my entire stack runs on a kubernetes cluster. >>> >>> I sometime need to replace a running instance of a server process [be it >>> nginx apache php-fpm varnish whatever] with a new instance. >>> >>> Taking Apache as an example, I simply create the new apache instance (a >>> new pod in kubernetes speak), which immediately starts getting some traffic >>> (without changing the load balancer configuration at all, a kubernetes >>> "service" automatically detects new pods), then I gracefully shutdown the >>> old instance (kubernetes actually automatically tells the pod to shutdown), >>> by issuing a "apachectl -k graceful-stop" (kubernetes is configured to >>> issue this command for me), which instructs apache to stop accepting >>> connections, finish, then shutdown. >>> >>> It's really great because instead of having to push a new config >>> refusing probes and reload it, I (/kubernetes) simply gracefully stops >>> apache and the traffic flows to the new instance. nginx and php-fpm also >>> handle things this way. >>> >>> At any rate, thanks for the advice, I will start using probes! >>> >>> Cheers, >>> -Hugues >>> >>> >>> >>> On Tue, Oct 17, 2017 at 11:51 PM, Guillaume Quintard < >>> guillaume at varnish-software.com> wrote: >>> >>>> Hi, >>>> >>>> That's not possible. However, what you really want, I think, is not >>>> sending new requests to Varnish. That's usually done at the loa-bbalancing >>>> level. If your LB use probes, you can tell Varnish to stop honoring them, >>>> drain the connections, then kill it. >>>> >>>> -- >>>> Guillaume Quintard >>>> >>>> On Oct 18, 2017 02:28, "Hugues Alary" wrote: >>>> >>>>> Hi there, >>>>> >>>>> I've been looking around and I can't find a documented way of >>>>> gracefully shutting down varnishd, and by gracefully I mean tell varnish >>>>> "stop accepting connections, but finish what you were doing, then shutdown". >>>>> >>>>> I did find something in the "first varnish design notes" ( >>>>> https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which seemed >>>>> to indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" but KILL >>>>> doesn't seem to work, and TERM, well... terminates but not gracefully. >>>>> >>>>> I also tried using "varnishadm stop", which also doesn't gracefully >>>>> stops connection. >>>>> >>>>> Is there anyway to achieve this? >>>>> >>>>> Thanks! >>>>> -Hugues >>>>> >>>>> _______________________________________________ >>>>> varnish-misc mailing list >>>>> varnish-misc at varnish-cache.org >>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guillaume at varnish-software.com Wed Oct 18 20:00:06 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 18 Oct 2017 22:00:06 +0200 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: Glad to hear it! Out of curiosity, do you run that script manually on container you want to kill, or is there a way to do it automatically? -- Guillaume Quintard On Wed, Oct 18, 2017 at 9:47 PM, Hugues Alary wrote: > Your solution works, thanks much. > > Cheers, > -Hugues > > On Wed, Oct 18, 2017 at 12:21 PM, Hugues Alary > wrote: > >> > So, there must be a misconception on my, because to me, just refusing >> new connections isn't graceful. If the LB is sending you new connections, >> you should honor them. >> >> You are right. I took some shortcuts in my explanation. >> >> > So, from what I get, k8s stops sending new connections to Varnish, and >> what you want is is for Varnish to shutdown once there's no more active >> connections. >> >> That is exactly what happens, indeed. How it happens is irrelevant to our >> conversation, but, for the sake of being on the same page it's important to >> assume: k8s stopped sending new connections to the old instance and started >> sending them to the new instance, and I want old varnish to shutdown once >> there's no more active connections. >> >> > But, let's stop being a pompous jerk obsessed with semantics and let's >> try to work on a solution. >> >> All good, it's actually important ;) >> >> I will try your solution right away and let you know, in theory it seems >> like it should work. Thanks for trying to find a solution! >> >> -Hugues >> >> >> On Wed, Oct 18, 2017 at 12:01 PM, Guillaume Quintard < >> guillaume at varnish-software.com> wrote: >> >>> Hello, >>> >>> So, there must be a misconception on my, because to me, just refusing >>> new connections isn't graceful. If the LB is sending you new connections, >>> you should honor them. >>> >>> So, from what I get, k8s stops sending new connections to Varnish, and >>> what you want is is for Varnish to shutdown once there's no more active >>> connections, which is, I would argue, kinda different from refusing new >>> connections. But, let's stop being a pompous jerk obsessed with semantics >>> and let's try to work on a solution. >>> >>> From what I get, we can kill varnish as soon as the number of active >>> connections drops to 0, so your apachectl command would be equal to this in >>> Varnish language: >>> >>> if [ `varnishstat -1 | awk '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END {print >>> a}'` != 0 ]; then >>> sleep 1 >>> fi >>> killall varnishd >>> >>> >>> There's probably a shorter version ,but that's the gist of it. >>> >>> Would that do? >>> >>> -- >>> Guillaume Quintard >>> >>> On Wed, Oct 18, 2017 at 8:22 PM, Hugues Alary >>> wrote: >>> >>>> Hi, >>>> >>>> That is indeed what I want to do. Draining connections at the LB level >>>> just involves more work ;). >>>> >>>> For context: my entire stack runs on a kubernetes cluster. >>>> >>>> I sometime need to replace a running instance of a server process [be >>>> it nginx apache php-fpm varnish whatever] with a new instance. 
>>>> >>>> Taking Apache as an example, I simply create the new apache instance (a >>>> new pod in kubernetes speak), which immediately starts getting some traffic >>>> (without changing the load balancer configuration at all, a kubernetes >>>> "service" automatically detects new pods), then I gracefully shutdown the >>>> old instance (kubernetes actually automatically tells the pod to shutdown), >>>> by issuing a "apachectl -k graceful-stop" (kubernetes is configured to >>>> issue this command for me), which instructs apache to stop accepting >>>> connections, finish, then shutdown. >>>> >>>> It's really great because instead of having to push a new config >>>> refusing probes and reload it, I (/kubernetes) simply gracefully stops >>>> apache and the traffic flows to the new instance. nginx and php-fpm also >>>> handle things this way. >>>> >>>> At any rate, thanks for the advice, I will start using probes! >>>> >>>> Cheers, >>>> -Hugues >>>> >>>> >>>> >>>> On Tue, Oct 17, 2017 at 11:51 PM, Guillaume Quintard < >>>> guillaume at varnish-software.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> That's not possible. However, what you really want, I think, is not >>>>> sending new requests to Varnish. That's usually done at the loa-bbalancing >>>>> level. If your LB use probes, you can tell Varnish to stop honoring them, >>>>> drain the connections, then kill it. >>>>> >>>>> -- >>>>> Guillaume Quintard >>>>> >>>>> On Oct 18, 2017 02:28, "Hugues Alary" wrote: >>>>> >>>>>> Hi there, >>>>>> >>>>>> I've been looking around and I can't find a documented way of >>>>>> gracefully shutting down varnishd, and by gracefully I mean tell varnish >>>>>> "stop accepting connections, but finish what you were doing, then shutdown". >>>>>> >>>>>> I did find something in the "first varnish design notes" ( >>>>>> https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which >>>>>> seemed to indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" >>>>>> but KILL doesn't seem to work, and TERM, well... terminates but not >>>>>> gracefully. >>>>>> >>>>>> I also tried using "varnishadm stop", which also doesn't gracefully >>>>>> stops connection. >>>>>> >>>>>> Is there anyway to achieve this? >>>>>> >>>>>> Thanks! >>>>>> -Hugues >>>>>> >>>>>> _______________________________________________ >>>>>> varnish-misc mailing list >>>>>> varnish-misc at varnish-cache.org >>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugues at betabrand.com Wed Oct 18 20:35:05 2017 From: hugues at betabrand.com (Hugues Alary) Date: Wed, 18 Oct 2017 13:35:05 -0700 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: Since this could be useful to some other people, here's a bit more details on how it's implemented on my end. TL;DR: it's automatic. I embedded the script in my docker image and it gets run by k8s when needed. *My container image looks like this:* ``` FROM bb:base ENV DEBIAN_FRONTEND noninteractive RUN apt-get update RUN apt-get install -y --force-yes varnish COPY files/graceful-shutdown.sh / CMD our-long-varnishd-cmd ``` *graceful-shutdown.sh looks like this:* ``` CONNECTIONS_REMAINING=1 while [ $CONNECTIONS_REMAINING != 0 ]; do CONNECTIONS_REMAINING=`varnishstat -1 | awk '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END {print a}'` echo "$CONNECTIONS_REMAINING remaining, waiting..." 
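    # MEMPOOL.sess<n>.live is the number of live session objects in each of
    # varnishd's session pools; the awk above sums them, so 0 means no client
    # connections are left. Keep polling once per second until that happens.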
sleep 1 done killall varnishd ``` *I then a have a k8s deployment file:* ``` --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: varnish labels: app: varnish version: "5.2" spec: replicas: 1 template: metadata: labels: app: varnish spec: terminationGracePeriodSeconds: 300 containers: - name: varnish image: bb/bb:varnish lifecycle: preStop: exec: command: - "sh" - "/graceful-shutdown.sh" imagePullPolicy: Always command: ["our-long-varnishd-cmd"] ``` The preStop command is run by k8s (when needed**), which will wait until the command exits (up to 300s in this config). Once the preStop command exits, k8s will send a SIGTERM to varnishd (which should already be gone by that time). (** `kubectl delete my-varnish-pod` for example, or, if I upload a new deployment file or other conditions triggering a shutdown of the pod.) -H On Wed, Oct 18, 2017 at 1:00 PM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > Glad to hear it! Out of curiosity, do you run that script manually on > container you want to kill, or is there a way to do it automatically? > > -- > Guillaume Quintard > > On Wed, Oct 18, 2017 at 9:47 PM, Hugues Alary > wrote: > >> Your solution works, thanks much. >> >> Cheers, >> -Hugues >> >> On Wed, Oct 18, 2017 at 12:21 PM, Hugues Alary >> wrote: >> >>> > So, there must be a misconception on my, because to me, just refusing >>> new connections isn't graceful. If the LB is sending you new connections, >>> you should honor them. >>> >>> You are right. I took some shortcuts in my explanation. >>> >>> > So, from what I get, k8s stops sending new connections to Varnish, >>> and what you want is is for Varnish to shutdown once there's no more active >>> connections. >>> >>> That is exactly what happens, indeed. How it happens is irrelevant to >>> our conversation, but, for the sake of being on the same page it's >>> important to assume: k8s stopped sending new connections to the old >>> instance and started sending them to the new instance, and I want old >>> varnish to shutdown once there's no more active connections. >>> >>> > But, let's stop being a pompous jerk obsessed with semantics and let's >>> try to work on a solution. >>> >>> All good, it's actually important ;) >>> >>> I will try your solution right away and let you know, in theory it seems >>> like it should work. Thanks for trying to find a solution! >>> >>> -Hugues >>> >>> >>> On Wed, Oct 18, 2017 at 12:01 PM, Guillaume Quintard < >>> guillaume at varnish-software.com> wrote: >>> >>>> Hello, >>>> >>>> So, there must be a misconception on my, because to me, just refusing >>>> new connections isn't graceful. If the LB is sending you new connections, >>>> you should honor them. >>>> >>>> So, from what I get, k8s stops sending new connections to Varnish, and >>>> what you want is is for Varnish to shutdown once there's no more active >>>> connections, which is, I would argue, kinda different from refusing new >>>> connections. But, let's stop being a pompous jerk obsessed with semantics >>>> and let's try to work on a solution. >>>> >>>> From what I get, we can kill varnish as soon as the number of active >>>> connections drops to 0, so your apachectl command would be equal to this in >>>> Varnish language: >>>> >>>> if [ `varnishstat -1 | awk '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END >>>> {print a}'` != 0 ]; then >>>> sleep 1 >>>> fi >>>> killall varnishd >>>> >>>> >>>> There's probably a shorter version ,but that's the gist of it. >>>> >>>> Would that do? 
>>>> >>>> -- >>>> Guillaume Quintard >>>> >>>> On Wed, Oct 18, 2017 at 8:22 PM, Hugues Alary >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> That is indeed what I want to do. Draining connections at the LB level >>>>> just involves more work ;). >>>>> >>>>> For context: my entire stack runs on a kubernetes cluster. >>>>> >>>>> I sometime need to replace a running instance of a server process [be >>>>> it nginx apache php-fpm varnish whatever] with a new instance. >>>>> >>>>> Taking Apache as an example, I simply create the new apache instance >>>>> (a new pod in kubernetes speak), which immediately starts getting some >>>>> traffic (without changing the load balancer configuration at all, a >>>>> kubernetes "service" automatically detects new pods), then I gracefully >>>>> shutdown the old instance (kubernetes actually automatically tells the pod >>>>> to shutdown), by issuing a "apachectl -k graceful-stop" (kubernetes is >>>>> configured to issue this command for me), which instructs apache to stop >>>>> accepting connections, finish, then shutdown. >>>>> >>>>> It's really great because instead of having to push a new config >>>>> refusing probes and reload it, I (/kubernetes) simply gracefully stops >>>>> apache and the traffic flows to the new instance. nginx and php-fpm also >>>>> handle things this way. >>>>> >>>>> At any rate, thanks for the advice, I will start using probes! >>>>> >>>>> Cheers, >>>>> -Hugues >>>>> >>>>> >>>>> >>>>> On Tue, Oct 17, 2017 at 11:51 PM, Guillaume Quintard < >>>>> guillaume at varnish-software.com> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> That's not possible. However, what you really want, I think, is not >>>>>> sending new requests to Varnish. That's usually done at the loa-bbalancing >>>>>> level. If your LB use probes, you can tell Varnish to stop honoring them, >>>>>> drain the connections, then kill it. >>>>>> >>>>>> -- >>>>>> Guillaume Quintard >>>>>> >>>>>> On Oct 18, 2017 02:28, "Hugues Alary" wrote: >>>>>> >>>>>>> Hi there, >>>>>>> >>>>>>> I've been looking around and I can't find a documented way of >>>>>>> gracefully shutting down varnishd, and by gracefully I mean tell varnish >>>>>>> "stop accepting connections, but finish what you were doing, then shutdown". >>>>>>> >>>>>>> I did find something in the "first varnish design notes" ( >>>>>>> https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which >>>>>>> seemed to indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" >>>>>>> but KILL doesn't seem to work, and TERM, well... terminates but not >>>>>>> gracefully. >>>>>>> >>>>>>> I also tried using "varnishadm stop", which also doesn't gracefully >>>>>>> stops connection. >>>>>>> >>>>>>> Is there anyway to achieve this? >>>>>>> >>>>>>> Thanks! >>>>>>> -Hugues >>>>>>> >>>>>>> _______________________________________________ >>>>>>> varnish-misc mailing list >>>>>>> varnish-misc at varnish-cache.org >>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Thu Oct 19 05:33:00 2017 From: lagged at gmail.com (Andrei) Date: Thu, 19 Oct 2017 00:33:00 -0500 Subject: Gracefully stopping varnish In-Reply-To: References: Message-ID: Thanks for sharing! On Wed, Oct 18, 2017 at 3:35 PM, Hugues Alary wrote: > Since this could be useful to some other people, here's a bit more details > on how it's implemented on my end. > > TL;DR: it's automatic. 
I embedded the script in my docker image and it > gets run by k8s when needed. > > > *My container image looks like this:* > > ``` > FROM bb:base > ENV DEBIAN_FRONTEND noninteractive > > RUN apt-get update > RUN apt-get install -y --force-yes varnish > > COPY files/graceful-shutdown.sh / > > CMD our-long-varnishd-cmd > ``` > > *graceful-shutdown.sh looks like this:* > > ``` > CONNECTIONS_REMAINING=1 > while [ $CONNECTIONS_REMAINING != 0 ]; do > CONNECTIONS_REMAINING=`varnishstat -1 | awk > '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END {print a}'` > echo "$CONNECTIONS_REMAINING remaining, waiting..." > sleep 1 > done > killall varnishd > ``` > > *I then a have a k8s deployment file:* > > ``` > --- > apiVersion: extensions/v1beta1 > kind: Deployment > metadata: > name: varnish > labels: > app: varnish > version: "5.2" > spec: > replicas: 1 > template: > metadata: > labels: > app: varnish > spec: > terminationGracePeriodSeconds: 300 > containers: > - name: varnish > image: bb/bb:varnish > lifecycle: > preStop: > exec: > command: > - "sh" > - "/graceful-shutdown.sh" > imagePullPolicy: Always > command: ["our-long-varnishd-cmd"] > ``` > > The preStop command is run by k8s (when needed**), which will wait until > the command exits (up to 300s in this config). Once the preStop command > exits, k8s will send a SIGTERM to varnishd (which should already be gone by > that time). > > (** `kubectl delete my-varnish-pod` for example, or, if I upload a new > deployment file or other conditions triggering a shutdown of the pod.) > > -H > > On Wed, Oct 18, 2017 at 1:00 PM, Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> Glad to hear it! Out of curiosity, do you run that script manually on >> container you want to kill, or is there a way to do it automatically? >> >> -- >> Guillaume Quintard >> >> On Wed, Oct 18, 2017 at 9:47 PM, Hugues Alary >> wrote: >> >>> Your solution works, thanks much. >>> >>> Cheers, >>> -Hugues >>> >>> On Wed, Oct 18, 2017 at 12:21 PM, Hugues Alary >>> wrote: >>> >>>> > So, there must be a misconception on my, because to me, just >>>> refusing new connections isn't graceful. If the LB is sending you new >>>> connections, you should honor them. >>>> >>>> You are right. I took some shortcuts in my explanation. >>>> >>>> > So, from what I get, k8s stops sending new connections to Varnish, >>>> and what you want is is for Varnish to shutdown once there's no more active >>>> connections. >>>> >>>> That is exactly what happens, indeed. How it happens is irrelevant to >>>> our conversation, but, for the sake of being on the same page it's >>>> important to assume: k8s stopped sending new connections to the old >>>> instance and started sending them to the new instance, and I want old >>>> varnish to shutdown once there's no more active connections. >>>> >>>> > But, let's stop being a pompous jerk obsessed with semantics and >>>> let's try to work on a solution. >>>> >>>> All good, it's actually important ;) >>>> >>>> I will try your solution right away and let you know, in theory it >>>> seems like it should work. Thanks for trying to find a solution! >>>> >>>> -Hugues >>>> >>>> >>>> On Wed, Oct 18, 2017 at 12:01 PM, Guillaume Quintard < >>>> guillaume at varnish-software.com> wrote: >>>> >>>>> Hello, >>>>> >>>>> So, there must be a misconception on my, because to me, just refusing >>>>> new connections isn't graceful. If the LB is sending you new connections, >>>>> you should honor them. 
>>>>> >>>>> So, from what I get, k8s stops sending new connections to Varnish, and >>>>> what you want is is for Varnish to shutdown once there's no more active >>>>> connections, which is, I would argue, kinda different from refusing new >>>>> connections. But, let's stop being a pompous jerk obsessed with semantics >>>>> and let's try to work on a solution. >>>>> >>>>> From what I get, we can kill varnish as soon as the number of active >>>>> connections drops to 0, so your apachectl command would be equal to this in >>>>> Varnish language: >>>>> >>>>> if [ `varnishstat -1 | awk '/MEMPOOL.sess[0-9]+.live/ {a+=$2} END >>>>> {print a}'` != 0 ]; then >>>>> sleep 1 >>>>> fi >>>>> killall varnishd >>>>> >>>>> >>>>> There's probably a shorter version ,but that's the gist of it. >>>>> >>>>> Would that do? >>>>> >>>>> -- >>>>> Guillaume Quintard >>>>> >>>>> On Wed, Oct 18, 2017 at 8:22 PM, Hugues Alary >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> That is indeed what I want to do. Draining connections at the LB >>>>>> level just involves more work ;). >>>>>> >>>>>> For context: my entire stack runs on a kubernetes cluster. >>>>>> >>>>>> I sometime need to replace a running instance of a server process [be >>>>>> it nginx apache php-fpm varnish whatever] with a new instance. >>>>>> >>>>>> Taking Apache as an example, I simply create the new apache instance >>>>>> (a new pod in kubernetes speak), which immediately starts getting some >>>>>> traffic (without changing the load balancer configuration at all, a >>>>>> kubernetes "service" automatically detects new pods), then I gracefully >>>>>> shutdown the old instance (kubernetes actually automatically tells the pod >>>>>> to shutdown), by issuing a "apachectl -k graceful-stop" (kubernetes is >>>>>> configured to issue this command for me), which instructs apache to stop >>>>>> accepting connections, finish, then shutdown. >>>>>> >>>>>> It's really great because instead of having to push a new config >>>>>> refusing probes and reload it, I (/kubernetes) simply gracefully stops >>>>>> apache and the traffic flows to the new instance. nginx and php-fpm also >>>>>> handle things this way. >>>>>> >>>>>> At any rate, thanks for the advice, I will start using probes! >>>>>> >>>>>> Cheers, >>>>>> -Hugues >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Oct 17, 2017 at 11:51 PM, Guillaume Quintard < >>>>>> guillaume at varnish-software.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> That's not possible. However, what you really want, I think, is not >>>>>>> sending new requests to Varnish. That's usually done at the loa-bbalancing >>>>>>> level. If your LB use probes, you can tell Varnish to stop honoring them, >>>>>>> drain the connections, then kill it. >>>>>>> >>>>>>> -- >>>>>>> Guillaume Quintard >>>>>>> >>>>>>> On Oct 18, 2017 02:28, "Hugues Alary" wrote: >>>>>>> >>>>>>>> Hi there, >>>>>>>> >>>>>>>> I've been looking around and I can't find a documented way of >>>>>>>> gracefully shutting down varnishd, and by gracefully I mean tell varnish >>>>>>>> "stop accepting connections, but finish what you were doing, then shutdown". >>>>>>>> >>>>>>>> I did find something in the "first varnish design notes" ( >>>>>>>> https://varnish-cache.org/docs/5.1/phk/firstdesign.html) which >>>>>>>> seemed to indicate that sending SIGKILL/SIGTERM would mean "suspend/stop" >>>>>>>> but KILL doesn't seem to work, and TERM, well... terminates but not >>>>>>>> gracefully. >>>>>>>> >>>>>>>> I also tried using "varnishadm stop", which also doesn't gracefully >>>>>>>> stops connection. 
>>>>>>>> >>>>>>>> Is there anyway to achieve this? >>>>>>>> >>>>>>>> Thanks! >>>>>>>> -Hugues >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> varnish-misc mailing list >>>>>>>> varnish-misc at varnish-cache.org >>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Thu Oct 19 05:33:44 2017 From: lagged at gmail.com (Andrei) Date: Thu, 19 Oct 2017 00:33:44 -0500 Subject: Hitch SSL chain issues with Google Chrome In-Reply-To: References: <565873ef-1dd4-3181-4140-bfab0d9905a0@beckspaced.com> <186336c0af274a8e8c9d235c5ca4e86b@netmatch.nl> <92e17dc7-c654-d4cf-4a99-9f11304c9a78@beckspaced.com> Message-ID: Chain order needs to be followed per RFC. While not all browsers may care, quite a few payment gateways do. On Wed, Oct 18, 2017 at 11:15 AM, Nicolas Delmas wrote: > Hello, > > I'm surprising, that we need to keep an order to merge all files. In my > case I contact like this and never get a problem : > > cat /etc/letsencrypt/live/example.org/privkey.pem \ > /etc/letsencrypt/live/example.org/fullchain.pem \ > /etc/ssl/certs/dhparam.pem \ > /etc/hitch/example.org.pem > > chmod 0600 /etc/hitch/example.org.pem > > I think it was because you tried to merge the chain and fullchain > > > > *Nicolas Delmas* > http://tutoandco.colas-delmas.fr/ > > > > > > > > 2017-10-18 17:13 GMT+02:00 Admin Beckspaced : > >> >> On 18.10.2017 12:54, Angelo H?ngens wrote: >> >>> Just do cert + chain + privkey, in that order. >>> >>> Thanks ;) >> >> re-merging the certs in that order solved the issue. >> >> Greetings >> Becki >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luca.gervasi at gmail.com Thu Oct 19 06:44:12 2017 From: luca.gervasi at gmail.com (Luca Gervasi) Date: Thu, 19 Oct 2017 06:44:12 +0000 Subject: Strange issue with probes Message-ID: Hi, i have a strange issue where varnish suddenly stops sending probes thus declaring a backend healthy or sick till a next restart and i'm unable to determine why. Please note that my backend is able to receive my probes (and actually receives it), and i'm able to get a response every time i go with a curl -H "Host: healthcheck" 10.32.161.89/balance_me, so i'll consider my backend ultimately "good" and "able to respond". Thanks a lot for every hint! 
Luca This is my backend configuration: probe backend_check { .request = "GET /balance_me HTTP/1.1" "Host: healthcheck" "Connection: close"; .timeout = 1s; .interval = 2s; .window = 5; .threshold = 2; } backend othaph { .host = "10.32.161.89"; .port = "80"; .connect_timeout = 1s; .first_byte_timeout = 20s; .between_bytes_timeout = 20s; .probe = backend_check; } This is my "varnishadm backend.list" boot.othaph probe Healthy 3/5 This is the total log of 20 minutes of "varnishlog -g raw -i Backend_health" (please note that above it shows 3/5 while i have only 2 probes sent, apparently) 0 Backend_health - boot.othaph Back healthy 4--X-RH 2 2 5 0.067021 0.033510 HTTP/1.1 200 OK 0 Backend_health - boot.othaph Still healthy 4--X-RH 3 2 5 0.015176 0.027399 HTTP/1.1 200 OK And this is my "varnishadm backend.list -p" Backend name Admin Probe boot.othaph probe Healthy 3/5 Current states good: 3 threshold: 2 window: 5 Average response time of good probes: 0.027399 Oldest ================================================== Newest --------------------------------------------------------------44 Good IPv4 --------------------------------------------------------------XX Good Xmit --------------------------------------------------------------RR Good Recv -------------------------------------------------------------HHH Happy -------------- next part -------------- An HTML attachment was scrubbed... URL: From hermunn at varnish-software.com Thu Oct 19 14:08:00 2017 From: hermunn at varnish-software.com (=?UTF-8?Q?P=C3=A5l_Hermunn_Johansen?=) Date: Thu, 19 Oct 2017 16:08:00 +0200 Subject: Repeated panic in obj_getmethods() In-Reply-To: References: Message-ID: Thanks for the added info. It is quite uncommon to have that many thread pools, but I know that some people has had performance improvements after increasing from the recommended 2. I do not think this has anything to do with the panic. Right now I have no ideas, so if anyone else have some input, please share. 2017-10-18 16:06 GMT+02:00 Mark Staudinger : > Hi P?l, > > Sure - the non-standard parameters here: > > % echo 'param.show' | varnishadm|grep -v '(default)' > 200 > accept_filter off [bool] > gzip_level 8 > gzip_memlevel 6 > max_restarts 2 [restarts] > max_retries 0 [retries] > thread_pool_max 350 [threads] > thread_pool_min 225 [threads] > thread_pools 12 [pools] > vsl_space 250M [bytes] > vsm_space 4M [bytes] > > VMODs in use are all sourced from varnish-modules-0.9.1_1: > > import std; > import directors; > import softpurge; > > I will have to scrutinize the paths, but I'm 99% certain that softpurge is > not being called. > > Cheers, > -Mark > > > > On Wed, 18 Oct 2017 05:34:40 -0400, P?l Hermunn Johansen > wrote: > >> Hello Mark, >> >> Can you include a list of VMODs you are using? Also, did you change >> any of the parameters from the default? 
The last question can be >> answered by running >> >> varnishadm param.show >> >> Best, >> P?l >> >> >> 2017-10-18 4:17 GMT+02:00 Mark Staudinger : >>> >>> Hi Folks, >>> >>> I've seen this panic recently, twice, on two companion servers running >>> Varnish-4.1.8 on FreeBSD-11.0 >>> >>> % varnishd -V >>> varnishd (varnish-4.1.8 revision d266ac5c6) >>> Copyright (c) 2006 Verdens Gang AS >>> Copyright (c) 2006-2015 Varnish Software AS >>> >>> % uname -a >>> FreeBSD hostname 11.0-RELEASE-p2 FreeBSD 11.0-RELEASE-p2 #0: Mon Oct 24 >>> 06:55:27 UTC 2016 >>> root at amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64 >>> >>> Unfortunately I do not have the full backtrace, but here's what I do >>> have. >>> >>> Oct 16 12:24:47 hostname varnishd[50931]: Child (50932) Last panic at: >>> Mon, >>> 16 Oct 2017 12:24:47 GMT "Assert error in obj_getmethods(), >>> cache/cache_obj.c line 55: Condition((oc->stobj->stevedore) != NULL) >>> not >>> true. thread = (cache-worker) version = varnish-4.1.8 revision d266ac5c6 >>> ident = >>> >>> FreeBSD,11.0-RELEASE-p2,amd64,-junix,-sfile,-smalloc,-sfile,-hcritbit,kqueue >>> now = 3794380.754560 (mono), 1508156686.857677 (real) Backtrace: >>> 0x433a38: >>> varnishd 0x431821: varnishd 0x431f62: varnishd 0x425f9d: varnishd >>> 0x41eb0c: varnishd 0x420d51: varnishd 0x41e8db: varnishd 0x41e36a: >>> varnishd 0x426155: varnishd busyobj = 0xbf88dbbb60 { ws = >>> 0xbf88dbbbf8 { >>> id = \"bo\", {s,f,r,e} = {0xbf88dbdab0,+4712,0x0,+57480}, }, >>> refcnt >>> = 2, retries = 0, failed = 1, state = 1, flags = {do_esi, is_gzip}, >>> http_conn = 0xbf88dbde30 { fd = 153, doclose = RX_BODY, ws = >>> 0xbf88dbbbf8, {rxbuf_b, rxbuf_e} = {0xbf88dbdee0, 0xbf88dbe134}, >>> {pipeline_b, pipeline_e} = {0xbf88dbe134, 0xbf88dbea65}, >>> Oct 16 12:24:47 hostname kernel: xbf88dbea65}, >>> >>> Varnishd process uptime was near-identical on both servers, and the >>> panics >>> occurred at around the same time on both machines, which could >>> potentially >>> indicate that the panic was caused either by a particular request, and/or >>> some resource-related issue. Time between panics was approximately 19 >>> days. >>> >>> I would welcome any advice about known possible causes for this >>> particular >>> assertion failing! >>> >>> Best Regards, >>> Mark Staudinger >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From lbarfield at netactuate.com Mon Oct 23 19:22:41 2017 From: lbarfield at netactuate.com (Logan Barfield) Date: Mon, 23 Oct 2017 15:22:41 -0400 Subject: Sticky Sessions for URL based backends Message-ID: Hi All, I'm looking for information on what is by all indications an unusual setup. We are attempting to proxy and cache from a single domain (e.g., test.example.com) to individual pages on many backend domains (e.g., test.domainone.com/page, test.domaintwo.com/anotherpage, etc.). The goal is to be able to hit a path on test.example.com and have it returned the cached page for a backend domain/path. For example: test.example.com/apple/ -> test.domainone.com/pear/ test.example.com/peach/ -> test.domaintwo.com/strawberry/ ... and so on. The backend pages can be anything (HTML, Wordpress, Joomla, etc.), and are almost always a "sub-page" of the site rather than the index. We have no control over the backend servers, so we can't alter their site's code directly. We've so far been able to get the general proxy working. 
The problem is that for any linked resources that are a level above the backend pages they either miss the cache (for absolute links) or hit a bad gateway because they don't match one of the cached paths. For example: For: test.example.com/apple/ -> test.domainone.com/pear/ (linked) test.domainone.com/wp-content/css/style.css -> test.example.com/wp-content/css/style.css (returns bad gateway because test.example.com/wp-content/ doesn't exist) OR (linked) test.domainone.com/wp-content/css/style.css -> (rewrite) test.example.com/apple/wp-content/css/style.css -> test.domainone.com/pear/wp-content/css/style.css (404 because it doesn't exist). So far we've attempted to solve this by using "sticky sessions" to set a cookie, and then direct traffic to the correct backend if either the path matches or the cookie is set, but we haven't gotten it quite right yet. We're also concerned that using a cookie for this may affect caching. Example config: import cookie; import header; backend domain_one { .host = "test.domainone.com"; .port = "80"; } backend domain_two { .host = "test.domaintwo.com"; .port = "80"; } sub vcl_recv { if (req.http.host == "test.example.com") { cookie.parse(req.http.cookie); if(cookie.get("sticky")) { set req.http.sticky = cookie.get("sticky"); } if(req.http.sticky == "domain_one" || req.url ~ "^/apple/") { set req.http.sticky = "domain_one"; set req.http.host = "test.domainone.com"; set req.backend_hint = domain_one; if(req.url == "/apple/") { set req.url = "/pear/"; } } if (req.http.sticky == "domain_two" || req.url ~ "^/peach/") { set req.http.sticky = "domain_two"; set req.http.host = "test.domaintwo.com"; set req.backend_hint = domain_two; if(req.url == "/peach/") { set req.url = "/strawberry/"; } } } } sub vcl_deliver { if(req.http.sticky) { header.append(resp.http.Set-Cookie,"sticky=" + req.http.sticky + ";"); } } With this configuration if I go to test.example.com/apple/ it will load the HTML for test.domainone.com/pear/, and I do get 'Set-Cookie: sticky=domain_one;' in the initial response. However, all the cookie isn't passed in subsequent linked requests, so I get 503 errors for all of the other static content and images (that should be served by test.domainone.com/wp-content/css/style.css,for example), resulting in a broken page. This setup may not be possible, but as I don't have a lot of Varnish experience I'm trying to make sure I'm not missing something simple. -- Thank You, Logan B NetActuate, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinakee at waltzz.com Tue Oct 24 06:05:30 2017 From: pinakee at waltzz.com (Pinakee BIswas) Date: Tue, 24 Oct 2017 11:35:30 +0530 Subject: Hit-for-pass Message-ID: <8e0ba992-b7a3-4da7-e614-d41397eaf7a0@waltzz.com> Hi, We are using Varnish 4.1.8 for our ecommerce site. We are getting lots of hit-for-pass. I have read bunch of articles on hit-for-pass and there seem to be issues with the same (like when cacheable pages are not getting cached). Would appreciate if you could throw some light on hit-for-pass: 1. how to figure out which URLs are cacheable but aren't getting cached due to hit-for-pass 2. how to fix 1? One approach given in the articles is reducing the TTL for hit-for-pass. Thanks, Pinakee -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guillaume at varnish-software.com Wed Oct 25 13:14:30 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 25 Oct 2017 15:14:30 +0200 Subject: Hit-for-pass In-Reply-To: <8e0ba992-b7a3-4da7-e614-d41397eaf7a0@waltzz.com> References: <8e0ba992-b7a3-4da7-e614-d41397eaf7a0@waltzz.com> Message-ID: Hello Pinakee, Look at the varnishlog, possibly using the "-d" switch and find your request. If your request is a Hit-for-Pass, you should be able to see "HIT-FOR-PASS" in it, or at the least see something like: - VCL_return hash - VCL_call HASH *- VCL_return lookup* - Debug "XXXX HIT-FOR-PASS" - HitPass 3 *- VCL_call PASS* Meaning we said "let's find the object!" but instead of returning a HIT, varnish converted it into a PASS (hence the Hit-for-Pass name!). That happens because someone in vcl_backend_response set the beresp.uncacheable to true in the original request. So, one way could be to turn back the hands of time and scour the varnishlog to find that original request. But chances are that we won't have to. The logic that set beresp.uncacheable to true in the original request will probably do the same thing in the present one, so find the corresponding bereq, it's easy, just find a line looking like: - Link bereq *32774* pass The bold number is the id of the request, go find it! If you are lazy, you can also use "-g request" to group reqs with their bereq. Combine that with "-q" and life is grand: varnishlog -d -q 'ReqURL ~ "/MY_URL_THAT_IS_PASSED" ' -g request Now that we have the bereq, look at it closely, notably, look at the cache-control header, and at the set-cookie one. Also, look at the TTL that varnish set for this object by looking at the metadata. This is found in the TTL line, the one starting with "RFC": -- TTL RFC *-1 10 -1* 1508934970 1508934970 1508934957 0 0 In my case, ttl is -1, grace is 10, keep is -1. Let's keep in mind, but also digress. Varnish has a little secret: when you compile you vcl before using it, it appends its own built inversion of it and if one of your function doesn't return (or simply doesn't exist), the built-in version is run after your code. You can find the buitin.vcl in "/usr/share/..." but also by running "varnishd -x builtin". And it's also accessible here: https://github.com/varnishcache/varnish-cache/blob/4.1/bin/varnishd/builtin.vcl Look at its version of vcl_backend_response: if ttl is zero or less, if there's a set-cookie header, or if we don't like the cache-control (mainly because it tell us that it's not cacheable), we set beresp.uncacheable and store that info for 2 minutes. So chances are that your response triggered that code. Solutions include fixing your backend, using the vcl to fix the response, or just returning(deliver) or return(abandon) in the vcl_backend_response to prevent that code from being run. Sooooooooooooooooo, was that all clear? -- Guillaume Quintard On Tue, Oct 24, 2017 at 8:05 AM, Pinakee BIswas wrote: > Hi, > > We are using Varnish 4.1.8 for our ecommerce site. We are getting lots of > hit-for-pass. I have read bunch of articles on hit-for-pass and there seem > to be issues with the same (like when cacheable pages are not getting > cached). > > Would appreciate if you could throw some light on hit-for-pass: > > 1. how to figure out which URLs are cacheable but aren't getting > cached due to hit-for-pass > 2. how to fix 1? One approach given in the articles is reducing the > TTL for hit-for-pass. 
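To make Guillaume's return (deliver) suggestion concrete, here is a minimal vcl_backend_response sketch for 4.1. The "^/products/" pattern is only a hypothetical example; the point is that once you know a response is cacheable despite what the backend sends, you can strip Set-Cookie, give it a TTL and return (deliver), so the builtin vcl_backend_response never runs and no two-minute hit-for-pass object is created. Only do this where dropping the cookie is genuinely harmless.

sub vcl_backend_response {
    # Hypothetical rule: product pages are known to be cacheable even though
    # the backend sets a session cookie on them.
    if (bereq.url ~ "^/products/") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 5m;
        # Returning here skips the builtin vcl_backend_response, so
        # beresp.uncacheable is never set and no hit-for-pass is created.
        return (deliver);
    }
}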
> > Thanks, > > Pinakee > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From info+varnish at shee.org Fri Oct 27 00:29:53 2017 From: info+varnish at shee.org (info+varnish at shee.org) Date: Fri, 27 Oct 2017 02:29:53 +0200 Subject: Handling HTML5 Video for iOS and Safari clients In-Reply-To: References: <19037858-010A-4C08-B3F5-168AB1A4C430@shee.org> <625FFD69-A031-47CA-A5DD-C0EC11643A96@shee.org> <9712A4CC-A880-41BC-A068-D798C06B716A@shee.org> <8ECF12EA-A42E-418B-BCFB-3850757727BE@shee.org> <37FA0EF7-C7B0-4AE3-AF41-326BF1CFD70E@shee.org> Message-ID: <955A732A-2141-45CB-BBD2-14D14C342DB0@shee.org> > Am 20.09.2017 um 12:21 schrieb Dridi Boukelmoune : > >>> Did you try playing videos using HTTPS without enabling h2? As of 5.2.0 it is still experimental. >> >> >> Yes, that is what I have deployed now. Without h2 enabled it works as expected >> (hitch; alpn-protos = "http/1.1). Doing so I lose a bit of site performance but >> the functional requirement playing media is higher scored. >> >> Right now rolling 5.2.0 packages, just to stay current (h2 stuff seems to be untouched). > > Thanks for confirming, so we narrowed it down to what looks like a > range bug with h2. Can you open a github issue? We'll pick it up from > there and try to reproduce it. https://github.com/varnishcache/varnish-cache/issues/2473 -- Thanks Leon From legreg.accounts at gmail.com Tue Oct 31 14:17:33 2017 From: legreg.accounts at gmail.com (Legreg Accounts) Date: Tue, 31 Oct 2017 15:17:33 +0100 Subject: varnish4 vs varnish3 - grace behaviour Message-ID: Hi all, I'm currently working on a migration project from varnish3 to varnish4 and I am facing a behaviour that I don't understand. To put it simply, with the same configuration file used both in Varnish3 and Varnish4, I don't have the same results concerning hits and misses. This seems to be related to the grace attribute, but I don't figure out how it works. 
So below is an example which describes the problem into details: My varnish3 configuration: # ---------------------------------------------- backend default { .host = "nginx"; .port = "80"; } sub vcl_fetch { set beresp.grace= 5s; set beresp.ttl = 1s; } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } set resp.http.X-Cache-Hits = obj.hits; } # ---------------------------------------------- My varnish4 configuration: (only change is the method name vcl_backend_response instead of vcl_fetch ) # ---------------------------------------------- vcl 4.0; backend default { .host = "nginx"; .port = "80"; } sub vcl_backend_response { set beresp.grace= 5s; set beresp.ttl = 1s; } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } set resp.http.X-Cache-Hits = obj.hits; } # ---------------------------------------------- The result of a little scenario which retrieves headers of a given resource after some time: # ---------------------------------------------- Time 14-47-12 VARNISH4: X-Cache: MISS X-Cache-Hits: 0 VARNISH3: X-Cache: MISS X-Cache-Hits: 0 Time 14-47-13 VARNISH4: X-Cache: HIT X-Cache-Hits: 1 VARNISH3: X-Cache: MISS X-Cache-Hits: 0 Time 14-47-20 VARNISH4: X-Cache: MISS X-Cache-Hits: 0 VARNISH3: X-Cache: MISS X-Cache-Hits: 0 # ---------------------------------------------- So as you can see in the scenario, for the first request, both v3 and v4 return a MISS, which is normal. But one second after, the second request returns a MISS for varnish3, which is normal for me, and a HIT for varnish4, that I don't really understand. As I suspected this was related to the grace parameter, I have added a third request in my scenario 7 seconds later ( greeter than 1sec for cache plus 5 seconds for grace), and as expected, both varnish 3 and varnish 4 are MISS. So could you help me to understand / workaround this problem ? The objective for me is to get the same result with varnish4 than with varnish3 (while I'm migrating from 3 to 4 ;) ). My current workaround is to set beresp.grace=1ms on varnish4, but I don't like that at all, and I can't do that on every of my configurations :( Any help would be very appreciated ! Thanks in advance ! :) -------------- next part -------------- An HTML attachment was scrubbed... URL:
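For readers who run into the same thing while migrating: the difference is expected. Varnish 4 introduced background fetches, so a request that finds an object which has expired but is still within grace gets the stale copy immediately (hence the HIT one second in) while a fresh copy is fetched asynchronously; Varnish 3 had no background fetch, so the client that triggered the refresh waited for the backend instead, which is why the same request is a MISS there. A common middle ground, adapted from the grace example in the Varnish 4 user guide, is to keep a few seconds of grace but only serve graced objects while the backend is sick. The sketch below assumes vmod std is available; the probe on the backend is an assumption as well, since without one std.healthy() always reports the backend as healthy and graced objects would simply never be served.

vcl 4.0;

import std;

backend default {
    .host = "nginx";
    .port = "80";
    # Hypothetical probe so std.healthy() has something to go on.
    .probe = { .url = "/"; .interval = 5s; }
}

sub vcl_backend_response {
    set beresp.ttl = 1s;
    set beresp.grace = 5s;
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Still fresh: a normal hit.
        return (deliver);
    }
    if (!std.healthy(req.backend_hint) && (obj.ttl + obj.grace > 0s)) {
        # Backend is sick: serve the stale copy kept around by grace.
        return (deliver);
    }
    # Backend is healthy: behave like Varnish 3 and fetch a fresh object,
    # so the client sees a MISS once the TTL has expired.
    return (miss);
}

With that in place the second request in the scenario above should be a MISS on Varnish 4 as well, while the 5 seconds of grace still cover you if the backend goes down.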