From hrhosseini at hotmail.com  Thu Jun 10 10:53:35 2021
From: hrhosseini at hotmail.com (Hamidreza Hosseini)
Date: Thu, 10 Jun 2021 10:53:35 +0000
Subject: Use varnish-cache in front of HLS servers for live streaming
Message-ID:

Hi,
I want to use Varnish as a cache server in front of my HTTP live streaming
servers to serve .ts files to clients, and I want .m3u8 playlist files to be
excluded from caching.
Reading about how Varnish caches objects, I ran into a concern: clients
request the .ts files from Varnish either directly or through load balancers
(which pass all client headers on to Varnish), so I am afraid Varnish will
cache a separate copy of the same .ts file for each client. It seems I should
normalize or delete some client headers, or somehow tell Varnish that the
file is the same object and should not be cached again based on irrelevant
headers.
How can I tell Varnish this, and which headers should be deleted? I don't
know which headers each client will send, so which headers could cause this
duplicate caching? Is there any sample config that covers this?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guillaume at varnish-software.com  Thu Jun 10 16:18:22 2021
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 10 Jun 2021 09:18:22 -0700
Subject: Use varnish-cache in front of HLS servers for live streaming
In-Reply-To:
References:
Message-ID:

Hi,

By default, Varnish only hashes the host and URL (including the query
string):
https://github.com/varnishcache/varnish-cache/blob/master/bin/varnishd/builtin.vcl#L124

So you possibly need to clean the query string. Or, while unlikely, it could
be that your backend is returning a Vary header, in which case you should
remove the corresponding request headers (ignore content-encoding, though).

--
Guillaume Quintard


On Thu, Jun 10, 2021 at 3:54 AM Hamidreza Hosseini wrote:

> Hi,
> I want to use Varnish as a cache server in front of my HTTP live streaming
> servers to serve .ts files to clients, and I want .m3u8 playlist files to
> be excluded from caching.
> Reading about how Varnish caches objects, I ran into a concern: clients
> request the .ts files from Varnish either directly or through load
> balancers (which pass all client headers on to Varnish), so I am afraid
> Varnish will cache a separate copy of the same .ts file for each client.
> It seems I should normalize or delete some client headers, or somehow tell
> Varnish that the file is the same object and should not be cached again
> based on irrelevant headers.
> How can I tell Varnish this, and which headers should be deleted? I don't
> know which headers each client will send, so which headers could cause
> this duplicate caching? Is there any sample config that covers this?
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dwetzel at atanar.com  Fri Jun 11 14:36:38 2021
From: dwetzel at atanar.com (Damien Wetzel)
Date: Fri, 11 Jun 2021 16:36:38 +0200
Subject: which CDNs are using varnish (except Fastly)
Message-ID: <24771.29942.616996.646364@blackcube.at.myplace>

Hi,
I'm looking for CDNs that use Varnish in their caches and that allow
setting customer VCL in it.
It seems there are not plenty of them ;) Best Regards, Damien -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Damien WETZEL (ATANAR TECHNOLOGIES) ("`-/")_.-'"``-._ http://www.atanar.com . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' Phone:+33 9 67 35 09 05 _.- _..-_/ / ((.' - So much to do, so little time - ((,.-' ((,/ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From hrhosseini at hotmail.com Sun Jun 13 07:15:46 2021 From: hrhosseini at hotmail.com (Hamidreza Hosseini) Date: Sun, 13 Jun 2021 07:15:46 +0000 Subject: Varnish wouldn't cache HLS fragments Message-ID: Hi, I put varnish in front of my http servers to serve Hls streaming, I want varnish cache the fragments but not .m3u8 manifest file, I configure it but it cache nothing! My configuration file: ``` vcl 4.1; import directors; backend b1 { .host = "playback-02"; .probe = { .url = "/"; .timeout = 150 ms; .interval = 10s; .window = 6; .threshold = 5; } } sub vcl_init { # we use round robin director for our backend swift proxies new hls_cluster = directors.round_robin(); hls_cluster.add_backend(b1); } acl purge { "localhost"; } sub vcl_recv { set req.backend_hint = hls_cluster.backend(); if (req.method == "PURGE") { if (!client.ip ~ purge) { return(synth(405,"Not allowed.")); } return (purge); } if (req.url ~ "\.m3u8$") { return (pass); } } sub vcl_backend_response { # cache for half of a day set beresp.ttl=5m; # Don't cache 404 responses if (bereq.url ~ "\.(aac|dash|m4s|mp4|ts)$") { set beresp.ttl = 30s; } if ( beresp.status == 404 ) { set beresp.ttl = 120s; set beresp.uncacheable = true; return (deliver); } if (beresp.status == 500 || beresp.status == 502 || beresp.status == 503 || beresp.status == 504) { set beresp.uncacheable = true; } } ``` Varnish version: varnishd (varnish-6.0.7 revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2020 Varnish Software AS Distribution: Ubuntu 20.04 LTS -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Sun Jun 13 15:45:50 2021 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Sun, 13 Jun 2021 08:45:50 -0700 Subject: Varnish wouldn't cache HLS fragments In-Reply-To: References: Message-ID: Hi, Can you share the output of "varnishlog -g request" for one of those requests that should be cached please? Cheers, -- Guillaume Quintard On Sun, Jun 13, 2021, 00:17 Hamidreza Hosseini wrote: > Hi, > I put varnish in front of my http servers to serve Hls streaming, I want > varnish cache the fragments but not .m3u8 manifest file, > I configure it but it cache nothing! 
> My configuration file: > > ``` > vcl 4.1; > > import directors; > > > backend b1 { > .host = "playback-02"; > .probe = { > .url = "/"; > .timeout = 150 ms; > .interval = 10s; > .window = 6; > .threshold = 5; > } > } > > > > sub vcl_init { > # we use round robin director for our backend swift proxies > > new hls_cluster = directors.round_robin(); > hls_cluster.add_backend(b1); > > } > > acl purge { > "localhost"; > } > > > sub vcl_recv { > > set req.backend_hint = hls_cluster.backend(); > if (req.method == "PURGE") { > if (!client.ip ~ purge) { > return(synth(405,"Not allowed.")); > } > return (purge); > } > > if (req.url ~ "\.m3u8$") { > return (pass); > } > } > > > > > > sub vcl_backend_response { > # cache for half of a day > set beresp.ttl=5m; > # Don't cache 404 responses > > if (bereq.url ~ "\.(aac|dash|m4s|mp4|ts)$") { > set beresp.ttl = 30s; > } > > if ( beresp.status == 404 ) { > set beresp.ttl = 120s; > set beresp.uncacheable = true; > return (deliver); > } > if (beresp.status == 500 || beresp.status == 502 || beresp.status == > 503 || beresp.status == 504) > { > set beresp.uncacheable = true; > } > } > > ``` > > Varnish version: > varnishd (varnish-6.0.7 revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e) > Copyright (c) 2006 Verdens Gang AS > Copyright (c) 2006-2020 Varnish Software AS > > Distribution: Ubuntu 20.04 LTS > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Mon Jun 14 06:26:49 2021 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Sun, 13 Jun 2021 23:26:49 -0700 Subject: Varnish wouldn't cache HLS fragments In-Reply-To: References: Message-ID: please keep the mailing-list CC'd you backend is telling Varnish to not cache: -- BerespHeader Cache-Control: no-cache which is acted upon in the built-in.vcl: https://github.com/varnishcache/varnish-cache/blob/6.0/bin/varnishd/builtin.vcl#L161 more info here; https://varnish-cache.org/docs/trunk/users-guide/vcl-built-in-code.html#vcl-built-in-code and maybe this can help too: https://info.varnish-software.com/blog/finally-understanding-built-in-vcl -- Guillaume Quintard On Sun, Jun 13, 2021 at 11:16 PM Hamidreza Hosseini wrote: > This is one of hls fragments that I want to be cached: > > > wget http://stream.test.local/hls/mystream/1623650629260.ts > > ``` > * << Request >> 32770 > - Begin req 32769 rxreq > - Timestamp Start: 1623650670.552461 0.000000 0.000000 > - Timestamp Req: 1623650670.552461 0.000000 0.000000 > - VCL_use boot > - ReqStart 192.168.200.10 58016 a0 > - ReqMethod GET > - ReqURL /hls/mystream/1623650629260.ts > - ReqProtocol HTTP/1.1 > - ReqHeader User-Agent: Wget/1.20.3 (linux-gnu) > - ReqHeader Accept: */* > - ReqHeader Accept-Encoding: identity > - ReqHeader Host: stream.test.local > - ReqHeader Connection: Keep-Alive > - ReqHeader X-Forwarded-For: 192.168.200.10 > - VCL_call RECV > - VCL_return hash > - ReqUnset Accept-Encoding: identity > - VCL_call HASH > - VCL_return lookup > - VCL_call MISS > - VCL_return fetch > - Link bereq 32771 fetch > - Timestamp Fetch: 1623650670.557642 0.005181 0.005181 > - RespProtocol HTTP/1.1 > - RespStatus 200 > - RespReason OK > - RespHeader Server: nginx/1.20.1 > - RespHeader Date: Mon, 14 Jun 2021 06:04:30 GMT > - RespHeader Content-Type: video/mp2t > - RespHeader Content-Length: 161868 > - RespHeader 
Last-Modified: Mon, 14 Jun 2021 06:03:51 GMT > - RespHeader ETag: "60c6f147-2784c" > - RespHeader Cache-Control: no-cache > - RespHeader Access-Control-Allow-Origin: * > - RespHeader Access-Control-Expose-Headers: Content-Length > - RespHeader Accept-Ranges: bytes > - RespHeader X-Varnish: 32770 > - RespHeader Age: 0 > - RespHeader Via: 1.1 varnish (Varnish/6.2) > - VCL_call DELIVER > - VCL_return deliver > - Timestamp Process: 1623650670.557660 0.005199 0.000018 > - Filters > - RespHeader Connection: keep-alive > - Timestamp Resp: 1623650670.558417 0.005956 0.000757 > - ReqAcct 179 0 179 406 161868 162274 > - End > ** << BeReq >> 32771 > -- Begin bereq 32770 fetch > -- VCL_use boot > -- Timestamp Start: 1623650670.552655 0.000000 0.000000 > -- BereqMethod GET > -- BereqURL /hls/mystream/1623650629260.ts > -- BereqProtocol HTTP/1.1 > -- BereqHeader User-Agent: Wget/1.20.3 (linux-gnu) > -- BereqHeader Accept: */* > -- BereqHeader Host: stream.test.local > -- BereqHeader X-Forwarded-For: 192.168.200.10 > -- BereqHeader Accept-Encoding: gzip > -- BereqHeader X-Varnish: 32771 > -- VCL_call BACKEND_FETCH > -- VCL_return fetch > -- BackendOpen 25 b1 {Backend_ip} 80 {Varnish_ip} 49734 > -- BackendStart {Backend_ip} 80 > -- Timestamp Bereq: 1623650670.552739 0.000084 0.000084 > -- Timestamp Beresp: 1623650670.557325 0.004669 0.004586 > -- BerespProtocol HTTP/1.1 > -- BerespStatus 200 > -- BerespReason OK > -- BerespHeader Server: nginx/1.20.1 > -- BerespHeader Date: Mon, 14 Jun 2021 06:04:30 GMT > -- BerespHeader Content-Type: video/mp2t > -- BerespHeader Content-Length: 161868 > -- BerespHeader Last-Modified: Mon, 14 Jun 2021 06:03:51 GMT > -- BerespHeader Connection: keep-alive > -- BerespHeader ETag: "60c6f147-2784c" > -- BerespHeader Cache-Control: no-cache > -- BerespHeader Access-Control-Allow-Origin: * > -- BerespHeader Access-Control-Expose-Headers: Content-Length > -- BerespHeader Accept-Ranges: bytes > -- TTL RFC 120 10 0 1623650671 1623650671 1623650670 0 0 > cacheable > -- VCL_call BACKEND_RESPONSE > -- TTL VCL 300 10 0 1623650671 cacheable > -- TTL VCL 30 10 0 1623650671 cacheable > -- TTL VCL 120 10 0 1623650671 cacheable > -- TTL VCL 120 10 0 1623650671 uncacheable > -- VCL_return deliver > -- Filters > -- Storage malloc Transient > -- Fetch_Body 3 length stream > -- BackendReuse 25 b1 > -- Timestamp BerespBody: 1623650670.558352 0.005697 0.001028 > -- Length 161868 > -- BereqAcct 202 0 202 348 161868 162216 > -- End > > ``` > > ------------------------------ > *From:* Guillaume Quintard > *Sent:* Sunday, June 13, 2021 8:45 AM > *To:* Hamidreza Hosseini > *Cc:* varnish-misc > *Subject:* Re: Varnish wouldn't cache HLS fragments > > Hi, > > Can you share the output of "varnishlog -g request" for one of those > requests that should be cached please? > > Cheers, > > -- > Guillaume Quintard > > On Sun, Jun 13, 2021, 00:17 Hamidreza Hosseini > wrote: > > Hi, > I put varnish in front of my http servers to serve Hls streaming, I want > varnish cache the fragments but not .m3u8 manifest file, > I configure it but it cache nothing! 
> My configuration file: > > ``` > vcl 4.1; > > import directors; > > > backend b1 { > .host = "playback-02"; > .probe = { > .url = "/"; > .timeout = 150 ms; > .interval = 10s; > .window = 6; > .threshold = 5; > } > } > > > > sub vcl_init { > # we use round robin director for our backend swift proxies > > new hls_cluster = directors.round_robin(); > hls_cluster.add_backend(b1); > > } > > acl purge { > "localhost"; > } > > > sub vcl_recv { > > set req.backend_hint = hls_cluster.backend(); > if (req.method == "PURGE") { > if (!client.ip ~ purge) { > return(synth(405,"Not allowed.")); > } > return (purge); > } > > if (req.url ~ "\.m3u8$") { > return (pass); > } > } > > > > > > sub vcl_backend_response { > # cache for half of a day > set beresp.ttl=5m; > # Don't cache 404 responses > > if (bereq.url ~ "\.(aac|dash|m4s|mp4|ts)$") { > set beresp.ttl = 30s; > } > > if ( beresp.status == 404 ) { > set beresp.ttl = 120s; > set beresp.uncacheable = true; > return (deliver); > } > if (beresp.status == 500 || beresp.status == 502 || beresp.status == > 503 || beresp.status == 504) > { > set beresp.uncacheable = true; > } > } > > ``` > > Varnish version: > varnishd (varnish-6.0.7 revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e) > Copyright (c) 2006 Verdens Gang AS > Copyright (c) 2006-2020 Varnish Software AS > > Distribution: Ubuntu 20.04 LTS > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinl at arena.net Mon Jun 14 14:37:44 2021 From: justinl at arena.net (Justin Lloyd) Date: Mon, 14 Jun 2021 14:37:44 +0000 Subject: Varnish HA and MediaWiki HTTP PURGEs Message-ID: Hi all, I just saw the new Varnish HA video and was wondering if VHA's node synchronization would obviate the need for all of the Varnish nodes to be listed in the MediaWiki Varnish caching configuration. MediaWiki uses the list of cache nodes to send HTTP PURGE requests to invalidate cached pages when they are updated. So with VHA, could MediaWiki just be configured with a single hostname or floating IP address (e.g. keepalived) that points to the Varnish cluster so that the cluster could handle replicating the PURGE requests? Thanks, Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Mon Jun 14 14:54:23 2021 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Mon, 14 Jun 2021 07:54:23 -0700 Subject: Varnish HA and MediaWiki HTTP PURGEs In-Reply-To: References: Message-ID: Hello Justin! VHA is a commercial product, so we should probably keep it short of private as this is an open-source mailing-list. However, since I'm sure the answer will be useful for other people, let's answer publicly :-) VHA is a fire-and-forget tool, outside of the critical path so that replication requests failing (or being rate-limited) don't cause harm. Purging, on the other hand, needs to be very vocal about failed purge requests failing as your cache consistency is at stake, so while VHA can do it, it's a bad idea. However, VHA uses a tool named broadcaster which can be used on its own to do exactly what you need: replicate a single request for the CMS backend to the whole cluster, and report back so you can act on failures. 
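(For illustration only, with made-up hostnames: without a broadcaster, the
fan-out MediaWiki has to do itself amounts to something like the loop below.
The broadcaster automates this and collects the per-node status codes so
failures are visible.)

```
# hypothetical fan-out: send one PURGE to every cache node and print each node's response code
for node in varnish-01 varnish-02 varnish-03; do
    curl -s -o /dev/null -w "%{http_code} ${node}\n" \
        -X PURGE -H "Host: wiki.example.com" "http://${node}/wiki/Some_Page"
done
```

A real setup would of course take the node list and the URL from the CMS
configuration rather than hard-coding them.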
Cheer, -- Guillaume Quintard On Mon, Jun 14, 2021 at 7:39 AM Justin Lloyd wrote: > Hi all, > > > > I just saw the new Varnish HA video > and was wondering if VHA?s > node synchronization would obviate the need for all of the Varnish nodes to > be listed in the MediaWiki Varnish caching configuration > . MediaWiki uses > the list of cache nodes to send HTTP PURGE requests to invalidate cached > pages when they are updated. So with VHA, could MediaWiki just be > configured with a single hostname or floating IP address (e.g. keepalived) > that points to the Varnish cluster so that the cluster could handle > replicating the PURGE requests? > > > > Thanks, > > Justin > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From justinl at arena.net Mon Jun 14 15:00:11 2021 From: justinl at arena.net (Justin Lloyd) Date: Mon, 14 Jun 2021 15:00:11 +0000 Subject: Varnish HA and MediaWiki HTTP PURGEs In-Reply-To: References: Message-ID: Thanks, Guillaume! I may follow up with you separately to discuss this in more depth since this could help with a redesign of our architecture that I?d like to do. Justin From: Guillaume Quintard Sent: Monday, June 14, 2021 7:54 AM To: Justin Lloyd Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish HA and MediaWiki HTTP PURGEs Hello Justin! VHA is a commercial product, so we should probably keep it short of private as this is an open-source mailing-list. However, since I'm sure the answer will be useful for other people, let's answer publicly :-) VHA is a fire-and-forget tool, outside of the critical path so that replication requests failing (or being rate-limited) don't cause harm. Purging, on the other hand, needs to be very vocal about failed purge requests failing as your cache consistency is at stake, so while VHA can do it, it's a bad idea. However, VHA uses a tool named broadcaster which can be used on its own to do exactly what you need: replicate a single request for the CMS backend to the whole cluster, and report back so you can act on failures. Cheer, -- Guillaume Quintard On Mon, Jun 14, 2021 at 7:39 AM Justin Lloyd > wrote: Hi all, I just saw the new Varnish HA video and was wondering if VHA?s node synchronization would obviate the need for all of the Varnish nodes to be listed in the MediaWiki Varnish caching configuration. MediaWiki uses the list of cache nodes to send HTTP PURGE requests to invalidate cached pages when they are updated. So with VHA, could MediaWiki just be configured with a single hostname or floating IP address (e.g. keepalived) that points to the Varnish cluster so that the cluster could handle replicating the PURGE requests? Thanks, Justin _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.chivers at zengenti.com Fri Jun 18 12:22:32 2021 From: r.chivers at zengenti.com (Richard Chivers) Date: Fri, 18 Jun 2021 13:22:32 +0100 Subject: Upgrading to 6.6 can't reload Message-ID: Hi, First post, hope this is the right place. I am doing some work at the moment moving our config from version 6 to 6.6, also moving from ubuntu bionic to focal. 
In the systemd configuration when we start varnish we pass the args (also many more not detailed): -T localhost:6082 \ -S /etc/varnish/secret \ We have generated an appropriate secret file etc. In bionic when we run a varnishadm, we don't need to pass the -T or -S args, it just reads the secret file ( I am assuming) and connects. In focal this is not the case, I need to pass the args. e.g. varnishadm -T localhost:6082 -S /etc/varnish/secret Because of this calling /usr/sbin/varnishreload fails because it calls varnishadm -n '' -- vcl.list and gets the response "No -T in shared memory" So my question is where does this default from, is there an ENV variable to set, or am I just missing something? Another strange issue is that varnishlog is not returning anything, it simply hangs and doen't show anything or an error for that matter. I Installed by adding the repo: deb https://packagecloud.io/varnishcache/varnish66/ubuntu/ focal main Any ideas or help appreciated. I have gone back through change logs, but can't spot anything. Thanks Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Fri Jun 18 13:13:28 2021 From: dridi at varni.sh (Dridi Boukelmoune) Date: Fri, 18 Jun 2021 13:13:28 +0000 Subject: Upgrading to 6.6 can't reload In-Reply-To: References: Message-ID: On Fri, Jun 18, 2021 at 12:24 PM Richard Chivers wrote: > > Hi, > > First post, hope this is the right place. > > I am doing some work at the moment moving our config from version 6 to 6.6, also moving from ubuntu bionic to focal. > > In the systemd configuration when we start varnish we pass the args (also many more not detailed): > > -T localhost:6082 \ > -S /etc/varnish/secret \ > > We have generated an appropriate secret file etc. > > In bionic when we run a varnishadm, we don't need to pass the -T or -S args, it just reads the secret file ( I am assuming) and connects. > > In focal this is not the case, I need to pass the args. e.g. varnishadm -T localhost:6082 -S /etc/varnish/secret > > Because of this calling /usr/sbin/varnishreload fails because it calls varnishadm -n '' -- vcl.list and gets the response "No -T in shared memory" > > So my question is where does this default from, is there an ENV variable to set, or am I just missing something? Did your system's hostname change between the moment when varnish was started and when you attempted a reload? Can you share the rest of your service file? (maybe redact sensitive parts if any) > Another strange issue is that varnishlog is not returning anything, it simply hangs and doen't show anything or an error for that matter. > > I Installed by adding the repo: deb https://packagecloud.io/varnishcache/varnish66/ubuntu/ focal main > > Any ideas or help appreciated. > > I have gone back through change logs, but can't spot anything. > > Thanks > > Richard > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From dridi at varni.sh Mon Jun 21 08:00:56 2021 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 21 Jun 2021 08:00:56 +0000 Subject: Upgrading to 6.6 can't reload In-Reply-To: References: Message-ID: On Mon, Jun 21, 2021 at 6:45 AM Richard Chivers wrote: > > Hi, thanks for coming back. No, the hostname didn't change. Here is the rest of the file: > > [Unit] > -S /etc/varnish/secret \ Can you try removing the -S option from varnishd? 
Since you only listen to the CLI on localhost, there's likely no remote access, so leaving the secret out will make varnishd generate a random one. Basically, if you want to use varnishadm you need local root privileges, same as your current setup. >> > In bionic when we run a varnishadm, we don't need to pass the -T or -S args, it just reads the secret file ( I am assuming) and connects. >> > >> > In focal this is not the case, I need to pass the args. e.g. varnishadm -T localhost:6082 -S /etc/varnish/secret The main entry point is the -n option, and then options (or lack thereof) like -T and -S can be found from the working directory. >> > Because of this calling /usr/sbin/varnishreload fails because it calls varnishadm -n '' -- vcl.list and gets the response "No -T in shared memory" >> > >> > So my question is where does this default from, is there an ENV variable to set, or am I just missing something? >> >> Did your system's hostname change between the moment when varnish was >> started and when you attempted a reload? >> >> Can you share the rest of your service file? (maybe redact sensitive >> parts if any) I didn't have time to give more details, but the default value for -n is the system's hostname, that's why I asked initially about the hostname changing. >> > Another strange issue is that varnishlog is not returning anything, it simply hangs and doen't show anything or an error for that matter. Are you running varnishlog with enough privileges? (most likely belonging at least to the varnish group.) If you omit the -d option, varnishlog will print transactions as they complete, so if by any chance you are inspecting a test system with no traffic that's not surprising. A surefire way to see whether varnishlog connects to a running varnish instance is to try: varnishlog -d -g raw >> > I Installed by adding the repo: deb https://packagecloud.io/varnishcache/varnish66/ubuntu/ focal main >> > >> > Any ideas or help appreciated. >> > >> > I have gone back through change logs, but can't spot anything. >> > >> > Thanks >> > >> > Richard Please keep the mailing list CC'd. Dridi From r.chivers at zengenti.com Wed Jun 23 06:21:27 2021 From: r.chivers at zengenti.com (Richard Chivers) Date: Wed, 23 Jun 2021 07:21:27 +0100 Subject: Upgrading to 6.6 can't reload In-Reply-To: References: Message-ID: Hey, thanks for coming back to me, I have done some more work, but haven't got any further at this stage. On Mon, Jun 21, 2021 at 9:01 AM Dridi Boukelmoune wrote: > On Mon, Jun 21, 2021 at 6:45 AM Richard Chivers > wrote: > > > > Hi, thanks for coming back. No, the hostname didn't change. Here is the > rest of the file: > > > > [Unit] > > > -S /etc/varnish/secret \ > > Can you try removing the -S option from varnishd? > > Since you only listen to the CLI on localhost, there's likely no > remote access, so leaving the secret out will make varnishd generate a > random one. Basically, if you want to use varnishadm you need local > root privileges, same as your current setup. > I tried this and it makes no difference, I think the fundamental issue is that calling varnishadm without args seems (regardless the args I pass to varnishd) to end in the message "No -T in shared memory" if run from root. If I run from another user, I do get the message "could not get hold of varnishd, is it running?" I guess I could update the reload script to pass the -T and -S args, but this seems wrong, just concerned there is a general issue on focal, Is anyone else running 6.6 on focal? 
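To restate the symptom concretely, these are the two invocations I keep
comparing on the focal box:

```
# works: CLI listener and secret file spelled out explicitly
varnishadm -T localhost:6082 -S /etc/varnish/secret -- vcl.list

# fails with "No -T in shared memory" -- this is what varnishreload runs
varnishadm -n '' -- vcl.list
```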
Looking at the source code in 6 and 6.6 I can't see anywhere that the -T would default from and yet on 6 under bionic varnishadm as a root user just works without any -T or -S flags. https://github.com/varnishcache/varnish-cache/blob/6.0/bin/varnishadm/varnishadm.c > > >> > In bionic when we run a varnishadm, we don't need to pass the -T or > -S args, it just reads the secret file ( I am assuming) and connects. > >> > > >> > In focal this is not the case, I need to pass the args. e.g. > varnishadm -T localhost:6082 -S /etc/varnish/secret > > The main entry point is the -n option, and then options (or lack > thereof) like -T and -S can be found from the working directory. > > >> > Because of this calling /usr/sbin/varnishreload fails because it > calls varnishadm -n '' -- vcl.list and gets the response "No -T in shared > memory" > >> > > >> > So my question is where does this default from, is there an ENV > variable to set, or am I just missing something? > >> > >> Did your system's hostname change between the moment when varnish was > >> started and when you attempted a reload? > >> > >> Can you share the rest of your service file? (maybe redact sensitive > >> parts if any) > > I didn't have time to give more details, but the default value for -n > is the system's hostname, that's why I asked initially about the > hostname changing. > > >> > Another strange issue is that varnishlog is not returning anything, > it simply hangs and doen't show anything or an error for that matter. > > Are you running varnishlog with enough privileges? (most likely > belonging at least to the varnish group.) > > If you omit the -d option, varnishlog will print transactions as they > complete, so if by any chance you are inspecting a test system with no > traffic that's not surprising. > > A surefire way to see whether varnishlog connects to a running varnish > instance is to try: > > varnishlog -d -g raw > I am running as root. If I execute this it connects but I get no output, I know it is connected because when I restart the varnish process I get the message, "Log abandoned (vsm)" which you always see when a new varbnishd process starts. I am definitely hitting the varnish server, as I am executing curl requests to localhost:80, but there is no output from varnishlog. I am about to spin up some more boxes, so will check to see wheter this is just specific to this box or not, I did initially install 6.2 on this server and varnishlog was working as expected with that. > > >> > I Installed by adding the repo: deb > https://packagecloud.io/varnishcache/varnish66/ubuntu/ focal main > >> > > >> > Any ideas or help appreciated. > >> > > >> > I have gone back through change logs, but can't spot anything. > >> > > >> > Thanks > >> > > >> > Richard > > Please keep the mailing list CC'd. > > Dridi > -------------- next part -------------- An HTML attachment was scrubbed... URL: