varnishadm or varnish in weird state
John Cherouvim
jc at eworx.gr
Mon Aug 28 08:42:08 CEST 2017
I have two Ubuntu 16 LTS servers provisioned with the same Ansible
playbook, and both run varnish-4.1.1 with exactly the same
configuration. The first server is production (receives a lot of
traffic) and the other one is staging (minimal traffic). Both have
~20 days of uptime, with varnishstat reporting an 11-day uptime on each.
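For reference, the varnishstat uptime I mention is the one I read from
the MAIN.uptime counter (assuming that is the counter behind the figure
the varnishstat screen shows):
$ sudo varnishstat -1 -f MAIN.uptime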
On production I get weird results when trying to list the backends.
Sometimes I get this:
$ sudo varnishadm backend.list
PONG 1503057040 1.0
And other times I get this:
$ sudo varnishadm backend.list
Backend name                                 Admin      Probe
8440ffbd-e1de-4827-9f83-9096f5a97bf1.www     probe      Healthy 5/5
But on staging I consistently get the following, which is what I'm used
to seeing in every other environment where I've used Varnish:
$ sudo varnishadm backend.list
Backend name                                 Admin      Probe
boot.www                                     probe      Healthy 5/5
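In case it helps to compare the two machines, I can also list which
VCLs are loaded and which one is active (just the command here, I have
not pasted its output):
$ sudo varnishadm vcl.list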
My /etc/varnish/default.vcl starts like this on both servers:
> vcl 4.0;
> backend www {
>     .host = "localhost";
>     .port = "8888";
>     .connect_timeout = 60s;
>     .first_byte_timeout = 120s;
>     .between_bytes_timeout = 120s;
>     .max_connections = 256;
>     .probe = {
>         .url = "/health-check";
>         .timeout = 15s;
>         .interval = 5s;
>         .window = 5;
>         .threshold = 2;
>     }
> }
With that kind of configuration I've never seen a hash-like backend
name, nor the behavior where backend.list randomly returns either that
listing or the "PONG" response.
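My understanding (which may well be off) is that the prefix before the
dot in backend.list is the name of the VCL the backend was compiled
into, which is why staging shows the default startup name "boot". As a
sketch, explicitly loading and activating the same file under a made-up
name such as "test-vcl" should make a "test-vcl.www" entry appear:
$ sudo varnishadm vcl.load test-vcl /etc/varnish/default.vcl
$ sudo varnishadm vcl.use test-vcl
$ sudo varnishadm backend.list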
Doing a "service varnish restart" fixed this.
But does anyone know why did this happen in the first place?
thanks