[Varnish] #1723: High CPU load and exponential peak of objects in a281a10
Varnish
varnish-bugs at varnish-cache.org
Fri Apr 24 20:39:28 CEST 2015
#1723: High CPU load and exponential peak of objects in a281a10
--------------------------+----------------------------------
Reporter: zaterio@… | Owner:
Type: defect | Status: new
Priority: normal | Milestone: Varnish 4.0 release
Component: varnishd | Version: trunk
Severity: major | Resolution:
Keywords: load objects |
--------------------------+----------------------------------
New description:
I am using varnishd (varnish-trunk revision a281a10) with 2 storage
backends:
ram1: default storage (malloc, 8 GB)
vod1: only for mp4 files (file, 50 GB)
In vcl_backend_response, we have the following rule:
{{{
set beresp.storage_hint = "ram1";
set beresp.http.x-storage = "ram1";

if (bereq.url ~ "\.mp4$") {
    set beresp.storage_hint = "vod1";
    set beresp.http.x-storage = "vod1";
    set beresp.do_stream = true;
    set beresp.ttl = 10h;
    set beresp.http.Cache-Control = "max-age=604800, public";
    set beresp.do_esi = true;
    return (deliver);
}
}}}
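A quick way to confirm that mp4 objects really end up in the file stevedore is to look at the per-storage counters; this is only a sketch, and the SMA./SMF. counter prefixes are an assumption based on the malloc and file backends named in DAEMON_OPTS:
{{{
# Sketch: show usage of both storage backends; mp4 traffic should grow
# SMF.vod1.* while everything else stays in SMA.ram1.*.
varnishstat -1 | grep -E '^(SMA\.ram1|SMF\.vod1)\.'
}}}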
On this machine we have traffic peaks of 1 Gbit/s, but yesterday we
accommodated a new service and traffic increased to 2 Gbit/s.
In this situation, every 30 minutes the load increases up to 100 (for about
5 minutes), MAIN.n_objectcore and MAIN.n_objecthead rise to
16970000000000 (normal is 5.5k), and then uptime returns to 0 and the load
decreases.
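To correlate the counter spike with the load peaks, both can be sampled on the affected host; this is just a sketch (the 30-second interval is arbitrary, and it assumes varnishstat from the same build is on PATH):
{{{
# Sketch: log the load average and the two object counters every 30 seconds
# so the spike can be lined up with the load peak afterwards.
while true; do
    date
    uptime
    varnishstat -1 | grep -E '^MAIN\.(n_objectcore|n_objecthead) '
    sleep 30
done
}}}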
We have another server with identical hardware and Varnish configuration
(but more traffic, 5 Gbit/s). That server does not show the behavior
described above; it is running varnish-trunk revision 7746e30.
We downgraded the problematic machine from a281a10 to 7746e30, and after 10
hours it looks normal.
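To confirm which trunk revision each machine is actually running, and whether the child process is being restarted when the counters spike, something like the following can be used (sketch):
{{{
# Print the varnishd build string to confirm the installed revision
# (a281a10 vs. 7746e30) on each host.
varnishd -V

# MAIN.uptime resets to 0 when the child is restarted, which helps tell a
# child restart apart from a counter wrap.
varnishstat -1 | grep -E '^MAIN\.uptime '
}}}
For reference, the daemon is started with the following options: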
{{{
DAEMON_OPTS="-a XXX.XXX.XXX.XXX:80, \
-T XXX:XXX:XXX:XXX:6082 \
-f /etc/varnish/default.vcl \
-h classic,16383 \
-s ram1=malloc,8G \
-s vod1=file,/varnishcache/varnish.bin,50G \
-p thread_pools=2 \
-p thread_pool_min=500 \
-p thread_pool_max=3000 \
-p thread_pool_add_delay=2 \
-p auto_restart=on \
-p ping_interval=3 \
-p send_timeout=5000 \
-p workspace_session=1M \
-p cli_timeout=25 \
-p http_gzip_support=off \
-p tcp_keepalive_time=600 \
-p listen_depth=8192 \
-p cli_buffer=32k \
-p cli_limit=96k \
-p ban_dups=on"
}}}
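The effective values of the tuned parameters can be double-checked against the running instance; the sketch below assumes varnishadm is run locally and can locate the instance via its shared memory segment:
{{{
# Sketch: show the effective value of one tuned parameter; repeat for any
# other parameter from DAEMON_OPTS that looks suspect.
varnishadm param.show thread_pool_min
}}}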
--
Comment (by phk):
Can you try "varnishadm panic.show" and send us the output?
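On the affected host this can be captured with something like the following (sketch; the output file name is arbitrary):
{{{
# Dump the last panic message from the child process, if any, and save it
# so it can be attached to the ticket.
varnishadm panic.show > panic-a281a10.txt 2>&1
}}}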
--
Ticket URL: <https://www.varnish-cache.org/trac/ticket/1723#comment:1>
Varnish <https://varnish-cache.org/>
The Varnish HTTP Accelerator