I would like to add some details to this case:

We encounter various Varnish panics (the forked child process crashes, won't restart, and nothing listens on port 80 anymore) with persistent storage (tested with 20/35/40/90G silos) and with kernel address randomization both on and off.
The same servers are healthy when we use the file or malloc backends instead of persistent. Feel free to contact me to get the full coredump.
All details below :)

1) System :

Varnish Version : 3 - trunk d56069e Sep 06, 2011 d56069e8ef221310d75455feb9b03483c9caf63b
Ubuntu 10.04 64-bit, Linux 2.6.32-33-generic #72-Ubuntu SMP Fri Jul 29 21:07:13 UTC 2011 x86_64 GNU/Linux
48G RAM / two Intel(R) Xeon(R) CPU L5640 @ 2.27GHz
SSD-SATA 90G
<div><br></div><div>2) Startup config :</div><div><br></div><div>VARNISH_INSTANCE=default</div><div>START=yes</div><div>NFILES="131072"</div><div>MEMLOCK="82000"</div><div>VARNISH_VCL_CONF=/etc/varnish/default/default.vcl</div>
<div>VARNISH_LISTEN_ADDRESS=</div><div>VARNISH_LISTEN_PORT=80</div><div>VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1</div><div>VARNISH_ADMIN_LISTEN_PORT=6082</div><div>VARNISH_SECRET_FILE=/etc/varnish/default/secret</div><div>
VARNISH_THREAD_POOLS=12</div>
<div><br></div><div>VARNISH_STORAGE_FILE_1=/mnt/ssd/varnish/cachefile1</div><div>VARNISH_STORAGE_SIZE=30G</div><div>VARNISH_STORAGE_1="persistent,${VARNISH_STORAGE_FILE_1},${VARNISH_STORAGE_SIZE}"</div><div><br>
</div><div>DAEMON_OPTS=" -n ${VARNISH_INSTANCE} \</div><div> -u root \</div><div> -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \</div><div> -f ${VARNISH_VCL_CONF} \</div><div> -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \</div>
<div> -S ${VARNISH_SECRET_FILE} \</div><div> -s ${VARNISH_STORAGE_1} \</div><div> -s Transient=malloc,1G\</div><div> -p first_byte_timeout=5 \</div>
<div> -p between_bytes_timeout=5 \</div><div> -p pipe_timeout=5 \</div><div> -p send_timeout=2700 \</div>
<div> -p default_grace=240 \</div><div> -p default_ttl=3600 \</div><div> -p http_gzip_support=off \</div>
<div> -p http_range_support=on \</div><div> -p max_restarts=2 \</div><div> -p thread_pool_add_delay=2 \</div>
<div> -p thread_pool_max=4000 \</div><div> -p thread_pool_min=80 \</div><div> -p thread_pool_timeout=120 \</div>
<div> -p thread_pools=12 \</div><div> -p thread_stats_rate=50</div><div><br></div><div><br></div><div> </div><div>#### VCL FILE #####</div><div>
### SECDownMod
### https://github.com/footplus/libvmod-secdown

import secdown;

include "/etc/varnish/backend/director_edge_2xx.vcl";
include "/etc/varnish/acl/purge.vcl";

sub vcl_recv {
    set req.backend = origin;

    if (req.request !~ "(GET|HEAD|PURGE)") {
        error 405 "Not allowed.";
    }

    if (req.url ~ "^/files") {
        set req.url = secdown.check_url(req.url, "MySecretIsNotYourSecret", "/link-expired.html", "/link-error.html");
    }

    # Before anything else we need to normalize gzip compression
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|flv|ts|mp4)$") {
            # No point in compressing these
            remove req.http.Accept-Encoding;
        } else if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else if (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }

    # Allow a PURGE method to clear cache via regular expression.
    if (req.request == "PURGE") {
        # Deny the request if the client IP is not authorized, or
        # if it comes through the HTTPS proxy on localhost.
        if (!client.ip ~ purge || req.http.X-Forwarded-For) {
            error 405 "Not allowed.";
        }
        ban_url(req.url);
        error 200 "Expression " + req.url + " added to ban.list.";
    }
}

sub vcl_pipe {
    set bereq.http.connection = "close";
}

sub vcl_pass {
#   return (pass);
}

sub vcl_hash {
    hash_data(req.url);
    return (hash);
}

sub vcl_hit {
#   return (deliver);
}

sub vcl_miss {
#   return (fetch);
}

sub vcl_fetch {
    unset beresp.http.expires;
    set beresp.http.cache-control = "max-age=86400";
    set beresp.ttl = 365d;

    if (beresp.status >= 400) {
        set beresp.ttl = 1m;
    }

    if ((beresp.status == 301) || (beresp.status == 302) || (beresp.status == 401)) {
        return (hit_for_pass);
    }
}

sub vcl_deliver {
    # Rename the Varnish XID header
    if (resp.http.X-Varnish) {
        set resp.http.X-Object-ID = resp.http.X-Varnish;
        unset resp.http.X-Varnish;
    }

    remove resp.http.Via;
    remove resp.http.X-Powered-By;

#   return (deliver);
}

sub vcl_error {
    # Do not reveal what's inside the box :)
    remove obj.http.Server;
    set obj.http.Server = "EdgeCache/1.4";
}

sub vcl_init {
#   return (ok);
}

sub vcl_fini {
#   return (ok);
}
<div><br></div><div>Sep 15 18:21:02 e101 default[18290]: Child (19438) said Out of space in persistent silo</div><div>Sep 15 18:21:02 e101 default[18290]: Child (19438) said Committing suicide, restart will make space</div>
<div>Sep 15 18:21:02 e101 default[18290]: Child (19438) ended</div><div>Sep 15 18:21:02 e101 default[18290]: Child cleanup complete</div><div>Sep 15 18:21:02 e101 default[18290]: child (20924) Started</div><div>Sep 15 18:21:02 e101 default[18290]: Child (20924) said Child starts</div>
<div>Sep 15 18:21:02 e101 default[18290]: Child (20924) said Dropped 11 segments to make free_reserve</div><div>Sep 15 18:21:02 e101 default[18290]: Child (20924) said Silo completely loaded</div><div>Sep 15 18:21:27 e101 default[18290]: Child (20924) died signal=6 (core dumped)</div>
<div>Sep 15 18:21:27 e101 default[18290]: Child (20924) Panic message: Assert error in smp_oc_getobj(), storage_persistent_silo.c line 401:#012 Condition((o)->mag</div><div>ic == 0x32851d42) not true.#012thread = (ban-lurker)#012ident = Linux,2.6.32-33-generic,x86_64,-spersistent,-smalloc,-hcritbit,epoll#012Backtrace:#012 0x437e</div>
<div>49: pan_backtrace+19#012 0x43811e: pan_ic+1ad#012 0x45da38: smp_oc_getobj+282#012 0x415407: oc_getobj+14c#012 0x417848: ban_lurker_work+299#012 0x41793d:</div><div> ban_lurker+5b#012 0x43ad91: wrk_bgthread+184#012 0x7ffff6a9c9ca: _end+7ffff6408692#012 0x7ffff67f970d: _end+7ffff61653d5#012</div>
<div>Sep 15 18:21:27 e101 default[18290]: Child cleanup complete</div><div>Sep 15 18:21:27 e101 default[18290]: child (21898) Started</div><div>Sep 15 18:21:27 e101 default[18290]: Pushing vcls failed: CLI communication error (hdr)</div>
<div>Sep 15 18:21:27 e101 default[18290]: Stopping Child</div><div>Sep 15 18:21:27 e101 default[18290]: Child (21898) died signal=6 (core dumped)</div><div>Sep 15 18:21:27 e101 default[18290]: Child (21898) Panic message: Assert error in smp_open_segs(), storage_persistent.c line 239:#012 Condition(sg1->p.offset</div>
<div> != sg->p.offset) not true.#012thread = (cache-main)#012ident = Linux,2.6.32-33-generic,x86_64,-spersistent,-smalloc,-hcritbit,no_waiter#012Backtrace:#012 0x</div><div>437e49: pan_backtrace+19#012 0x43811e: pan_ic+1ad#012 0x45a568: smp_open_segs+415#012 0x45ab93: smp_open+236#012 0x456391: STV_open+40#012 0x435fa4: chil</div>
<div>d_main+124#012 0x44d3a7: start_child+36a#012 0x44ddce: mgt_sigchld+3e7#012 0x7ffff7bd1fec: _end+7ffff753dcb4#012 0x7ffff7bd2348: _end+7ffff753e010#012</div><div>Sep 15 18:21:27 e101 default[18290]: Child (-1) said Child starts</div>
<div>Sep 15 18:21:27 e101 default[18290]: Child cleanup complete</div><div><br></div><div>4) GDB Core bt</div><div><br></div><div>(gdb) bt</div><div>#0 0x00007ffff6746a75 in raise () from /lib/libc.so.6</div><div>#1 0x00007ffff674a5c0 in abort () from /lib/libc.so.6</div>
<div>#2 0x00000000004381dd in pan_ic (func=0x482dd5 "smp_open_segs", file=0x4827c4 "storage_persistent.c", line=239,</div><div> cond=0x48283f "sg1->p.offset != sg->p.offset", err=0, xxx=0) at cache_panic.c:374</div>
<div>#3 0x000000000045a568 in smp_open_segs (sc=0x7ffff6433000, ctx=0x7ffff6433220) at storage_persistent.c:239</div><div>#4 0x000000000045ab93 in smp_open (st=0x7ffff64213c0) at storage_persistent.c:331</div><div>#5 0x0000000000456391 in STV_open () at stevedore.c:406</div>
<div>#6 0x0000000000435fa4 in child_main () at cache_main.c:128</div><div>#7 0x000000000044d3a7 in start_child (cli=0x0) at mgt_child.c:345</div><div>#8 0x000000000044ddce in mgt_sigchld (e=0x7ffff64da1d0, what=-1) at mgt_child.c:524</div>
<div>#9 0x00007ffff7bd1fec in vev_sched_signal (evb=0x7ffff6408380) at vev.c:435</div><div>#10 0x00007ffff7bd2348 in vev_schedule_one (evb=0x7ffff6408380) at vev.c:478</div><div>#11 0x00007ffff7bd1d2a in vev_schedule (evb=0x7ffff6408380) at vev.c:363</div>
<div>#12 0x000000000044e1c9 in MGT_Run () at mgt_child.c:602</div><div>#13 0x0000000000461a64 in main (argc=0, argv=0x7fffffffebd0) at varnishd.c:650</div><div><br></div><div>5) Last lines of varnishlog</div><div><br></div>
<div><br></div><div> 221 SessionOpen c 85.93.199.29 58335 :80</div><div><br></div><div> 234 SessionOpen c 77.196.147.182 2273 :80</div><div><br></div><div><br></div><br><div class="gmail_quote">2011/9/15 Aurélien <span dir="ltr"><<a href="mailto:footplus@gmail.com" target="_blank">footplus@gmail.com</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br><br>I'm currently investigating an issue on some caches we are trying to put in production, and I think we'll make a separate post about the whole setup, but i'm currently personnally interested in the following messages:<br>
<br>default[18290]: Child (19438) said Out of space in persistent silo<br>default[18290]: Child (19438) said Committing suicide, restart will make space<br><br>These can be triggered in storage_persistent_silo.c, but I'm not exactly clear on why varnish commits "suicide", and how this could be a "normal" condition (exit 0 + auto restart).<br>
<br>We're using one of the latest trunk versions (d56069e), with various
persistent storage sizes (tried 3*30G, 1*90Gb), on a Linux server with
48Gb memory. We're caching relatively big files (avg size: ~25 Mb), and they have a long expiry time (~1year). <br><br>Also, the document I found, <a href="https://www.varnish-cache.org/trac/wiki/ArchitecturePersistentStorage" target="_blank">https://www.varnish-cache.org/trac/wiki/ArchitecturePersistentStorage</a>, does not exactly explain if/how the segments are reused (or I did not understand it).<br>
<br>What is the reason and intent behind this restart ? Are the cache contents lost in this case ? Could this be caused by a certain workflow or configuration ?<br><br>Thanks,<br>Best regards,<br><font color="#888888">-- <br>
Aurélien Guillaume<br>
<br>
</font><br>_______________________________________________<br>
varnish-dev mailing list<br>
<a href="mailto:varnish-dev@varnish-cache.org" target="_blank">varnish-dev@varnish-cache.org</a><br>
<a href="https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev" target="_blank">https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev</a><br></blockquote></div><br></div>