Darryl, that sounds right. Yes, this is in our vcl_recv handler. I was watching varnish in top today and its memory crept up by 1-2 KB every few seconds, monotonically increasing. This seems like a major issue - I'm surprised that purge_url doesn't just do that under the covers.
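
For anyone following along, the pattern in question looks roughly like this (a simplified sketch, not our exact VCL - the X-Purge header below is just an illustrative trigger standing in for however the updater actually requests a purge):

    sub vcl_recv {
        # X-Purge is an illustrative marker for "invalidate the cached copy"
        if (req.http.X-Purge) {
            # purge_url() treats its argument as a regex matched against cached
            # URLs and records a purge; per Darryl's findings those records are
            # never freed, which matches the steady memory growth we're seeing
            purge_url(req.url);
        }
    }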

I'll see if I can't adjust our VCL logic as you suggest. Thanks.
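
If I understand your vcl_hit suggestion correctly, the replacement would look something like this (an untested sketch on my end, using the same illustrative X-Purge trigger as above; I may also want a req.restarts guard):

    sub vcl_hit {
        # same illustrative X-Purge marker as in the sketch above
        if (req.http.X-Purge) {
            # expire the cached object instead of recording a purge, then
            # restart the request so the lookup misses and a fresh copy is
            # fetched from the backend
            set obj.ttl = 0s;
            restart;
        }
    }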
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">Hi Mike,<br>
<br>
Quite possibly the purge_url usage is causing you a problem. I assume this<br>
is something that is being invoked from your VCL, rather than telnet-ing<br>
to the administrative interface or by varnishadm?<br>
>
> My testing showed that with purge_url in the VCL, a 'purge record' was
> created every time the rule was struck, and that record never seemed to be
> removed, which meant that memory grew without bound nearly continuously
> (new memory allocated for each new purge record). See the thread I started
> here:
> http://www.mail-archive.com/varnish-misc@projects.linpro.no/msg02520.html
>
> Instead, in vcl_hit, if the object should be purged, set obj.ttl to 0 and
> then restart the request. This solved the problem for me.
>
> regards,
> Darryl Dixon
> Winterhouse Consulting Ltd
> http://www.winterhouseconsulting.com
>
> > We're using Varnish and finding that Linux runs the OOM killer on the
> > large varnish child process every few days. I'm not sure what's causing
> > the memory to grow, but now I want to tune it so that I know
> > configuration is not an issue.
> >
> > The default configuration we were using gave varnish 10MB of storage.
> > We're using a small 32-bit EC2 instance (Linux 2.6.21.7-2.fc8xen) with
> > 1.75GB of RAM and 10GB of disk, so I changed the storage specification to
> > "file,/var/lib/varnish/varnish_storage.bin,1500M". I'd like to be able to
> > give varnish 8GB of disk, but it complains about sizes larger than 2GB.
> > A 32-bit limitation?
> >
> > Side note: I couldn't find any good documentation on the various command
> > line parameters for varnishd. The 2.0.4 source only contains a man page
> > for vcl. It would be nice to see a man page for varnishd and its options.
> >
> > We are using purge_url heavily as we update documents - this shouldn't
> > cause unchecked growth though, right? We aren't using regexps to purge.
> >
> > Attached is the /var/log/messages output from the oom-killer; here are a
> > few lines for the lazy. I can't grok the output.
> >
> > Jun 11 15:35:02 (none) kernel: varnishd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
> > [...snip...]
> > Jun 11 15:35:02 (none) kernel: Mem-info:
> > Jun 11 15:35:02 (none) kernel: DMA per-cpu:
> > Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 94   Cold: hi: 62, btch: 15 usd: 60
> > Jun 11 15:35:02 (none) kernel: HighMem per-cpu:
> > Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 26   Cold: hi: 62, btch: 15 usd: 14
> > Jun 11 15:35:02 (none) kernel: Active:213349 inactive:210447 dirty:0 writeback:0 unstable:0
> > Jun 11 15:35:02 (none) kernel: free:1957 slab:1078 mapped:23 pagetables:1493 bounce:13
> > Jun 11 15:35:02 (none) kernel: DMA free:7324kB min:3440kB low:4300kB high:5160kB active:355572kB inactive:346580kB present:739644kB pages_scanned:1108980 all_unreclaimable? yes
> >
> > Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 972
> >
> > Jun 11 15:35:02 (none) kernel: HighMem free:504kB min:512kB low:1668kB high:2824kB active:497824kB inactive:495208kB present:995688kB pages_scanned:1537436 all_unreclaimable? yes
> > Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 0
> >
> > Jun 11 15:35:02 (none) kernel: DMA: 11*4kB 10*8kB 42*16kB 12*32kB 2*64kB 23*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 7324kB
> > Jun 11 15:35:02 (none) kernel: HighMem: 1*4kB 6*8kB 4*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB
> > Jun 11 15:35:02 (none) kernel: Swap cache: add 1563900, delete 1563890, find 572160/581746, race 3+9
> > Jun 11 15:35:02 (none) kernel: Free swap = 0kB
> >
> > Jun 11 15:35:02 (none) kernel: Total swap = 917496kB
> > _______________________________________________
> > varnish-misc mailing list
> > varnish-misc@projects.linpro.no
> > http://projects.linpro.no/mailman/listinfo/varnish-misc