We're using Varnish and finding that Linux runs the OOM killer on the large varnish child process every few days. I'm not sure what's causing the memory to grow, but for now I want to tune things so that I know configuration is not the issue.

The storage size in the default config we were using was 10MB. We're on a small 32-bit EC2 instance (Linux 2.6.21.7-2.fc8xen) with 1.75GB of RAM and 10GB of disk, so I changed the storage specification to "file,/var/lib/varnish/varnish_storage.bin,1500M". I'd like to be able to give Varnish 8GB of disk, but it complains about sizes larger than 2GB. Is that a 32-bit limitation?
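
For completeness, the varnishd invocation now looks something like this (the listen address, admin port, and VCL path here are generic placeholders rather than our exact setup):

    varnishd -a :80 \
             -T 127.0.0.1:6082 \
             -f /etc/varnish/default.vcl \
             -s file,/var/lib/varnish/varnish_storage.bin,1500M

(-a is the listen address, -T the management interface, -f the VCL file, and -s the storage specification.)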

Side note: I couldn't find any good documentation on the various command-line parameters for varnishd. The 2.0.4 source only contains a man page for VCL. It would be nice to see a man page for varnishd and its options.

We are using purge_url heavily as we update documents; this shouldn't cause unchecked growth though, right? We aren't using regexps to purge.
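
In case it matters, the purging is done with plain URLs, something along these lines (this is the standard wiki-style recipe, simplified for illustration, not our exact VCL):

    acl purgers {
        "localhost";
    }

    sub vcl_recv {
        if (req.request == "PURGE") {
            if (!client.ip ~ purgers) {
                error 405 "Not allowed.";
            }
            # purge_url() takes a regex, but we only ever pass the literal URL
            purge_url(req.url);
            error 200 "Purged.";
        }
    }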

Attached is the /var/log/messages output from the oom-killer, and here are a few lines for the lazy. I can't grok the output.

Jun 11 15:35:02 (none) kernel: varnishd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
[...snip...]
Jun 11 15:35:02 (none) kernel: Mem-info:
Jun 11 15:35:02 (none) kernel: DMA per-cpu:
Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 94 Cold: hi: 62, btch: 15 usd: 60
Jun 11 15:35:02 (none) kernel: HighMem per-cpu:
Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 26 Cold: hi: 62, btch: 15 usd: 14
Jun 11 15:35:02 (none) kernel: Active:213349 inactive:210447 dirty:0 writeback:0 unstable:0
Jun 11 15:35:02 (none) kernel: free:1957 slab:1078 mapped:23 pagetables:1493 bounce:13
Jun 11 15:35:02 (none) kernel: DMA free:7324kB min:3440kB low:4300kB high:5160kB active:355572kB inactive:346580kB present:739644kB pages_scanned:1108980 all_unreclaimable? yes
Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 972
Jun 11 15:35:02 (none) kernel: HighMem free:504kB min:512kB low:1668kB high:2824kB active:497824kB inactive:495208kB present:995688kB pages_scanned:1537436 all_unreclaimable? yes
Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 0
Jun 11 15:35:02 (none) kernel: DMA: 11*4kB 10*8kB 42*16kB 12*32kB 2*64kB 23*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 7324kB
Jun 11 15:35:02 (none) kernel: HighMem: 1*4kB 6*8kB 4*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB
Jun 11 15:35:02 (none) kernel: Swap cache: add 1563900, delete 1563890, find 572160/581746, race 3+9
Jun 11 15:35:02 (none) kernel: Free swap = 0kB
Jun 11 15:35:02 (none) kernel: Total swap = 917496kB