varnish aggressively nuking all my objects.
Jorge Nerín
jnerin+varnish at gmail.com
Fri Sep 16 09:29:10 CEST 2011
On Thu, Sep 15, 2011 at 11:29, Aurélien Lemaire
<aurelien.lemaire at smile.fr> wrote:
> Good day folks,
>
> First of all, Varnish is an outstanding piece of software that my company
> and I are addicted to, so thanks to all the coders.
>
> Here is my problem:
> I allocated Varnish 1 GB of RAM on a website that can have more than 2 GB of
> cacheable objects. Not to worry, though, as any proxy-cache system should
> smartly nuke old objects to make room for new ones and live peacefully
> within its allocated RAM. And that's where Varnish behaves unexpectedly:
> each time it needs to nuke SOME objects, it nukes ALMOST ALL of them (often
> ~80% of my 35k objects), which is quite aggressive; thus I lose almost all
> of my cache... argh!
>
> Three Munin graphs are attached to show the problem clearly: a big drop
> each time nuking happens.
>
> To make sure my problem really lies with Varnish's nuking system, I
> increased the storage from 1 GB to 3 GB (more than the maximum possible
> 2 GB of cacheable objects) on another Varnish instance of this platform
> (the website is delivered by multiple front/Varnish servers, all strictly
> similar and independent) and the issue disappeared (no more nuking, no
> loss of ~80% of my objects).
>
> Here is my environment:
> Debian 5.0.8 64-bit on a 2.6.32-5-openvz-amd64 kernel
> Varnish 2.1.3 SVN 5049:5055 (Debian package 2.1.3-8)
> 200 Varnish worker threads running constantly (no issue with the workers)
> 30 req/s on average, with peaks of 60 req/s
>
> The daemon runs as follows:
> /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -T localhost:6082 -S
> /etc/varnish/secret -f /etc/varnish/serverx.vcl -w 100,1024 -s
> file,/var/lib/varnish/serverx/varnish_storage.bin,3G
>
> Here is a quick varnishstat -1:
>
> client_conn 17177518 9.26 Client connections accepted
> client_drop 0 0.00 Connection dropped, no sess/wrk
> client_req 31277524 16.86 Client requests received
> cache_hit 20909485 11.27 Cache hits
> cache_hitpass 2558712 1.38 Cache hits for pass
> cache_miss 5422557 2.92 Cache misses
> backend_conn 175175 0.09 Backend conn. success
> backend_unhealthy 0 0.00 Backend conn. not attempted
> backend_busy 0 0.00 Backend conn. too many
> backend_fail 0 0.00 Backend conn. failures
> backend_reuse 10192538 5.49 Backend conn. reuses
> backend_toolate 98629 0.05 Backend conn. was closed
> backend_recycle 10291261 5.55 Backend conn. recycles
> backend_unused 0 0.00 Backend conn. unused
> fetch_head 459553 0.25 Fetch head
> fetch_length 9907522 5.34 Fetch with Length
> fetch_chunked 33 0.00 Fetch chunked
> fetch_eof 0 0.00 Fetch EOF
> fetch_bad 0 0.00 Fetch had bad headers
> fetch_close 7 0.00 Fetch wanted close
> fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed
> fetch_zero 638 0.00 Fetch zero len
> fetch_failed 0 0.00 Fetch failed
> n_sess_mem 1189 . N struct sess_mem
> n_sess 890 . N struct sess
> n_object 32678 . N struct object
> n_vampireobject 0 . N unresurrected objects
> n_objectcore 32794 . N struct objectcore
> n_objecthead 30698 . N struct objecthead
> n_smf 68810 . N struct smf
> n_smf_frag 3002 . N small free smf
> n_smf_large 490 . N large free smf
> n_vbe_conn 8 . N struct vbe_conn
> n_wrk 200 . N worker threads
> n_wrk_create 200 0.00 N worker threads created
> n_wrk_failed 0 0.00 N worker threads not created
> n_wrk_max 4133 0.00 N worker threads limited
> n_wrk_queue 0 0.00 N queued work requests
> n_wrk_overflow 5 0.00 N overflowed work requests
> n_wrk_drop 0 0.00 N dropped work requests
> n_backend 1 . N backends
> n_expired 2291302 . N expired objects
> *n_lru_nuked 3098052 . N LRU nuked objects*
> n_lru_saved 0 . N LRU saved objects
> n_lru_moved 19894617 . N LRU moved objects
> n_deathrow 0 . N objects on deathrow
> losthdr 0 0.00 HTTP header overflows
> n_objsendfile 0 0.00 Objects sent with sendfile
> n_objwrite 23874235 12.87 Objects sent with write
> n_objoverflow 0 0.00 Objects overflowing workspace
> s_sess 17177512 9.26 Total Sessions
> s_req 31277524 16.86 Total Requests
> s_pipe 0 0.00 Total pipe
> s_pass 4945241 2.67 Total pass
> s_fetch 10367753 5.59 Total fetch
> s_hdrbytes 10732679592 5784.04 Total header bytes
> s_bodybytes 316828226025 170744.51 Total body bytes
> sess_closed 7071576 3.81 Session Closed
> sess_pipeline 13905 0.01 Session Pipeline
> sess_readahead 8085 0.00 Session Read Ahead
> sess_linger 24485521 13.20 Session Linger
> sess_herd 23385907 12.60 Session herd
> shm_records 1766166234 951.82 SHM records
> shm_writes 128900505 69.47 SHM writes
> shm_flushes 13215 0.01 SHM flushes due to overflow
> shm_cont 78095 0.04 SHM MTX contention
> shm_cycles 769 0.00 SHM cycles through buffer
> sm_nreq 18413160 9.92 allocator requests
> sm_nobj 65318 . outstanding allocations
> sm_balloc 824430592 . bytes allocated
> sm_bfree 249311232 . bytes free
> sma_nreq 0 0.00 SMA allocator requests
> sma_nobj 0 . SMA outstanding allocations
> sma_nbytes 0 . SMA outstanding bytes
> sma_balloc 0 . SMA bytes allocated
> sma_bfree 0 . SMA bytes free
> sms_nreq 61 0.00 SMS allocator requests
> sms_nobj 0 . SMS outstanding allocations
> sms_nbytes 0 . SMS outstanding bytes
> sms_balloc 16409 . SMS bytes allocated
> sms_bfree 16409 . SMS bytes freed
> backend_req 10367752 5.59 Backend requests made
> n_vcl 1 0.00 N vcl total
> n_vcl_avail 1 0.00 N vcl available
> n_vcl_discard 0 0.00 N vcl discarded
> n_purge 1 . N total active purges
> n_purge_add 12314 0.01 N new purges added
> n_purge_retire 12313 0.01 N old purges deleted
> n_purge_obj_test 2448163 1.32 N objects tested
> n_purge_re_test 62064275 33.45 N regexps tested against
> n_purge_dups 9524 0.01 N duplicate purges removed
> hcb_nolock 28886624 15.57 HCB Lookups without lock
> hcb_lock 4243837 2.29 HCB Lookups with lock
> hcb_insert 4243834 2.29 HCB Inserts
> esi_parse 0 0.00 Objects ESI parsed (unlock)
> esi_errors 0 0.00 ESI parse errors (unlock)
> accept_fail 0 0.00 Accept failures
> client_drop_late 0 0.00 Connection dropped late
> uptime 1855569 1.00 Client uptime
>
>
> Is this normal Varnish behaviour? It sounds like a bug to me.
> Am I missing some tuning (lru_interval) to soften the nuking algorithm?
> Do you need more info?
> Any help appreciated ;-)
>
> Regards, Aurelien Lemaire
>
It could be someone downloading a large file (like a ~700 MB ISO image) and
Varnish nuking objects to make room for that file (even if it is configured
not to cache it).
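
If that turns out to be the cause, one possible guard is to pass very large
responses in vcl_fetch so a single fetch cannot evict most of the cache. A
minimal sketch for Varnish 2.1 VCL; the 9-digit Content-Length threshold
(roughly 100 MB and up) is an arbitrary assumption, and it will never match
responses that carry no Content-Length header at all (e.g. chunked encoding):

    sub vcl_fetch {
        # Pass objects whose declared length is 9+ digits (>= ~100 MB)
        # instead of caching them, so one huge fetch does not force
        # mass LRU nuking of smaller objects.
        if (beresp.http.Content-Length ~ "^[0-9]{9,}$") {
            return (pass);
        }
    }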
Try to get a varnishlog trace of the moment the nuking begins.
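For example (assuming the stock tools shipped with your 2.1.3 package, and a
hypothetical log path), something like this records the raw log to disk while
you watch the nuke counter, so you can replay the interesting window later:

    # Write the raw shared-memory log to a file for later inspection;
    # replay it afterwards with: varnishlog -r /var/log/varnish/nuke-trace.log
    varnishlog -w /var/log/varnish/nuke-trace.log &

    # Watch n_lru_nuked; a sudden jump marks the moment to look for
    # in the recorded log.
    while true; do varnishstat -1 | grep n_lru_nuked; sleep 5; done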
--
Jorge Nerín
<jnerin at gmail.com>