503 Service Unavailable + Stop Responding

Jauder Ho jauderho at gmail.com
Wed Apr 8 02:03:58 CEST 2009


I saw this a number of times while load testing 2.0.3 with ab. I was just
running ab -n 1000 -c 50 (three times in a row) when this happened, and it
was reproducible.
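
For reference, the run was roughly the following, repeated three times back to
back (the URL here is only a placeholder for whatever object was being fetched):

  ab -n 1000 -c 50 http://varnish-host/some-object.jpg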

Retesting on -trunk seems to be okay.

--Jauder


On Tue, Apr 7, 2009 at 4:41 PM, Ross Brown <ross at trademe.co.nz> wrote:

> I'm seeing similar issues (running Trunk). How much load is your Varnish
> server under when it stops responding?
>
>
> -----Original Message-----
> From: varnish-misc-bounces at projects.linpro.no [mailto:
> varnish-misc-bounces at projects.linpro.no] On Behalf Of Sascha Kain
> Sent: Wednesday, 8 April 2009 3:24 a.m.
> To: kitai at ya.com
> Cc: varnish-misc at projects.linpro.no
> Subject: Re: 503 Service Unavailable + Stop Responding
>
> Hi, it is running correctly; there are no such messages in the logfiles.
>
> I'm still having the second problem: varnishd just stops responding on
> port 80 and all requests die.
>
> Maybe I'll switch to 2.0.4 and see how it behaves under high load.
>
>
> Kitai wrote:
> > Check the /var/log/messages log to see whether Varnish is restarting.
> >
> > After every crash, Varnish answers every request with a 503.
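> >
> > A quick way to check, assuming syslog goes to the default location, is to
> > look for varnishd messages there and to see whether the child uptime
> > counter keeps resetting:
> >
> >   grep -i varnish /var/log/messages | tail
> >   varnishstat -1 | grep uptime
> >
> > If the child uptime drops back to a small value after each incident, the
> > child process is being restarted.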
> >
> >
> > David (Kitai) Cruz
> >
> >
> > 2009/4/7 Sascha Kain <s.kain at eraffe-media.de>:
> >
> >> Hi,
> >> I'm getting the following error when accessing a picture (JPG) delivered
> >> by Varnish Cache.
> >>
> >> ====================
> >>
> >>
> >>  Error 503 Service Unavailable
> >>
> >> Service Unavailable
> >>
> >>
> >>      Guru Meditation:
> >>
> >> XID: 502211958
> >>
> >> Varnish <http://www.varnish-cache.org/>
> >>
> >> ================
> >>
> >>
> >> This happens sporadically.
> >> When I access the picture from the backend directly, it works.
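> >>
> >> If the shared memory log still holds the transaction, the XID from the
> >> error page can be used to find the failing request and its backend fetch.
> >> A rough sketch, assuming the -d (read existing records) and -o (group by
> >> request) options of varnishlog in this version, using the XID from the
> >> error above:
> >>
> >>   varnishlog -d -o | grep -C 15 502211958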
> >>
> >>
> >> My varnishd is running on a Debian server:
> >> proxycache2:~# uname -a
> >> Linux proxycache2 2.6.18-6-amd64 #1 SMP Mon Jun 16 22:30:01 UTC 2008
> >> x86_64 GNU/Linux
> >> proxycache2:~# varnishd -V
> >> varnishd (varnish-2.0.3)
> >> Copyright (c) 2006-2008 Linpro AS / Verdens Gang AS
> >>
> >> varnishd -a :80 -b xx.xx.xx.40:80 -s malloc,15360M
> >>
> >>
> >> It also happens that the daemon simply stops answering on port 80; I have
> >> to restart it manually!
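> >>
> >> Before restarting, it may be worth checking from the box itself whether
> >> the listen socket is still there and accepting connections at all (the
> >> URL is just an example object):
> >>
> >>   curl -sv -o /dev/null http://localhost:80/test.jpg
> >>   netstat -tln | grep ':80 '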
> >>
> >>
> >> proxycache2:~# varnishstat -1
> >> uptime                 691288          .   Child uptime
> >> client_conn          13757134        19.90 Client connections accepted
> >> client_req           62501336        90.41 Client requests received
> >> cache_hit            59008654        85.36 Cache hits
> >> cache_hitpass              35         0.00 Cache hits for pass
> >> cache_miss            3454784         5.00 Cache misses
> >> backend_conn          3491596         5.05 Backend connections success
> >> backend_unhealthy            0         0.00 Backend connections not attempted
> >> backend_busy                0         0.00 Backend connections too many
> >> backend_fail             1094         0.00 Backend connections failures
> >> backend_reuse         3190621         4.62 Backend connections reuses
> >> backend_recycle       3300087         4.77 Backend connections recycles
> >> backend_unused              0         0.00 Backend connections unused
> >> n_srcaddr                1082          .   N struct srcaddr
> >> n_srcaddr_act              69          .   N active struct srcaddr
> >> n_sess_mem               4061          .   N struct sess_mem
> >> n_sess                    325          .   N struct sess
> >> n_object               727166          .   N struct object
> >> n_objecthead           412871          .   N struct objecthead
> >> n_smf                       0          .   N struct smf
> >> n_smf_frag                  0          .   N small free smf
> >> n_smf_large                 0          .   N large free smf
> >> n_vbe_conn                 25          .   N struct vbe_conn
> >> n_bereq                   173          .   N struct bereq
> >> n_wrk                      46          .   N worker threads
> >> n_wrk_create             6590         0.01 N worker threads created
> >> n_wrk_failed                0         0.00 N worker threads not created
> >> n_wrk_max                   0         0.00 N worker threads limited
> >> n_wrk_queue                 0         0.00 N queued work requests
> >> n_wrk_overflow          40391         0.06 N overflowed work requests
> >> n_wrk_drop                  0         0.00 N dropped work requests
> >> n_backend                   1          .   N backends
> >> n_expired                 498          .   N expired objects
> >> n_lru_nuked           2717782          .   N LRU nuked objects
> >> n_lru_saved                 0          .   N LRU saved objects
> >> n_lru_moved          50761824          .   N LRU moved objects
> >> n_deathrow                  0          .   N objects on deathrow
> >> losthdr                     0         0.00 HTTP header overflows
> >> n_objsendfile               0         0.00 Objects sent with sendfile
> >> n_objwrite           43664646        63.16 Objects sent with write
> >> n_objoverflow               0         0.00 Objects overflowing workspace
> >> s_sess               13757122        19.90 Total Sessions
> >> s_req                62501369        90.41 Total Requests
> >> s_pipe                     10         0.00 Total pipe
> >> s_pass                  37897         0.05 Total pass
> >> s_fetch               3477236         5.03 Total fetch
> >> s_hdrbytes        19737092290     28551.19 Total header bytes
> >> s_bodybytes      681892000484    986407.98 Total body bytes
> >> sess_closed            544021         0.79 Session Closed
> >> sess_pipeline          164402         0.24 Session Pipeline
> >> sess_readahead          65588         0.09 Session Read Ahead
> >> sess_linger                 0         0.00 Session Linger
> >> sess_herd            61809876        89.41 Session herd
> >> shm_records        2591397419      3748.65 SHM records
> >> shm_writes          170178899       246.18 SHM writes
> >> shm_flushes                65         0.00 SHM flushes due to overflow
> >> shm_cont                 5567         0.01 SHM MTX contention
> >> shm_cycles                931         0.00 SHM cycles through buffer
> >> sm_nreq                     0         0.00 allocator requests
> >> sm_nobj                     0          .   outstanding allocations
> >> sm_balloc                   0          .   bytes allocated
> >> sm_bfree                    0          .   bytes free
> >> sma_nreq              9708485        14.04 SMA allocator requests
> >> sma_nobj              1442487          .   SMA outstanding allocations
> >> sma_nbytes        16106012162          .   SMA outstanding bytes
> >> sma_balloc        81856959440          .   SMA bytes allocated
> >> sma_bfree         65750947278          .   SMA bytes free
> >> sms_nreq                15433         0.02 SMS allocator requests
> >> sms_nobj                    0          .   SMS outstanding allocations
> >> sms_nbytes       18446744073709546966          .   SMS outstanding bytes
> >> sms_balloc            7172160          .   SMS bytes allocated
> >> sms_bfree             7176345          .   SMS bytes freed
> >> backend_req           3491587         5.05 Backend requests made
> >> n_vcl                       1         0.00 N vcl total
> >> n_vcl_avail                 1         0.00 N vcl available
> >> n_vcl_discard               0         0.00 N vcl discarded
> >> n_purge                     1          .   N total active purges
> >> n_purge_add                 1         0.00 N new purges added
> >> n_purge_retire              0         0.00 N old purges deleted
> >> n_purge_obj_test            0         0.00 N objects tested
> >> n_purge_re_test             0         0.00 N regexps tested against
> >> n_purge_dups                0         0.00 N duplicate purges removed
> >> hcb_nolock                  0         0.00 HCB Lookups without lock
> >> hcb_lock                    0         0.00 HCB Lookups with lock
> >> hcb_insert                  0         0.00 HCB Inserts
> >> esi_parse                   0         0.00 Objects ESI parsed (unlock)
> >> esi_errors                  0         0.00 ESI parse errors (unlock)
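> >>
> >> A few of the counters above are the interesting ones for this kind of
> >> problem and are worth watching over time, for example:
> >>
> >>   varnishstat -1 | egrep 'backend_fail|n_lru_nuked|n_wrk_overflow|n_wrk_drop'
> >>
> >> backend_fail counts failed backend connections (each of which can surface
> >> as a 503), and the large n_lru_nuked value means the cache is continuously
> >> evicting objects to make room for new ones.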
> >>
> >>
> >>
> >> Sascha Kain
> >> IT / Administration
> >> eraffe media GmbH & Co. KG Marketing - Consulting - Software
> >> Schönfeldstr. 17 - 83022 Rosenheim
> >>
> >> Phone: +49 (0)8031 - 941 41 -46
> >> Fax: +49 (0)8031 - 941 41 -59
> >> E-Mail: s.kain at eraffe-media.de
> >> www.eraffe-media.de - www.eraffe.de
> >>
> >> eraffe media GmbH & Co. KG, registered office: Rosenheim,
> >> register court: AG Traunstein, HR A No. 9104,
> >> tax no. 156/157/58806, Rosenheim tax office,
> >> VAT ID: DE250117972
> >>
> >> Personally liable partner:
> >> eraffe media Verwaltungs-GmbH, registered office: Rosenheim,
> >> register court: AG Traunstein, HR B 16956,
> >> tax no. 156/116/90247, Rosenheim tax office
> >>
> >> Managing directors: Maximilian Kuss, Oliver Döser
> >>
> >> _______________________________________________
> >> varnish-misc mailing list
> >> varnish-misc at projects.linpro.no
> >> http://projects.linpro.no/mailman/listinfo/varnish-misc
> >>
> >>
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>

