High Server Load Averages?

Artur Bergman sky at crucially.net
Thu Apr 9 22:43:13 CEST 2009


What is your I/O pressure?

iostat -k -x 5

or something like that
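
For example, run it while the load is spiking and watch the line for
the device that holds the file storage (sdc, going by the df output
below); if await and %util on that device stay high while the CPU is
mostly idle, the box is I/O bound rather than CPU bound.

Since top shows a bit of swap in use, it may also be worth watching
swap traffic at the same time, e.g.:

vmstat 5

Nonzero si/so columns while the load is up would point at paging
rather than reads from the storage file.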

artur

On Apr 9, 2009, at 12:27 PM, Cloude Porteus wrote:

> Varnishstat doesn't list any nuked objects, and the file storage and
> shmlog look like they have plenty of space (a way to double-check this
> is sketched after the varnishstat output below):
>
> df -h
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Filesystem            Size  Used Avail Use% Mounted on
> tmpfs                 150M   81M   70M  54% /usr/local/var/varnish
> /dev/sdc1              74G   11G   61G  16% /var/lib/varnish
>
> top
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> top - 12:26:33 up 164 days, 22:21,  1 user,  load average: 2.60, 3.26, 3.75
> Tasks:  67 total,   1 running,  66 sleeping,   0 stopped,   0 zombie
> Cpu(s):  0.7%us,  0.3%sy,  0.0%ni, 97.0%id,  0.7%wa,  0.3%hi,  1.0%si,  0.0%st
> Mem:   8183492k total,  7763100k used,   420392k free,    13424k buffers
> Swap:  3148720k total,    56636k used,  3092084k free,  7317692k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  7441 varnish   15   0 70.0g 6.4g 6.1g S    2 82.5  56:33.31 varnishd
>
>
> Varnishstat:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Hitrate ratio:        8        8        8
> Hitrate avg:     0.9782   0.9782   0.9782
>
>     36494404       219.98       160.57 Client connections accepted
>     36494486       220.98       160.57 Client requests received
>     35028477       212.98       154.12 Cache hits
>       474091         4.00         2.09 Cache hits for pass
>       988013         6.00         4.35 Cache misses
>      1465955        10.00         6.45 Backend connections success
>            9         0.00         0.00 Backend connections failures
>          994          .            .   N struct sess_mem
>           11          .            .   N struct sess
>       274047          .            .   N struct object
>       252063          .            .   N struct objecthead
>       609018          .            .   N struct smf
>        28720          .            .   N small free smf
>            2          .            .   N large free smf
>            2          .            .   N struct vbe_conn
>          901          .            .   N struct bereq
>         2000          .            .   N worker threads
>         2000         0.00         0.01 N worker threads created
>          143         0.00         0.00 N overflowed work requests
>            1          .            .   N backends
>       672670          .            .   N expired objects
>      3514467          .            .   N LRU moved objects
>           49         0.00         0.00 HTTP header overflows
>     32124238       206.98       141.34 Objects sent with write
>     36494396       224.98       160.57 Total Sessions
>     36494484       224.98       160.57 Total Requests
>          783         0.00         0.00 Total pipe
>       518770         4.00         2.28 Total pass
>      1464570        10.00         6.44 Total fetch
>  14559014884     93563.69     64058.18 Total header bytes
> 168823109304    489874.04    742804.45 Total body bytes
>     36494387       224.98       160.57 Session Closed
>          203         0.00         0.00 Session herd
>   1736767745     10880.80      7641.60 SHM records
>    148079555       908.90       651.53 SHM writes
>        15088         0.00         0.07 SHM flushes due to overflow
>        10494         0.00         0.05 SHM MTX contention
>          687         0.00         0.00 SHM cycles through buffer
>      2988576        21.00        13.15 allocator requests
>       580296          .            .   outstanding allocations
>   8916353024          .            .   bytes allocated
>  44770738176          .            .   bytes free
>          656         0.00         0.00 SMS allocator requests
>       303864          .            .   SMS bytes allocated
>       303864          .            .   SMS bytes freed
>      1465172        10.00         6.45 Backend requests made
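>
> To double-check the "no nuked objects" reading above, the counter can
> be queried directly (assuming the varnishstat that ships with Varnish 2.x):
>
> varnishstat -1 | grep -i -e nuke -e lru
>
> A zero or missing n_lru_nuked line means objects are leaving the cache
> by expiring (TTL) rather than being evicted under storage pressure.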
>
>
>
> On Thu, Apr 9, 2009 at 12:18 PM, Artur Bergman <sky at crucially.net> wrote:
> For the file storage or for the shmlog?
>
> When do you start nuking/expiring from disk? I suspect the load goes
> up once you run out of storage space.
>
> Cheers
> Artur
>
>
> On Apr 9, 2009, at 12:02 PM, Cloude Porteus wrote:
>
>> Has anyone experienced very high server load averages? We're running
>> Varnish on a dual-core machine with 8 GB of RAM. It runs fine for a
>> day or two, and then I start seeing load averages in the 6-10 range
>> for an hour or so; the load drops back to 2-3, then climbs again.
>>
>> This starts happening once there is more data in the cache than fits
>> in physical memory. Maybe increasing our lru_interval will help? It's
>> currently set to 3600.
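>>
>> For reference, the parameter can be inspected and changed on the fly
>> through the management interface; roughly (assuming the admin port is
>> the conventional localhost:6082; adjust to whatever -T varnishd was
>> started with):
>>
>> varnishadm -T localhost:6082 param.show lru_interval
>> varnishadm -T localhost:6082 param.set lru_interval 3600
>>
>> A larger lru_interval mainly reduces how often cache hits reshuffle
>> the LRU list (the "N LRU moved objects" counter), so it is more a
>> lock-traffic knob than an eviction knob.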
>>
>> Right now we're running with a 50 GB file storage. There are 270k
>> objects in the cache, 70 GB of virtual memory, 6.2 GB resident, and
>> 11 GB of data on disk in the file storage. We have a 98% hit ratio.
>>
>> We followed Artur's advice about setting up a tmpfs for the shmlog
>> and creating an ext2 partition for our file storage.
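>>
>> For anyone following along, that setup looks roughly like this (the
>> mount points come from the df output above; the fstab lines and the
>> storage file name are illustrative, not our exact configuration):
>>
>> # /etc/fstab
>> tmpfs      /usr/local/var/varnish  tmpfs  defaults,size=150m  0 0
>> /dev/sdc1  /var/lib/varnish        ext2   defaults,noatime    0 1
>>
>> # storage argument passed to varnishd
>> varnishd ... -s file,/var/lib/varnish/varnish_storage.bin,50G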
>>
>> I also tried running with malloc as our storage type, but I had to
>> set it to a little less than half of our physical RAM for it to keep
>> working well after the cache filled up. I don't understand why the
>> virtual memory is roughly double the configured size in malloc mode:
>> running with 5 GB, virtual memory was about 10-12 GB, and once the
>> cache filled up it started using swap.
>>
>> Thanks for any help/insight.
>>
>> best,
>> cloude
>> -- 
>> VP of Product Development
>> Instructables.com
>>
>> http://www.instructables.com/member/lebowski
>> _______________________________________________
>> varnish-dev mailing list
>> varnish-dev at projects.linpro.no
>> http://projects.linpro.no/mailman/listinfo/varnish-dev
>
>
>
>
> -- 
> VP of Product Development
> Instructables.com
>
> http://www.instructables.com/member/lebowski
