Varnish virtual memory usage

Henry Paulissen h.paulissen at qbell.nl
Thu Oct 22 10:04:39 CEST 2009


Ah,

Thank you for your comment.
I'll make a test case of it tomorrow (or else after the weekend).

I will report back.

-----Original Message-----
From: Scott Wilson [mailto:scott at idealist.org] 
Sent: Thursday, 22 October 2009 8:52
To: Henry Paulissen
Cc: varnish-misc at projects.linpro.no; kb at slide.com
Subject: Re: Varnish virtual memory usage

We had a similar problem where Varnish would fill all swap and crash
every couple of weeks. The trick that seems to have solved the problem
was to remove purge_url from our VCL (a lot of badly behaved clients
send far more no-cache headers than necessary).

We replaced purge_url with an approach that sets the object's TTL to
zero and restarts the request. The details are here:

http://varnish.projects.linpro.no/wiki/VCLExampleEnableForceRefresh
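
A minimal sketch of that approach, assuming Varnish 2.x VCL (the ACL
name and address range below are placeholders, not taken from the wiki
page):

acl trusted {
        "192.168.0.0"/16;       # placeholder: who may force a refresh
}

sub vcl_hit {
        if (req.http.Cache-Control ~ "no-cache" && client.ip ~ trusted) {
                /*
                        Expire the cached object, then restart the
                        request; it now misses in cache and fetches a
                        fresh copy from the backend.
                */
                set obj.ttl = 0s;
                return (restart);
        }
}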

In our case we're using FreeBSD 7.2 64-bit.

All that said, this solution doesn't seem to jibe with Roi's random-URL
test, unless purge_url figured in his VCL / testing script.

cheers,
scott

2009/10/22 Henry Paulissen <h.paulissen at qbell.nl>:
> We were running CentOS 5.3 x64 when we noticed this strange behavior. Later on we moved to Fedora Core 11 x64, but we were still seeing the same memory allocation problems. We then reinstalled the server with VMware to run a couple of (half-live, a.k.a. beta) tests and noticed it doesn't happen under Fedora Core 11 x32.
>
> We do about 3000 connections/sec for static content (smaller images). For large images (> 200kb), JavaScript and CSS we have other instances running (all with the same issues, but I'm going to describe the static content instance).
>
> Hit rate is close to 100% (99-100%).
> Server cores: 16
> Memory: 24GB (the VM host server was upgraded to 64GB RAM and only runs Varnish guests on malloc, so I doubt there's a real performance impact)
>
>
> Tried changing the number of thread_pools and workers; nothing helped.
> Applied the recommended sysctl settings. Disabled the conntrack filter in iptables.
>
> All incoming requests carry a "Connection: close" header (we have a high-availability server in front of it that doesn't allow keep-alive connections, so it rewrites every connection to close).
>
> Both storage types were used.
>
> I did notice something when I changed lru_interval to 60: reserved memory stayed within its limits (before changing this setting, it grew way above the max limit). But virtual memory still goes way above the limit.
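>
> (For reference: a parameter like this can be set at startup with
> "-p lru_interval=60", or changed at runtime through the management
> interface, along the lines below; the -T address is just an example.)
>
>        varnishadm -T localhost:6082 param.set lru_interval 60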
>
> If we don't restart Varnish every few hours, it grows above the physical memory limit and starts using swap. Restarting the Varnish server frees the memory.
>
> Tried both stable and svn versions.
>
>
> My VCL for static:
>
> #################################################################################################
> #################################################################################################
>
> director staticbackend round-robin {
>        {
>                .backend = {
>                        .host = "192.168.x.x";
>                        .port = "x";
>                        .connect_timeout = 2s;
>                        .first_byte_timeout = 5s;
>                        .between_bytes_timeout = 2s;
>                }
>        }
>        {
>                .backend = {
>                        .host = "192.168.x.x";
>                        .port = "x";
>                        .connect_timeout = 2s;
>                        .first_byte_timeout = 5s;
>                        .between_bytes_timeout = 2s;
>                }
>        }
> }
>
> sub vcl_recv {
>        set req.backend = staticbackend;
>
>        if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") {
>                /*
>                        Non-RFC2616 method, or CONNECT, which is weird.
>                        Shoot this client, but first pipe the request
>                        through to the webserver; maybe it knows what
>                        to do with it.
>                */
>
>                return (pipe);
>        }
>
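>        /*
>                Strip client headers that should not influence caching
>                or reach the backend; serving one canonical variant
>                keeps the hit rate high.
>        */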
>        remove req.http.X-Forwarded-For;
>        remove req.http.Accept-Encoding;
>        remove req.http.Accept-Charset;
>        remove req.http.Accept-Language;
>        remove req.http.Referer;
>        remove req.http.Accept;
>        remove req.http.Cookie;
>
>        return (lookup);
> }
>
> sub vcl_pipe {
>        set req.http.Connection = "close";
>        return (pipe);
> }
>
> sub vcl_pass {
>        return (pass);
> }
>
> sub vcl_hash {
>        set req.hash += req.url;
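>        /*
>                Note: only the URL goes into the hash (no Host header),
>                so all hostnames share a single cache namespace here.
>        */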
>
>        if (req.http.Cache-Control == "no-cache" || req.http.Pragma == "no-cache") {
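>                /*
>                        Each matching request adds a purge to the purge
>                        list; a flood of client no-cache headers means
>                        a flood of purges, held in memory until every
>                        cached object has been checked against them.
>                */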
>                purge_url(req.url);
>        }
>
>        return (hash);
> }
>
> sub vcl_hit {
>        if (!obj.cacheable) {
>                return(pass);
>        }
>
>        return (deliver);
> }
>
> sub vcl_miss {
>        return (fetch);
> }
>
> sub vcl_fetch {
>        /*
>            I hate it when Varnish caches my redirects.
>            Some of them are dynamic.
>        */
>        if (beresp.status == 302 || beresp.status == 301) {
>                return (pass);
>        }
>
>        remove beresp.http.Set-Cookie;
>        set beresp.grace = 2m;
>
>        return (deliver);
> }
>
> sub vcl_deliver {
>        remove resp.http.Via;
>        remove resp.http.X-Varnish;
>
>        if (obj.hits > 0) {
>                set resp.http.X-Cache = "HIT";
>        } else {
>                set resp.http.X-Cache = "MISS";
>        }
>        set resp.http.Server = "static";
>
>        return (deliver);
> }
>
> sub vcl_error {
>        set obj.http.Content-Type = "text/html; charset=utf-8";
>
>        synthetic {"
> <?xml version="1.0" encoding="utf-8"?>
> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
> <html>
>        <head>
>                <title>"} obj.status " " obj.response {"</title>
>        </head>
>        <body>
>                <h1>Error "} obj.status " " obj.response {"</h1>
>                <p>"} obj.response {"</p>
>                <h3>Guru Meditation:</h3>
>                <p>XID: "} req.xid {"</p>
>        </body>
> </html>
> "};
>
>        return (deliver);
> }
>
> #################################################################################################
> #################################################################################################
>
>
> For further details see my ticket: http://varnish.projects.linpro.no/ticket/546
>
>
> @Kristian:
> When the programmers / engineers have some spare time over, they are always welcome to see it in live action.
>
>
> -----Original Message-----
> From: Ken Brownfield [mailto:kb at slide.com]
> Sent: Wednesday, 21 October 2009 21:57
> To: Henry Paulissen
> CC: varnish at projects.linpro.no
> Subject: Re: Varnish virtual memory usage
>
> Small comments:
>
> 1) We're running Linux x86_64 exclusively here under significant load,
> with no memory issues.
> 2) Why don't you compile a 32-bit version of Varnish; wouldn't this
> have the same effect without the RAM and performance hit of VMs?
> 3) Do you make heavy use of purges?
> --
> kb
>
> On Oct 21, 2009, at 6:22 AM, Henry Paulissen wrote:
>
>> We encounter the same problem.
>>
>> It seems to occur only on x64 platforms.
>> We decided to take a different approach and installed VMware on the
>> machine.
>> Next we set up 6 guests running x32 PAE software.
>>
>> No strange memory leaks have occurred since then, at the price of
>> small storage (3.5G max) and limited worker threads (256 max).
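>>
>> (For context, limits like these end up on the varnishd command line,
>> roughly as below; paths, sizes and the listen address are
>> illustrative, not our exact invocation.)
>>
>>   varnishd -a :80 -T localhost:6082 \
>>       -s file,/var/lib/varnish/storage.bin,3.5G \
>>       -w 1,256,120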
>>
>> Opened a ticket for the problem, but they won't listen until I buy a
>> support contract (at €8K).
>> It seems they don't want to acknowledge that there is some kind of
>> memory issue in their software.
>>
>> Anyway...
>> Varnish is running stably now with a few tricks.
>>
>>
>> Regards,
>>
>> -----Original Message-----
>> From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-
>> bounces at projects.linpro.no] On Behalf Of Kristian Lyngstol
>> Sent: Wednesday, 21 October 2009 13:34
>> To: Roi Avinoam
>> Cc: varnish-misc at projects.linpro.no
>> Subject: Re: Varnish virtual memory usage
>>
>> On Mon, Sep 21, 2009 at 02:55:07PM +0300, Roi Avinoam wrote:
>>> At Metacafe we're testing the integration with Varnish, and I was
>>> tasked with benchmarking our Varnish setup. I intentionally
>>> over-flooded the server with requests, in an attempt to see how the
>>> system would behave under heavy traffic. Surprisingly, the server
>>> ran out of swap and crashed.
>>
>> That seems mighty strange. What sort of tests did you do?
>>
>>> In our configuration, "-s file,/var/lib/varnish/varnish_storage.bin,1G".
>>> Does it mean Varnish shouldn't use more than 1GB of virtual
>>> memory?
>>> Is there any other way to limit the memory/storage usage?
>>
>> If you are using -s file and you have 4GB of memory, you are telling
>> Varnish to create a _file_ of 1GB, and it's up to the kernel what it
>> keeps in memory or not. If you actually run out of memory with this
>> setup, you've either hit a bug (need more details first), or you're
>> doing something strange like having the mmapped file (in
>> /var/lib/varnish/) on tmpfs with a size limit of less than 1GB, or
>> something along those lines. But I need more details to say anything
>> for certain.
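>>
>> For comparison, the two storage backends are selected like this (the
>> path is illustrative); with -s malloc the size caps what the cache
>> may allocate for object storage (per-object overhead comes on top),
>> while with -s file it is only the size of the mmapped file:
>>
>>   # Cache in RAM, capped at roughly 1GB plus overhead:
>>   varnishd -a :80 -s malloc,1G
>>   # mmap a 1GB file; the kernel decides what stays resident:
>>   varnishd -a :80 -s file,/var/lib/varnish/varnish_storage.bin,1G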
>>
>> --
>> Kristian Lyngstøl
>> Redpill Linpro AS
>> Tlf: +47 21544179
>> Mob: +47 99014497
>>
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at projects.linpro.no
>> http://projects.linpro.no/mailman/listinfo/varnish-misc
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>