Patch: Use calloc instead of malloc when running out of VSM space (common_vsm.c)

Devon H. O'Dell dho at
Wed Mar 9 00:03:01 CET 2016

On Tue, Mar 8, 2016 at 12:22 PM, Federico Schwindt <fgsch at> wrote:
> In the case this would be an issue my concerns are not directly related to performance nor latency.
> If the configuration, either the default or as modified by the user, is oversized, there could be a lot of waste, which might have other side effects (such as swapping).

First of all, I respect your point. I just disagree that it is an
argument against using calloc(3).

To echo phk, Varnish doesn't tend to allocate memory it isn't going to
need. Referring specifically to configurations, most VCL and VMODs do
no allocations whatsoever (there are of course exceptions, but those
are not really interesting) -- they obtain memory via workspace, which
was preallocated specifically for that purpose, and is going to be
resident anyway for a peak load. There are tunables to limit the
number of preallocated things like sessions, workspace size,
allocation size from the malloc storage for chunked and EOF bodies,
and more. Varnish is actually very sane in this respect.

Effectively this argument states that using calloc(3) is bad IFF you
expect your users to overcommit on their system resources. If they do
this, then they are unaware of relevant configuration tunables, they
don't understand their workload, or both. That's a documentation /
education / community issue. It shouldn't be an argument against using
a specific standard C API that gives you memory filled with defined

> On 8 Mar 2016 3:53 p.m., "Devon H. O'Dell" <dho at> wrote:
>> Depending on the implementation of the allocator used, calloc may not have any additional overhead. When it does, there are really only a couple of cases where that overhead should matter on modern systems:
>> * If an object is allocated and freed in a tight loop. This shouldn't be happening anyway -- reuse / pool objects with this sort of access pattern.
>> * If the object is large. Malloced memory should not be immediately visible to multiple concurrent processes, and objects that consist of only a few cache lines cost very little to zero on modern processors.
>> It may be worth auditing for these situations, but I've done extensive profiling of extremely memory-heavy workloads (hundreds of GB) in Varnish over the past few years, and I promise that calloc is not a current limiting factor in terms of latency or throughput.
>> Of course, if you're concerned about swapping, I'd also argue that your cache is not properly sized.
>> On Tue, Mar 8, 2016, 04:28 Poul-Henning Kamp <phk at> wrote:
>>> --------
>>> In message <CAJV_h0YVCRfTOFk=6N3H9jNxnrXM05ht2pQkyJ-FY-LGfu1H_g at>
>>> , Federico Schwindt writes:
>>> >We use calloc in many places, I do wonder how many of them do really need
>>> >it. The downside of using calloc when is not really needed is that by
>>> >zeroing the memory you end up with resident memory and not virtual, which
>>> >in turn might lead to swapping.
>>> This is almost always intentional, as we generally do not over-allocate.
>>> The exception is the malloc stevedore where we do.
>>> --
>>> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
>>> phk at FreeBSD.ORG         | TCP/IP since RFC 956
>>> FreeBSD committer       | BSD since 4.3-tahoe
>>> Never attribute to malice what can adequately be explained by incompetence.
>>> _______________________________________________
>>> varnish-dev mailing list
>>> varnish-dev at

More information about the varnish-dev mailing list