Varnish use for purely binary files

pub crawler pubcrawler.com at gmail.com
Tue Jan 19 00:37:01 CET 2010


> Differences in latency of serving static content can vary widely based on
> the web server in use, easily tens of milliseconds or more.  There are
> dozens of web servers out there, some written in interpreted languages, many
> custom-written for a specific application, many with add-ons and modules and

Most web servers as shipped are simply not very speedy.  Nginx,
Cherokee, and Lighty are three exceptions :)
Latency is all over the place in web server software.  Caching is
still a black art, no matter whether you are talking about having one
or lacking one :)  Ten milliseconds is easily wasted in a web server
on connection pooling, negotiating the transfer, etc.  Most sites have
plenty of latency issues and a general lack of performance.  Most
folks seem to just ignore it, though, and assume all is well despite
the low performance.

That's why Varnish and the folks here are so awesome.  A band of data
crushers, bandwidth abusers and RAM junkies with lower latency in
mind.  Latency is an ugly multiplier - it gets multiplied by every
request, by multiple requests per user, and by all the use over a
period of time.  If your page has 60 elements to serve and you add a
mere 5 ms to each element, that's 300 ms of latency just on serving
static items.  There are other scenarios too, like dealing with people
on slow connections (if your audience has a lot of them).
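
To put rough numbers on the "all the use over a period of time" part
as well (the traffic figure here is purely hypothetical):

    5 ms/element * 60 elements/page       = 300 ms per page view
    300 ms/page  * 100,000 page views/day = 30,000 s, or roughly 8.3
                                            hours of accumulated
                                            waiting per day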

>  If you're serving pure static content with no need for application logic,
> then yes, there is little benefit to choosing a two-tier infrastructure when
> a one-tier out-of-the-box nginx/lighttpd/thttpd will do just fine.  But, if
> your content does not fit in memory, you're back to reverse-Squid or
> Varnish.  (Though nginx may have an on-disk cache?  And don't get me started
> on Apache caching. :-)

Static sites, if they are big enough, will still be aided in scaling
by fronting them with Varnish or a similar caching front end.  A small
for-instance might be offloading images, or items that require longer
connection timeouts, to Varnish - perhaps reducing the disk IO and
cutting the number of open connections on your web server.  You could
obviously do the same by splitting your site across multiple servers
and dividing the load, but you lose some of the functionality that
makes Varnish appealing, such as the ability to dynamically adjust
traffic, load, direction, etc. within Varnish.  I'm unsure whether
anything similar exists in Nginx - but then you are turning a web
server into something else, likely with some performance reduction.
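
For what it's worth, here's a rough sketch of the sort of VCL I have
in mind: strip cookies and cache image-type URLs, and spread the
backend load with a random director.  The backend names, addresses and
the regex are made up for illustration, and the exact keywords (obj
vs. beresp, bare actions vs. return()) differ between Varnish
versions, so treat it as a sketch rather than a drop-in config.

    # Two hypothetical backends; hosts/ports are illustrative.
    backend web1 { .host = "10.0.0.10"; .port = "80"; }
    backend web2 { .host = "10.0.0.11"; .port = "80"; }

    # Random director to spread requests across them.
    director pool random {
        { .backend = web1; .weight = 1; }
        { .backend = web2; .weight = 1; }
    }

    sub vcl_recv {
        set req.backend = pool;
        # Strip cookies on static objects so Varnish will cache them.
        if (req.url ~ "\.(png|gif|jpg|jpeg|ico|css|js)$") {
            unset req.http.Cookie;
            lookup;
        }
    }

    sub vcl_fetch {
        # Keep static objects around for a day (obj.ttl is the older
        # name; newer releases call it beresp.ttl).
        if (req.url ~ "\.(png|gif|jpg|jpeg|ico|css|js)$") {
            set obj.ttl = 24h;
        }
    }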

Mind you, most people here *I think* are dealing with big scaling -
busy sites, respectable and sometimes awe-inspiring amounts of data.
Then there are the slow-as-can-be app servers they might have to work
around too.  So the scale of the latency issues is a huge cost center
for most folks.

Plenty of papers have been written about latency and the user
experience.  The slower the load, the less people interact with the
site and, in commerce terms, the less they spend.


