<div dir="ltr">Hey Dererk,<div><br></div><div>Nice to see tht Varnish is now part of OLX, I have been a happy user of it. Keep up the good work.</div><div><br></div><div>Another simple tip from my experience will be to distribute the load on different servers and not just one which is also possible with varnish and maybe get down to some browser specifics too. But since you are using hash-based balancing algorithm in front, I am not sure if that will be possible. Do let me know if u need more specifics for those.</div>
<div><br></div><div>Good Day!</div><div>Tousif</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Oct 24, 2013 at 3:36 AM, Dererk <span dir="ltr"><<a href="mailto:dererk@deadbeef.com.ar" target="_blank">dererk@deadbeef.com.ar</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi there!<br>
<br>
I thought it would be useful for some people out there to mention a<br>
success story with the Persistent branch of Varnish Cache at a<br>
large production scale.<br>
Ours is the replacement of the long-appreciated Squid Cache for<br>
delivering (mostly) user-provided content, like images and other kinds<br>
of documents, but also static objects like JavaScript and style sheets<br>
belonging to our platform, OLX.<br>
<br>
- TL;DR Version<br>
The development around the persistent branch rocks, *big time*!<br>
<br>
- Full Version<br>
// *Disclaimer*<br>
// Our configuration is not what one could call "conventional", and we<br>
can't really afford losing large amounts of cache in short periods of time.<br>
<br>
<br>
- The Business<br>
<br>
Our company, OnLine eXchange, or simply OLX for short, has established<br>
its core around the free classifieds business, enabling sellers and<br>
buyers to perform peer-to-peer transactions with no commission<br>
charged to either party. Our portal allows sellers to upload images<br>
that better describe their items on our platform, making them more<br>
attractive to potential buyers and improving their chances of selling,<br>
just like Craigslist does in the US or Allegro in Poland.<br>
<br>
Although some particular countries are more popular than others, we have<br>
a large amount of traffic world-wide, due to an extensive network of<br>
operations and localization.<br>
To sum up, this translates into having to tune every single piece of<br>
software we run out there to literally deliver several Gbps of<br>
traffic out to the internet.<br>
<br>
<br>
- Massive Delivering<br>
<br>
Even though we rely heavily on content delivery networks (CDNs) for<br>
last-mile optimizations, geo-caching and network-optimized delivery,<br>
they have limits on what they can handle, and they also run cache<br>
rotation algorithms over objects (say, something like LRU). They do<br>
this all the time, more often than you actually want.<br>
On the other hand, because of their geo-distributed nature, in reality<br>
several requests are made from different locations before they<br>
effectively cache an object, and then only for some small period of time.<br>
<br>
It might be true that on average the CDNs offload more than 80% of the<br>
traffic, but in reality even the remaining portion averages 400 Mbit/s,<br>
with frequent 500 Mbit/s spikes at business-as-usual levels.<br>
<br>
<br>
- Ancient History<br>
<br>
We used to run this internal caching tier on Squid Cache, with the help<br>
of tons of optimizations and an extremely experimental storage backend<br>
called the Cyclic Object Storage System, or COSS. This very, very<br>
experimental backend used a very granular storage configuration to get<br>
the most out of every IOP we could spare our already loaded storage<br>
backends.<br>
Due to its immaturity, some particular operations sucked, like a *30<br>
minute downtime* per instance when restarting the Squid engine, caused<br>
by COSS data rebuilds and consistency checks.<br>
<br>
We also relied very heavily on a sibling cache relationship configured<br>
over HTCP, as a way to maximize cache availability and spare some<br>
requests from having to reach the origin, which, as I said, was heavily<br>
loaded almost all the time.<br>
Squid is a great tool, and believe me when I say we loved it despite<br>
all its downsides; we ran it for several years up until very recently,<br>
and we knew it backwards and forwards.<br>
<br>
<br>
- Modern History<br>
<br>
Things started to change for the worse recently, when our long-standing,<br>
heavily loaded storage backend tier, featuring a vendor supposedly<br>
sitting in the storage top-five ranking, could no longer cope with our<br>
business-as-usual operations.<br>
Performance fell apart once this well-known storage provider's solution<br>
started to hit +95% CPU usage, leaving it unable to serve objects in any<br>
decent form and shape once that point was reached.<br>
We were forced to start diving into new and radical alternatives on very<br>
short notice.<br>
<br>
We had been using Varnish internally for some years, boosting our SOLR<br>
backend and filling some other HTTP caching roles, but not for<br>
delivering static content up to that moment.<br>
Now it was time to give it a chance "for real" (we have several Gbps of<br>
traffic internally too, but you know what I mean).<br>
<br>
<br>
- First Steps<br>
<br>
We started by using the same malloc backend configuration we were<br>
already using in the other areas where Varnish was deployed, with some<br>
performance tuning around sessions (VARNISH_SESSION_LINGER) and thread<br>
capacity (VARNISH_MAX_THREADS, VARNISH_THREAD_POOLS and<br>
VARNISH_THREAD_TIMEOUT).<br>
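<br>
For reference, here is a minimal sketch of what that tuning looks like<br>
in the startup configuration (/etc/sysconfig/varnish or similar); the<br>
values below are illustrative, not our production numbers:<br>
<br>
  # milliseconds a worker thread keeps a session open after a request<br>
  VARNISH_SESSION_LINGER=50<br>
  # thread capacity: number of pools times the maximum threads per pool<br>
  VARNISH_THREAD_POOLS=4<br>
  VARNISH_MAX_THREADS=2000<br>
  # seconds an idle worker thread survives before being reaped<br>
  VARNISH_THREAD_TIMEOUT=120<br>
  # same in-memory storage profile as our other Varnish roles<br>
  VARNISH_STORAGE="malloc,100G"<br>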
<br>
The servers handling this task were some huge boxes with 128 GB of RAM,<br>
but since the 50-ish terabyte dataset didn't actually fit in RAM, once<br>
all the memory was allocated we started to suffer random panics at<br>
random points in time. Unable to reproduce them on demand or to produce<br>
any debugging output, due to the amount of traffic these huge devils<br>
handled in production, things started to get really ugly really fast.<br>
<br>
<br>
- The Difficult Choice<br>
<br>
At this point we started to consider every option available out there,<br>
even switching to other caching alternatives that provide persistence<br>
for cached objects, like Traffic Server. We decided to give the<br>
persistent backend a shot, but there was a catch: the persistent<br>
backend was (and currently still is) considered experimental (so was<br>
COSS, btw!).<br>
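<br>
Just for reference, switching storage backends is a matter of the -s<br>
argument passed to varnishd; a minimal sketch of the difference (paths<br>
and sizes here are made up for the example):<br>
<br>
  # in-memory cache: everything is lost whenever the child restarts<br>
  varnishd -a :80 -f /etc/varnish/static.vcl -s malloc,100G<br>
<br>
  # experimental persistent backend: cached objects survive a restart<br>
  varnishd -a :80 -f /etc/varnish/static.vcl -s persistent,/data/varnish/silo.bin,400G<br>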
<br>
As far as the 3.0.4 release on the main stable branch is concerned, the<br>
persistent backend had many bugs that surfaced within minutes, and many<br>
of them produced panics that crashed the child process. But, fortunately<br>
for us, the persistence itself worked so well that the first crashes<br>
went by totally unnoticed, which, compared with experiencing a 100mb<br>
cache meltdown, was just an amazing improvement by itself. Now things<br>
started to look better for Varnish.<br>
Of course, we learned that we were in fact losing some cached objects to<br>
some broken silos along the way, but doing a side-by-side comparison,<br>
things looked as different as night and day, and the best was yet to come.<br>
<br>
We were advised, in case we were to stick with a persistent backend, to<br>
use the persistent development branch, given that more improvements had<br>
been developed there and major stability changes had been introduced.<br>
But, think again: proposing something that has the "experimental" and<br>
"in development" tags hanging from it usually sells horribly to the<br>
management people on the other side of the table.<br>
<br>
<br>
- Summary<br>
<br>
In the end, with the help of a hash-based balancing algorithm at the<br>
load balancing tier in front of our Varnish caches, we were able to<br>
*almost cut in half* the CPU usage of our storage solution tier, that<br>
is, getting 60% instead of +90% CPU usage, something similar to a walk<br>
through the park on a sunny day, even while serving the +2,000<br>
requests/second arriving at our datacenters.<br>
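<br>
Just to illustrate the idea (a generic sketch, not our actual<br>
configuration, with invented server names and addresses): with<br>
something like HAProxy in front, URL-hash balancing boils down to a few<br>
lines, so the same object always lands on the same Varnish box and each<br>
cache only has to hold its own slice of the dataset:<br>
<br>
  backend varnish_static<br>
      balance uri            # pick the server by hashing the request URI<br>
      hash-type consistent   # adding/removing a cache only remaps part of the URLs<br>
      server cache01 10.0.0.11:6081 check<br>
      server cache02 10.0.0.12:6081 check<br>
      server cache03 10.0.0.13:6081 check<br>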
<br>
We got there by offloading content at *up to +70% cache hits*, something<br>
that was totally inconceivable to anyone at the very beginning of the<br>
migration, given that we used to get less than 30% with Squid in the<br>
past.<br>
<br>
We were able to get to this point with lots of patience and research,<br>
but particularly with the help of the Varnish core developers, who<br>
constantly supported us on the IRC channels and mailing lists.<br>
<br>
Thanks a lot guys, you and Varnish rock, big time!<br>
<br>
<br>
<br>
A happy user!<br>
<span class="HOEnZb"><font color="#888888"><br>
Dererk<br>
<br>
--<br>
BOFH excuse #274:<br>
It was OK before you touched it.<br>
<br>
<br>
</font></span><br>_______________________________________________<br>
varnish-misc mailing list<br>
<a href="mailto:varnish-misc@varnish-cache.org">varnish-misc@varnish-cache.org</a><br>
<a href="https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc" target="_blank">https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc</a><br></blockquote></div><br></div>