How many simultaneous users
Kristian Lyngstol
kristian at redpill-linpro.com
Wed Jul 15 12:14:02 CEST 2009
On Wed, Jul 15, 2009 at 11:38:27AM +0200, Lazy wrote:
> I'm trying to figure out how many simultaneous users a single 8 core
> machine with local apache running as a backend can handle, assuming
> that all the requests are cached.
This is actually very difficult to test, as you often end up with
client-side issues, or issues that are irrelevant to a production
environment. I've done extensive stress testing, and getting the load up
on the Varnish server is not trivial.
Our stress-testing rig uses a single dual-core Opteron _clocked down_
to 1 GHz as the Varnish server, and roughly 8-12 CPU cores spread over 3-5
machines as 'clients'. It handles 18-19k req/s consistently
(these are 1-3 byte pages, all cache hits). That should give you an idea
of the synthetic performance Varnish can offer.
> testing with ab on a slow 100Mbps link shows 2500 hit/s, locally I got
> 12 000 hit/s with over 200Mbps of traffic
How big are the pages requested? Are they all hits? What's the load on the
client and server?
> assuming that each user loads 40 files in 1 minute we get
> 12000*60/40=18 000 users per minute
>
> Is it possible to get half of that 18k users per minute in the real
> world, ignoring the amount of traffic it will generate?
I'd say so, but it depends on how big the data set is. If you can store it
in memory, Varnish is ridiculously fast. I also wouldn't recommend relying
on a single Varnish for more than a few thousand requests per second: if
something goes wrong (suddenly getting lots of misses, for instance), the
problems will quickly spread.
For comparison, I'm looking at a box with roughly 0.4 load serving
2000 req/s as we speak, and that's on 2x dual-core Opteron 2212. Going by
those numbers, it should theoretically be able to handle almost ten times
as many requests if Varnish scaled as a straight line.
That'd give you roughly 18000 req/s at peak (give or take a little). Since
you're talking about 8 cores, that should be 36k req/s, which is _not_
unrealistic from what we've seen in synthetic tests. If each client
requires 40 items, that means roughly 900 clients _per second_, or 54k in
a minute. This math is all rough estimates, but the foundation is
production sites and real traffic patterns.
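
To make the arithmetic explicit, here's a small sketch of the same
estimate. The variable names are mine, the mail above rounds the
intermediate results down, and the linear-scaling assumption is the
weak link:

    # Back-of-envelope version of the estimate above. The big assumption
    # is that Varnish throughput scales linearly with available CPU, which
    # holds in synthetic tests but is optimistic for production.

    observed_req_s = 2000      # req/s served on the reference box
    observed_load  = 0.4       # load average at that traffic level
    cores_ref      = 4         # 2x dual-core Opteron 2212
    cores_target   = 8         # the machine being asked about
    items_per_user = 40        # requests one user generates per minute

    cpu_headroom = cores_ref / observed_load            # ~10x CPU still available
    peak_ref     = observed_req_s * cpu_headroom        # ~20k req/s; rounded down to ~18k above
    peak_target  = peak_ref * cores_target / cores_ref  # ~40k req/s; rounded down to ~36k above

    users_per_second = peak_target / items_per_user     # ~1000 (roughly the 900 above)
    users_per_minute = users_per_second * 60             # ~60k  (roughly the 54k above)

    print(f"peak ~{peak_target:,.0f} req/s, ~{users_per_minute:,.0f} users/minute")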
The problem is that getting your Varnish to deal with 36k req/s is rather
difficult, and you quickly run into network issues and the like. And at
36k req/s you can hardly absorb any backend traffic or delays before it
all falls over.
> For now it's only a theoretical question, but we would like to estimate
> how many machines it will take to handle this kind of load.
>
> Another question is how to scale varnish. I'm thinking about setting up
> 2 loadbalancers which will take care of sessions getting to the same
> server, and 3x8 core machines for www + varnish, or maybe 2x4 core
> loadbalancers with varnish and 3x8 core machines for www. It would be
> possible to use varnish as a loadbalancer with some http cookie
> trickery.
I wouldn't recommend using Varnish to implement sticky sessions, even
though it might be possible. What I've seen people do, though, is put
Apache with mod_proxy _behind_ Varnish and let that deal with sticky
sessions; then Varnish only has to know what to cache and what not
to cache. (And for Varnish, there will be only one backend.)
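
As an aside, here's a tiny sketch of what cookie-based sticky routing
amounts to conceptually. This is not Varnish or mod_proxy configuration
(mod_proxy_balancer has its own stickiness mechanism), and the backend
names and cookie value are made up; it only illustrates the idea of
always sending the same session to the same server:

    # Conceptual sketch only: map a session cookie to the same application
    # server every time. In the setup described above this job is done by
    # Apache/mod_proxy behind Varnish, not by Varnish and not by this code.
    # Backend names and the cookie value are hypothetical.

    import hashlib

    APP_SERVERS = ["app1:8080", "app2:8080", "app3:8080"]

    def pick_backend(session_cookie: str) -> str:
        """Return the same backend for every request carrying this cookie."""
        digest = hashlib.md5(session_cookie.encode()).hexdigest()
        return APP_SERVERS[int(digest, 16) % len(APP_SERVERS)]

    # The same session always lands on the same server:
    print(pick_backend("sessid=abc123"))
    print(pick_backend("sessid=abc123"))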
--
Kristian Lyngstøl
Redpill Linpro AS
Tlf: +47 21544179
Mob: +47 99014497