Poor #requests/second performance
sten at blinkenlights.nl
Wed Jun 3 08:58:22 CEST 2009
On Tue, 2 Jun 2009, Andreas Jung wrote:
> Sorry for the noise. The customer was running a transparent proxy
> within the network without telling me. Now I reach a performance of
> roughly 3000 requests/second, which is fast enough but still slower
> than Squid.
If you are testing with 10 open connections then you are not
running a realistic test. In the real world a busy site will
see roughly 10-100 times more open connections than requests per
second, because HTTP/1.1 keepalive keeps end-user connections open.
I maintained a site doing 200-500 Mbit/s a few years ago; req/s
was in the 10k-100k range, but the number of open connections was
at least an order of magnitude higher.
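The connection-to-request ratio above follows from Little's law: open connections are roughly the connection arrival rate times how long each connection stays open, and keepalive makes that lifetime long. A minimal sketch, with purely illustrative numbers (the request rate, requests per connection, and idle timeout are assumptions, not figures from this post):

```python
# Rough estimate via Little's law: open connections ~= connections opened per
# second * how long each connection lives. With HTTP/1.1 keepalive a
# connection lingers for the idle timeout after its last request, so even a
# modest request rate keeps many connections open.
# All numbers here are illustrative assumptions.

def estimated_open_connections(requests_per_sec, requests_per_conn,
                               keepalive_timeout_sec):
    """Connections opened per second times the time each one stays open."""
    conns_per_sec = requests_per_sec / requests_per_conn
    # Each connection lives roughly the keepalive idle timeout; the time
    # spent actually serving requests is assumed small and ignored here.
    return conns_per_sec * keepalive_timeout_sec

if __name__ == "__main__":
    # 10,000 req/s, 5 requests per keepalive connection, 15 s idle timeout:
    print(estimated_open_connections(10_000, 5, 15))  # -> 30000.0
```

With these assumed numbers the server holds ~30,000 open connections for 10,000 req/s, which is why a benchmark with 10 connections says little about production behaviour.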
Furthermore, for a truly realistic test you need to figure out
what kind of latency/bandwidth clients will have and add that to
the workload. This is why ab is useless for realistic tests.
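The latency point can be made concrete: over a keepalive connection each request waits for the previous response, so one connection completes at most 1/(RTT + service time) requests per second. A benchmark run on localhost sees near-zero RTT and therefore overstates throughput. A hypothetical sketch (the RTT and service-time figures are assumptions for illustration):

```python
# Each keepalive connection is a serial pipeline: a new request cannot start
# until the previous response has returned, so one connection sustains at
# most 1 / (rtt + service_time) requests per second. ab on localhost
# (rtt ~ 0) therefore reports far higher req/s than remote clients achieve.
# All figures below are illustrative assumptions, not measurements.

def max_requests_per_sec(connections, rtt_sec, service_time_sec):
    """Upper bound on aggregate req/s for serial requests per connection."""
    per_conn = 1.0 / (rtt_sec + service_time_sec)
    return connections * per_conn

if __name__ == "__main__":
    # 100 concurrent connections, 1 ms server-side service time:
    print(max_requests_per_sec(100, 0.0001, 0.001))  # localhost-like RTT
    print(max_requests_per_sec(100, 0.080, 0.001))   # 80 ms WAN RTT
```

Under these assumptions an 80 ms client RTT cuts the achievable rate by almost two orders of magnitude compared to the localhost run, which is the gap ab hides.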
The really big problem with HTTP is that even when 90-95% of the
objects load successfully, a site will feel "broken". If traffic
spikes are expected then you really want the capacity to handle them.
"There is a crack in everything, that's how the light gets in."
Leonard Cohen - Anthem
More information about the varnish-misc mailing list