[Fryer] experimental-ims FAIL. -1 of 0 tests succeeded.

fryer at oneiros.varnish-software.com
Wed Feb 29 23:33:37 CET 2012


Tests Failed: 0


Tests OK: 0



2012-02-29 07:15:45 [1,18]: Server tristran checked out varnish-3.0.0-beta2-870-ga18ff47 of branch experimental-ims
2012-02-29 07:17:05 [2,79]: httperf-lru-nostream-gzip(httperf): Starting test
2012-02-29 07:23:16 [1,371]: httperf-lru-nostream-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 365 stat: 3 diff: 362). Did we crash?
2012-02-29 07:23:16 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-02-29 07:23:16 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Out of bounds: client_req(1470) less than lower boundary 1999720
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Load:  08:23:17 up 19:52,  2 users,  load average: 10.51, 9.60, 7.82

2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Test name: httperf-lru-nostream-gzip
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Varnish options: 
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): -t=3600
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): -w=200,5000
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): -s=malloc,30M
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Varnish parameters: 
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): nuke_limit=250
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): http_gzip_support=on
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Payload size (excludes headers): 10K
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Branch: experimental-ims
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Number of clients involved: 24
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Type of test: httperf
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Test iterations: 1
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Runtime: 365 seconds
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
}

2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Number of total connections: 200000
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Requests per connection: 10
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 07:23:17 [1, 0]: httperf-lru-nostream-gzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 07:23:24 [2, 7]: httperf-lru-nostream-gzip-deflateoff(httperf): Starting test
2012-02-29 07:30:07 [1,402]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnishstat uptime and measured run-time is too large (measured: 394 stat: 22 diff: 372). Did we crash?
2012-02-29 07:30:08 WARNING [0, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-02-29 07:30:08 WARNING [0, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Out of bounds: client_req(42496) less than lower boundary 1999720
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Load:  08:30:08 up 19:59,  2 users,  load average: 9.03, 14.05, 10.94

2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Test name: httperf-lru-nostream-gzip-deflateoff
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnish options: 
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -t=3600
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -w=200,5000
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -s=malloc,30M
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnish parameters: 
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): nuke_limit=250
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): http_gzip_support=on
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Payload size (excludes headers): 10K
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Branch: experimental-ims
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Number of clients involved: 24
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Type of test: httperf
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Test iterations: 1
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Runtime: 394 seconds
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
	set beresp.do_gzip = true;
}

2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Number of total connections: 200000
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Requests per connection: 10
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 07:30:08 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 07:30:16 [2, 7]: streaming(httperf): Starting test
2012-02-29 07:33:02 [2,166]: httperf-lru-default(httperf): Starting test
2012-02-29 07:39:28 [1,386]: httperf-lru-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 381 stat: 34 diff: 347). Did we crash?
2012-02-29 07:39:29 WARNING [0, 0]: httperf-lru-default(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-02-29 07:39:29 WARNING [0, 0]: httperf-lru-default(httperf): Out of bounds: client_req(138610) less than lower boundary 1999720
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Load:  08:39:29 up 20:08,  2 users,  load average: 9.87, 11.01, 10.26

2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Test name: httperf-lru-default
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Varnish options: 
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): -t=3600
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): -w=200,5000
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): -s=malloc,30M
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Varnish parameters: 
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): nuke_limit=250
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Payload size (excludes headers): 10K
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Branch: experimental-ims
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Number of clients involved: 24
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Type of test: httperf
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Test iterations: 1
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Runtime: 381 seconds
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Number of total connections: 200000
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Requests per connection: 10
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 07:39:29 [1, 0]: httperf-lru-default(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 07:39:37 [2, 7]: memleak(httperf): Starting test
2012-02-29 07:51:06 WARNING [0,688]: memleak(httperf): Out of bounds: client_req(9949559) less than lower boundary 9959800
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Load:  08:51:06 up 20:20,  0 users,  load average: 21.35, 21.69, 16.96

2012-02-29 07:51:06 [1, 0]: memleak(httperf): Test name: memleak
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Varnish options: 
2012-02-29 07:51:06 [1, 0]: memleak(httperf): -t=3600
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Varnish parameters: 
2012-02-29 07:51:06 [1, 0]: memleak(httperf): thread_pool_add_delay=1
2012-02-29 07:51:06 [1, 0]: memleak(httperf): http_gzip_support=on
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Payload size (excludes headers): 512
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Branch: experimental-ims
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Number of clients involved: 24
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Type of test: httperf
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Test iterations: 1
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Runtime: 682 seconds
2012-02-29 07:51:06 [1, 0]: memleak(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-02-29 07:51:06 [1, 0]: memleak(httperf): Number of total connections: 2000
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Requests per connection: 5000
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Extra options to httperf: --wset=100,0.10
2012-02-29 07:51:06 [1, 0]: memleak(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 5000 --num-conns 83 --port 8080 --burst-length 5000 --client 23/24 --server 10.20.100.4 --wset=100,0.10
2012-02-29 07:51:14 [2, 7]: 4gpluss-stream(httperf): Starting test
2012-02-29 07:51:17 WARNING [0, 3]: Varnish failed to start. Fallback attempts starting
2012-02-29 07:51:17 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: thread_pool_add_delay=1
2012-02-29 07:51:18 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: sess_timeout=60000s
2012-02-29 07:51:19 [1, 1]: Fallback worked. Parameter that seemed to cause problems: sess_timeout
2012-02-29 08:13:04 [2,1304]: httperf-lru-stream-default(httperf): Starting test
2012-02-29 08:19:22 [1,378]: httperf-lru-stream-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 372 stat: 21 diff: 351). Did we crash?
2012-02-29 08:19:22 WARNING [0, 0]: httperf-lru-stream-default(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-02-29 08:19:22 WARNING [0, 0]: httperf-lru-stream-default(httperf): Out of bounds: client_req(45527) less than lower boundary 1999720
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Load:  09:19:23 up 20:48,  0 users,  load average: 7.29, 8.53, 7.31

2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Test name: httperf-lru-stream-default
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Varnish options: 
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): -t=3600
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): -w=200,5000
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): -s=malloc,30M
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Varnish parameters: 
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): nuke_limit=250
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Payload size (excludes headers): 10K
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Branch: experimental-ims
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Number of clients involved: 24
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Type of test: httperf
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Test iterations: 1
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Runtime: 372 seconds
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Number of total connections: 200000
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Requests per connection: 10
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 08:19:23 [1, 0]: httperf-lru-stream-default(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 08:19:30 [2, 7]: httperf-hot(httperf): Starting test
2012-02-29 08:21:47 [2,136]: httperf-lru-nostream-nogzip(httperf): Starting test
2012-02-29 08:27:11 [1,324]: httperf-lru-nostream-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 318 stat: 45 diff: 273). Did we crash?
2012-02-29 08:27:11 WARNING [0, 0]: httperf-lru-nostream-nogzip(httperf): Out of bounds: n_lru_nuked(19142) less than lower boundary 80000
2012-02-29 08:27:11 WARNING [0, 0]: httperf-lru-nostream-nogzip(httperf): Out of bounds: client_req(219723) less than lower boundary 1999720
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Load:  09:27:12 up 20:56,  0 users,  load average: 4.44, 7.39, 7.57

2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Test name: httperf-lru-nostream-nogzip
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Varnish options: 
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): -t=3600
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): -w=200,5000
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): -s=malloc,30M
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Varnish parameters: 
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): nuke_limit=250
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): http_gzip_support=off
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Payload size (excludes headers): 10K
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Branch: experimental-ims
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Number of clients involved: 24
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Type of test: httperf
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Test iterations: 1
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Runtime: 318 seconds
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
}

2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Number of total connections: 200000
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Requests per connection: 10
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 08:27:12 [1, 0]: httperf-lru-nostream-nogzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 08:27:19 [2, 7]: cold-gzip(httperf): Starting test
2012-02-29 08:30:49 [1,209]: cold-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 201 stat: 6 diff: 195). Did we crash?
2012-02-29 08:38:35 [1,465]: cold-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 669 stat: 12 diff: 657). Did we crash?
2012-02-29 08:38:35 WARNING [0, 0]: cold-gzip(httperf): Out of bounds: uptime(12) less than lower boundary 100
2012-02-29 08:38:35 WARNING [0, 0]: cold-gzip(httperf): Out of bounds: client_req(1880) less than lower boundary 1599640
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Load:  09:38:35 up 21:07,  0 users,  load average: 31.40, 65.62, 38.73

2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Test name: cold-gzip
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Varnish options: 
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): -t=3600
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): -s=malloc,10G
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Varnish parameters: 
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): http_gzip_support=on
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Payload size (excludes headers): 256
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Branch: experimental-ims
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Number of clients involved: 24
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Type of test: httperf
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Test iterations: 2
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Runtime: 669 seconds
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Number of total connections: 80000
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Requests per connection: 10
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Extra options to httperf: --wset=4000000,0.50
2012-02-29 08:38:35 [1, 0]: cold-gzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 3333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=4000000,0.50
2012-02-29 08:38:43 [2, 7]: 4gpluss(httperf): Starting test
2012-02-29 08:38:47 WARNING [0, 3]: Varnish failed to start. Fallback attempts starting
2012-02-29 08:38:47 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: thread_pool_add_delay=1
2012-02-29 08:38:47 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: sess_timeout=60000s
2012-02-29 08:38:49 [1, 1]: Fallback worked. Parameter that seemed to cause problems: sess_timeout
2012-02-29 08:59:54 [2,1265]: httperf-lru-stream-gzip(httperf): Starting test
2012-02-29 09:06:23 [1,388]: httperf-lru-stream-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 379 stat: 123 diff: 256). Did we crash?
2012-02-29 09:06:23 WARNING [0, 0]: httperf-lru-stream-gzip(httperf): Out of bounds: n_lru_nuked(26398) less than lower boundary 80000
2012-02-29 09:06:23 WARNING [0, 0]: httperf-lru-stream-gzip(httperf): Out of bounds: client_req(580170) less than lower boundary 1999720
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Load:  10:06:24 up 21:35,  0 users,  load average: 11.08, 10.35, 11.72

2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Test name: httperf-lru-stream-gzip
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Varnish options: 
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): -t=3600
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): -w=200,5000
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): -s=malloc,30M
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Varnish parameters: 
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): nuke_limit=250
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): http_gzip_support=on
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Payload size (excludes headers): 10K
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Branch: experimental-ims
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Number of clients involved: 24
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Type of test: httperf
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Test iterations: 1
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Runtime: 379 seconds
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Number of total connections: 200000
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Requests per connection: 10
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 09:06:24 [1, 0]: httperf-lru-stream-gzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 09:06:31 [2, 7]: httperf-lru-stream-nogzip(httperf): Starting test
2012-02-29 09:11:37 [1,305]: httperf-lru-stream-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 299 stat: 48 diff: 251). Did we crash?
2012-02-29 09:11:37 WARNING [0, 0]: httperf-lru-stream-nogzip(httperf): Out of bounds: n_lru_nuked(17675) less than lower boundary 80000
2012-02-29 09:11:37 WARNING [0, 0]: httperf-lru-stream-nogzip(httperf): Out of bounds: client_req(205019) less than lower boundary 1999720
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Load:  10:11:37 up 21:40,  0 users,  load average: 4.84, 8.97, 10.88

2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Test name: httperf-lru-stream-nogzip
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Varnish options: 
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): -t=3600
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): -w=200,5000
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): -s=malloc,30M
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Varnish parameters: 
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): nuke_limit=250
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): http_gzip_support=off
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Payload size (excludes headers): 10K
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Branch: experimental-ims
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Number of clients involved: 24
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Type of test: httperf
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Test iterations: 1
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Runtime: 299 seconds
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Number of total connections: 200000
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Requests per connection: 10
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 09:11:37 [1, 0]: httperf-lru-stream-nogzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 09:11:45 [2, 7]: basic-fryer(httperf): Starting test
2012-02-29 09:12:00 [2,14]: cold-nogzip(httperf): Starting test
2012-02-29 09:14:44 [1,164]: cold-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 158 stat: 44 diff: 114). Did we crash?
2012-02-29 09:17:01 [1,137]: cold-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 295 stat: 7 diff: 288). Did we crash?
2012-02-29 09:17:01 WARNING [0, 0]: cold-nogzip(httperf): Out of bounds: uptime(7) less than lower boundary 100
2012-02-29 09:17:01 WARNING [0, 0]: cold-nogzip(httperf): Out of bounds: client_req(17930) less than lower boundary 1599640
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Load:  10:17:02 up 21:46,  0 users,  load average: 4.43, 6.06, 9.04

2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Test name: cold-nogzip
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Varnish options: 
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): -t=3600
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): -s=malloc,10G
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Varnish parameters: 
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): http_gzip_support=off
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Payload size (excludes headers): 256
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Branch: experimental-ims
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Number of clients involved: 24
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Type of test: httperf
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Test iterations: 2
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Runtime: 295 seconds
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Number of total connections: 80000
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Requests per connection: 10
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Extra options to httperf: --wset=4000000,0.50
2012-02-29 09:17:02 [1, 0]: cold-nogzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 3333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=4000000,0.50
2012-02-29 09:17:09 [2, 7]: 4gpluss-nostream(httperf): Starting test
2012-02-29 09:17:13 WARNING [0, 3]: Varnish failed to start. Fallback attempts starting
2012-02-29 09:17:13 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: thread_pool_add_delay=1
2012-02-29 09:17:14 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: sess_timeout=60000s
2012-02-29 09:17:15 [1, 1]: Fallback worked. Parameter that seemed to cause problems: sess_timeout
2012-02-29 09:37:55 [2,1240]: lru-random(httperf): Starting test
2012-02-29 09:38:18 WARNING [0,22]: lru-random(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 3000
2012-02-29 09:38:18 WARNING [0, 0]: lru-random(httperf): Out of bounds: cache_hitpass(22439) more than upper boundary 0
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Load:  10:38:18 up 22:07,  0 users,  load average: 1.23, 1.33, 3.23

2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Test name: lru-random
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Varnish options: 
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): -t=3600000
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): -s=malloc,1G
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Varnish parameters: 
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Payload size (excludes headers): 256
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Branch: experimental-ims
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Number of clients involved: 24
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Type of test: httperf
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Test iterations: 1
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Runtime: 16 seconds
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): VCL: 

backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
//	set beresp.do_gzip = true;
}

2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Number of total connections: 10000
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Requests per connection: 3
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Extra options to httperf: --wset=100000,0.25
2012-02-29 09:38:18 [1, 0]: lru-random(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 3 --num-conns 416 --port 8080 --burst-length 3 --client 23/24 --server 10.20.100.4 --wset=100000,0.25
2012-02-29 09:38:26 [2, 7]: siege-test(siege): Starting test
2012-02-29 09:39:40 [2,74]: sky-misc(httperf): Starting test
2012-02-29 09:46:07 [2,387]: httperf-lru-nostream-default(httperf): Starting test
2012-02-29 09:52:21 [1,373]: httperf-lru-nostream-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 368 stat: 20 diff: 348). Did we crash?
2012-02-29 09:52:21 WARNING [0, 0]: httperf-lru-nostream-default(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-02-29 09:52:21 WARNING [0, 0]: httperf-lru-nostream-default(httperf): Out of bounds: client_req(48535) less than lower boundary 1999720
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Load:  10:52:22 up 22:21,  0 users,  load average: 8.39, 13.10, 10.57

2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Test name: httperf-lru-nostream-default
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Varnish options: 
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): -t=3600
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): -w=200,5000
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): -s=malloc,30M
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Varnish parameters: 
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): nuke_limit=250
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Payload size (excludes headers): 10K
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Branch: experimental-ims
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Number of clients involved: 24
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Type of test: httperf
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Test iterations: 1
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Runtime: 368 seconds
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
}

2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Number of total connections: 200000
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Requests per connection: 10
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Extra options to httperf: --wset=1000000,0.1
2012-02-29 09:52:22 [1, 0]: httperf-lru-nostream-default(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-02-29 09:52:29 [2, 7]: httperf-rapid-expire(httperf): Starting test
2012-02-29 09:54:20 [2,111]: streaming-grace(httperf): Starting test
2012-02-29 09:57:07 [2,166]: cold-default(httperf): Starting test
2012-02-29 10:00:29 [1,202]: cold-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 197 stat: 14 diff: 183). Did we crash?
2012-02-29 22:33:37 WARNING [0,45187]: Unknown exception caught.
2012-02-29 22:33:37 [1, 0]: 
Traceback (most recent call last):
  File "/usr/local/bin/fryer", line 59, in <module>
    r, ntests, ok_tests, fail_tests = fryerlib.run_main()
  File "/usr/local/lib/python2.6/dist-packages/fryerlib/__init__.py", line 70, in run_main
    tmp=t.run_tests()
  File "/usr/local/lib/python2.6/dist-packages/fryerlib/tests/__init__.py", line 138, in run_tests
    statruntime= int(stat.stat("uptime"))
  File "/usr/local/lib/python2.6/dist-packages/fryerlib/varnishstat.py", line 50, in stat
    return self.target.stats[key]
KeyError: 'uptime'
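The traceback shows the run aborting because `varnishstat` returned no `uptime` counter, which is consistent with the repeated "Did we crash?" warnings earlier in the log (a restarted varnishd child leaves a stats snapshot with missing or near-zero counters). A minimal defensive sketch of how such a lookup could degrade gracefully instead of killing the whole run — `safe_stat` and the example snapshot are hypothetical, not part of fryerlib:

```python
def safe_stat(stats, key, default=None):
    """Return stats[key] as an int, or default when the counter is absent.

    A missing counter typically means varnishstat had no data for it,
    e.g. because the varnishd child process restarted mid-test.
    """
    try:
        return int(stats[key])
    except (KeyError, TypeError, ValueError):
        return default

# Hypothetical snapshot lacking 'uptime', as in the crash above:
stats = {"client_req": 1880}
runtime = safe_stat(stats, "uptime", default=0)  # 0 instead of KeyError
```

With a guard like this, the harness could log the missing counter as one more failed expectation and continue to the remaining tests, rather than spending the observed 45187 seconds stuck before the unhandled exception surfaced.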

2012-02-29 22:33:37 WARNING [0, 0]: Tests finished with problems detected. Failed expectations: 1 Total run time: 55091 seconds 


