[Fryer] experimental-ims FAIL. -1 of 0 tests succeeded.

fryer at oneiros.varnish-software.com
Thu Mar 1 21:32:21 CET 2012


Tests Failed: 0


Tests OK: 0



2012-03-01 18:38:13 [1,17]: Server tristran checked out varnish-3.0.0-beta2-870-ga18ff47 of branch experimental-ims
2012-03-01 18:39:34 [2,80]: httperf-lru-nostream-gzip(httperf): Starting test
2012-03-01 18:46:00 [1,386]: httperf-lru-nostream-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 380 stat: 24 diff: 356). Did we crash?
2012-03-01 18:46:00 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-03-01 18:46:00 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Out of bounds: client_req(42430) less than lower boundary 1999720
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Load:  19:46:01 up  8:53,  0 users,  load average: 8.09, 9.35, 8.20

2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Test name: httperf-lru-nostream-gzip
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Varnish options: 
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): -t=3600
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): -w=200,5000
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): -s=malloc,30M
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Varnish parameters: 
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): nuke_limit=250
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): http_gzip_support=on
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Payload size (excludes headers): 10K
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Branch: experimental-ims
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Number of clients involved: 24
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Type of test: httperf
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Test iterations: 1
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Runtime: 380 seconds
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
}

2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Number of total connections: 200000
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Requests per connection: 10
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-03-01 18:46:01 [1, 0]: httperf-lru-nostream-gzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
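The per-client connection count in the httperf command above follows from dividing the total connections among the clients, which is what the "subject to rounding" note refers to. A minimal sketch of that arithmetic (assuming simple floor division per client, which matches the numbers logged for this run):

```python
# Reproduce the per-client connection split reported by the fryer log.
# Assumption: each client gets floor(total / clients) connections.
total_connections = 200000
clients = 24

per_client = total_connections // clients
print(per_client)                    # matches --num-conns 8333 in the log
print(per_client * clients)          # 199992: slightly below 200000, the
                                     # "slight deviation" the note warns about
```

The same division explains the other tests in this run, e.g. 80000 connections over 24 clients giving `--num-conns 3333`, and 2000 over 24 giving `--num-conns 83`.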
2012-03-01 18:46:08 [2, 7]: httperf-lru-nostream-gzip-deflateoff(httperf): Starting test
2012-03-01 18:52:55 [1,406]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnishstat uptime and measured run-time is too large (measured: 398 stat: 23 diff: 375). Did we crash?
2012-03-01 18:52:56 WARNING [0, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-03-01 18:52:56 WARNING [0, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Out of bounds: client_req(58238) less than lower boundary 1999720
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Load:  19:52:56 up  9:00,  0 users,  load average: 10.96, 14.43, 11.34

2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Test name: httperf-lru-nostream-gzip-deflateoff
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnish options: 
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -t=3600
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -w=200,5000
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -s=malloc,30M
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnish parameters: 
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): nuke_limit=250
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): http_gzip_support=on
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Payload size (excludes headers): 10K
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Branch: experimental-ims
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Number of clients involved: 24
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Type of test: httperf
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Test iterations: 1
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Runtime: 398 seconds
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
	set beresp.do_gzip = true;
}

2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Number of total connections: 200000
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Requests per connection: 10
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Extra options to httperf: --wset=1000000,0.1
2012-03-01 18:52:56 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-03-01 18:53:04 [2, 7]: streaming(httperf): Starting test
2012-03-01 18:55:50 [2,166]: httperf-lru-default(httperf): Starting test
2012-03-01 19:02:03 [1,372]: httperf-lru-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 367 stat: 13 diff: 354). Did we crash?
2012-03-01 19:02:03 WARNING [0, 0]: httperf-lru-default(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-03-01 19:02:03 WARNING [0, 0]: httperf-lru-default(httperf): Out of bounds: client_req(16363) less than lower boundary 1999720
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Load:  20:02:04 up  9:09,  0 users,  load average: 8.22, 10.68, 10.36

2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Test name: httperf-lru-default
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Varnish options: 
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): -t=3600
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): -w=200,5000
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): -s=malloc,30M
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Varnish parameters: 
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): nuke_limit=250
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Payload size (excludes headers): 10K
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Branch: experimental-ims
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Number of clients involved: 24
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Type of test: httperf
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Test iterations: 1
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Runtime: 367 seconds
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Number of total connections: 200000
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Requests per connection: 10
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Extra options to httperf: --wset=1000000,0.1
2012-03-01 19:02:04 [1, 0]: httperf-lru-default(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-03-01 19:02:12 [2, 7]: memleak(httperf): Starting test
2012-03-01 19:14:14 WARNING [0,722]: memleak(httperf): Out of bounds: client_req(9951182) less than lower boundary 9959800
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Load:  20:14:14 up  9:21,  0 users,  load average: 20.48, 21.67, 17.20

2012-03-01 19:14:14 [1, 0]: memleak(httperf): Test name: memleak
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Varnish options: 
2012-03-01 19:14:14 [1, 0]: memleak(httperf): -t=3600
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Varnish parameters: 
2012-03-01 19:14:14 [1, 0]: memleak(httperf): thread_pool_add_delay=1
2012-03-01 19:14:14 [1, 0]: memleak(httperf): http_gzip_support=on
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Payload size (excludes headers): 512
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Branch: experimental-ims
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Number of clients involved: 24
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Type of test: httperf
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Test iterations: 1
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Runtime: 715 seconds
2012-03-01 19:14:14 [1, 0]: memleak(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-03-01 19:14:14 [1, 0]: memleak(httperf): Number of total connections: 2000
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Requests per connection: 5000
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Extra options to httperf: --wset=100,0.10
2012-03-01 19:14:14 [1, 0]: memleak(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 5000 --num-conns 83 --port 8080 --burst-length 5000 --client 23/24 --server 10.20.100.4 --wset=100,0.10
2012-03-01 19:14:22 [2, 7]: 4gpluss-stream(httperf): Starting test
2012-03-01 19:14:25 WARNING [0, 3]: Varnish failed to start. Fallback attempts starting
2012-03-01 19:14:25 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: thread_pool_add_delay=1
2012-03-01 19:14:26 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: sess_timeout=60000s
2012-03-01 19:14:27 [1, 1]: Fallback worked. Parameter that seemed to cause problems: sess_timeout
2012-03-01 19:35:10 [2,1242]: httperf-lru-stream-default(httperf): Starting test
2012-03-01 19:41:32 [1,382]: httperf-lru-stream-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 375 stat: 24 diff: 351). Did we crash?
2012-03-01 19:41:33 WARNING [0, 0]: httperf-lru-stream-default(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-03-01 19:41:33 WARNING [0, 0]: httperf-lru-stream-default(httperf): Out of bounds: client_req(46893) less than lower boundary 1999720
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Load:  20:41:33 up  9:49,  0 users,  load average: 8.45, 8.79, 7.60

2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Test name: httperf-lru-stream-default
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Varnish options: 
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): -t=3600
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): -w=200,5000
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): -s=malloc,30M
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Varnish parameters: 
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): nuke_limit=250
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Payload size (excludes headers): 10K
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Branch: experimental-ims
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Number of clients involved: 24
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Type of test: httperf
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Test iterations: 1
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Runtime: 375 seconds
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Number of total connections: 200000
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Requests per connection: 10
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Extra options to httperf: --wset=1000000,0.1
2012-03-01 19:41:33 [1, 0]: httperf-lru-stream-default(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-03-01 19:41:41 [2, 7]: httperf-hot(httperf): Starting test
2012-03-01 19:43:36 [2,114]: httperf-lru-nostream-nogzip(httperf): Starting test
2012-03-01 19:49:04 [1,328]: httperf-lru-nostream-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 323 stat: 52 diff: 271). Did we crash?
2012-03-01 19:49:05 WARNING [0, 0]: httperf-lru-nostream-nogzip(httperf): Out of bounds: n_lru_nuked(22593) less than lower boundary 80000
2012-03-01 19:49:05 WARNING [0, 0]: httperf-lru-nostream-nogzip(httperf): Out of bounds: client_req(254226) less than lower boundary 1999720
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Load:  20:49:05 up  9:56,  0 users,  load average: 3.90, 7.74, 8.01

2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Test name: httperf-lru-nostream-nogzip
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Varnish options: 
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): -t=3600
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): -w=200,5000
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): -s=malloc,30M
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Varnish parameters: 
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): nuke_limit=250
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): http_gzip_support=off
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Payload size (excludes headers): 10K
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Branch: experimental-ims
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Number of clients involved: 24
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Type of test: httperf
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Test iterations: 1
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Runtime: 323 seconds
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = false;
}

2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Number of total connections: 200000
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Requests per connection: 10
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-03-01 19:49:05 [1, 0]: httperf-lru-nostream-nogzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-03-01 19:49:13 [2, 7]: cold-gzip(httperf): Starting test
2012-03-01 19:52:44 [1,211]: cold-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 202 stat: 3 diff: 199). Did we crash?
2012-03-01 19:56:03 [1,198]: cold-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 404 stat: 11 diff: 393). Did we crash?
2012-03-01 19:56:03 WARNING [0, 0]: cold-gzip(httperf): Out of bounds: uptime(11) less than lower boundary 100
2012-03-01 19:56:03 WARNING [0, 0]: cold-gzip(httperf): Out of bounds: client_req(10240) less than lower boundary 1599640
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Load:  20:56:04 up 10:03,  0 users,  load average: 4.09, 5.76, 6.99

2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Test name: cold-gzip
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Varnish options: 
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): -t=3600
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): -s=malloc,10G
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Varnish parameters: 
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): http_gzip_support=on
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Payload size (excludes headers): 256
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Branch: experimental-ims
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Number of clients involved: 24
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Type of test: httperf
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Test iterations: 2
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Runtime: 404 seconds
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Number of total connections: 80000
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Requests per connection: 10
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Extra options to httperf: --wset=4000000,0.50
2012-03-01 19:56:04 [1, 0]: cold-gzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 3333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=4000000,0.50
2012-03-01 19:56:11 [2, 7]: 4gpluss(httperf): Starting test
2012-03-01 19:56:15 WARNING [0, 3]: Varnish failed to start. Fallback attempts starting
2012-03-01 19:56:15 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: thread_pool_add_delay=1
2012-03-01 19:56:16 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: sess_timeout=60000s
2012-03-01 19:56:17 [1, 1]: Fallback worked. Parameter that seemed to cause problems: sess_timeout
2012-03-01 20:17:07 [2,1249]: httperf-lru-stream-gzip(httperf): Starting test
2012-03-01 20:23:36 [1,389]: httperf-lru-stream-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 381 stat: 24 diff: 357). Did we crash?
2012-03-01 20:23:36 WARNING [0, 0]: httperf-lru-stream-gzip(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000
2012-03-01 20:23:36 WARNING [0, 0]: httperf-lru-stream-gzip(httperf): Out of bounds: client_req(35466) less than lower boundary 1999720
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Load:  21:23:36 up 10:31,  0 users,  load average: 8.59, 9.13, 6.12

2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Test name: httperf-lru-stream-gzip
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Varnish options: 
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): -t=3600
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): -w=200,5000
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): -s=malloc,30M
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Varnish parameters: 
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): nuke_limit=250
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): http_gzip_support=on
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Payload size (excludes headers): 10K
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Branch: experimental-ims
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Number of clients involved: 24
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Type of test: httperf
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Test iterations: 1
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Runtime: 381 seconds
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Number of total connections: 200000
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Requests per connection: 10
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-03-01 20:23:36 [1, 0]: httperf-lru-stream-gzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-03-01 20:23:44 [2, 7]: httperf-lru-stream-nogzip(httperf): Starting test
2012-03-01 20:28:48 [1,304]: httperf-lru-stream-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 298 stat: 41 diff: 257). Did we crash?
2012-03-01 20:28:49 WARNING [0, 0]: httperf-lru-stream-nogzip(httperf): Out of bounds: n_lru_nuked(12444) less than lower boundary 80000
2012-03-01 20:28:49 WARNING [0, 0]: httperf-lru-stream-nogzip(httperf): Out of bounds: client_req(152895) less than lower boundary 1999720
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Load:  21:28:49 up 10:36,  0 users,  load average: 3.88, 8.24, 6.80

2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Test name: httperf-lru-stream-nogzip
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Varnish options: 
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): -t=3600
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): -w=200,5000
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): -s=malloc,30M
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Varnish parameters: 
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): nuke_limit=250
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): http_gzip_support=off
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Payload size (excludes headers): 10K
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Branch: experimental-ims
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Number of clients involved: 24
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Type of test: httperf
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Test iterations: 1
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Runtime: 298 seconds
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Number of total connections: 200000
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Requests per connection: 10
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Extra options to httperf: --wset=1000000,0.1
2012-03-01 20:28:49 [1, 0]: httperf-lru-stream-nogzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=1000000,0.1
2012-03-01 20:28:57 [2, 7]: basic-fryer(httperf): Starting test
2012-03-01 20:29:14 [2,17]: cold-nogzip(httperf): Starting test
2012-03-01 20:31:58 [1,163]: cold-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 158 stat: 0 diff: 158). Did we crash?
2012-03-01 20:32:06 [1, 7]: cold-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 166 stat: 8 diff: 158). Did we crash?
2012-03-01 20:32:07 WARNING [0, 0]: cold-nogzip(httperf): Out of bounds: uptime(8) less than lower boundary 100
2012-03-01 20:32:07 WARNING [0, 0]: cold-nogzip(httperf): Out of bounds: client_req(24612) less than lower boundary 1599640
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Load:  21:32:07 up 10:39,  0 users,  load average: 4.08, 6.23, 6.27

2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Test name: cold-nogzip
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Varnish options: 
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): -t=3600
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): -s=malloc,10G
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Varnish parameters: 
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): http_gzip_support=off
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Payload size (excludes headers): 256
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Branch: experimental-ims
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Number of clients involved: 24
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Type of test: httperf
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Test iterations: 2
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Runtime: 166 seconds
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): VCL: 
backend foo {
	.host = "localhost";
	.port = "80";
}

sub vcl_fetch {
	set beresp.do_stream = true;
}

2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Number of total connections: 80000
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations.
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Requests per connection: 10
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Extra options to httperf: --wset=4000000,0.50
2012-03-01 20:32:07 [1, 0]: cold-nogzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 3333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.4 --wset=4000000,0.50
2012-03-01 20:32:15 [2, 7]: 4gpluss-nostream(httperf): Starting test
2012-03-01 20:32:18 WARNING [0, 3]: Varnish failed to start. Fallback attempts starting
2012-03-01 20:32:18 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: thread_pool_add_delay=1
2012-03-01 20:32:19 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: sess_timeout=60000s
2012-03-01 20:32:20 [1, 0]: Removed (hopefully incompatible) parameter to try to start varnish. Key: send_timeout=60000s
2012-03-01 20:32:21 WARNING [0, 0]: Caught internal fryer-error. Message: Couldn't start Varnish on tristran.varnish-software.com. Contingency measures exhausted.
2012-03-01 20:32:21 WARNING [0, 0]: Tests finished with problems detected. Failed expectations: 1 Total run time: 6865 seconds 