On tests
Per Buer
perbu at varnish-software.com
Mon Mar 9 12:25:33 CET 2015
Hi,
On Mon, Mar 9, 2015 at 11:49 AM, Poul-Henning Kamp <phk at phk.freebsd.dk>
wrote:
> --------
> In message <CAOXZevCbv95KkamMUb=LPezgP0jO+Njn+NbP5-VBTBwpTuif3g at mail.gmail.com>, Per Buer writes:
>
> >Intermittently failing tests seem to be an issue.
>
> Yes, indeed.
>
> But the one thing I would *really* like to get is some kind of statistics
> of *which* test-cases we have this problem with, and I wish Jenkins could
> somehow be persuaded to collect such info.
>
That was the main objective.
Giving the testrunner an optional interface to something like a
database would be trivial. Having something that can collect a bit
of context for each test run (load, IO response, etc.) in a database would be
helpful in order to figure out _why_ certain tests are failing on certain
platforms.
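
To make it concrete, the sort of thing I have in mind is only a handful of
lines in the runner. Just a sketch, with sqlite3 as an example backend and
made-up table and column names:

    import os, sqlite3, subprocess, time

    def record_run(db, vtc_path):
        # Run a single test and note how long it took, how loaded the
        # machine was when it started and what platform we are on.
        load = os.getloadavg()[0]
        platform = os.uname()[0]
        start = time.time()
        status = subprocess.call(["varnishtest", vtc_path])
        db.execute("INSERT INTO runs (test, platform, status, duration, load)"
                   " VALUES (?, ?, ?, ?, ?)",
                   (vtc_path, platform, status, time.time() - start, load))
        db.commit()
        return status

    db = sqlite3.connect("testruns.db")
    db.execute("CREATE TABLE IF NOT EXISTS runs "
               "(test TEXT, platform TEXT, status INTEGER, "
               "duration REAL, load REAL)")

Everything above is in the standard library, so it keeps with the
no-new-dependencies goal.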
> >Another option would be to start giving hints on how the test should be
> >executed within the test itself. By having a header on the test that the
> >testrunner could read one can set up certain parameters.
> >
> ># parallel=0,load<1,timeout=15s,platforms=!netbsd,retries=3
>
> I'm not very keen on this, it is an open invitation to not take
> the battle and fix the test-cases properly.
>
> (With respect to "platforms=!netbsd" we already have a "feature"
> keyword, that looks for specific details in the run-environment.)
>
Sure. I'm not at all convinced it is a good idea; it is just an option
available to us if we have a somewhat more sophisticated tool for scheduling
tests than what autocrap gives us.
We can of course set our own threshold for what we deem an acceptable
configuration of a test. If tests that are unable to finish on a loaded
system, or on a system without a priori knowledge of the IO response times,
are not acceptable, then we shouldn't accept those tests.
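
If we ever did go that way, reading such a header is a few lines in the
runner. A sketch, using the keys from the example above (none of which are
set in stone):

    def read_hints(vtc_path):
        # Look for a "# parallel=0,load<1,timeout=15s,..." comment near
        # the top of the test and return the individual hints.
        with open(vtc_path) as fh:
            for line in fh.readlines()[:10]:
                if line.startswith("#") and "=" in line:
                    return [hint.strip() for hint in line[1:].split(",")]
        return []

How the scheduler then honours "load<1" or "retries=3" is the interesting
part, and that logic belongs in the scheduler, not in varnishtest.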
> >Let me know if I should commit this and start replacement of the current
> >way tests are run.
>
> So my first question is: Why don't we just teach varnishtest to do this ?
>
Because scheduling the tests and running the tests are different tasks.
Scheduling and interfacing with a database is pretty trivial in Python and
not something we should do directly in varnishtest itself.
The ultimate goal here is to be able to produce reports on how the various
tests are doing: which tests are timing-sensitive and on which platforms they
seem to be struggling.
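
With the results in a database that report is essentially one query. Again
just a sketch, against the table layout sketched earlier:

    import sqlite3

    db = sqlite3.connect("testruns.db")
    query = ("SELECT test, platform, COUNT(*), "
             "SUM(CASE WHEN status != 0 THEN 1 ELSE 0 END) "
             "FROM runs GROUP BY test, platform")
    for test, platform, runs, failures in db.execute(query):
        if failures:
            print("%s on %s: %d of %d runs failed" %
                  (test, platform, failures, runs))

Correlating the failures with the recorded load or duration would then tell
us which of them are timing-related.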
> It would be really trivial to make varnishtest collect the failing
> tests on a list and then run that list after the main-run in
> single-threaded mode (enabled by some argv)
>
Absolutely. And that is what the proposed solution does. It allows us to
keep varnishtest a relatively simple program that deals with running a single
test, and it delegates the scheduling role to something else (not
autotools). And it does so without introducing any new dependencies (we
already depend on Python for building, and I haven't used any modules
outside the core, AFAIK).
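
The retry-the-failures part is equally small on the scheduler side.
Assuming the failed test files end up in a list, something like:

    import subprocess

    def rerun_failures(failed_tests):
        # Re-run each failed test on its own, one at a time, so nothing
        # else competes for CPU or IO while it runs.
        still_failing = []
        for vtc in failed_tests:
            if subprocess.call(["varnishtest", vtc]) != 0:
                still_failing.append(vtc)
        return still_failing

Only the tests that fail both in the parallel run and in the quiet re-run
would then be reported as real failures.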
--
*Per Buer*
CTO | Varnish Software AS
Cell: +47 95839117
We Make Websites Fly!
www.varnish-software.com