fix docker autotest suite task for libtaskotron
ClosedPublic

Authored by garretraziel on Mar 7 2016, 10:41 AM.

Details

Summary

This builds on code from task-dockerautotest. It corrects a few things (it now
starts the docker service correctly) and it reports differently (one summary result
for the whole check instead of a separate check for every subtest). Some simple fixes in the docker autotest
repo were needed (https://github.com/autotest/autotest-docker/commit/c75622 and
https://github.com/autotest/autotest-docker/issues/572).

It also installs docker from Koji (I assume that this is how it should work).

Test Plan

Run the whole test suite with
runtask --libvirt -i docker-1.9.1-6.git6ec29ef.fc23 -t koji_build -a x86_64 dockerautotest.yml.
59 subtests pass, 39 fail and 2 error out (see T735), but the failing tests are mostly garbage checks
(one of the tests leaves an image behind and all subsequent garbage checks fail because of that).
It's the same result as running on the local machine.

Diff Detail

Repository
rDOCKER task-dockerautotest
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.
garretraziel retitled this revision from to fix docker autotest suite task for libtaskotron.Mar 7 2016, 10:41 AM
garretraziel updated this object.
garretraziel edited the test plan for this revision. (Show Details)
garretraziel added reviewers: tflink, kparal.

I have executed the task and it seems to work OK and the generated artifacts look nice. Good job. Some nitpicks and questions below.

dockerautotest.yml
27

Remark for taskotron developers: We will need to look at arch handling, because currently our disposable minion support does not take it into account at all.

run_dockerautotest.py
8–9

Do we need to use autotest from git, or would it be enough to use the autotest package in Fedora (added to task formula)?

10–11

You're not checking the exitcode of any executed command. Is that intentional? The recommended usage of subprocess is:

output = subprocess.check_output(['some', 'command'], stderr=subprocess.STDOUT)

which raises subprocess.CalledProcessError for non-zero exit code.

Also, if you want the command's output printed automatically, you can use subprocess.check_call() instead, which leaves stdout attached to the console rather than capturing it.

22–23

What's the purpose of this? I think it will fail with KeyError a few lines below.

83

In this case I'd actually argue that NEEDS_INSPECTION makes sense for the overall result, because CRASHED seems to imply the whole task crashed, not that just 2 out of 80 subtests crashed. OTOH for the individual results, CRASHED is clearer than NEEDS_INSPECTION, or you could use their own terminology when creating the summary line. It depends on whether you want to play with it or not.

garretraziel updated this revision to Diff 1975.Mar 9 2016, 2:31 PM
  • use autotest from repos, rework running commands a little bit
tflink added a comment.Mar 9 2016, 8:04 PM

My only concern about using the packaged version of autotest is that the package might stop getting updated while something changes upstream. I wonder if we should ask the docker-autotest folks about that.

Overall, it looks good to me aside from the inline comments and the question about why we're only running a subset of the tests in docker-autotest.

run_dockerautotest.py
60

why are we limiting the tests run to just these?

83

yeah, that makes sense to me. if we set one result to CRASHED, the whole result becomes CRASHED, right?

In that case, it may make more sense to use NEEDS_INSPECTION as the status for a single result but keep ERROR in the summary to use upstream terminology instead of stuffing our own classification into text where it's not needed

Seems OK overall, modulo Tim's questions.

run_dockerautotest.py
36–37

You don't want interactive copying here, I believe.

83

Agreed that I would use their own terms in the summary, if it is not too complicated.

  • use autotest from repos, rework running commands a little bit
  • create custom summary (drop AutotestJsonParser), use correct subprocess function
kparal accepted this revision.Mar 11 2016, 1:16 PM
This revision is now accepted and ready to land.Mar 11 2016, 1:16 PM
tflink accepted this revision.Mar 11 2016, 1:29 PM
This revision was automatically updated to reflect the committed changes.