| Commit message | Author | Age | Files | Lines |
|
The capability to handle a variance between prod and staging
here is just temporary while I'm testing the new fixed asset
handling stuff by deploying it on staging. Once it's tested
and merged we'll just have prod and staging do the same thing.
But for now we need to cleanly handle them having the static
disk images in different places.
|
We fixed the issue which meant ARM kernel / initrd file names
were colliding, so we don't need this workaround any more.
|
I've enhanced `createhdds check` to exit 1 if all images are
present but some are outdated, and 2 if any images are missing.
We use this in the play to create images only when some are
missing; we rely on the daily cron job to rebuild outdated ones.
This is kind of a band-aid for a weird issue on openqa01 where
virt-install runs just don't seem to work properly after the
box has been running for a while, so createhdds doesn't actually
work and any playbook run gets hung up on it for a long time.
This doesn't fix that, but it does at least mean we can run the
playbook without being bothered by it. To get createhdds to run
properly and actually regenerate the outdated images, we have
to reboot the system and run it right away; it seems to work
fine right after the system boots up.
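
The play's branching on those exit codes can be sketched like this. `check_stub` is a hypothetical stand-in for the real `createhdds check` call, so only the dispatch logic (0 = current, 1 = present but outdated, 2 = missing, as described above) is shown:

```shell
#!/bin/sh
# check_stub is a hypothetical stand-in for `createhdds check`; it just
# exits with the status we pass in, so the dispatch below is testable.
check_stub() { return "$1"; }

handle_status() {
    check_stub "$1"
    case $? in
        0) echo "all images current: nothing to do" ;;
        1) echo "some images outdated: leave rebuild to the daily cron job" ;;
        2) echo "images missing: run full createhdds now" ;;
        *) echo "unexpected exit status" >&2 ;;
    esac
}

# walk all three documented statuses
for s in 0 1 2; do handle_status "$s"; done
```

Only status 2 would actually block the play on image creation; status 1 is deliberately left to the cron job.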
|
We currently can't tell openQA to download the ARM kernel and
initramfs with a filename unique to the build being tested, so
they just get downloaded as `vmlinuz` and `initrd.img`. That
means that when the next compose is tested, we won't download
them again; we'll just use the existing copies, which are no
longer the right ones. Because of this, our current 'F25' and
'Rawhide' ARM tests are actually still using some F24 kernel
image. Until the openQA bug which prevents us giving the files
unique names is resolved, here's a hacky workaround: a script
which wipes the files every hour if no openQA jobs are pending.
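
The shape of that hourly script might look like this. The asset directory and the pending-job check are assumptions: `pending_jobs` is a hypothetical stub standing in for a real query against the openQA scheduler, and the directory is overridable for illustration:

```shell
#!/bin/sh
# ASSETDIR is an assumed location for the downloaded boot files
ASSETDIR="${ASSETDIR:-/var/lib/openqa/factory}"

pending_jobs() {
    # hypothetical stub: the real script would ask openQA how many
    # jobs are currently scheduled or running
    echo "${FAKE_PENDING:-0}"
}

if [ "$(pending_jobs)" -eq 0 ]; then
    # nothing is running, so it's safe to drop the stale shared-name
    # files; the next scheduled job triggers a fresh download
    rm -f "$ASSETDIR/vmlinuz" "$ASSETDIR/initrd.img"
    echo "wiped stale ARM boot files"
else
    echo "openQA jobs pending; leaving files alone"
fi
```

Run from cron every hour, this guarantees the files are no older than the current compose's test cycle.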
|
no longer needed since a recent tweak to the repository config
in the tests.
|
...d'oh, one day i'll get this right
|
since resultsdb submission was added to the scheduler, we must
disable it here for now (we don't want to use it yet), and
also update the name of the config directive that controls wiki
result submission.
|
we need to install some additional packages for the revised
createhdds (but we no longer need pexpect), and ensure libvirtd
is running before running createhdds.
|
it's not really fatal when this fails (except on first
deployment), and nothing later in the run depends on it, so we
can go ahead and continue the run even if it fails.
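
In ansible terms that maps onto marking the task non-fatal; a minimal sketch, with the task name and command assumed rather than taken from the actual play:

```yaml
# illustrative only: the real task name and invocation may differ
- name: run createhdds to generate missing disk images
  command: ./createhdds.py
  # not fatal outside first deployment, and nothing later in the
  # run depends on it, so let the play carry on if this fails
  ignore_errors: true
```

`ignore_errors: true` records the failure in the play output but does not abort the run.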
|
instead of just relying on it getting run when we do an
ansible run, since that's intermittent and it's annoying
when you want to do an ansible run and it sits there for
hours creating disk images. This way we'll know they'll
get updated regularly and ansible runs should never get
blocked on image creation, though we still do it in the
ansible plays just in case (and for initial deployment).
This should now be safe, with the recent changes to make it
time out gracefully and run atomically. We also use withlock
to make sure we don't stack jobs.
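
The don't-stack-jobs pattern can be sketched as below, with `flock` standing in for the withlock wrapper and an illustrative lock path; the point is that an overlapping cron run gives up immediately instead of queueing behind a rebuild that is still going:

```shell
#!/bin/sh
# illustrative lock path; the real deployment would pick its own
LOCKFILE="${LOCKFILE:-/tmp/createhdds.lock}"

(
    # -n: if a previous run still holds the lock, bail out rather
    # than stacking another rebuild behind it
    flock -n 9 || { echo "previous run still active; skipping"; exit 0; }
    echo "lock held; rebuilding disk images"
    # the actual createhdds invocation would run here
) 9>"$LOCKFILE"
```

Combined with the graceful timeout and atomic image writes mentioned above, a wedged or slow run can no longer pile up hourly jobs behind it.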
|
this will only work with the new openqa package builds I just
did, but won't break anything with older ones. With a new enough
openQA package, it'll prevent the web UI from showing download
links for ISOs and HDD files.
|
without this, ARM tests do not run (phab T801)
|
we don't want these workers to *only* run tap tests, so put the
default classes into their WORKER_CLASS too.
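
In worker config terms this means listing the default classes alongside `tap` in `WORKER_CLASS`; a sketch, with the specific class names being assumptions:

```ini
; /etc/openqa/workers.ini (illustrative; class names are assumptions)
[global]
; keep the default classes so ordinary jobs still land on this
; worker, and add tap so tap-networking jobs can too
WORKER_CLASS = qemu_x86_64,qemu_i586,tap
```

A worker advertising only `tap` would sit idle whenever no tap jobs are scheduled, which is what this change avoids.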
|
OK, this GRE crap ain't working. Let's give up! Instead let's
have one tap-capable host per openQA deployment, so all the
tap jobs will go to it. This...should achieve that. Let's see
what blows up.
|
we have a big mismatch between prod and stg atm (stg has 4
workers, prod has 18). let's make it 14 vs. 8 and also give
stg two worker hosts so we can test multi-worker-host scenarios
|
I think we'll need this to avoid routing loops with the tunnels.
|
duh quotes are hard
|
watch the pretty pretty fireworks
|
everyone stand back, this one's gonna go boom.
|
I think the notify restart of network.service should deal with
this on first deployment, so get rid of it.
|
srsly, fml
|
NetworkManager entirely ignores the openvswitch devices, the
integration only works with network.service. So turn it on.
Apparently we can have both services enabled and things don't
explode...so far...
|
holy crap, this is some ancient magic.
|
this is highly experimental and for deployment only to stg at
present...I have this stuff working on happyassassin, now trying
to translate it to stg.
|
the COPR stuff is long gone, so these weren't doing anything;
they just got left around by accident.
|
srsly, what the hell.
|
I tweaked the playbook to not patch the templates for non-infra
deployments, but then forgot to make test loading work using
non-patched templates for non-infra...
|
rwmj has refreshed the i686 base image now, so let's try this
again.
|
necessary updates for the openqa roles have gone stable for now,
so disable updates-testing usage (keep the plays around commented
out, though, since we'll likely need them again in future). Also,
a bit more attempted support for non-infra use: make the monkey
patching of the repo URLs in the test templates only happen if
deployment_type is defined, actually respect the openqa_consumer
var (don't enable the job scheduling consumer unless it's truthy),
and only enable any wiki reporting consumer if deployment_type
is defined.
|
This reverts commit 9872fe3fc8e7b03b345a278b58a58982b0ccb266.
Looks like the i686 base image hasn't been refreshed yet, so
i686 image generation still fails. Curses!
|
rwmjones says the guestfs / rpm bug has been fixed (a new base
fedora-23 image has been uploaded which should avoid it, anyway),
so let's try turning disk image generation back on and see how
it flies.
|
https://bugzilla.redhat.com/show_bug.cgi?id=1320754 is messing
it up. Disable for now so I can get other changes through.
|