TODO: deal with volume removal after VM termination?
* controller/bin/dtf-get-machine.in: Option parsing code added.
($opt_separate_volume): New global.
(show_help): Describe --separate-volume option.
(boot): Use --block-device instead of --image option for nova boot
if --separate-volume is used.
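A minimal sketch of how the --separate-volume switch could select the 'nova boot' arguments; the option variable follows the entry above, while the flavor, image, and volume names are illustrative assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: choose nova boot arguments depending on
# --separate-volume.  $opt_separate_volume mirrors the new global from
# the entry above; the flavor/image/volume names are made up.
opt_separate_volume=false

nova_boot_args ()
{
    if $opt_separate_volume; then
        # Boot from a separately created volume.
        echo "boot --flavor m1.small --block-device source=volume,id=vol-1,dest=volume,bootindex=0 dtf-vm"
    else
        # Default: boot directly from the image.
        echo "boot --flavor m1.small --image Fedora-Cloud dtf-vm"
    fi
}

nova_boot_args
```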
Use logging wrapper functions and log into a dedicated file per
sub-task (besides the usual STDERR/STDOUT).
* controller/bin/dtf-controller (log_info, log_error, log_die)
(log_any): New logging functions.
($log_procid, $log_logfile, @log_buffer): New logging related
variables.
(load_runfile): Remove old comment.
(subcommand): Use log_info instead of print.
(child_task): Open the child's logging file. Parse and check the
$run content more carefully.
(main): Use log_info instead of print.
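A shell sketch of the per-sub-task logging idea (the real dtf-controller may differ; the function names follow the entry above, the file handling is an assumption):

```shell
#!/bin/sh
# Sketch: every message goes both to the sub-task's dedicated log file
# and to stderr, as usual.  The log file location is an assumption.
log_logfile=$(mktemp)

log_any ()
{
    _level=$1; shift
    printf '%s: %s\n' "$_level" "$*" >>"$log_logfile"
    printf '%s: %s\n' "$_level" "$*" >&2
}

log_info ()  { log_any INFO "$@"; }
log_error () { log_any ERROR "$@"; }
log_die ()   { log_any FATAL "$@"; exit 1; }

log_info "sub-task started"
log_error "remote run failed"
```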
* controller/bin/dtf-run-remote.in: Check that --setup-playbook
and --taskdir do point to existing files.
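A sketch of such argument sanity checks, assuming the usual $opt_* variable convention; the error messages are illustrative:

```shell
#!/bin/sh
# Sketch: verify that the option arguments name existing paths before
# doing any real work.
check_files ()
{
    # --setup-playbook must name an existing file ...
    if ! test -f "$opt_setup_playbook"; then
        echo >&2 "--setup-playbook '$opt_setup_playbook' does not exist"
        return 1
    fi
    # ... and --taskdir an existing directory.
    if ! test -d "$opt_taskdir"; then
        echo >&2 "--taskdir '$opt_taskdir' does not exist"
        return 1
    fi
}

opt_setup_playbook=$(mktemp)   # stands in for a real playbook
opt_taskdir=$(mktemp -d)       # stands in for a real task directory
check_files && echo "arguments OK"
```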
* controller/bin/dtf-return-machine.in: Do not return machines
which were faked (by DTF_GET_MACHINE_FAKE_IP).
* controller/bin/dtf-run-remote.in: Do not lowercase all option
arguments.
Add the --setup-playbook option for dtf-run-remote, which allows
us to submit a configuration (or any other) playbook that will be
included into the default one and executed before the actual
testing.
* controller/Makefile.am: s/fedora.yml/default.yml/.
* controller/bin/dtf-controller.in: Call dtf-remote-run with
--setup-playbook option.
* controller/bin/dtf-run-remote.in: Fix the option parsing. Add
new option --setup-playbook.
(error): New function.
(die): Use '$*' instead of '*@'.
* controller/share/dtf-controller/ansible/playbooks/fedora.yml:
Rename to default.yml.
* controller/share/dtf-controller/ansible/playbooks/default.yml:
Moved from fedora.yml.
* controller/share/--/--/playbooks/include/prepare-testenv.yml:
Removed hard-wired configuration.
* controller/bin/dtf-run-remote.in (tarball): First process the
$opt_taskdir with readlink -f to get some real path (instead of
'.' e.g.) and then call basename.
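The point of resolving the path first: basename of '.' is just '.', while basename of the readlink-resolved path is the real directory name. A minimal demonstration:

```shell
#!/bin/sh
# Why 'readlink -f' comes before 'basename' when building the tarball
# name from --taskdir.
opt_taskdir=.
tarball_name=$(basename "$opt_taskdir")                 # just '.'
real_name=$(basename "$(readlink -f "$opt_taskdir")")   # real dir name
echo "$tarball_name vs $real_name"
```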
* controller/README: Reword something, typo-fixes.
* README: Made a symlink to controller/README.
* controller/Makefile.am: Distribute the README and configuration
file template.
* controller/README: Reworked version of README following current
API.
* controller/doc/dtf-controller/OSID.sh.template: New
configuration template.
* tester/dtf-prepare-testsuite: Prepare also basic taskdir
structure.
Copied from 'postgresql-setup' package. Also do some 'make dist'
fixes.
* controller/Makefile.am: Use $TEST_GEN_FILES_LIST. Also create
the share/ directory during build.
* controller/configure.ac: Initialize testsuite.
* controller/tests/Makefile.am: Bureaucracy for testsuite.
* controller/tests/atlocal.in: Likewise.
* controller/tests/testsuite.at: Add two tests copied from
postgresql-setup project.
* Makefile.am: Make sure all sources and data files are
distributed.
* bin/dtf-get-machine.in: Exit if an option was parsed but was not
handled explicitly by the case statement.
* bin/dtf-run-remote.in: Likewise.
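A sketch of this "fail on unhandled options" pattern: any option the parser accepted but the case statement forgot to handle is a programmer error, so fail loudly instead of ignoring it. The option set here is illustrative.

```shell
#!/bin/sh
# Sketch: the catch-all case arm turns a forgotten option handler into
# a hard error instead of a silent no-op.
parse_opts ()
{
    for _arg in "$@"; do
        case $_arg in
            --name=*) opt_name=${_arg#--name=} ;;
            --quiet)  opt_quiet=true ;;
            *)
                echo >&2 "programmer mistake: option '$_arg' not handled"
                return 1
                ;;
        esac
    done
}

parse_opts --name=f21 --quiet && echo "parsed name=$opt_name"
```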
New script 'dtf-return-machine' returns the VM to OpenStack based
on its public IP. In future, this may be abstracted to any VM
provider (or VM pool or whatever), but that requires also some
IP <=> VM mapping shared between dtf-get-machine and
dtf-return-machine.
* controller/.gitignore: Ignore new scripts.
* controller/Makefile.am: Build new scripts.
* controller/bin/dtf-return-machine.in: New script for VM
deletion.
* controller/libexec/dtf-nova.in: New wrapper around 'nova'
command, showing only data output where fields are separated by
tabulator.
* controller/share/dtf-controller/ansible/playbooks/fedora.yml:
Finally call dtf-return-machine after successful test run.
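A sketch of how such a wrapper could strip nova's ASCII-art table down to tab-separated data rows; the sample table below is fabricated for illustration.

```shell
#!/bin/sh
# Sketch: drop the +---+ borders and the header row, trim each cell,
# and join the cells with tabs.
strip_nova_table ()
{
    awk -F'|' '
        /^\|/ {
            if (!header_seen++) next        # first | row is the header
            line = ""
            for (i = 2; i < NF; i++) {
                gsub(/^ +| +$/, "", $i)     # trim cell whitespace
                line = line (i > 2 ? "\t" : "") $i
            }
            print line
        }'
}

strip_nova_table <<'EOF'
+----------+--------+
| ID       | Name   |
+----------+--------+
| 42       | dtf-vm |
+----------+--------+
EOF
```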
* controller/bin/dtf-controller.in: Print '\n' after error msg.
While polling the sshd server on the remote host, do not use
PasswordAuthentication even if the server allows it. That causes
problems if the new VM has already started but cloud-init has not
yet been able to set the authorized_keys file.
* controller/libexec/dtf-wait-for-ssh: Add the
PasswordAuthentication=no ssh option.
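A sketch of such a polling loop; the real dtf-wait-for-ssh calls ssh directly, while here an injectable stub lets the sketch run without a remote host. PasswordAuthentication=no and BatchMode=yes are standard ssh_config options.

```shell
#!/bin/sh
# Sketch: poll until an ssh probe succeeds, using key-based auth only -
# never fall back to a password prompt while cloud-init may not have
# written authorized_keys yet.
ssh_cmd=${ssh_cmd-ssh}

wait_for_ssh ()
{
    _host=$1 _tries=${2-30}
    while test "$_tries" -gt 0; do
        if $ssh_cmd -o PasswordAuthentication=no -o BatchMode=yes \
                    "$_host" true 2>/dev/null; then
            return 0
        fi
        _tries=$((_tries - 1))
        sleep 1
    done
    return 1
}

# Stub that fails the first probe and succeeds on the second, as a
# booting VM would.
fake_ssh () { test -e /tmp/dtf-ssh-up.$$ && return 0
              touch /tmp/dtf-ssh-up.$$; return 1; }
ssh_cmd=fake_ssh
wait_for_ssh 192.0.2.1 5 && echo "sshd is up"
rm -f /tmp/dtf-ssh-up.$$
```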
* controller/libexec/dtf-wait-for-ssh: Syntax lint.
* controller/bin/dtf-get-machine.in: Do not try to check for IP
address if 'nova boot' command failed.
* controller/controller: Removed.
* controller/bin/dtf-controller.in (child_task): Do not exit if
dtf-run-remote failed. This allows us to commit at least the log
files.
* controller/libexec/dtf-commit-results.in: Do not try to extract
the dtf.tar.gz archive if it does not exist (dtf-run-remote
failure).
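A minimal sketch of the guarded extraction, assuming a $workdir layout with the archive at its top level; the message text is illustrative.

```shell
#!/bin/sh
# Sketch: when dtf-run-remote failed there is no dtf.tar.gz, but the
# log files should still be committed, so just skip the extraction.
workdir=$(mktemp -d)

commit_results ()
{
    if test -f "$workdir/dtf.tar.gz"; then
        tar -xzf "$workdir/dtf.tar.gz" -C "$workdir"
    else
        echo "no dtf.tar.gz - committing logs only"
    fi
}

commit_results
```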
* share/dtf-controller/results-stats-templates/html.tmpl: Fields
of the result table are now hyper-linked to the particular
results.
|
* controller/bin/dtf-controller.in: Really generate html instead
of xml.
* controller/bin/dtf-controller.in (subcommand): Generate
stdout and stderr files separately.
(child_task): Generate '*.err' and '*.out' logs for subcommands.
Call dtf-run-remote with --distro/--distro-version options. Call
the dtf-result-stats finally and save its output to results.html.
(main): Simple debugging info and comment adjusting.
* controller/libexec/dtf-commit-results.in: Take three arguments
now.
* controller/libexec/dtf-result-stats.in: Better read the
'tester/run' output.
* controller/share/dtf-controller/ansible/playbooks/fedora.yml:
Run the 'run --force' instead of 'run' on remote host.
* controller/share/dtf-controller/results-stats-templates/html.tmpl:
React on exit_status 2.
.. rather than by the ansible nova_compute module directly. This
allows me to implement more variability in VM handling.
* controller/bin/dtf-get-machine.in: Add a --quiet option so that
only the allocated IP is shown. Also add the
DTF_GET_MACHINE_FAKE_IP variable, usable for faster debugging;
when set, dtf-get-machine prints its content to standard output
without allocating a new VM.
* controller/bin/dtf-run-remote.in: Add -v (verbose) option to
ansible-playbook call to get more verbose output.
* controller/share/dtf-controller/ansible/playbooks/fedora.yml:
Use dtf-get-machine. Also remove creds file requirement.
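A sketch of the DTF_GET_MACHINE_FAKE_IP shortcut described above; the variable name is from the entry, the real allocation path is elided.

```shell
#!/bin/sh
# Sketch: when the fake-IP variable is set, pretend a VM with that IP
# was allocated and skip the (slow) real allocation entirely.
get_machine ()
{
    if test -n "$DTF_GET_MACHINE_FAKE_IP"; then
        echo "$DTF_GET_MACHINE_FAKE_IP"
        return 0
    fi
    # ... the real 'nova boot' allocation would happen here ...
    return 1
}

DTF_GET_MACHINE_FAKE_IP=192.0.2.42
get_machine
```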
The controller is able to read a simple YAML configuration file
with a list of tasks to be performed in parallel (the tasks
actually run the testsuite remotely, commit results to the DB,
compute statistics and upload results).
* controller/bin/dtf-controller.in: New template for binary.
* controller/libexec/dtf-commit-results.in: Copy whole result
directory instead of 'dtf' subdir only.
* controller/.gitignore: Ignore new binary.
* controller/Makefile.am: Build dtf-commit-results.
I had git configured badly, so I missed these before.
* controller/.gitignore: Add autoconf/automake related ignores.
* controller/bin/dtf-get-machine.in: Use $HOME/.dtf/.. rather than
$srcdir/config/...
The directory structure in this project is arranged so that you
can run directly from the source directory (after ./build).
* controller/build: Use --prefix="$(pwd)" instead of --with-git
which was never implemented.
Move the 'results_stats' and 'commit_results' binaries into
libexec and adjust appropriately. Also, the html template is now
in $pkgdatadir.
* .gitignore: Add tags/ChangeLog generated files.
* README: Just some random notes. Needs to be rewritten anyway.
* controller/.gitignore: Add newly 'make'd files.
* controller/Makefile.am: Generate libexec/bin files.
* controller/commit_results: Moved to controller/libexec as
dtf-commit-results.in.
* controller/configure.ac: Also substitute resulttemplatedir.
* controller/etc/dtf.conf.d/config.sh.template: The DTF_DATABASE
was misleading - use rather DTF_DATABASE_DEFAULT.
* controller/libexec/dtf-commit-results.in: Moved from
controller/commit_results.
* controller/result_stats: Moved to
controller/libexec/dtf-result-stats.in.
* controller/libexec/dtf-result-stats.in: Moved from
controller/result_stats.
* controller/result_templates/html.tmpl: Moved to
controller/share/dtf-controller/results-stats-templates/html.tmpl.
Rename the config variable from DTF_OPENSTACK_ID to
DTF_OPENSTACK_DEFAULT_ID to better match its purpose.
* controller/bin/dtf-run-remote.in: Use DTF_OPENSTACK_DEFAULT_ID
instead of DTF_OPENSTACK_ID.
* controller/config/config.sh.template: Moved.
* controller/etc/dtf.conf.d/config.sh.template: Document renamed
variable on new place.
First part of converting controller to autoconf/automake solution.
* .gitignore: New gitignore; autotools ignores.
* Makefile.am: New file.
* get_machine: Renamed to template bin/dtf-get-machine.in.
* bin/dtf-get-machine.in: New template based on get_machine.
* run_remote: Renamed to template bin/dtf-run-remote.in.
* bin/dtf-run-remote.in: New binary template from run_remote.
* build: New bootstrap like helper script (git-only).
* configure.ac: New file.
* etc/dtf.sh.in: Likewise.
* ansible_helpers/wait-for-ssh: Renamed to
libexec/dtf-wait-for-ssh.
* share/dtf-controller/parse_credsfile: Reworked script for
parsing OS credentials.
* parse_credsfile: Moved to share/dtf-controller.
* libexec/dtf-wait-for-ssh: Renamed from wait-for-ssh.
* ansible/*: Moved into share/dtf-controller/ansible/*.
* share/dtf-controller/ansible/vars/generated-vars.yml.in: New
template file exporting configure-time variables into playbooks.
* get_machine: New option --name and variable $opt_name.
.. to reuse remotely generated results and commit them to the
local controller database.
* commit_results: New script.
* controller/parse_credsfile: Detect $srcdir to be able to read
the correct secret file from any CWD.
Make sure that on tester machine everything is put into
$DTF_RESULTDIR. Similarly, on controller machine, everything
should be put into --workdir.
* controller/run_remote: Detect $srcdir.
(workdir_prereq): The $opt_workdir is a temporary directory by
default.
* tester/run (run): Task results now go into $DTF_RESULTDIR/tasks.
The main xml result goes into $DTF_RESULTDIR/dtf.xml.
Try to split into three separate components -> controller, tester,
and 'tasks' (postgresql-tasks in our case). The controller
component is the main part, able to run the tasks remotely. Tester
is more of a library for the 'tasks' component (it should be
reusable at the raw git level).
* controller: Almost separated component.
* postgresql-tasks: Likewise.
* tester: Likewise.
* controller: Just rsync.
* config/config.sh.template: Document the DTF_PRESENTER_PLACE
option.
Better define configuration and provide examples.
* controller: Unpack results to correct directory, load the
configuration from new place, call run_remote with proper
arguments, generate 'results.html' with result_stats script.
* ansible/run_include: Adjust to better simulate run_remote.
* ansible/fedora.yml: Adjust for fixed configuration.
* run_remote: Likewise. Also small issues with option parsing
fixed.
* config.sh.template: Moved as config/config.sh.template.
* config/config.sh.template: Copied from /config.sh.template,
with better documented options.
* run: Fix typo - use 'while read i' instead of 'for i in'.
* config/os/EXAMPLE.sh: New file - example configuration.
* private/os/EXAMPLE.yml: Likewise.
* config/hosts.template: Likewise.
* dist.include: New file with file patterns that should be
distributed to test machine.
* dist: Distribute only those files which are necessary.
* config/.gitignore: New gitignore file.
The controller script runs the script on a remote machine
(OpenStack), downloads the results, stores them into its own
result database, and re-generates statistics for the runs done so
far. It will be able to upload the results to the "presenter"
machine.
* config.sh.template: New doc file.
* controller: New file (the central script for CI).
* runner/result_stats: New file. Based on downloaded results from
testing machine, it generates single html file with stats.
* runner/result_templates/html.tmpl: New file. Template for ^^^.
|