This adds a test for QA:Testcase_Anaconda_User_Interface_Text.
I try to avoid using needles as much as I can, so I'm using the
same tactic as with the ARM initial-setup test - I rely on "wait_still_screen"
instead, and only check at the very end whether everything went OK.
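The needle-less approach described above can be sketched roughly like this (a sketch only - the key presses, timeouts and needle name below are illustrative, not taken from the actual test):

```perl
# Sketch of the wait_still_screen tactic (os-autoinst testapi).
# The menu choices and needle name are made up for illustration.
type_string "2\n";       # pick a spoke in the text hub "blindly"
wait_still_screen 5;     # wait until the screen stops changing
type_string "c\n";       # continue out of the spoke
wait_still_screen 5;
# ...more blind navigation...
assert_screen "install_done", 1800;  # the only needle check, at the very end
```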
Details
- Reviewers: adamwill, jskladan
- Commits: rOPENQATESTSdb95bccd5281: add anaconda text UI test

run install_anaconda_text on latest Fedora

Diff Detail
- Repository: rOPENQATESTS os-autoinst-distri-fedora
- Lint: Automatic diff as part of commit; lint not applicable.
- Unit: Automatic diff as part of commit; unit tests not applicable.
Did you see if we could use the serial console, and use openQA's ability to check serial console output?
That would be perfect, but from what I know, it isn't possible. We can redirect Anaconda output to the serial console, but then we don't have any means of controlling it, because openQA simply writes serial output to a file and then only seeks inside this file, so there is no method like "type_string_serial". It would be possible if we could mirror the VT and the serial console - using "wait_serial" to read Anaconda's state, but sending commands to Anaconda with "type_string". We tried to find out whether that's possible together with @kparal, but we couldn't find anything.
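For reference, the mirrored-console flow being described would look something like the sketch below - this is hypothetical, since (as noted above) the testapi has no way to type into the serial console:

```perl
# Hypothetical only. openQA can *read* the serial log with wait_serial,
# but there is no counterpart for sending input over serial.
wait_serial "Installation", 60;   # read Anaconda's state from the serial log
type_string "2\n";                # ...while sending input via the VT
# A "type_string_serial" would be needed for a pure-serial test;
# no such function exists in the testapi.
```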
So seems like this actually found a bug in Rawhide:
https://openqa.stg.fedoraproject.org/tests/34013
but the test's behaviour when it failed wasn't very good (it just kept running actions until it happened to cause the system to reboot somehow...). Would be nice to improve that, if possible. Obviously, we need to modify the post_fail_hook to handle the text failure screen as well as the graphical one - it needs to look for the text 'crash' screen and take the appropriate action to generate the crash report in that case.
The simple thing we could do is just more strict checking of the current UI state before taking actions, I guess.
The *clever* way I'm imagining would be if we set up some kind of mechanism so that before each action the test would check for the 'oh noes! we failed!' screen, and go straight to the post fail hook if it saw it.
sorry, to be clear - I was saying to fix the crash handling at least before I'll ack this.
Yeah, I'll look at this. Actually, there is another problem - the VNC/text mode selection screen also matches the "anaconda_main_hub_text" needle, so the test didn't fail even when the main hub didn't appear. I'll fix this as well.
So I've added error handling in post_fail_hook and also modified the anaconda_main_hub needle. Now for continuous error checking - is it OK if I call check_screen "anaconda_text_error" after every return to the main hub? Or after every type_string?
Hmm, I guess it's hard to say. My thought was to write some kind of helper which would do that for you - you pass it a code reference and it runs the error check then runs the code...so something like run_with_error_check(sub { assert_screen 'foo'; }), that kinda thing.
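A minimal sketch of such a helper, using the name suggested above (the error needle tag and the 5-second timeout are assumptions, not the exact merged implementation):

```perl
# lib/main_common.pm - sketch only
package main_common;
use strict;
use warnings;
use testapi;
use Exporter 'import';
our @EXPORT = qw(run_with_error_check);

sub run_with_error_check {
    my ($func, $error_screen) = @_;
    # short explicit timeout - check_screen's default is rather long
    # dying here sends the test to post_fail_hook, which can then
    # generate the crash report and upload logs
    die "error screen appeared" if check_screen($error_screen, 5);
    $func->();
    die "error screen appeared" if check_screen($error_screen, 5);
}

1;
```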
I guess we just need enough checking to ensure we never accidentally navigate the error handling screen so far that we can't generate the crash report and upload the logs, so you could figure out which actions in the test might actually do bad things to that screen, and make sure we check for error before running those?
I have added main_common.pm so we could easily merge it with what's in the key-fixes branch. It checks for the error screen before and after running the code. I've added it every time we want to enter a spoke and also when we're sending "4\n", because typing 4 on the abrt console screen results in rebooting the whole machine.
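At the call sites mentioned above, that would look roughly like this (the spoke keystroke and needle tag are illustrative; run_with_error_check is the helper name discussed earlier):

```perl
# entering a spoke, guarded against the crash screen on either side
run_with_error_check(sub {
    type_string "2\n";      # illustrative spoke number
    wait_still_screen 5;
}, "anaconda_text_error");

# sending "4\n" - on the abrt console screen this would reboot the
# whole machine, so check for the error screen around it
run_with_error_check(sub {
    type_string "4\n";
}, "anaconda_text_error");
```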
Aside from the inline comments, I guess this is OK.
lib/main_common.pm
- line 15: the default timeout for check_screen is rather long - IIRC 30 seconds; this will waste a whole minute for each call, I think. I'd suggest explicitly setting a shorter timeout, say 5 or 10 seconds, for each check_screen?

tests/install_text.pm
- line 10: I don't exactly disagree, but I am slightly worried that these could be fragile when run in the real world, on busy worker hosts alongside other tests. But I guess we can try it out and tweak the times as we go, and re-consider the approach if they really prove to be a problem.
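The timeout suggestion amounts to passing an explicit second argument to check_screen (5 seconds is just the suggested value, and the needle tag is illustrative):

```perl
# default check_screen timeout is long (IIRC 30s); pass one explicitly
if (check_screen("anaconda_text_error", 5)) {
    # handle the crash screen
}
```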