There are various reasons why these two calls could fail. For instance,
running lsetfilecon on any filesystem that doesn't support it (vfat is the
big one, but there are others) would result in a failure. This probably
shouldn't take down anaconda.
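
A minimal sketch of the intent, assuming a Python selinux-style setter (the helper and logger names are hypothetical): failures on filesystems without xattr support get logged instead of crashing the installer.

```python
import errno
import logging

log = logging.getLogger("anaconda")  # assumed logger name

def set_context_safely(path, context, setter):
    """Apply an SELinux file context, tolerating filesystems
    (e.g. vfat) that don't support it. `setter` stands in for
    a call like selinux.lsetfilecon."""
    try:
        setter(path, context)
        return True
    except OSError as e:
        if e.errno == errno.EOPNOTSUPP:
            log.warning("could not set context on %s: %s", path, e)
            return False
        raise
```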
|
Account for /boot existing on its own partition or as part of /.
Output the values of the reipl configuration in linuxrc.s390 before
reboot.
|
Use glib's data structures and string functions in modules.c since we
already have glib. Add in some safety checks as well.
|
- Don't remove scriptlets once they've been written out, to aid in debugging.
- Always log stdout/stderr.
- On errors, print the messages to anaconda.log as well.
|
We don't need hal anymore, so remove it from the initrd.img generation
and stop loader from starting up hald.
|
Similar to what we have to do for zFCP, write /etc/dasd.conf to the
target system for all DASD devices that have been used during
installation. The file records the device address as well as flags that
can be set via sysfs.
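
A sketch of what writing that file might look like; the dasd.conf layout (one device per line, CCW address followed by attribute=value pairs) matches the commit's description, but the helper and its input structure are illustrative.

```python
def format_dasd_conf(devices):
    """Render dasd.conf content for the target system: one line
    per DASD with its CCW address and any sysfs flags that were
    set during installation (hypothetical input structure)."""
    lines = []
    for dev in devices:
        parts = [dev["address"]]
        for flag in ("readonly", "use_diag", "erplog", "failfast"):
            if flag in dev:
                parts.append("%s=%d" % (flag, dev[flag]))
        lines.append(" ".join(parts))
    return "\n".join(lines) + "\n"
```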
|
Upstream accepted my patch to change rd_DASD to specify a single device.
If multiple devices need to be brought up at boot time, just give
multiple rd_DASD arguments. Syntax is (from dracut.8):
rd_DASD=<CCW address>[,readonly=X][,use_diag=X][,erplog=X][,failfast=X]
The old rd_DASD has been renamed rd_DASD_MOD and uses the same syntax as
the dasd kernel module parameter. However, you can only specify a
single rd_DASD_MOD parameter at boot time.
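
The argument syntax above can be illustrated with a small parser sketch (illustrative only, not the actual dracut code):

```python
def parse_rd_dasd(value):
    """Parse rd_DASD=<CCW address>[,key=X]... into the CCW
    address and a dict of the optional flags."""
    fields = value.split(",")
    options = {}
    for field in fields[1:]:
        key, _, val = field.partition("=")
        options[key] = val
    return fields[0], options
```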
|
Among other problems, this means that all the partitioning commands can be
in a file generated from a %pre script again.
|
kickstart.py never should have been written the way it previously was - with
all the code that actually did something in the parse methods. This
organization prevented the parse methods from being called until we had things
like Storage instances and led to the multiple pass hack.
This better design moves all the code that does anything into apply methods
(that pykickstart lacks, oh well) and keeps the parse methods only setting
values in KickstartCommand/KickstartData objects. This should allow doing
parsing very early, then calling the apply methods when we're set up and
therefore remove the reason for multiple passes.
This patch requires a pykickstart that can pass data objects up from deep
in the dispatcher. Note also that this patch does not yet call any of the apply
methods, so kickstart is temporarily busted.
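
The parse/apply split described above can be sketched like this (class and method names are illustrative, not the real pykickstart API):

```python
class ClearPartSketch:
    """parse() only records what the kickstart file said; apply()
    acts on those values later, once e.g. a Storage object exists."""

    def __init__(self):
        self.drives = []

    def parse(self, args):
        # Early pass: store values, touch nothing.
        self.drives = list(args)
        return self

    def apply(self, storage_actions):
        # Late pass: act on the stored values.
        for drive in self.drives:
            storage_actions.append("clear %s" % drive)

cmd = ClearPartSketch()
cmd.parse(["sda", "sdb"])   # safe to run very early
actions = []
cmd.apply(actions)          # run once storage is set up
```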
|
This means we only have to specify those handlers and data objects we
require special versions of, not all of them.
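
The mechanism, sketched with plain dicts in place of pykickstart's real handler mapping (names are assumptions): start from the stock mapping and update only the entries we override.

```python
# Stock command map shipped by the parser library (illustrative).
stock_commands = {
    "autopart": "StockAutopart",
    "bootloader": "StockBootloader",
    "clearpart": "StockClearpart",
}

# Only the handlers we need special versions of.
our_overrides = {"bootloader": "AnacondaBootloader"}

# Merge: everything else falls through to the stock handler.
command_map = dict(stock_commands)
command_map.update(our_overrides)
```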
|
This allows getting the traceback dialog during kickstart file execution,
instead of just getting a dead UI and an unseen traceback on tty1. We
probably can't move this much earlier due to interface and instdata
requirements.
|
For each partition, choose the free space region that provides the
greatest amount of combined growth for the partitions allocated up
to that point.
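
A sketch of that selection rule; `growth_for` stands in for anaconda's real growth calculation and is a hypothetical callback.

```python
def best_region(free_regions, growth_for):
    """Return the free-space region that yields the greatest
    combined growth for the partitions allocated so far."""
    best, best_growth = None, -1
    for region in free_regions:
        growth = growth_for(region)
        if growth > best_growth:
            best, best_growth = region, growth
    return best
```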
|
Use Request and Chunk instances to calculate growth for all partitions,
then recreate and add the partitions to disk with new geometries.
|
Once the bits are in pyparted this function can be made to actually
retrieve a meaningful alignment.
|
This also eliminates the need for the min/max constraint when adding
a new partition.
|
For fixed-size requests, choose the smallest suitable region. For
growable requests, choose the largest suitable region. For bootable
requests, as before, choose the first suitable region.
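
The policy above, sketched with region objects reduced to sector counts for illustration:

```python
def choose_region(suitable_sizes, growable=False, bootable=False):
    """Pick a free region by request type: bootable -> first
    suitable region, growable -> largest, fixed-size -> smallest."""
    if bootable:
        return suitable_sizes[0]
    if growable:
        return max(suitable_sizes)
    return min(suitable_sizes)
```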
|
Also give a little bump based on mountpoint.
|
Some drivers (cpqarray <blegh>) make block device nodes for
controllers with no disks attached and then report a 0 size;
treat this as no media present.
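
In sketch form (the sector count would come from the kernel, e.g. /sys/block/<dev>/size):

```python
def media_present(reported_sectors):
    """Treat a device that reports a size of 0 as having no media,
    working around drivers like cpqarray that create device nodes
    for controllers with no disks attached."""
    return reported_sectors > 0
```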
|
The linuxrc.s390 rewrite changed the behavior of RUNKS on RHEL-5. We've
done away with RUNKS on RHEL-6, but we still need to maintain existing
functionality on RHEL-5.
|
Also change the no initiator set error dialog title from "Error with data"
to just "Error" to be consistent with the other error dialog titles.
|
Make MDRaidArrayDevice.__init__ raise a ValueError when creating a
new (i.e. non-existing) raid set with too few members for the
requested raid level.
Catch this exception in the GUI raid dialog and the kickstart raid
commands.
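
A sketch of the check, using standard md minimum member counts (the exact mapping anaconda uses may differ):

```python
MIN_MEMBERS = {"raid0": 2, "raid1": 2, "raid5": 3, "raid6": 4, "raid10": 4}

def check_members(level, member_count, exists=False):
    """Raise ValueError when creating a new (non-existing) array
    with too few members for the requested RAID level."""
    if exists:
        return  # pre-existing sets are taken as-is
    needed = MIN_MEMBERS[level]
    if member_count < needed:
        raise ValueError("%s needs at least %d members, got %d"
                         % (level, needed, member_count))
```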
|
string.find('foo') != -1 is a bit clunky. We now have the ability to
just do 'foo' in string and get back a True or False from that. I'm
sure there are more files in anaconda that could use this treatment, but
this is the file I was looking at anyway.
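
The two styles side by side:

```python
line = "append initrd=initrd.img ro root=/dev/sda1"

# Old, clunky style:
found_old = line.find("root=") != -1

# New style: the membership test reads better and yields a bool directly.
found_new = "root=" in line
```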
|
With rescue copying install.img to /tmp when available_memory > 128000k,
the machine was left unable to hold install.img in RAM if it had under
256M of RAM. Change the threshold to be based on MIN_GUI_RAM instead.
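
Sketched as a predicate (the constant's value here is a placeholder, not the real MIN_GUI_RAM):

```python
MIN_GUI_RAM = 262144  # KiB; placeholder value for illustration

def can_copy_install_img(available_kb, img_size_kb):
    """Only copy install.img into /tmp (i.e. into RAM) when enough
    memory remains afterwards to keep running the installer."""
    return available_kb - img_size_kb >= MIN_GUI_RAM
```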
|
A debug printf of mine from developing the
"Fix EDD BIOS disk order detection in general and make it work with dmraid"
patch somehow ended up getting committed; this patch removes it.
|
Since BIOS RAID sets never change there is no need to deactivate them and
later activate them again. This also fixes problems in case the following
happens:
1) raid sets get activated, pyblock creates device-mappings for partitions on
   the set.
2) The partition table changes while executing actions.
3) The raid sets get deactivated, because devicetree.processActions()
   tears down everything in response to a disklabel commit error caused
   by lvm or mdraid using a partition.
4) pyblock tries to remove the partition mappings as it has created them,
   but the partition table has changed, and when parted commits partition
   table changes of a dmraid set to disk, it also modifies the partitions'
   device-mappings. pyblock tries to remove a non-existing mapping ->
   backtrace.