Bugzilla – Full Text Bug Listing
| Summary: | software raid + hot spare --> yast installs/reinstalls grub on hot spare (device order problem) | | |
|---|---|---|---|
| Product: | [openSUSE] openSUSE 10.3 | Reporter: | robert spitzenpfeil <rs.opensuse> |
| Component: | YaST2 | Assignee: | Ihno Krumreich <ihno> |
| Status: | RESOLVED FEATURE | QA Contact: | Jiri Srain <jsrain> |
| Severity: | Normal | | |
| Priority: | P4 - Low | CC: | aschnell |
| Version: | Final | | |
| Target Milestone: | --- | | |
| Hardware: | Other | | |
| OS: | openSUSE 10.3 | | |
| Whiteboard: | | | |
| Found By: | --- | Services Priority: | |
| Business Priority: | | Blocker: | --- |
| Marketing QA Status: | --- | IT Deployment: | --- |
| Attachments: | install on software raid (level1, 1 spare, 3 disks) | | |
btw, this behavior is also present in SLES10 SP2. Maybe it is not so critical there, as people usually use a hardware RAID for the system disks on a server (except me, of course).

Maybe the problem is that the spare is sda. Do you have the same effect if the spare is sdc?

Good question. I could test with openSUSE 11.1 only; I don't use SLES products at my current job.

I did a fresh install of openSUSE 11.1 on a manually created software RAID (level: 1, members: sda/sdb, spare: sdc).

console: mdadm --create --level=1 /dev/md0 --raid-devices=2 /dev/sda /dev/sdb --spare-devices=1 /dev/sdc
GUI: select /dev/md0, choose software, ...install..., ...wait...

Later: GRUB installation failed! ...continue... reboot... hang (expected).

Rescue system: a manual GRUB installation fails due to missing partition tables on sda/sdb. /dev/sdc1 doesn't show up in /dev/ at all, only via fdisk -l. After recreating sdc1 with fdisk and resyncing the disks, /dev/sdc1 appears. --STOP--

Same thing using partitions instead of full devices: create the partitions with fdisk, then
mdadm --create --level=1 /dev/md0 --raid-devices=2 /dev/sda1 /dev/sdb1 --spare-devices=1 /dev/sdc1
...install... reboot... GRUB fails to install with the same error as before (see image). I'm 100% positive that I created the RAID on top of partition tables this time, but after the failure only /dev/sdc1 is left; /dev/sda1 and /dev/sdb1 are GONE. And GRUB refuses to install on just /dev/sda or sdb... Something stinks. --STOP--

GUI-only (installation without a pre-existing RAID array): adding 3 partitions (on 3 disks) to a RAID1 array results in an array with 3 active RAID members and NO spare disk!? I thought YaST would be intelligent enough to know that anything beyond 2 devices should become a spare on level 1. I'm stopping here; either I'm too blind to see the spare selection tab or it doesn't exist.

Created attachment 305640 [details]
install on software raid (level1, 1 spare, 3 disks)
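Which member actually ended up as the spare can be read from /proc/mdstat, where spares are tagged with "(S)" after the slot number. A minimal sketch; the sample line below is illustrative, not taken from the reporter's machine:

```shell
# Sample /proc/mdstat line (illustrative); on a live system,
# read the real file: cat /proc/mdstat
mdstat='md0 : active raid1 sdc1[2](S) sdb1[1] sda1[0]'

# Extract the device names tagged as spares: match "name[slot](S)"
# and keep only the name before the bracket.
spares=$(printf '%s\n' "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](S)' | cut -d'[' -f1)
echo "$spares"
```

With the sample line above this prints sdc1, matching the manual setup described in the comment (members sda/sdb, spare sdc).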
I have extended (for openSUSE 11.4) libstorage/yast2-storage to provide information about spare devices. In the target map for MD and MDPART there is a new field "spares" next to "devices", and spare devices are no longer listed in the "devices" field. Usedby is also set for spare devices. It is still not possible to create MD RAIDs with spare devices in the UI; I simply don't have a good, realisable idea for it. If support in the UI is desired, please file a feature request at openFATE.
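For illustration, the extended target map would then report the reporter's array roughly like this (the field names "devices" and "spares" come from the comment above; the surrounding layout is an assumption, not actual libstorage output):

```
/dev/md0:
  devices: /dev/sda1 /dev/sdb1
  spares:  /dev/sdc1        # usedby is also set for the spare
```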
When installing 10.3 on a pre-existing RAID1, YaST installs GRUB on the hot spare, which usually does not contain any data.

a) New install on a pre-existing RAID1 array. /dev/md0 consists of:
/dev/sda1 (spare)
/dev/sdb1 (RAID1 member)
/dev/sdc1 (RAID1 member)

device.map:
(hd0) /dev/sda
(hd1) /dev/sdb
(hd2) /dev/sdc

GRUB ends up on (hd0), and there is absolutely no data on this disk!

menu.lst:
title blabla
root (hd0,0)
...

b) Re-installing GRUB after a device has failed. Before the resync: RAID members sd[ab], spare sdc --> GRUB on sda is ok! After, say, sda has failed, it is replaced by sdc; sda is removed, the disk is replaced and re-added to the array --> sda is the new spare (EMPTY), and GRUB gets installed on sda. Same as a).
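The failure mode in case a) suggests a simple guard: before writing stage1, skip any device.map entry that currently maps to the spare. A hedged sketch, with the device names and the spare hard-coded to mirror case a) (on a real system the spare would be derived from /proc/mdstat or mdadm --detail):

```shell
# device.map as generated by YaST in case a) above.
devmap='(hd0) /dev/sda
(hd1) /dev/sdb
(hd2) /dev/sdc'

# In case a) the spare is sda (assumption mirroring the report).
spare=/dev/sda

# Pick the first mapped disk that is NOT the spare as the GRUB target,
# so stage1 lands on an active RAID member instead of the empty spare.
target=$(printf '%s\n' "$devmap" | awk -v s="$spare" '$2 != s { print $2; exit }')
echo "$target"
```

Here the selected target is /dev/sdb, i.e. the first disk that actually carries data, rather than (hd0)'s empty spare.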