Bug 722224

Summary: YaST2 Partitioner has a problem with /dev/mapper (error)
Product: [openSUSE] openSUSE 12.1 Reporter: Forgotten User QtBI7gWTIh <forgotten_QtBI7gWTIh>
Component: YaST2    Assignee: David Sterba <dsterba>
Status: RESOLVED FIXED QA Contact: Jiri Srain <jsrain>
Severity: Critical    
Priority: P5 - None CC: aschnell, forgotten_QtBI7gWTIh, mge, plinnell
Version: Beta 1   
Target Milestone: ---   
Hardware: x86-64   
OS: Other   
Whiteboard:
Found By: --- Services Priority:
Business Priority: Blocker: ---
Marketing QA Status: --- IT Deployment: ---
Attachments: The Partitioner Problem Log YaST2
The new YaST2 install Log

Description Forgotten User QtBI7gWTIh 2011-10-05 08:52:47 UTC
Created attachment 454579 [details]
The Partitioner Problem Log YaST2

User-Agent:       Mozilla/5.0 (X11; Linux x86_64; rv:7.0) Gecko/20100101 Firefox/7.0

When I start the YaST2 Partitioner and change any of my mount points, YaST2 tries to set sda2 to root.

sda2 & sdb2 are the /dev/mapper/isw_xxxxxx_vol2_part2 devices.

I get errors!

The attached log for this problem is one of the last YaST2 logs.

Reproducible: Always

Comment 1 Arvin Schnell 2011-10-07 17:03:44 UTC
The problem seems to be caused by btrfs. "btrfs filesystem show" reports
three btrfs filesystems; one is:

Label: none  uuid: 5c0dbce0-4fe9-4898-b228-35aa082585ce
    Total devices 1 FS bytes used 3.77GB
    devid    1 size 65.56GB used 10.79GB path /dev/sda2

But this is wrong. It must report /dev/dm-X instead of sda2, since sda
is part of a DM RAID.

See "dmsetup table":

0 976766976 mirror core 2 131072 nosync 2 8:0 0 8:16 0 1 handle_errors
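For reference, that mirror table line decodes as follows (a minimal sketch of the dm-mirror table format: start, length, target name, log type and args, mirror count, then major:minor/offset pairs; the field breakdown is mine, not part of the report):

```python
# Decode the dm-mirror line printed by "dmsetup table" (illustrative only).
line = "0 976766976 mirror core 2 131072 nosync 2 8:0 0 8:16 0 1 handle_errors"
fields = line.split()

start, length = int(fields[0]), int(fields[1])  # start sector, length in sectors
target = fields[2]                              # target type: "mirror"
nr_mirrors = int(fields[7])                     # two mirror legs follow
legs = [(fields[8], int(fields[9])),            # 8:0  = /dev/sda, offset 0
        (fields[10], int(fields[11]))]          # 8:16 = /dev/sdb, offset 0

print(target, length, legs)
```

Both legs start at offset 0, i.e. the whole of sda and sdb is mirrored, which is why blkid finds the same filesystem signature on the raw member and on the mapped device.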

"blkid" shows:

/dev/sda2: UUID="5c0dbce0-4fe9-4898-b228-35aa082585ce"
/dev/mapper/isw_cdbgjjjeac_vol0_part2: UUID="5c0dbce0-4fe9-4898-b228-35aa082585ce"

To be sure I verified this by creating a btrfs on a mirroring DM RAID.
During installation "btrfs filesystem show" reported a filesystem on
dm-3 but after reboot on sda3 (although dm-3 was present).
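The blkid output above makes the ambiguity concrete: the raw member and the mapper device carry the same UUID, so a scanner that reports the first match ends up on sdX. A minimal sketch of the disambiguation rule (the function and the preference logic are illustrative, not the actual btrfsprogs/YaST code):

```python
# Given duplicate UUIDs, prefer the device-mapper path over the raw RAID member.
blkid = {
    "/dev/sda2": "5c0dbce0-4fe9-4898-b228-35aa082585ce",
    "/dev/mapper/isw_cdbgjjjeac_vol0_part2": "5c0dbce0-4fe9-4898-b228-35aa082585ce",
}

def preferred_device(entries):
    """For each UUID, keep the /dev/mapper path if one exists."""
    by_uuid = {}
    for dev, uuid in entries.items():
        cur = by_uuid.get(uuid)
        if cur is None or dev.startswith("/dev/mapper/"):
            by_uuid[uuid] = dev
    return by_uuid

print(preferred_device(blkid))
```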
Comment 3 Petr Uzel 2011-10-19 10:06:41 UTC
*** Bug 724498 has been marked as a duplicate of this bug. ***
Comment 4 Forgotten User QtBI7gWTIh 2011-10-20 10:40:59 UTC
Today I did a fresh install of SLES 11 SP2 (new GPT table) with a different filesystem, XFS (Petr Uzel likes this ;)).

I see the same error in the boot (kernel) log:

GPT:Primary Header thinks Alt. header is not at the end of the disk 
GPT:976766975 != 976773167
GPT:Alternate GPT not at end of the disk
GPT:976766975 != 976773167
GPT: Use GNU Parted to correct GPT Errors.
GPT:Primary Header thinks Alt. header is not at the end of the disk 
GPT:976766975 != 976773167
GPT:Alternate GPT not at end of the disk
GPT:976766975 != 976773167
GPT: Use GNU Parted to correct GPT Errors.
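These two numbers are actually consistent with the DM RAID layout rather than a corrupted GPT (a sketch, assuming 512-byte sectors): the backup GPT header was written at the last sector of the mapped RAID device, while the kernel, probing the raw disk, expects it at the disk's last LBA; the gap is the RAID metadata at the end of the disk.

```python
# From "dmsetup table" in comment 1: the mirrored device is 976766976 sectors long.
raid_sectors = 976766976
disk_last_lba = 976773167           # last LBA of the raw disk, per the kernel message

backup_gpt_lba = raid_sectors - 1   # backup GPT header sits at the device's last sector
print(backup_gpt_lba)               # 976766975 -- the other number in the message

# The gap is the area reserved at the end of the raw disk for RAID metadata:
metadata_sectors = disk_last_lba - backup_gpt_lba
print(metadata_sectors, "sectors =", metadata_sectors * 512, "bytes")
```

So "GPT: Use GNU Parted to correct GPT Errors" is a false alarm in this setup, matching Arvin's remark in comment 8 that the message can be ignored.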
Comment 5 Matthias Eckermann 2011-10-20 10:44:34 UTC
Is this about openSUSE 12.1 now or SLE 11 SP2 ?
Comment 6 Forgotten User QtBI7gWTIh 2011-10-20 10:46:27 UTC
Created attachment 457822 [details]
The new YaST2 install Log
Comment 7 Forgotten User QtBI7gWTIh 2011-10-20 10:48:30 UTC
At the moment this is SLES 11 SP2.

The error occurs in both openSUSE 12.1 and SLES.
Comment 8 Arvin Schnell 2011-10-20 13:42:21 UTC
AFAICS the message about GPT problems can be ignored, see bug #724498 comment #5.
Comment 9 Forgotten User b5BnQSUi71 2011-11-15 11:24:00 UTC
While trying to understand this problem better, I have a few questions:

a) Is this problem reproducible only when you use raid/dm or is it reproducible even without them?
b) What is the alternate OS that is present?
c) What is the size of the hard disk?
d) I assume the computer uses EFI instead of regular BIOS, right? If yes, is this reproducible on machines with regular BIOS?

Thanks
Comment 10 Forgotten User QtBI7gWTIh 2011-11-17 12:02:43 UTC
a) This also happens with raid/md.

b) No alternate OS present, or with openSUSE 12.1 as a secondary OS; same result.
b.1) No alternate OS present, or with a secondary SLES 11 SP2.

c) 2 x 500 GB

d) This is an EFI installation, but I also tested a BIOS installation; it is reproducible there too :(.


e) I also tested creating a new GPT table and partitions with diskpart:

I destroyed the RAID, created a new RAID1, created the GPT table and partitions with diskpart, and installed fresh.

Afterwards, when booting the system, I get this error message again.

I think the kernel (3.0 or 3.1) has a problem reading this correctly.
Comment 11 David Sterba 2011-11-29 10:53:55 UTC
I'm not able to reproduce the problem; it works as expected for me.

What I did:
* created 1 system disk and 4 more for LVM
* started network installation of 12.1 in a kvm
* created LVM setup as suggested, sda1 boot, sda2 swap, all the rest as / with btrfs
* continued installation, until finished
* gui started, I've checked 'btrfs fi show', root is under /dev/dm-0
* rebooted, checked again, everything ok

Test 2:
I tried to create a raid1 on a running system, as a new /dev/md0, and created a btrfs on it; "fi show" then listed md0.

When I rebooted without the mdadm service enabled, "fi show" did not list md0 among the btrfs filesystems, and mounting the raid1 member device /dev/sda10 directly failed with "unknown linux_raid_member filesystem type", though the device appeared in the list of other filesystems (previously created as a btrfs multi-device setup, /dev/sda10 ... sda14).

Enabling the mdadm service and rebooting fixed it: I can see /dev/md0 in "fi show", can mount it, etc.

I'm not using GPT, just regular fdisk or yast.
Comment 12 Arvin Schnell 2011-11-29 11:03:53 UTC
AFAIR I saw the problem only with DM RAID but not with MD RAID or LVM.
Comment 13 Forgotten User QtBI7gWTIh 2011-11-29 12:13:13 UTC
Hello,

Last weekend I tested 12.1 (not a full installation) on a Sandy Bridge board, Intel C206 chipset (ASUS P8B WS), with a Xeon E3 1275, 16 GB RAM and two Intel 510 250 GB SSDs as RAID1, with Intel Matrix Storage enabled (Windows 7). This is an EFI board! Windows is preinstalled, with 70 GB free space on the SSD.

I started the installation once with mdraid and a second time with dmraid.

First the good news ;)

With mdadm the partitions from Windows are correctly visible! The installation was created automatically on the second RAID1 (disks). I have not tested creating partitions on the SSD.

With dmraid no Windows partitions are present and the whole disk (SSD) is offered for installation.

I suppose that afterwards I would have no Windows on the SSD.

I hope I can send a YaST2 log next weekend (?) and a test with SLES 11 SP2.
Comment 14 David Sterba 2012-03-08 00:05:29 UTC
I have submitted an updated btrfsprogs to Factory and openSUSE 12.1; it should now list the device-mapper names instead of plain sdX.
Comment 15 Peter Linnell 2017-01-03 00:23:03 UTC
Old bug fixed.