Bugzilla – Full Text Bug Listing
Description
Randy Rouch
2006-12-24 00:31:45 UTC
Can you please attach y2logs? If you are in doubt, please follow: http://en.opensuse.org/Bugs/YaST. Thanks!

Created attachment 111752 [details]
Raid Configuration: 465.8GB RAID0 striped over 4 disks, named RAID0. Also, a 232.9GB RAID10 over the same 4 disks, named RAID10
Created attachment 111753 [details]
Raid Configuration: 465.8GB RAID10 over 4 disks, named RAID10
Created attachment 111754 [details]
Raid Configuration: 465.8GB RAID10 over 4 disks, named Volume0
Created attachment 111755 [details]
Raid Configuration: 465.8GB RAID0 striped over 4 disks, named Volume0. Also, another 465.7GB RAID0 over the same 4 disks, named Volume1
Created attachment 111756 [details]
Hardware Information
I've run some more tests while retrieving a y2logs set for you, and have found and attached the following:

Situation 1: RAID configuration: 465.8GB RAID0 striped over 4 disks, named RAID0. Also, a 232.9GB RAID10 over the same 4 disks, named RAID10.
Problems: The RAID10 partition only reports ~116GB; cannot create partitions on RAID10.
Log: y2logs1.tgz

Situation 2: RAID configuration: 465.8GB RAID10 over 4 disks, named RAID10.
Problems: Could not create a partitioning proposal; unable to create partitions on RAID10; RAID10 only reports 232.9GB.
Log: y2logs2.tgz

Situation 3: RAID configuration: 465.8GB RAID10 over 4 disks, named Volume0.
Problems: Only able to see 232.9GB.
Log: y2logs3.tgz

Situation 4: RAID configuration: 465.8GB RAID0 striped over 4 disks, named Volume0. Also, another 465.7GB RAID0 over the same 4 disks, named Volume1.
Problems: Unable to create partitions on Volume1.
Log: y2logs4.tgz

I think what I have discovered is actually three related but separate problems in using YaST2 with either dmraid or the ICH8R controller:
1. YaST2 does not like to make partitions on the second RAID array in an Intel Matrix RAID array.
2. YaST2 is incorrectly reporting the size of RAID10 partitions.
3. YaST2 does not like partitions named RAID10.

I'm also including an hwinfo dump because something tells me you might need it. If there's anything else you need or want me to try, please let me know.

Note: Sorry for being a little slow with these, but this is actually my new personal machine at home, so I'm only able to work on these problems after the business day is over. Thanks!

Randy R. Rouch
Information Technology Consultant
Undergraduate Studies
California State University, San Bernardino

Pretty thorough analysis, thanks for all the details. From a first glance, it looks like YaST2 gets confused when the dmraid name ends in a digit (that it works with "0" at the end is pure luck ;-).
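The logs don't show the exact parsing path, but the digit-suffix confusion can be illustrated with a small sketch. The assumption here (not confirmed in this bug, and not YaST2's actual code) is that partition device nodes are formed by appending the partition number directly to the dmraid set name, so splitting trailing digits back off becomes ambiguous:

```python
# Hypothetical sketch of the digit-suffix ambiguity; split_partition and the
# name-plus-number naming scheme are assumptions for illustration only.

def split_partition(devname):
    """Naively split a device name into (raid set name, partition number)."""
    i = len(devname)
    while i > 0 and devname[i - 1].isdigit():
        i -= 1
    return devname[:i], int(devname[i:])

# Partition 1 of a set named "...Volume1" would be "...Volume11", and the
# naive split then recovers the wrong set name and partition number:
print(split_partition("isw_cabcgdjhe_Volume11"))
# -> ('isw_cabcgdjhe_Volume', 11), not ('isw_cabcgdjhe_Volume1', 1)
```

A set name ending in a letter (as tested later in this bug with "VolumeA") keeps the split unambiguous, which matches the observed behavior.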
Could you please retry naming the volumes without ending the names in digits?

The problem with RAID10 is not YaST2-related; the maps set up via the dmraid command already have the too-small size. YaST2 cannot do much here. Matthias, could you check if dmraid even has working support for RAID10?

Another problem with dmraid is the following: "dmraid -s -c -c -c" displays

isw_cabcgdjhe_Volume0:488391936:128:mirror:ok:0:4:0
/dev/sda:isw:isw_cabcgdjhe_Volume0:mirror:ok:488391939:0
/dev/sdb:isw:isw_cabcgdjhe_Volume0:mirror:ok:488391939:0
/dev/sdc:isw:isw_cabcgdjhe_Volume0:mirror:ok:488391939:0
/dev/sdd:isw:isw_cabcgdjhe_Volume0:mirror:ok:488391939:0

but the command "dmraid -s -c -c -c isw_cabcgdjhe_Volume0" displays:

No RAID sets and with names: "isw_cabcgdjhe_Volume0"

This seems inconsistent to me.

Just by chance, another dmraid bug concerning RAID names ending in digits came in today (#232763), so you need not reproduce the problem with names not ending in digits. Matthias, could you have a look at the things I wrote in comment#8? Is there a machine with an ICH8R controller we could use for testing?

I did a little hunting and researching, testing and trying, and found a few things, maybe important, maybe not:

I tested out creating RAID names ending in letters instead of numbers. Creating partitions works fine now, both on the primary and the secondary volume of the matrix array. All that's left is the RAID10 half-size problem.

Looking at the man page for dmraid, under the flag --list_formats, it says that its supported formats are Span, RAID0, RAID1, RAID10 (mirror on stripes) and RAID10 (stripe on mirrors). This makes it sound like RAID10 is supported, though it could just be for listing metadata?

I noticed that the stripe size of the RAID10 array was 64KB by default, while the RAID0 array was 128KB. On a hunch, I tried changing the RAID10 array's stripe size to 32KB.
There was no change in size, according to YaST2 (which I now know is getting its info directly from dmraid).

According to the dmraid design document, "Spanning of disks, RAID0, RAID1 and RAID10 shall be supported". However, "dmraid -l" says for Intel Software RAID (isw):

isw : Intel Software RAID (0,1)

It seems that RAID10 is currently not supported for isw. So the RAID10 problems should be explained: probably the dmraid developers simply do not have any information about the RAID10 metadata format of this controller and did not yet have the time or resources to reverse engineer it.

About the problem with digits at the end of volume names: I created a fixed yast2-storage package which should also work with RAID names ending in digits and attached it. If you are able to reproduce this in an installed system, you can unpack the attachment and do a "rpm -Fhv yast2-storage*.rpm". Unfortunately, so far I could not find a machine with a RAID controller allowing arbitrary RAID names to be specified by the user. Therefore I would be thankful if you could test whether these packages fix the problem. If the problem is not fixed, please attach y2log files again.

Created attachment 112187 [details]
updated yast2-storage packages
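A side note on the RAID10 half-size: the 232.9GB that YaST2 reports matches the sector count dmraid itself prints in the set line quoted earlier, which is one member disk's worth rather than the mirror-of-stripes total. A quick sketch (field positions are assumed from the sample output, not taken from dmraid documentation):

```python
# Parse the "dmraid -s -c -c -c" set line quoted earlier in this bug.
# Assumed field layout: name:sectors:stripe:type:status:... (from the sample).

line = "isw_cabcgdjhe_Volume0:488391936:128:mirror:ok:0:4:0"
name, sectors, stripe, raid_type = line.split(":")[:4]

size_gib = int(sectors) * 512 / 2**30  # 512-byte sectors
print(f"{name}: {raid_type}, {size_gib:.1f} GiB")
# -> isw_cabcgdjhe_Volume0: mirror, 232.9 GiB

# 232.9 GiB is exactly one ~232.9 GiB member disk, i.e. dmraid exposes the
# set as a plain 4-way mirror instead of a ~465.8 GiB stripe of mirrors.
```

This supports the statement above that the maps set up by dmraid already carry the too-small size, so the truncation happens below YaST2.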
Created attachment 112375 [details]
Situation before installation
Created attachment 112376 [details]
Situation after installation, before patch
Created attachment 112377 [details]
Situation after patch
I didn't have a system set up already with SUSE and RAID capability, but since my XPS 410 is still blank, I just reinstalled SUSE on it instead so I could give you YaST2 logs before, during and after the patch. The situation I set up was an Intel Matrix RAID array across 4 disks, with the first half a 465.8GB RAID0 named VolumeA and the second half a 465.7GB RAID0 named Volume1.

During installation: Could not create any partitions on Volume1. VolumeA worked fine and installation proceeded accordingly. See y2logs5.tgz.

After installation: Could not create any partitions on Volume1. See y2logs6.tgz.

After patch: Partitions created on Volume1. Patch successful. See y2logs7.tgz.

It appears that the patch works perfectly!

BTW, something you might note in y2logs7 is an odd occurrence I've noticed after partitioning the RAID array. It appears as if the RAID controller is writing partition data or some other form of metadata to the MBR area of the first hard drive in the array, sda. Since SUSE looks not only at the MBR of the array but also at those of the member hard drives, it finds this data on sda but can't make any sense of it: the data doesn't match sda's parameters. So it pops up an error stating that the partitioning data for sda doesn't make any sense. As far as I can tell, the only other side effect is some odd read/write errors that flash by during boot time, but it doesn't prevent the OS from installing, doesn't prevent partitioning of the array, and doesn't even really slow down the boot sequence, the errors fly by so fast. As far as I'm concerned, it's nothing super important to me, but I thought I would point it out in case you wanted to hammer it out for the next revision cycle or point it out to the proper authorities for whichever component it involves.

Again, thanks for all your help!

Randy R. Rouch
Information Technology Consultant
California State University, San Bernardino

Thanks for the thorough testing.
The phenomenon of impossible partition entries on disks belonging to RAID sets is known. The kernel simply does not know that a certain disk belongs to a RAID array (dmraid is completely userspace) and therefore searches for a partition table. If there is one (e.g. on the first disk of a RAID0, or on both disks of a RAID1), the kernel displays that partition table, and of course its values may look strange (e.g. partition sizes larger than the whole disk). This is just confusing but should not have any adverse side effects; YaST2 has built-in intelligence to ignore partitions on disks belonging to an active dmraid set.

Setting this to FIXED now: the problem in YaST2 is fixed, and RAID10 support for ICH8R is missing from dmraid.
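The "impossible" values described above come down to a single comparison. This is only a sketch of such a sanity check, not YaST2's actual code; the partition end sector below is a hypothetical value for a table written against the full 465.8GB array, while the disk size comes from the dmraid output earlier in this bug:

```python
# Sketch: a partition table describing the 465.8GB array, read against a
# single ~232.9GB member disk, yields partitions that end past the end of
# the disk. ARRAY_PARTITION_END is an illustrative, made-up value.

MEMBER_DISK_SECTORS = 488_391_939   # one ~232.9GB member disk (from dmraid output)
ARRAY_PARTITION_END = 976_000_000   # hypothetical end sector on the 465.8GB array

def plausible_on_disk(end_sector, disk_sectors):
    """A partition ending beyond the disk cannot belong to that disk."""
    return end_sector <= disk_sectors

print(plausible_on_disk(ARRAY_PARTITION_END, MEMBER_DISK_SECTORS))
# -> False: the entry is impossible for sda on its own, so it can be ignored
```

This is essentially why the stray table on sda is harmless: any consumer that checks partition bounds against the raw disk can discard the entries.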