Bugzilla – Full Text Bug Listing
| Summary: | partitioner: no check when resizing LVM logical volume | | |
|---|---|---|---|
| Product: | [openSUSE] openSUSE 11.4 | Reporter: | macias - <bluedzins> |
| Component: | YaST2 | Assignee: | Thomas Fehr <fehr> |
| Status: | RESOLVED FIXED | QA Contact: | Jiri Srain <jsrain> |
| Severity: | Critical | | |
| Priority: | P5 - None | | |
| Version: | Final | | |
| Target Milestone: | --- | | |
| Hardware: | x86-64 | | |
| OS: | Other | | |
| Whiteboard: | | | |
| Found By: | --- | Services Priority: | |
| Business Priority: | | Blocker: | --- |
| Marketing QA Status: | --- | IT Deployment: | --- |
| Attachments: | y2logs | | |
Description
macias - 2012-03-01 21:23:52 UTC

Please attach y2logs. If you are in doubt, follow: http://en.opensuse.org/openSUSE:Bugreport_YaST Thanks.

Created attachment 479768 [details]
y2logs
Reassigned to maintainer of yast2-storage.

In general there is no problem extending the size of a logical volume with an ext4 filesystem on it while it is mounted. I did that numerous times and it always worked flawlessly. If I am analyzing the y2log files correctly, you did the following: you extended the logical volume /dev/dodatkowa/arch from about 2.5 TB to 5.06 TB. Since YaST2 detected an ext4 filesystem on the logical volume, which is capable of online resizing, YaST2 automatically resized the filesystem after resizing the logical volume by calling resize2fs. In your case I can see that the resize of the logical volume /dev/dodatkowa/arch to 5.06 TB succeeded. Afterwards YaST2 called resize2fs; this apparently creates a lot of IO and needs a long time to finish. I do not see in the y2log file that resize2fs exited, so I assume you rebooted the machine before resize2fs finished. Of course, aborting a running resize2fs command almost certainly leaves the resized filesystem in an inconsistent state, which explains why fsck was run later on the logical volume. Unfortunately I do not have 5 terabytes of unused disk space available, but I will try to do such a resize with a 1 TB disk and see how much time it needs and how the machine behaves while it is running. Could you look into /var/log/messages to see if there are kernel error messages from the time you did the resize?

> Could you look into /var/log/messages if there are kernel error messages at the time you did the resize.

I don't see any message even related to the fact that I was resizing the partition.

> I did that numerous times and it always worked flawlessly.

However, we both know there is a difference between such operations:
a) resizing an unmounted partition
b) resizing a mounted partition while copying 2 TB of data

It is entirely within the system's reach to detect whether the partition is mounted or not, and to ask the user for confirmation.
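The resize sequence the maintainer describes (grow the logical volume, then grow the ext4 filesystem online) can be sketched in shell. This is a hypothetical dry-run illustration, not YaST2's actual code; the device name and target size are taken from the report, and the `run` wrapper only prints the commands unless `DRY_RUN=0`:

```shell
#!/bin/sh
# Sketch of the two-step resize described above (NOT YaST2's actual code).
# DRY_RUN=1 (the default) only prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

LV=/dev/dodatkowa/arch   # logical volume from the report
NEW_SIZE=5.06T           # target size from the report

run lvextend -L "$NEW_SIZE" "$LV"   # step 1: grow the logical volume
run resize2fs "$LV"                 # step 2: grow the ext4 fs (online-capable)
```

If resize2fs is interrupted mid-run (for example by a reboot, as suspected in this report), the filesystem is left inconsistent and fsck is needed, which matches the maintainer's analysis.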
Please note that programs ask whether the user is really sure they want to quit (Eclipse, for example), and I think resizing a partition is a far more important task. It is common sense even in terms of speed: it is faster to quit the programs normally, unmount the partition, resize, and start them again than to struggle with all the activity going on; here multitasking does more harm than good. Another thing: to this date there are, in my opinion, underlying problems with scheduling heavy I/O traffic. My current situation is better now, not because the kernel is so mature, but because I have maxed out the available memory. But as this case shows, when you have (my guess) two heavy I/O tasks running and you are doing partitioning, the computer eventually freezes.

I did some tests in the meantime. I resized a mounted 200 GB volume to 1.3 TB (I do not have more free space on one machine currently) and it needed about 80 minutes, so this is consistent with your case needing more than 3 hours. But I saw no heavy IO during the resize (iotop showed about 10 MB/s, while the same volume can do about 100 MB/s with dd), so the system was perfectly usable (and it even had only 2 GB of memory); I cannot see anything related to a stuck system as you had. I could also use the mounted volume during the resize, although it was of course quite slow compared to normal speed. To be honest, I was not aware of this incredibly slow resize until now. I normally resize my filesystems quite regularly in chunks of 5 or 10 GB, where speed and the mounted/unmounted state do not matter much, and so far there have been no bug reports about this. Why do you think the system needs to copy 2 TB of data when resizing from 2.5 TB to 5 TB? Or did you do this manually while the resize was running? I will do some more tests regarding timing while mounted/unmounted. So far I have changed the text displayed during the resize to make the user aware that it may take long and must not be aborted.
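As the reporter argues, detecting whether a volume is mounted before resizing is straightforward. A minimal sketch reading the kernel's mount table (the device name is an example; the optional second argument exists only so the function can be exercised against a fake mount table instead of the live /proc/mounts):

```shell
#!/bin/sh
# Check whether a block device appears in the kernel's mount table.
# The optional second argument overrides /proc/mounts (useful for testing).
is_mounted() {
  dev="$1"
  table="${2:-/proc/mounts}"
  # field 1 of each mount-table line is the mounted device
  awk -v d="$dev" '$1 == d { found = 1 } END { exit !found }' "$table"
}

# example use: decide whether a confirmation prompt is needed
if is_mounted /dev/dodatkowa/arch; then
  echo "mounted: confirm with the user before resizing"
else
  echo "not mounted: offline resize is possible"
fi
```

A partitioner could run such a check before queuing the resize and, as the reporter suggests, present a confirmation dialog instead of silently resizing the mounted filesystem.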
If resizing in the unmounted state is significantly faster (I have not tested this yet) and one resizes a mounted filesystem by a large amount of space, I can add a popup that advises the user that the resize can be sped up by unmounting the filesystem. I will not automatically unmount in the YaST2 resize code, since whether this is even possible, or whether it is harmful, depends too much on how the filesystem is being used.

> Why do you think the system needs to copy 2TB of data when resizing from 2.5TB to 5TB? Or did you do this manually while the resize was running?

The latter (2 TB is my guess; I had two tasks running that were copying a lot of data). Please note this is an HDD, not an SSD, so moving the heads is a costly operation.

> I can add a popup that advises the user that resize can be sped up if he umounts the fs.

I would strongly suggest a _confirmation_ dialog instead of any passive popup, stating something like:

"Your mounted partition supports resizing the filesystem on the fly; however, it is safer and faster to unmount it first. Do you want to continue with resizing? [cancel] [resize]"

Note that the user does not have to quit; all it takes is to switch away, unmount, and come back to click "resize".

> I will not automatically umount in YaST2 resize code

Yes, sure. With a _confirmation_ box this is enough; the user has to click Yes or No to continue.

Resizing a mounted ext4 filesystem is more than 10 times slower than resizing one that is not mounted. I have now added a warning popup when one resizes by more than 50 GB and the filesystem is mounted.
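The warning condition the maintainer added (filesystem mounted and grown by more than 50 GB) can be sketched as a simple size check. The function name, the MiB units, and the example sizes are assumptions for illustration; this is not the actual YaST2 fix:

```shell
#!/bin/sh
# Decide whether the "large resize while mounted" warning should fire.
# Sizes are in MiB; the 50 GiB threshold mirrors the fix described above.
THRESHOLD_MIB=$((50 * 1024))

needs_resize_warning() {
  old_mib=$1; new_mib=$2; mounted=$3   # mounted: 1 = yes, 0 = no
  delta=$(( new_mib - old_mib ))
  [ "$mounted" = 1 ] && [ "$delta" -gt "$THRESHOLD_MIB" ]
}

# example: growing a mounted volume from ~2.5 TB to ~5 TB, as in this report
if needs_resize_warning $((2500 * 1024)) $((5180 * 1024)) 1; then
  echo "warn: resizing a mounted filesystem by a large amount is slow"
fi
```

Small resizes (the 5–10 GB chunks the maintainer mentions) stay under the threshold and proceed silently, so only the slow case gets the extra popup.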