Bugzilla – Full Text Bug Listing
| Summary: | partitioner: can't shrink filesystem - says it is full but it isn't | | |
|---|---|---|---|
| Product: | [openSUSE] openSUSE Distribution | Reporter: | Kai Dupke <kdupke> |
| Component: | YaST2 | Assignee: | YaST Team <yast-internal> |
| Status: | RESOLVED FIXED | QA Contact: | Jiri Srain <jsrain> |
| Severity: | Normal | | |
| Priority: | P5 - None | CC: | ancor, aschnell, dgonzalez |
| Version: | Leap 15.1 | | |
| Target Milestone: | Leap 15.2 | | |
| Hardware: | Other | | |
| OS: | Other | | |
| URL: | https://trello.com/c/LQVROAgq | | |
| See Also: | http://bugzilla.suse.com/show_bug.cgi?id=1154295 | | |
| Whiteboard: | | | |
| Found By: | --- | Services Priority: | |
| Business Priority: | | Blocker: | --- |
| Marketing QA Status: | --- | IT Deployment: | --- |
| Attachments: | yast2 log for shrinking FS | | |

---

I'm pretty sure this is the reason: https://github.com/openSUSE/libstorage-ng/blob/master/storage/Filesystems/BlkFilesystemImpl.cc#L324

As you can see there, to stay safe libstorage-ng multiplies the 14T that the system reports as the minimum size. Funnily enough, in this case that results in the minimum being even bigger than the current and the maximum sizes.

For completeness, some relevant lines from the logs:

```
AppUtil.cc:171 path:/tmp/[...] blocks:5461618852 bfree:1728779100 bsize:4096 size:22370790817792 free:7081079193600
BlkFilesystemImpl.cc:288 on-disk resize-info:resize-ok:true reasons:192 min-size:22548544749568 max-size:17592186044416 block-size:1
```

Arvin, can't we adjust that 1.5 to something more reasonable?

---

Depends on what is reasonable. It will likely need research for every filesystem.

Another problem is that the maximal size for ext4 is set to 16 TiB. This is done since ext4 needs the 64bit feature to handle larger devices (https://www.netways.de/blog/2017/12/13/how-to-use-ext4-beyond-16tib/). E.g. an ext4 normally created on SLE12 SP3 cannot be grown beyond 16 TiB. So strictly speaking the 64bit feature must be checked (using dumpe2fs).

---

(In reply to Arvin Schnell from comment #3)
> Depends on what is reasonable. It will likely need research for every
> filesystem.

I wonder if this has to be a factor, or if a fixed amount (i.e. 250 GB) would do as well.

> Another problem is that the maximal size for ext4 is set to 16 TiB. This
> is done since ext4 needs the 64bit feature to handle larger devices
> (https://www.netways.de/blog/2017/12/13/how-to-use-ext4-beyond-16tib/).
> E.g. an ext4 normally created on SLE12 SP3 cannot be grown beyond 16 TiB.
> So strictly speaking the 64bit feature must be checked (using dumpe2fs).

In the shrinking case this can't be an issue, can it? If the size is already bigger than 16 TB, the 64bit feature must already be enabled, right? So shrinking does not need to check this flag.
I wonder if it is checked for extending, but at least for openSUSE I am not aware of any notice when I extended a FS beyond 16 TB (this system alone has 2 FS > 16 TB) resp. created a FS that big.

---

(In reply to Kai Dupke from comment #4)
> (In reply to Arvin Schnell from comment #3)
> > Depends on what is reasonable. It will likely need research for every
> > filesystem.
>
> I wonder if this has to be a factor, or if a fixed amount (i.e. 250 GB) would
> do as well.

Likely not a good idea, e.g. for an almost empty 200 GiB filesystem.

> > Another problem is that the maximal size for ext4 is set to 16 TiB. This
> > is done since ext4 needs the 64bit feature to handle larger devices
> > (https://www.netways.de/blog/2017/12/13/how-to-use-ext4-beyond-16tib/).
> > E.g. an ext4 normally created on SLE12 SP3 cannot be grown beyond 16 TiB.
> > So strictly speaking the 64bit feature must be checked (using dumpe2fs).
>
> In the shrinking case this can't be an issue, can it? If the size is already
> bigger than 16 TB, the 64bit feature must already be enabled, right? So
> shrinking does not need to check this flag.

The library provides the possible range for resizing. For the range to be correct the flag is needed, and that was the use case I had in mind.

> I wonder if it is checked for extending, but at least for openSUSE I am not
> aware of any notice when I extended a FS beyond 16 TB (this system alone has
> 2 FS > 16 TB) resp. created a FS that big.

libstorage provides the maximal allowed value for a FS. Whether the UI code uses it I cannot say.

BTW: The resize operation might have to move several TiB on the disk. That could take several hours.

---

The YaST team decided to use the value reported by resize2fs. Since resize2fs was also improved for this (see bug #1154295), and we cannot be sure that the improved version is installed on the system, the YaST team also decided to fix this only for Leap 15.2.

---

Now the resize information calculation is more advanced.
It does:

- use the estimation from resize2fs for the min size of ext2/3/4
- use the 64bit feature for the max size of ext4

Thank you all!
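A minimal sketch of the fixed logic described above, under stated assumptions: the minimum comes from resize2fs's estimate and the maximum is capped at 16 TiB unless the ext4 64bit feature appears in `dumpe2fs -h` output. All function names are hypothetical; the real implementation lives in libstorage-ng, and the 1 EiB upper bound for 64bit ext4 is the commonly cited limit, not taken from this bug.

```python
TIB = 1024 ** 4
EIB = 1024 ** 6  # commonly cited ext4 limit with the 64bit feature (assumption)


def has_64bit_feature(dumpe2fs_output: str) -> bool:
    """Look for the '64bit' token on the 'Filesystem features:' line
    of `dumpe2fs -h` output (hypothetical parser for illustration)."""
    for line in dumpe2fs_output.splitlines():
        if line.startswith("Filesystem features:"):
            return "64bit" in line.split(":", 1)[1].split()
    return False


def ext4_resize_range(resize2fs_min_bytes: int, dumpe2fs_output: str):
    """Return a (min_size, max_size) tuple in bytes for an ext4 filesystem:
    min from the resize2fs estimate, max gated on the 64bit feature."""
    max_size = EIB if has_64bit_feature(dumpe2fs_output) else 16 * TIB
    return resize2fs_min_bytes, max_size


# Example: without '64bit' in the feature list, the maximum stays at 16 TiB.
sample = ("Filesystem features:      has_journal ext_attr resize_inode "
          "dir_index filetype extent flex_bg sparse_super large_file")
print(ext4_resize_range(14 * TIB, sample))
# → (15393162788864, 17592186044416)
```

With the 64bit token present, `has_64bit_feature` returns `True` and the cap is lifted, which matches the second bullet of the fix summary above.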
Created attachment 816609 [details]
yast2 log for shrinking FS

yast2-storage-ng-4.2.37-lp151.381.1.x86_64

I try to shrink a volume and get the message: "Cannot be resized. Max size. FS full. No space left in VG."

```
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/BACKUP_R6-BACKUP_R6   21T   14T  6.5T  69% /Backup

Filesystem                      Inodes IUsed IFree IUse% Mounted on
/dev/mapper/BACKUP_R6-BACKUP_R6   657M  6.8M  650M    2% /Backup

PV       VG        Fmt  Attr PSize  PFree
/dev/sdf BACKUP_R6 lvm2 a--  20.51t    0

LV        VG        Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
BACKUP_R6 BACKUP_R6 -wi-ao---- 20.51t
```

Should it not be possible to *shrink* the FS?
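The statvfs numbers logged in the attached yast2 log (AppUtil.cc:171, quoted in the discussion above) make the contradiction easy to reproduce. This sketch applies the 1.5 safety factor referenced in BlkFilesystemImpl.cc to the used space and the 16 TiB ext4 cap from comment #3; the exact logged min-size (22548544749568) differs slightly, presumably because the library derives used space from the filesystem itself rather than statvfs, but the effect is identical.

```python
# Figures from the log line:
# AppUtil.cc:171 ... blocks:5461618852 bfree:1728779100 bsize:4096 size:22370790817792
bsize = 4096
blocks = 5461618852          # total filesystem blocks
bfree = 1728779100           # free blocks

size = blocks * bsize        # current size: 22370790817792 bytes (~20.3 TiB)
used = (blocks - bfree) * bsize

# libstorage-ng pads the reported minimum with a 1.5 safety factor:
min_size = int(used * 1.5)

max_size = 16 * 1024 ** 4    # 16 TiB ext4 cap without the 64bit feature

print(f"used     = {used}")
print(f"min_size = {min_size}")
print(f"size     = {size}")
print(f"max_size = {max_size}")

# The padded minimum exceeds both the current and the maximum size, so the
# partitioner concludes the (69%-full) filesystem cannot be shrunk at all.
assert min_size > size and min_size > max_size
```

This is exactly the "minimum bigger than the current and the maximum sizes" situation described at the start of the thread.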