Bug 1155170 - /boot and /boot/efi not mounted
Summary: /boot and /boot/efi not mounted
Status: RESOLVED DUPLICATE of bug 1137373
Alias: None
Product: openSUSE Tumbleweed
Classification: openSUSE
Component: Basesystem
Version: Current
Hardware: x86-64 All
Priority: P5 - None    Severity: Major
Target Milestone: ---
Assignee: systemd maintainers
QA Contact: E-mail List
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-10-27 19:30 UTC by Michael Hirmke
Modified: 2021-01-03 08:24 UTC
CC: 6 users

See Also:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments
fuser / before reboot (13.97 KB, text/x-log)
2019-11-03 18:53 UTC, Michael Hirmke
Details
fuser / after reboot (6.96 KB, text/x-log)
2019-11-03 18:53 UTC, Michael Hirmke
Details
two shutdown logs (851.34 KB, application/x-bzip-compressed-tar)
2019-11-26 20:48 UTC, Michael Hirmke
Details
log with "Device root is still in use." (319.11 KB, application/x-bzip-compressed-tar)
2019-11-29 19:37 UTC, Michael Hirmke
Details
contains shutdown-log-boot_not_mounted.txt (272.00 KB, application/x-bzip-compressed-tar)
2019-12-04 19:45 UTC, Michael Hirmke
Details

Description Michael Hirmke 2019-10-27 19:30:00 UTC
When rebooting after the last two Tumbleweed snapshots containing new kernel packages, the system didn't come up with the newly installed kernel, but with the previous one.
Further investigation showed that the latest kernel had been installed to /boot on the root partition, because /boot wasn't mounted. On top of that, /boot/efi wasn't mounted either.
Mounting both of them manually worked as expected, but since then the automatic mounts no longer happen.
The systemd mount units boot.mount and boot-efi.mount do exist under /run/systemd, though.
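These generated units are typically written by systemd-fstab-generator into /run/systemd/generator at boot. The mapping from mount point to unit name follows systemd's path escaping; a simplified sketch (the real escaping also handles special characters, see systemd-escape(1)):

```shell
# Simplified sketch of systemd-fstab-generator's unit naming:
# strip the leading '/', turn remaining '/' into '-', append ".mount".
fstab_unit_name() {
    echo "$1" | sed 's|^/||; s|/|-|g; s|$|.mount|'
}

fstab_unit_name /boot       # -> boot.mount
fstab_unit_name /boot/efi   # -> boot-efi.mount
```

So the unit names seen here are exactly what fstab entries for /boot and /boot/efi should produce.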
Comment 1 Michael Hirmke 2019-10-27 20:30:19 UTC
This seems to happen when the last shutdown wasn't clean and the log contains messages like these:

systemd-cryptsetup[960]: Device root is still in use.
systemd-cryptsetup[960]: Failed to deactivate: Device or resource busy
systemd[1]: systemd-cryptsetup@root.service: Control process exited, code=exited, status=1/FAILURE

After a clean shutdown the problem doesn't occur.
Comment 2 Michael Hirmke 2019-10-27 20:37:14 UTC
When the problem occurs, the log shows the following messages for (un)mounting /boot and /boot/efi:

systemd[1]: Mounting /boot...
systemd[1]: Mounted /boot.
systemd[1]: Mounting /boot/efi...
systemd[1]: Mounted /boot/efi.

Besides these messages nothing else shows up for /boot or /boot/efi - no errors, nothing at all.
It looks exactly the same as a boot where both mounts are present.
Comment 3 Alynx Zhou 2019-10-28 05:00:34 UTC
Could you please paste your /etc/fstab here? Are you sure that you have entries for /boot and /boot/efi in it? Also, the log snippet you pasted shows nothing about the problem; could you please upload the full journal?
Comment 4 Michael Hirmke 2019-10-28 09:42:22 UTC
(In reply to Alynx Zhou from comment #3)
> Could you please paste your /etc/fstab here? Are you sure that you have
> entries for /boot and /boot/efi in it? Also, the log snippet you pasted shows

If not, why would I have

systemd[1]: Mounting /boot...
systemd[1]: Mounted /boot.

in the logs?
And why would it work, when the shutdown before was clean?

fstab:

UUID=bf69d826-e322-4c61-b071-a57180042bca  /boot      ext4  defaults        0  2
UUID=F8C7-52FF                             /boot/efi  vfat  defaults        0  0

> nothing about the problem; could you please upload the full journal?

As I wrote - there are no further (error) messages regarding this problem in the journal. Believe me - I am a maniac regarding warnings or errors in my logs.
Comment 5 Michael Hirmke 2019-11-03 18:52:31 UTC
This weekend I spent some time analyzing this problem in depth.
The results are:

1. The problem seems to happen every single time on the first reboot after running "zypper dup" and installing some packages; it doesn't matter what kind of package was installed - the problem occurs whether a new kernel or some application package has been installed.

2. When rebooting after this installation I get:

systemd-cryptsetup[27794]: Device root is still in use.
systemd-cryptsetup[27794]: Failed to deactivate: Device or resource busy
systemd[1]: systemd-cryptsetup@root.service: Control process exited, code=exited, status=1/FAILURE

The root partition on this system is encrypted, boot is not.

3. When the system comes up then, the following messages are shown:

systemd[1]: Unmounting /boot/efi...
systemd[1]: boot-efi.mount: Succeeded.
systemd[1]: Unmounted /boot/efi.
systemd[1]: Unmounting /boot...
systemd[1]: boot.mount: Succeeded.
systemd[1]: Unmounted /boot.
systemd[1]: Stopping Cryptography Setup for root...
systemd[1]: systemd-fsck@dev-disk-by\x2duuid-bf69d826\x2de322\x2d4c61\x2db071\x2da57180042bca.service: Succeeded.
systemd[1]: Stopped File System Check on /dev/disk/by-uuid/bf69d826-e322-4c61-b071-a57180042bca.
systemd-fsck[794]: /dev/nvme0n1p3: clean, 392/131072 files, 95367/524288 blocks
systemd[1]: Starting Flush Journal to Persistent Storage...
systemd-journald[774]: Time spent on flushing to /var is 87.902ms for 1337 entries.
systemd-cryptsetup[942]: Device root is still in use.
systemd-cryptsetup[942]: Failed to deactivate: Device or resource busy

4. When the system is up and running, /boot and /boot/efi are not mounting.

5. After rebooting the system again, everything is back to normal.

I tried stopping various services like nfs and unloading various modules, but that didn't solve the problem.
I then added "ExecStopPre=fuser /" to the cryptsetup unit.
The result can be seen in the attachments.
I can't identify the culprit, though.
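For reference, roughly the same information fuser collects can also be pulled straight from /proc; this is a simplified sketch (it only checks each process's cwd, root directory, and executable, not every open file, so it sees less than "fuser -vm /"):

```shell
# List processes that hold the root filesystem via their cwd, root
# directory, or executable - a rough, dependency-free stand-in for
# the fuser output in the attachments.
holders_of_root() {
    rootdev=$(stat -c %d /)
    for pid in /proc/[0-9]*; do
        for link in cwd root exe; do
            dev=$(stat -L -c %d "$pid/$link" 2>/dev/null) || continue
            if [ "$dev" = "$rootdev" ]; then
                echo "${pid#/proc/} holds / via $link"
                break
            fi
        done
    done
}

holders_of_root
```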
Comment 6 Michael Hirmke 2019-11-03 18:53:22 UTC
Created attachment 823091 [details]
fuser / before reboot
Comment 7 Michael Hirmke 2019-11-03 18:53:58 UTC
Created attachment 823092 [details]
fuser / after reboot
Comment 8 Michael Hirmke 2019-11-03 18:56:38 UTC
[...]
> 
> 4. When the system is up and running, /boot and /boot/efi are not mounting.
                                                                    mounted.
                                                        should read ^^^^^^^
>
Comment 9 Michael Hirmke 2019-11-21 20:13:38 UTC
no one?
Comment 10 Thomas Blume 2019-11-25 09:21:03 UTC
(In reply to Michael Hirmke from comment #9)
> no one?

We need more information.
Could you follow:

https://freedesktop.org/wiki/Software/systemd/Debugging/

section:

Shutdown Completes Eventually

and create a shutdown log from a failing shutdown?
In addition, please provide the subsequent full boot log (e.g. journalctl -axb) where /boot and /boot/efi are not mounted.
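For anyone else following along: the "Shutdown Completes Eventually" recipe on that page amounts to installing a small hook in systemd's system-shutdown directory and booting with systemd.log_level=debug systemd.log_target=kmsg on the kernel command line. A sketch of the hook (TARGET defaults to a scratch directory here so the sketch can be dry-run; on a real system it is /usr/lib/systemd/system-shutdown):

```shell
# Install a shutdown hook that saves the kernel ring buffer (which,
# with systemd.log_target=kmsg, contains systemd's debug output) to
# /shutdown-log.txt during the final shutdown phase.
TARGET="${TARGET:-/tmp/system-shutdown-demo}"
mkdir -p "$TARGET"
cat > "$TARGET/debug.sh" <<'EOF'
#!/bin/sh
mount -o remount,rw /
dmesg > /shutdown-log.txt
mount -o remount,ro /
EOF
chmod +x "$TARGET/debug.sh"
echo "installed $TARGET/debug.sh"
```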
Comment 11 Michael Hirmke 2019-11-25 18:42:38 UTC
I've configured everything as described on that page.
Now I have to wait for the next update - it only happens during the first reboot after an update (zypper dup).
Comment 12 Michael Hirmke 2019-11-26 20:48:32 UTC
Created attachment 825014 [details]
two shutdown logs

archive contains two files:

- shutdown-log-err.txt from the first reboot after a zypper dup
- shutdown-log-ok.txt from the second reboot after a zypper dup

After the first reboot /boot and /boot/efi are not mounted; after the second reboot they are.
Comment 13 Thomas Blume 2019-11-27 08:04:16 UTC
(In reply to Michael Hirmke from comment #12)
> Created attachment 825014 [details]
> two shutdown logs
> 
> archive contains two files:
> 
> - shutdown-log-err.txt from the first reboot after a zypper dup
> - shutdown-log-ok.txt from the second reboot after a zypper dup
> 
> After the first reboot /boot and /boot/efi are not mounted, after the second
> reboot they are mounted.

Thanks for the logs.
In the failing case, running "grep boot-efi.mount shutdown-log-err.txt" shows a recurring loop of mount jobs for boot-efi.mount:

-->
[50688.243995] systemd[1]: Pulling in boot-efi.mount/start from local-fs.target/start
[50688.243998] systemd[1]: Added job boot-efi.mount/start to transaction.
[50688.244001] systemd[1]: Pulling in -.mount/start from boot-efi.mount/start
[50688.244003] systemd[1]: Pulling in boot.mount/start from boot-efi.mount/start
[50688.244006] systemd[1]: Pulling in system.slice/start from boot-efi.mount/start
[50688.244008] systemd[1]: Pulling in dev-nvme0n1p1.device/start from boot-efi.mount/start
[50688.244446] systemd[1]: Pulling in dev-disk-by\x2duuid-F8C7\x2d52FF.device/start from boot-efi.mount/start
[50688.244449] systemd[1]: Pulling in umount.target/stop from boot-efi.mount/start
[50688.244885] systemd[1]: Found redundant job boot-efi.mount/start, dropping from transaction.
--<

So systemd tries to add a mount job, but before the job succeeds it gets dropped, because such a job already exists.
The same happens for the boot.mount job.
In the working log, I can see:

-->
[   37.011264] systemd[1]: Keeping job boot-efi.mount/start because of local-fs.target/start
[   37.023043] systemd[1]: Keeping job boot.mount/start because of boot-efi.mount/start
--<

instead of the dropping message.

Obviously this is the culprit for the missing mounts.
Franck, have you ever seen such a behavior?
Comment 14 Franck Bui 2019-11-27 10:10:44 UTC
No, sorry - I never encountered a similar issue before.

(In reply to Michael Hirmke from comment #5)
> This weekend I spent some time to analyze this problem in depth.
> The results are:

Thanks for your time and analysis, it will definitely help pinpoint your problem.

> 
> 1. The problem seems to happen each and every time for the first reboot
> after running "zypper dup" and installing some packages; it doesn't matter
> what kind of package was installed - the problem occurs whether a new kernel
> or some application package has been installed.
> 
> 2. When rebooting after this installation I get:
> 
> systemd-cryptsetup[27794]: Device root is still in use.
> systemd-cryptsetup[27794]: Failed to deactivate: Device or resource busy
> systemd[1]: systemd-cryptsetup@root.service: Control process exited,
> code=exited, status=1/FAILURE
> 
> The root partition on this system is encrypted, boot is not.

Maybe we should focus on this shutdown issue first?

The fact that /boot and /boot/efi are not mounted on the next reboot seems to be a consequence of it. I'm not saying we shouldn't look at it, but maybe understanding the first issue (the shutdown one) will help us understand (and reproduce) the second one (/boot and /boot/efi not mounted)?
Comment 15 Michael Hirmke 2019-11-27 22:22:45 UTC
(In reply to Franck Bui from comment #14)
> No sorry, I never encountered a similar issue before.
> 
> (In reply to Michael Hirmke from comment #5)
> > This weekend I spent some time to analyze this problem in depth.
> > The results are:
> 
> Thanks for your time and analysis, it will definitely help pinpoint
> your problem.
> 
[...]
> > 2. When rebooting after this installation I get:
> > 
> > systemd-cryptsetup[27794]: Device root is still in use.
> > systemd-cryptsetup[27794]: Failed to deactivate: Device or resource busy
> > systemd[1]: systemd-cryptsetup@root.service: Control process exited,
> > code=exited, status=1/FAILURE
> > 
> > The root partition on this system is encrypted, boot is not.
> 
> Maybe we should focus on this shutdown issue first?
> 
> The fact that /boot and /boot/efi are not mounted on the next reboot seems
> to be a consequence of it. I'm not saying we shouldn't look at it, but
> maybe understanding the first issue (the shutdown one) will help us
> understand (and reproduce) the second one (/boot and /boot/efi not
> mounted)?

Whatever may be necessary to solve this problem :)
Comment 16 Franck Bui 2019-11-28 13:30:13 UTC
@Michael, the shutdown log that shows the failure is quite big and hard to parse. Apparently the system was resumed from hibernation at least once.

Can you try to reproduce the shutdown issue after rebooting, and without suspending your system, so the logs will be much shorter and easier to parse?

To provide the logs of the previous boot, please use "journalctl -b -1"

Thanks.
Comment 17 Michael Hirmke 2019-11-28 13:33:01 UTC
(In reply to Franck Bui from comment #16)
> @Michael, the shutdown log that shows the failure is quite big and hard to
> parse. Apparently the system was resumed from hibernation at least once.
> 
> Can you try to reproduce the shutdown issue after rebooting, and without
> suspending your system, so the logs will be much shorter and easier to parse?
> 
> To provide the logs of the previous boot, please use "journalctl -b -1"
> 
> Thanks.

ok - I'll have to wait for the next snapshot, though.
Comment 18 Franck Bui 2019-11-28 13:34:31 UTC
Can't you simply install a new package instead?
Comment 19 Michael Hirmke 2019-11-28 22:15:14 UTC
(In reply to Franck Bui from comment #18)
> Can't you simply install a new package instead?

No, it doesn't seem so.
I just tried installing, reinstalling and uninstalling a package, but the problem didn't occur after a reboot.
Perhaps the problem only occurs if the installation itself requires a reboot!?
Comment 20 Franck Bui 2019-11-29 07:06:17 UTC
Can you try to restore a previous snapshot so you can replay the upgrade that led to the bogus shutdown?
Comment 21 Michael Hirmke 2019-11-29 19:25:36 UTC
(In reply to Franck Bui from comment #20)
> Can you try to restore a previous snapshot so you can replay the upgrade
> that led to the bogus shutdown?

I never reverted a snapshot - isn't that possible with btrfs only?
The problem occurs on every zypper dup, though, so the next one should be sufficient.
Comment 22 Michael Hirmke 2019-11-29 19:37:11 UTC
Created attachment 825240 [details]
log with "Device root is still in use."

Latest Tumbleweed snapshot has been installed.
Comment 23 Michael Hirmke 2019-11-30 13:36:29 UTC
A lot of testing and many boot cycles later, with and without running zypper, hibernating and resuming in between, it seems that I have two problems.

1. "Boot device is still in use" on a reboot - the cause seems to be an installation via zypper.
2. /boot and /boot/efi are not mounted after a reboot - this doesn't occur if the system is rebooted before running zypper. It only occurs if the system has been hibernated/resumed after the last reboot and before running zypper.

I found this because you wanted me to reboot before running zypper, since the log was too large.

Does this make any sense to you?
Comment 24 Franck Bui 2019-12-02 13:16:19 UTC
(In reply to Michael Hirmke from comment #23)
> A lot of testing and many boot cycles later with and without running zypper,
> hibernating and resuming in between, it seems, that I have two problems.

Yes, that was my assumption too, but I thought that 2. was triggered by 1., which appears not to be the case according to your new findings.

Just before providing more feedback, can you tell me which FS is used for /?

Also, did you provide any logs from when /boot wasn't mounted as it should have been? I don't think so, but I just prefer to make sure.

Thanks.
Comment 25 Michael Hirmke 2019-12-02 14:43:10 UTC
(In reply to Franck Bui from comment #24)
> (In reply to Michael Hirmke from comment #23)
> > A lot of testing and many boot cycles later with and without running zypper,
> > hibernating and resuming in between, it seems, that I have two problems.
> 
> yes that was my assumption too but I thought that 2. was triggered by 1.
> which appears to not be the case according to your new findings.
> 
> Just before providing more feedback, can you tell me which FS is used for /?

All filesystems - except /boot/efi, of course - are ext4.

> 
> Also, did you provide any logs from when /boot wasn't mounted as it should
> have been? I don't think so, but I just prefer to make sure.

The first log, "shutdown-log-err.txt", in the attachment named "two shutdown logs" shows this problem.
Because this problem only occurs after hibernating/resuming the system, I assumed the log mentioned would be sufficient even though it is big. A new one wouldn't be much smaller.
Comment 26 Franck Bui 2019-12-02 14:57:41 UTC
(In reply to Michael Hirmke from comment #25)
> The first log, "shutdown-log-err.txt", in the attachment named "two shutdown
> logs" shows this problem.

Ok I'll take a closer look then.
Comment 27 Franck Bui 2019-12-02 15:03:53 UTC
(In reply to Michael Hirmke from comment #23)
> 1. "Boot device is still in use" on a reboot - the reason for this seems to
> be an installation via zypper.

So regarding this issue, the problem is that systemd shouldn't attempt to detach the root device at all, since it's going to be used until the very end, when the system switches back to the initrd.

Since this problem is not specific to openSUSE, I opened an issue against upstream and it will be tracked at https://github.com/systemd/systemd/issues/14224 from now on.

That said, the warning should be harmless, and the root device should be unmounted and detached by dracut (Thomas, please correct me if I'm wrong).
Comment 28 Franck Bui 2019-12-02 17:13:58 UTC
(In reply to Michael Hirmke from comment #23)
> 2. /boot and /boot/efi are not mounted after a reboot - this doesn't occur,
> if the system is rebooted before running zypper. It only occurs, if the
> system has been hibernated/resumed after the last reboot and before running
> zypper.

Could you just check whether /boot and /boot/efi are still mounted after hibernating/resuming?
Comment 29 Michael Hirmke 2019-12-02 20:34:29 UTC
(In reply to Franck Bui from comment #28)
> (In reply to Michael Hirmke from comment #23)
> > 2. /boot and /boot/efi are not mounted after a reboot - this doesn't occur,
> > if the system is rebooted before running zypper. It only occurs, if the
> > system has been hibernated/resumed after the last reboot and before running
> > zypper.
> 
> Could you just check whether /boot and /boot/efi are still mounted after
> hibernating/resuming?

They must have been mounted the last few times zypper dup installed new kernels. Otherwise the kernel files would have been installed in the /boot directory on the root partition instead of on the /boot partition.
And because I always hibernate/resume this system, and reboots only happen when a zypper dup requires them, I'm pretty sure that zypper has always run after at least one hibernate/resume cycle.
But I'll double-check whether the partitions are mounted after a simple hibernate/resume cycle.
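A quick way to run that check, safe at any time (findmnt(8) would be the more robust tool; this sketch just scans /proc/mounts to avoid extra dependencies):

```shell
# Report whether the given mount points currently appear in /proc/mounts.
check_mounted() {
    for mp in "$@"; do
        if grep -q "[[:space:]]$mp[[:space:]]" /proc/mounts; then
            echo "$mp: mounted"
        else
            echo "$mp: NOT mounted"
        fi
    done
}

check_mounted /boot /boot/efi
```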
Comment 30 Michael Hirmke 2019-12-02 20:37:59 UTC
(In reply to Franck Bui from comment #27)
> (In reply to Michael Hirmke from comment #23)
> > 1. "Boot device is still in use" on a reboot - the reason for this seems to
> > be an installation via zypper.
> 
> So regarding this issue, the problem is that systemd shouldn't attempt to
> detach the root device at all, since it's going to be used until the very
> end, when the system switches back to the initrd.
> 
> Since this problem is not specific to openSUSE, I opened an issue against
> upstream and it will be tracked at
> https://github.com/systemd/systemd/issues/14224 from now on.

Thx a lot!

> 
> That said the warning should be harmless and the root device should be
> unmounted and detached by dracut (Thomas, please correct me if I'm wrong).

At least there is no message that root has been unmounted correctly.
Instead you can see in the log that the last fuser output shows lots of processes blocking the root filesystem, and messages like

systemd-cryptsetup@root.service: Unit entered failed state.

Remember: root is encrypted!
Comment 31 Franck Bui 2019-12-03 10:09:59 UTC
(In reply to Michael Hirmke from comment #30)
> At least there is no message that root has been unmounted correctly.
> Instead you can see in the log that the last fuser output shows lots of
> processes blocking the root filesystem, and messages like
> 
> systemd-cryptsetup@root.service: Unit entered failed state.
> 
> Remember: root is encrypted!

Sorry but I don't see your point.
Comment 32 Franck Bui 2019-12-03 15:28:13 UTC
(In reply to Michael Hirmke from comment #25)
> The first log, "shutdown-log-err.txt", in the attachment named "two shutdown
> logs" shows this problem.

I double-checked and this appears to be wrong: "shutdown-log-err.txt" includes only 2 suspend/resume cycles and a single shutdown.

It doesn't show what happened after the shutdown, i.e. the next boot where /boot and /boot/efi are supposed not to be mounted.

So I would suggest reproducing the issue one more time, but with debug logs enabled only when rebooting the system with /boot and /boot/efi not mounted.

Also please make sure to answer the question in comment #28.

Thanks.
Comment 33 Michael Hirmke 2019-12-03 20:39:47 UTC
(In reply to Franck Bui from comment #32)
> (In reply to Michael Hirmke from comment #25)
> > The first log, "shutdown-log-err.txt", in the attachment named "two shutdown
> > logs" shows this problem.
> 
> I double checked and this appears to be wrong: "shutdown-log-err.txt"
> includes only 2 suspend/resume cycles and one single shutdown.
> 
> It doesn't show what happened after the shutdown, i.e the next boot where
> /boot and /boot/efi are supposed to not be mounted.

Oops 8-<

> 
> So I would suggest reproducing the issue one more time, but with debug logs
> enabled only when rebooting the system with /boot and /boot/efi not mounted.

Ok. I'll do it after the next zypper dup.

> 
> Also please make sure to answer the question in comment #28.

Wasn't the answer from comment #28 sufficient?

In the meantime I checked again - and yes, both filesystems are mounted even after a few hibernate/resume cycles.
Comment 34 Michael Hirmke 2019-12-03 20:40:43 UTC
(In reply to Michael Hirmke from comment #33)
[...]
> > Also please make sure to answer the question in comment #28.
> 
> Wasn't the answer from comment #28 sufficient?

#29 of course.
Comment 35 Michael Hirmke 2019-12-03 20:48:30 UTC
(In reply to Franck Bui from comment #31)
> (In reply to Michael Hirmke from comment #30)
> > At least there is no message that root has been unmounted correctly.
> > Instead you can see in the log that the last fuser output shows lots of
> > processes blocking the root filesystem, and messages like
> > 
> > systemd-cryptsetup@root.service: Unit entered failed state.
> > 
> > Remember: root is encrypted!
> 
> Sorry but I don't see your point.

I'm not sure how to express it in a better way 8-<

You said the messages are harmless, but as far as I know an encrypted filesystem could be damaged if it is not properly unmounted.
And obviously it is not unmounted correctly - according to the message.
Comment 36 Franck Bui 2019-12-04 08:57:47 UTC
(In reply to Michael Hirmke from comment #35)
> You said the messages are harmless, but an encrypted filesystem could be
> damaged, as far as I know, if not properly unmounted.

As I wrote, it should be harmless *because* dracut is supposed to unmount it at the end, even if systemd failed to do so before.
Comment 37 Michael Hirmke 2019-12-04 17:27:57 UTC
(In reply to Franck Bui from comment #36)
> (In reply to Michael Hirmke from comment #35)
> > You said the messages are harmless, but an encrypted filesystem could be
> > damaged, as far as I know, if not properly unmounted.
> 
> As I wrote it should be harmless *because* dracut is supposed to unmount it
> at the end even if systemd failed to do so before.

Got it - thx!
Comment 38 Michael Hirmke 2019-12-04 19:45:26 UTC
Created attachment 825510 [details]
contains shutdown-log-boot_not_mounted.txt

shutdown.log from a boot, where /boot and /boot/efi were not mounted afterwards
Comment 39 Franck Bui 2019-12-05 11:31:20 UTC
Thanks.

So it's this bug again... the one about the false device transitions "plugged -> dead -> plugged" after a PID 1 reload:

> [   18.786855] systemd[1]: Switching root.
> [   19.188536] systemd[1]: dev-nvme0n1p3.device: Changed dead -> plugged
> [   19.300921] systemd[1]: boot.mount: About to execute: /usr/bin/mount /dev/disk/by-uuid/bf69d826-e322-4c61-b071-a57180042bca /boot -t ext4
> [   19.415483] systemd[1]: Reloading.
> [   19.728841] systemd[1]: dev-nvme0n1p3.device: Changed dead -> plugged
> [   19.729160] systemd[1]: boot.mount: Changed dead -> mounted
> [   19.729896] systemd[1]: dev-nvme0n1p3.device: Changed plugged -> dead
> [   19.773765] systemd[1]: boot.mount: About to execute: /usr/bin/umount /boot -c

Unfortunately this one is pretty nasty and is still not addressed.

It's been reported to us several times already - see https://bugzilla.suse.com/show_bug.cgi?id=1137373 for the original report - and it is tracked upstream at https://github.com/systemd/systemd/issues/12953.

I'm not really sure why you're only facing it after doing an upgrade, but since this issue is a race, I guess this is possible.

So I'm going to close your bug report as a duplicate of bsc#1137373.

*** This bug has been marked as a duplicate of bug 1137373 ***
Comment 40 Michael Hirmke 2019-12-05 19:06:44 UTC
I found out that I don't even have to install anything.
It is enough to run zypper dup. Whenever the repos are updated, the problem occurs - in every single case. Running zypper dup when the repos didn't change doesn't trigger the problem.
So how can it be a race condition when it happens every time?
It is completely reproducible 8-<
Comment 41 Michael Hirmke 2019-12-05 19:24:05 UTC
(In reply to Michael Hirmke from comment #40)
> I found out that I don't even have to install anything.
> It is enough to run zypper dup. Whenever the repos are updated, the problem
> occurs - in every single case. Running zypper dup when the repos didn't
> change doesn't trigger the problem.
> So how can it be a race condition when it happens every time?
> It is completely reproducible 8-<

Uh, I see - Lennart has postponed the fix again.
Ok, I'll wait.

*** This bug has been marked as a duplicate of bug 1137373 ***
Comment 42 Philipp Reichmuth 2021-01-03 08:24:41 UTC
For the record: since yesterday's update to TW 20210101, after running zypper dup and rebooting, /opt is gone. I also saw a "Failed unmounting /var" message during bootup. After another reboot, /opt is back.