zfs-mount.service is called too late on Debian/Jessie with ZFS root #4474
Comments
LGTM. I'll include this in the upcoming 0.6.5.6 package. Thanx. |
@arcenik what might the side effects be of always running this before systemd-remount-fs.service? Here's a link to the documentation for this service for reference: https://www.freedesktop.org/software/systemd/man/systemd-remount-fs.service.html |
A first side effect of this solution is that zfs-mount.service cannot write anything to the filesystem. But I don't think this is a problem, because errors, warnings and other messages should be sent to the logging system and not written to the / filesystem. A second side effect could be to reverse the problem: preventing the proper mount of /data if a zfs volume is mounted in /data/zfs (for example). A nice solution could be to have the zpool loaded before systemd-remount-fs and the zfs volumes (or zpools?) declared in /etc/fstab. |
We really should make zfs work in /etc/fstab. This is the only way it would work if the system is mixed with zfs and other fs. |
@behlendorf Perhaps we would want to have a zpool.target which would look into fstab and know which pools we need to import, rather than the current "import all pools we've scanned". |
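The idea of deriving the needed pools from fstab instead of importing everything can be sketched in a few lines of shell. This is purely illustrative (the sample fstab, dataset names, and pool names are made up, not part of any proposed patch): for `zfs`-typed entries, the pool is the portion of the dataset name before the first `/`.

```shell
# Build a made-up fstab-style file for the demonstration.
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-ABCD    /boot   ext4   defaults          0 2
rpool/ROOT/debian /       zfs    zfsutil,noatime   0 0
rpool/var         /var    zfs    zfsutil,noatime   0 0
tank/media        /data   zfs    zfsutil           0 0
EOF

# Field 3 is the filesystem type; for zfs entries, print the pool
# (the dataset name up to the first '/'), de-duplicated.
awk '$3 == "zfs" { split($1, a, "/"); print a[1] }' /tmp/fstab.sample | sort -u
```

This prints only the pools that fstab actually references (here `rpool` and `tank`), which is the set a hypothetical zpool.target would need to import.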
Right, there are definitely significant differences which will need to be carefully worked through. I didn't mean to imply it's going to be a 1-1 mapping, but it may provide some useful functionality for us to leverage. |
@tuxoko I think that adding a dependency from /etc/fstab to zpool is not a good idea. Another way is to have an autoimport property on each zpool (enabled by default and on older versions). |
Here are some examples for /etc/fstab. For a zfs root:
Mounting zpool and zfs volumes:
Of course, when a zpool is specified, all volumes are mounted. To mount only the root volume of a zpool you could use myzpool/ |
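The comment's original fstab examples did not survive the page scrape. As a hedged illustration of what such entries can look like today (the dataset and pool names are made up), ZFS filesystems can be listed in /etc/fstab using the `zfsutil` mount option understood by mount.zfs; the pool-level `myzpool/` syntax mentioned above was the author's proposal, not existing behavior:

```
# <dataset>   <mountpoint>  <type>  <options>        <dump> <pass>
rpool/ROOT    /             zfs     zfsutil,noatime  0      0
rpool/var     /var          zfs     zfsutil,noatime  0      0
```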
@arcenik, isn't /usr also missing in your picture? @behlendorf, doesn't the use of the overlay feature solve this? ...Or does the use of the overlay function create other problems? On my VirtualBox ZFS "/" test using the feature
Not sure if
@behlendorf would it be safer/better to always use |
* Include fix from openzfs/zfs#4474. * Include fix for #196
@azeemism /usr is not missing; it is not mounted because of canmount=off. Furthermore you can see that it is empty. |
I am a newbie at this, but this particular change to
The result is that I have confirmed that removing |
Yep, it indeed creates an ordering cycle. Depending on which unit systemd deletes to break the cycle, I get all sorts of whacky results on the system: from the system not auto-logging in (Kodi not starting) to the ZFS volumes not being mounted. I built a tool to analyze the cycles better: https://github.com/aktau/findcycles. With this I produced the following:

```
$ systemd-analyze verify default.target |&
    perl -lne 'print $1 if m{Found.*?on\s+([^/]+)}' |
    xargs --no-run-if-empty systemd-analyze dot > remount-cycle.dot
$ dot -Tsvg < remount-cycle.dot > remount-cycle.svg
$ <remount-cycle.dot grep -P 'digraph|\}|green' | findcycles | dot -Tsvg > cycles.svg
$ <remount-cycle.dot findcycles | dot -Tsvg > cycles2.svg
```

The results: the most concise one is probably cycles.svg. Curious, I printed out the units that looked suspicious:

```
$ systemctl cat systemd-remount-fs zfs-mount
# /lib/systemd/system/systemd-remount-fs.service
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.

[Unit]
Description=Remount Root and Kernel File Systems
Documentation=man:systemd-remount-fs.service(8)
Documentation=http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
DefaultDependencies=no
Conflicts=shutdown.target
After=systemd-fsck-root.service
Before=local-fs-pre.target local-fs.target shutdown.target
Wants=local-fs-pre.target
ConditionPathExists=/etc/fstab

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/lib/systemd/systemd-remount-fs

# /lib/systemd/system/zfs-mount.service
[Unit]
Description=Mount ZFS filesystems
DefaultDependencies=no
Wants=zfs-import-cache.service
Wants=zfs-import-scan.service
Requires=systemd-udev-settle.service
After=systemd-udev-settle.service
After=zfs-import-cache.service
After=zfs-import-scan.service
Before=local-fs.target
Before=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zfs mount -a
```

So that made me look for systemd-remount-fs.service, which led me to this thread. It would seem that adding this line provokes a bug on some systems. My system is an up-to-date Debian Stretch (9/testing). I don't have any encrypted devices, just a regular SSD with an ext4 and a ZFS partition, and an external drive that is entirely ZFS. If any more info is needed, I'll be glad to supply it. |
I think it's not good to force zfs as the root fs and leave everything else behind. The normal Debian dependencies are (zfsonlinux/pkg-zfs#205): After each update I have to edit zfs-mount.service to get a usable system. Perhaps you have to change zfs-import-scan.service, which can break other systems. It's like @behlendorf said: it's not easy to have all possibilities covered. |
I am having a similar issue where |
This should be resolved. The Root-on-ZFS HOWTO includes a work-around, and with 0.8.x's mount generator, this is correctly solved. I'm going to close this. If this is still an issue for someone, try the workaround of setting |
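For reference, enabling the 0.8.x mount generator involves letting ZED maintain a per-pool dataset cache that the generator reads at boot. The steps below are a sketch of the procedure on Debian-style systems; the ZED script path is distribution-specific and `rpool` is an illustrative pool name:

```
# Create the cache directory and an (initially empty) file per pool.
mkdir -p /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/rpool

# Enable the ZED script that keeps the cache up to date
# (this path is the Debian location; it varies by distribution).
ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d/
```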
Problem
While working on a script to install Debian on a ZFS root, I encountered a bug.
The script is here: https://github.com/arcenik/debian-zfs-root
On a Debian Jessie installation with a ZFS root, zfs-mount.service is called too late, so some files are created in /var before the mount, preventing the proper mount of rpool/var.
As you can see, /var is not mounted. And some content like /var/lib is missing.
Proposed solution
Here is a working solution: mount the rest of the zfs volumes while the root is still read-only.
```diff
--- zfs-mount.service.orig
+++ zfs-mount.service
@@ -8,6 +8,7 @@
 After=zfs-import-cache.service
 After=zfs-import-scan.service
 Before=local-fs.target
+Before=systemd-remount-fs.service
 
 [Service]
 Type=oneshot
```
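Since the shipped unit gets overwritten on every package update (a pain point noted elsewhere in this thread), the same ordering could also be applied via a systemd drop-in instead of patching the unit file directly. This is a sketch using systemd's standard override mechanism; the drop-in file name is illustrative:

```
# /etc/systemd/system/zfs-mount.service.d/before-remount.conf
[Unit]
Before=systemd-remount-fs.service
```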