zpool expand destroys whole pool if old zpool signature exists #16144
Comments
I have just seen that it wasn't the old data from HDD 16; HDDs 1-15 were used in a pool before.
I have signatures from 3 zpools:
Is it possible to wipe the wrong labels from the VDEVs?
It is possible to wipe them, but I wouldn't, because I don't think that is your problem. Offhand, I would suggest you show the lvs output, not the pvs output.
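For reference, a minimal sketch of how one could inspect and clear stale labels; the LV path /dev/VG1/ZFS01 is an assumption based on the names in this report, and labelclear should only ever be pointed at a device that is not part of a live pool:

```sh
# Read-only: dump all four ZFS label copies on the device.
zdb -l /dev/VG1/ZFS01

# Read-only: list the signatures that wipefs can detect.
wipefs -n /dev/VG1/ZFS01

# Destructive: clear the ZFS labels from a device NOT belonging to a live pool.
zpool labelclear -f /dev/VG1/ZFS01
```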
I resized back to 1 TiB; zdb now shows a single top_guid, and the pool could be imported.
I would imagine that what you would probably like to do, if you want to remove whatever was there, is something like trimming the "unused" space underneath LVM. The simplest way to do that might be to configure the rest of the space in each PV into a new LV, and then zero the whole thing (see the sketch after this comment). (I think you can play a similar game of not exposing the underlying contents until first allocation with thin volumes on LVM, but that might be too much hassle to manage.) My idea of what's happening here is basically this:
1. ZFS keeps four copies of its label on every vdev, two at the front of the device and two at the back.
2. When the underlying device grows, the back of the device moves, so whatever happens to be sitting at the new end-of-device locations is where ZFS will now look for labels.
3. On expansion, ZFS closes the device and reopens it at its new size.
4. It then reads the labels at the expected locations on the resized device and checks that they all describe the same pool.
Unfortunately, in your case, it seems like what happened is that it did step 3, and then at step 4 it noticed valid, non-destroyed labels for two different pools at the two correct locations on the partition, and promptly errored out. One could imagine triggering something like labelclear on the spots for labels 2 and 3 before doing the close-then-reopen dance, but this situation is so uncommon that I'm worried such a change would cover up some case that is genuinely broken. (I also don't really suggest using LVM under ZFS, but that's a different discussion.)
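A minimal sketch of the zero-the-free-space idea above, assuming the VG is named VG1 as in this report (the scratch LV name is made up):

```sh
# Claim all remaining free space in the VG with a scratch LV.
lvcreate -n scratch -l 100%FREE VG1

# Zero it; blkdiscard -z writes zeroes, with dd as the fallback for
# devices where that is not supported.
blkdiscard -z /dev/VG1/scratch ||
  dd if=/dev/zero of=/dev/VG1/scratch bs=16M oflag=direct status=progress

# Return the space to the VG.
lvremove -y VG1/scratch
```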
@behlendorf ZFS should ignore (and refresh?) wrong top/pool GUIDs when reopening VDEVs if there are multiple labels. I created a 2 TiB spare LV, and LVM wiped the ZFS signatures while creating it. There are two facts about LVM:
So I have to zero 32 TiB to expand ZFS.
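Assuming the signature wiping seen here was lvcreate's, it can also be requested explicitly; a sketch with a made-up LV name:

```sh
# -Z y zeroes the start of the new LV; -W y wipes any signatures
# lvcreate detects there (--yes skips the prompts).
lvcreate -n spare -L 2T -Z y -W y --yes VG1
```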
System information
Describe the problem you're observing
I use LVM for the VDEVs.
I reused an HDD on which an old zpool with the same name had previously lived.
I moved the VDEV with pvmove from a weak HDD to this already-used HDD.
After expanding the LVs from 1 TiB to 3 TiB, the whole pool is unavailable (see the sketch below).
I think the old zpool label was found either while expanding or when the used disk was inserted.
On the used disk, the old zpool was on a partition.
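A sketch of the sequence above; the disk names /dev/sdOLD and /dev/sdNEW and the pool name tank are placeholders, not taken from this report:

```sh
# Move the PV's extents from the weak disk to the reused disk.
pvmove /dev/sdOLD /dev/sdNEW

# Grow one backing LV from 1 TiB to 3 TiB (repeated for ZFS01..ZFS16).
lvextend -L 3T /dev/VG1/ZFS01

# Make ZFS use the new space (or set autoexpand=on on the pool).
zpool online -e tank /dev/VG1/ZFS01
```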
While initializing LVM on the used disk, it wiped some of the old signatures:
After expanding all the LVs/VDEVs, I got this:
But neither zpool clear nor a reboot helped.
I renamed /dev/VG1/ZFS01-16 to ZFS_01-16 and tried an import:
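The rename-and-reimport attempt might look like this (one LV shown; the renaming applies to ZFS01 through ZFS16, and the actual output is omitted above):

```sh
# Rename a backing LV so any stale cache entries no longer match.
lvrename VG1 ZFS01 ZFS_01

# Scan the VG's device directory for importable pools.
zpool import -d /dev/VG1
```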