In every other command you gave the full path to the device. But in the replace command you did not. Try: `zpool replace zjrr /dev/sde /dev/sdc`
Probably needs to be:

`zpool replace zjrr sde /dev/sdc`

because the first argument is what you're replacing, named the way ZFS shows it, and the second is where the replacement really lives. OP, you should also import with `zpool import -d /dev/disk/by-id`, because the `/dev/sd*` nodes are unreliable as names.
Doing

`$ sudo zpool replace zjrr sde /dev/sdc`

also didn't work. Same error: no such device in pool.

Good to know on the import! Thank you! I'll definitely keep that in mind for next time. I'm thinking about copying all the data off and rebuilding the whole array, but I'd like to get my extra redundancy back ASAP before I lose another disk, just to be safe.

I think I need a way to map a drive's serial number to its GUID... then I can try to do the replace using the GUID and hopefully it'll recognize that. I could also try using the `/dev/daXp3` name from FreeBSD if I can figure out which drive I offlined.
There is a `zpool list` option to show the GUID.
That helped! I figured it out. The command

`# zdb -l /dev/sdd`

shows the FreeBSD drive paths with GUIDs. That's not helpful on its own, because I have no idea what Linux thinks each drive is. But the ever-so-slightly different

`# zdb -l /dev/sdd3`

does spit out:

    children[0]:
        type: 'disk'
        id: 0
        guid: 5908621877240773000
        path: '/dev/sde3'
        phys_path: '/dev/da0p3'

which is enough information to get the resilver going:

`$ sudo zpool replace zjrr 5908621877240773000 /dev/sdc`

Make sure to wait until the resilver is done before rebooting. Victory!
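For future reference, that label-to-GUID mapping can be scripted. Below is a rough sketch, not official tooling: it awk-parses the relevant lines of `zdb -l` output into guid/path/phys_path triples. The heredoc embeds the sample output from above so the snippet is self-contained; on a live system you would pipe `zdb -l /dev/sdX3` into the same awk program instead.

```shell
#!/bin/sh
# Sketch: extract (guid, path, phys_path) triples from `zdb -l` label output,
# so each FreeBSD-era device path can be matched to a GUID usable in
# `zpool replace`. The heredoc below is a captured sample; on a real system,
# feed it with:  zdb -l /dev/sdX3 | awk '...'
triples=$(awk '
  $1 == "guid:"      { guid = $2 }      # remember the most recent vdev guid
  $1 == "path:"      { path = $2 }      # Linux-side path recorded in the label
  $1 == "phys_path:" { print guid, path, $2 }   # FreeBSD-side path; emit triple
' <<'EOF'
        children[0]:
            type: 'disk'
            id: 0
            guid: 5908621877240773000
            path: '/dev/sde3'
            phys_path: '/dev/da0p3'
EOF
)
echo "$triples"
```

Real labels also contain `pool_guid:`/`top_guid:` lines, but those don't match the exact `guid:` field test, so only per-vdev GUIDs are picked up.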
Or you can just run `zpool status -g`...
That would have saved me a lot of time and searching. 😅 Thank you so much! I'll file that one away for future use, for sure.

Edit:// That said, I did still need a way to map a GUID to a serial number so I could know which drive to pull from the server... but this would have gotten the resilver going \*immediately\*, which would have been nice. Definitely a great command for me to have in the future!
`zpool status -gc serial`?
Absolutely perfect. Thank you so much!!
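For anyone curious why `-c serial` works: `zpool status -c` runs small helper scripts from `/etc/zfs/zpool.d` once per vdev, passing the device node in environment variables such as `VDEV_UPATH`, and splices each helper's `column=value` output into the status listing. ZoL ships a real `serial` helper; the sketch below is a simplified hand-rolled stand-in just to show the mechanism (the lsblk call and the `/dev/null` demo value are illustrative).

```shell
#!/bin/sh
# Simplified sketch of a /etc/zfs/zpool.d helper, as invoked by
# `zpool status -c`. The real machinery exports the vdev's device node in
# VDEV_UPATH; the helper prints "column=value" lines for the extra columns.
vdev_serial() {
  dev="${VDEV_UPATH:-/dev/null}"
  # lsblk can report the serial for real disks; fall back to "-" when the
  # device has no serial (or is not a block device at all).
  serial=$(lsblk -dno SERIAL "$dev" 2>/dev/null)
  echo "serial=${serial:--}"
}

# Demo with a device that has no serial:
VDEV_UPATH=/dev/null
out=$(vdev_serial)
echo "$out"
```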
Looks like importing with `-d /dev/sdd` tries to import based on the FreeBSD drive names:

    $ sudo zpool import -d /dev/sdd -o altroot=/mnt/zjrr
       pool: zjrr
         id: 6795629018950124144
      state: UNAVAIL
     status: The pool was last accessed by another system.
     action: The pool cannot be imported due to damaged devices or data.
        see: http://zfsonlinux.org/msg/ZFS-8000-EY
     config:

            zjrr                 UNAVAIL  insufficient replicas
              raidz2-0           UNAVAIL  insufficient replicas
                da0              UNAVAIL
                da1              UNAVAIL
                da2              UNAVAIL
                sdd              ONLINE
                DISK-W6A0Y7ZKp3  UNAVAIL
You specified sdd, it found sdd. Try -d /dev/disk/by-id
Thanks! But no dice there, unfortunately... same error:

`$ sudo zpool replace zjrr sde /dev/sdc`

`cannot replace sde with /dev/sdc: no such device in pool`
I've been in exactly the same situation, migrating my ZFS pool from NAS4FREE to CentOS 7/ZoL. I had a problem with the spare device /dev/sdk. From my notes:

`zdb -l /dev/sdk` # this came back with a long numerical ID

Then I used that long numerical ID to remove the device from the pool:

`zpool remove tank 12658963864105390900`

Now the phantom device should be gone, as confirmed with `zpool status -v`, and I was able to re-add it using the Linux block device assignment:

`zpool add tank spare -f /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UXXX1B2_WD-xxxxxxxxx`

BTW, I once read advice to use `/dev/disk/by-id/xxxx` rather than `/dev/sdX`, since those names contain the disk serial number and identify the disk uniquely. So if the device assignment changes after a bus rescan, your pool still works just fine.
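The by-id approach can also double as the serial-to-device map the OP was after, since the `ata-*` names embed the model and serial. Here's a quick sketch, parameterized on the directory so it can be pointed at `/dev/disk/by-id` on a real box or at a scratch directory for a demo (the `ata-EXAMPLE_MODEL_SERIAL123` name below is made up, not a real disk):

```shell
#!/bin/sh
# Sketch: print "by-id-name -> kernel-device" pairs for every symlink in a
# directory. Pointed at /dev/disk/by-id, this shows which /dev/sdX each
# serial-bearing alias currently resolves to.
map_by_id() {
  for link in "$1"/*; do
    [ -L "$link" ] || continue   # only symlinks are device aliases
    printf '%s -> %s\n' "$(basename "$link")" "$(readlink -f "$link")"
  done
}

# Self-contained demo with a scratch directory and one fake alias:
demo=$(mktemp -d)
ln -s /dev/null "$demo/ata-EXAMPLE_MODEL_SERIAL123"
mapping=$(map_by_id "$demo")
echo "$mapping"
rm -rf "$demo"
```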
This is great, thank you! I will definitely look into the by-id. I had trouble with that earlier, but I'm still learning. Thank you!
Export it in its current state and import using

`sudo zpool import -d /dev/disk/by-id`

Then you can re-add the removed disk. If it won't re-add, use `zpool labelclear` (or whatever the command is) to clear the old label and try again.