Converting a Proxmox VE ZFS system pool from no redundancy to a mirror

When I first installed PVE, I used a single SSD as a ZFS pool with no redundancy. Later I put a 1TB MX500 into my NAS as a fast cache, which left the retired Kioxia TC10 500G with nowhere to go. Since the PVE server's system disk had no redundancy, I decided to throw the whole 500G drive at it.

The existing ZFS configuration:

root@vServer:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:00:18 with 0 errors on Sun Nov 14 00:24:19 2021
config:

    NAME                                                 STATE     READ WRITE CKSUM
    rpool                                                ONLINE       0     0     0
      ata-KINGSTON_SV300S37A120G_50026B7754003826-part3  ONLINE       0     0     0


The original disk is a 120G Kingston SV300 (model SV300S37A120G), so mirroring the 500G against it basically means using the 500G as a 120G disk. I'll swap the small drive for another 500G when I get the chance.
The 500G disk comes with no partitions, so we need to partition it first.

First, check the partition layout on the original 120G disk:

Disk /dev/sda: 111.79 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SV300S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5F1C3C61-880D-4319-ABAB-5473758015E5

Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   1050623   1048576   512M EFI System
/dev/sda3  1050624 234441614 233390991 111.3G Solaris /usr & Apple ZFS


You can simply recreate the same layout with fdisk; just remember to set each partition's type to match.

Below is the finished layout. If you don't use legacy (BIOS) boot, you can skip the BIOS boot partition:

Disk /dev/sdc: 447.13 GiB, 480103981056 bytes, 937703088 sectors
Disk model: KIOXIA-EXCERIA S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B6DA0448-B8A1-FD49-8222-D19331722BCF

Device       Start       End   Sectors   Size Type
/dev/sdc2     2048   1050623   1048576   512M EFI System
/dev/sdc3  1050624 937703054 936652431 446.6G Solaris /usr & Apple ZFS
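Instead of typing the partitions by hand in fdisk, the old disk's layout can also be cloned with sgdisk. This is only a sketch: it assumes the gdisk package is installed and that /dev/sda is the old disk and /dev/sdc the new one — double-check your device names before running anything.

```shell
# Copy the GPT partition table from the old disk onto the new one
sgdisk /dev/sda -R /dev/sdc
# Randomize the new disk's GUIDs so the two disks don't collide
sgdisk -G /dev/sdc
# The cloned part3 keeps the old 111.3G size; recreate it to fill the disk
# (BF01 is sgdisk's type code for "Solaris /usr & Apple ZFS")
sgdisk -d 3 -n 3:0:0 -t 3:BF01 /dev/sdc
```

Either way, verify the result with fdisk -l /dev/sdc before attaching anything to the pool.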

Then look up the corresponding disk IDs:

root@vServer:~# ls /dev/disk/by-id
ata-KINGSTON_SV300S37A120G_50026B7754003826
ata-KINGSTON_SV300S37A120G_50026B7754003826-part1
ata-KINGSTON_SV300S37A120G_50026B7754003826-part2
ata-KINGSTON_SV300S37A120G_50026B7754003826-part3
ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93
ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93-part2
ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93-part3


The -partN suffix after the disk name corresponds to the partition number shown in fdisk.

Now attach the newly created part3 to the ZFS pool; the pool will begin resilvering:

zpool attach rpool ata-KINGSTON_SV300S37A120G_50026B7754003826-part3 ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93-part3
root@vServer:~# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 20 18:52:34 2021
    2.82G scanned at 481M/s, 510M issued at 85.0M/s, 2.82G total
    549M resilvered, 17.65% done, 00:00:27 to go
config:

    NAME                                                   STATE     READ WRITE CKSUM
    rpool                                                  ONLINE       0     0     0
      mirror-0                                             ONLINE       0     0     0
        ata-KINGSTON_SV300S37A120G_50026B7754003826-part3  ONLINE       0     0     0
        ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93-part3     ONLINE       0     0     0  (resilvering)


At this point ZFS redundancy is basically working, but the 500G disk carries no bootloader: if the 120G drive died completely, the system would not boot after a restart. So we need to install the bootloader onto the 500G disk as well.

I'm booting via UEFI here; for UEFI boot you should use proxmox-boot-tool.

For details, see the official wiki: https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool

We already created the EFI partition earlier as /dev/sdc2.

First, format that EFI partition:

root@vServer:/boot/grub# proxmox-boot-tool format /dev/sdc2
UUID="" SIZE="536870912" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdc" MOUNTPOINT=""
Formatting '/dev/sdc2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.


Once it reports Done, install the UEFI bootloader into sdc2:

root@vServer:/boot/grub# proxmox-boot-tool init /dev/sdc2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="9EE8-66BA" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdc" MOUNTPOINT=""
Mounting '/dev/sdc2' on '/var/tmp/espmounts/9EE8-66BA'.
Installing systemd-boot..
Created "/var/tmp/espmounts/9EE8-66BA/EFI/systemd".
Created "/var/tmp/espmounts/9EE8-66BA/EFI/BOOT".
Created "/var/tmp/espmounts/9EE8-66BA/loader".
Created "/var/tmp/espmounts/9EE8-66BA/loader/entries".
Created "/var/tmp/espmounts/9EE8-66BA/EFI/Linux".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/9EE8-66BA/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/9EE8-66BA/EFI/BOOT/BOOTX64.EFI".
Random seed file /var/tmp/espmounts/9EE8-66BA/loader/random-seed successfully written (512 bytes).
Successfully initialized system token in EFI variable with 512 bytes.
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/sdc2'.
Adding '/dev/sdc2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Copying and configuring kernels on /dev/disk/by-uuid/794B-2A48
    Copying kernel and creating boot-entry for 5.11.22-5-pve
    Copying kernel and creating boot-entry for 5.4.143-1-pve
Copying and configuring kernels on /dev/disk/by-uuid/9EE8-66BA
    Copying kernel and creating boot-entry for 5.11.22-5-pve
    Copying kernel and creating boot-entry for 5.4.143-1-pve

Before installing the bootloader, there should be only one boot entry by default:

root@vServer:/boot/grub# proxmox-boot-tool status 
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
794B-2A48 is configured with: uefi (versions: 5.11.22-5-pve, 5.4.143-1-pve)

After installation it should look similar to this:

root@vServer:/boot/grub# proxmox-boot-tool status 
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
794B-2A48 is configured with: uefi (versions: 5.11.22-5-pve, 5.4.143-1-pve)
9EE8-66BA is configured with: uefi (versions: 5.11.22-5-pve, 5.4.143-1-pve)
root@vServer:/boot/grub# 


At this point both drives should, in theory, be bootable. To actually verify it, you could pull one drive and see whether the machine still boots, or test the new drive in another computer. I was too lazy to test: pulling a disk would presumably degrade the pool, and figuring out recovery would take a while...
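Short of physically pulling a drive, the mirror can also be exercised in place. A sketch, using the old disk's partition name from the pool status above — run this only when you can tolerate a temporarily degraded pool:

```shell
# Simulate a failure of the old disk's ZFS partition
zpool offline rpool ata-KINGSTON_SV300S37A120G_50026B7754003826-part3
zpool status   # the pool should show DEGRADED but remain usable
# Bring it back; ZFS resilvers whatever writes it missed
zpool online rpool ata-KINGSTON_SV300S37A120G_50026B7754003826-part3
# And confirm both ESPs are registered for booting
proxmox-boot-tool status
efibootmgr -v   # lists the firmware's UEFI boot entries
```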


  1. mjiang99 says:
    Great tutorial. The problem I hit was this error: pve status: One or more devices could not be used because the label ... I'm a beginner and dug through a lot of material; in the end I found I had to zpool detach first, and only then could the mirror be rebuilt.
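If attach fails with the label error the commenter describes, the partition probably still carries stale ZFS metadata. One recovery sketch, using the same device names as above (labelclear is destructive to whatever old pool metadata is on that partition):

```shell
# If a previous attempt left the device half-attached, detach it first
zpool detach rpool ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93-part3
# If attach still complains about an existing label, wipe the old label
zpool labelclear -f /dev/disk/by-id/ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93-part3
# Then retry the attach
zpool attach rpool ata-KINGSTON_SV300S37A120G_50026B7754003826-part3 ata-KIOXIA-EXCERIA_SATA_SSD_61IB837QKA93-part3
```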
