Full Disk Encryption with Proxmox and ZFS

I’ve recently bought a MicroServer Gen 10+ from HP to use as a lab and also as a NAS. I wanted to install Proxmox on it and also use the opportunity to discover ZFS.

In the server, I’ve got:

  • 4x 4TB HDD for data storage
  • 2x 256GB NVMes for the root system and the ZFS cache/log volumes

I used the two NVMes in a standard software RAID 1 and created a RAIDZ2 on the 4 HDDs.

Install Proxmox #

I started by installing a classic Debian on the two NVMes. Using the installer, I could easily set up the following disk layout:

sda                           8:0    0   3.7T  0 disk  
sdb                           8:16   0   3.7T  0 disk  
sdc                           8:32   0   3.7T  0 disk  
sdd                           8:48   0   3.7T  0 disk  
nvme0n1                     259:0    0 232.9G  0 disk  
|-nvme0n1p1                 259:1    0   953M  0 part  
| `-md0                       9:0    0   952M  0 raid1 /boot
|-nvme0n1p2                 259:2    0   954M  0 part  /boot/efi
`-nvme0n1p3                 259:3    0   231G  0 part  
  `-md2                       9:2    0 230.9G  0 raid1 
    `-md2_crypt             253:0    0 230.9G  0 crypt 
      |-vgbender-swap       253:1    0  14.9G  0 lvm   [SWAP]
      `-vgbender-root       253:2    0   100G  0 lvm   /
nvme1n1                     259:4    0 232.9G  0 disk  
|-nvme1n1p1                 259:5    0   953M  0 part  
| `-md0                       9:0    0   952M  0 raid1 /boot
|-nvme1n1p2                 259:6    0   954M  0 part  
`-nvme1n1p3                 259:7    0   231G  0 part  
  `-md2                       9:2    0 230.9G  0 raid1 
    `-md2_crypt             253:0    0 230.9G  0 crypt 
      |-vgbender-swap       253:1    0  14.9G  0 lvm   [SWAP]
      `-vgbender-root       253:2    0   100G  0 lvm   /

As you can see, I only created a 100G logical volume for my root system. This is to make sure I have space left in the volume group to create the ZFS log and cache volumes later.
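For reference, here is roughly what this layout corresponds to if you were to build it by hand instead of through the installer (a sketch only, using the partition and volume group names from the output above; don’t run this blindly, it wipes the partitions):

# RAID1 over the third partition of each NVMe (md0 for /boot was created the same way)
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3

# LUKS on top of the RAID, then LVM inside the encrypted device
cryptsetup luksFormat -y /dev/md2
cryptsetup luksOpen /dev/md2 md2_crypt
pvcreate /dev/mapper/md2_crypt
vgcreate vgbender /dev/mapper/md2_crypt
lvcreate -L15G -n swap vgbender
lvcreate -L100G -n root vgbender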

Now, to install Proxmox on top of this system, I used this documentation. I won’t detail all the steps here, but it boils down to adding the Proxmox repository and doing a full-upgrade.
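For completeness, the gist of it looks roughly like this (a sketch assuming Debian Buster and Proxmox VE 6.x, which is what I was running; check the official documentation for the exact repository and key for your release):

# add the Proxmox VE no-subscription repository and its signing key
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

# upgrade the system and install Proxmox VE on top of Debian
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi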

ZFS #

Now that Proxmox is installed, it’s time to configure ZFS. I started by creating my encrypted volumes:

# my disks are sda, sdb, sdc, sdd
for s in a b c d; do cryptsetup luksFormat -y /dev/sd$s; done

And opened them:

for s in a b c d; do cryptsetup luksOpen /dev/sd$s sd${s}_crypt; done

Then, I added them to /etc/crypttab, first by getting the UUID for each of the disks:

root@bender:~# blkid | grep /dev/sd
/dev/sdb: UUID="73046cb0-e0cc-4040-9a00-dc98f4890745" TYPE="crypto_LUKS"
/dev/sdd: UUID="a397857f-74d6-4c80-876b-e0351a842f9b" TYPE="crypto_LUKS"
/dev/sdc: UUID="504b8860-0abb-43d4-b82a-811df672c3ae" TYPE="crypto_LUKS"
/dev/sda: UUID="25289490-38d1-4fda-9018-93329ddf4ebd" TYPE="crypto_LUKS"

And added them to /etc/crypttab:

root@bender:~# cat /etc/crypttab 
md2_crypt UUID=eaa3206c-6dbd-4338-a5ea-ff8d6fac7ee1 none luks,discard
sda_crypt UUID=25289490-38d1-4fda-9018-93329ddf4ebd none luks,discard
sdb_crypt UUID=73046cb0-e0cc-4040-9a00-dc98f4890745 none luks,discard
sdc_crypt UUID=504b8860-0abb-43d4-b82a-811df672c3ae none luks,discard
sdd_crypt UUID=a397857f-74d6-4c80-876b-e0351a842f9b none luks,discard
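
Rather than copy/pasting each UUID by hand, the entries can also be generated with a small loop (a sketch; review the output before appending it to /etc/crypttab):

# emit one crypttab line per encrypted disk
for s in a b c d; do
  uuid=$(blkid -s UUID -o value /dev/sd$s)
  echo "sd${s}_crypt UUID=$uuid none luks,discard"
done >> /etc/crypttab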

After a reboot to check that my configuration was opening the LUKS volumes as expected, I created the RAIDZ2 array:

zpool create -f -o ashift=12 zbender raidz2 /dev/mapper/sda_crypt /dev/mapper/sdb_crypt /dev/mapper/sdc_crypt /dev/mapper/sdd_crypt
  • zbender is the name of my zpool
  • ashift=12 sets the pool’s block allocation size to 2^12 = 4096 bytes; it should be equal to or larger than the physical sector size of the underlying disks (see the check below).
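
The physical sector size of the disks can be checked beforehand, for example with lsblk (4096 bytes here means ashift=12 is a safe choice):

lsblk -o NAME,PHY-SEC /dev/sd[a-d]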

The zpool is created; now it’s time to create a dataset for my VMs and get Proxmox to detect it:

zfs create zbender/vmdata
zfs set compression=on zbender/vmdata
pvesm zfsscan

Now, I needed to add the corresponding storage on https://proxmox:8006, in Datacenter > Storage > Add > ZFS. In the form, I used:

  • ID: vmdata
  • ZFS Pool: zbender/vmdata
  • Content: Disk image, Container
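
The same storage can also be registered from the command line instead of the web UI (a sketch using the names from above):

# add the zbender/vmdata dataset as ZFS storage for disk images and containers
pvesm add zfspool vmdata -pool zbender/vmdata -content images,rootdir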

And I was finally able to create my first VM.
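
For a quick test from the CLI, a minimal VM backed by the new storage can be created with qm; a sketch with placeholder values (VM ID 100, 2 GB of RAM, a 32 GB disk on vmdata, and the default vmbr0 bridge, assuming it is configured):

# create a VM with its disk allocated on the ZFS-backed vmdata storage
qm create 100 --name testvm --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci --scsi0 vmdata:32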

Adding NVMe-backed cache and log #

As I said earlier, I had kept some space on my NVMes for the ZFS log and cache volumes. I started by creating the two logical volumes:

lvcreate -L25G -n zfs-log vgbender                                
lvcreate -L75G -n zfs-cache vgbender

And then added them to my pool:

zpool add zbender log /dev/vgbender/zfs-log
zpool add zbender cache /dev/vgbender/zfs-cache

I can now see this configuration using zpool status:

root@bender:~# zpool status
  pool: zbender
 state: ONLINE
  scan: scrub repaired 0B in 0 days 03:12:50 with 0 errors on Sun Jan 10 03:36:52 2021
config:

        NAME           STATE     READ WRITE CKSUM
        zbender        ONLINE       0     0     0
          raidz2-0     ONLINE       0     0     0
            sda_crypt  ONLINE       0     0     0
            sdb_crypt  ONLINE       0     0     0
            sdc_crypt  ONLINE       0     0     0
            sdd_crypt  ONLINE       0     0     0
        logs
          zfs-log      ONLINE       0     0     0
        cache
          zfs-cache    ONLINE       0     0     0

Conclusion #

My home server is now fully disk-encrypted, with ZFS and Proxmox to experiment with! I’ve since created multiple subvolumes for my backups and I must say I like the experience so far. I still have a lot to learn about ZFS, though, which I will probably write about as I go.

This is day 14/100 of #100DaysToOffLoad!