Here's how I am setting up a backup system on FreeBSD.

Goals

The backup system has the following description:

Behind the Scenes

I found it a challenge to figure out what I needed to do. Documentation helped a lot, and the documentation I found was good, but I still found it difficult to apply to my context.

Documentation

The FreeBSD documentation (the Handbook and the man pages) was my primary resource and helped me the most.

Reading Absolute FreeBSD, 3rd edition, especially chapters 10 (Disks, Partitioning, and GEOM) and 12 (The Z File System), was also helpful.

I had read Absolute FreeBSD first, and could not understand how I would be able to create a pool without an existing pool or dataset. Reading the ZFS section of the handbook cleared that up.

In retrospect, I wish I had read the following resources in the following order before attempting creation of the mirrored storage system:

  1. FreeBSD Handbook Chapter 18 - Storage including sections on Adding Disks, USB Storage Devices, Backup Basics
  2. FreeBSD Handbook Chapter 20 - The Z File System (pretty much everything in it)
  3. zpool man page
  4. Absolute FreeBSD, chapters 10 and 12, for bits and pieces to fill in gaps in handbook coverage

Resources that I did not find useful included:

Setup

Without mentioning everything I learned through all the wrong turns and assumptions I made, here is how I set up the system:

  1. choose features
  2. obtain root access
  3. configure for disk drive sector size
  4. ensure ZFS enabled on target computer
  5. connect disk drives to computer
  6. determine names of devices associated with external disk drives
  7. choose a name for the ZFS pool to create
  8. create the mirror (pool and dataset)
  9. enable compression on the mirror

The following sections provide details on these steps.

In addition, I describe how I brought the mirror online after rebooting.

Choose Features

The handbook section 20.6.2.1, ZFS on i386, while it applies to the 32-bit Intel platform, has recommendations on system memory that also apply to the AMD64 platform: 1 GB of RAM for every 1 TB of storage, and, if the deduplication feature is used, "... 5 GB RAM per TB storage to be deduplicated."

I'll plan on not using the deduplication feature.

I plan to run backups at times when the computer isn't being used for other resource-intensive tasks, so the 16 GB RAM I have should be sufficient.
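As a rough check against that guideline: the mirror described below holds two 4 TB drives, giving 4 TB of usable storage, so ZFS should want roughly 4 GB of RAM, comfortably within the 16 GB available.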

Obtain Root Access

You'll need root access to use the zfs and zpool command options that modify ZFS. Non-modifying queries can be run with the ZFS commands as a non-privileged user.

Configure for Disk Drive Sector Size

Absolute FreeBSD cautions the reader to know the sector size of the drives, stating that performance decreases if the filesystem acts as if the sector size is 512 bytes when the actual sector size is 4 KB. The WD Red Pro 6 TB Review says the native sector size is 4 KB (I could not find the native sector size on the Western Digital web site). Based on that, run sysctl vfs.zfs.min_auto_ashift=12 and set this value in /etc/sysctl.conf so it persists across reboots. See the ZFS and Disk Block Size subsection of the Managing Pools section of Chapter 12 of Absolute FreeBSD, and the 20.6.1 Tuning section of the FreeBSD Handbook for the description of the vfs.zfs.min_auto_ashift tunable.
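Concretely, that's one command for the running system and one line in the config file (a minimal sketch; ashift=12 means 2^12 = 4096-byte sectors):

# apply immediately to the running system
sysctl vfs.zfs.min_auto_ashift=12

# persist the setting across reboots
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf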

Ensure ZFS Enabled on Target Computer

I had run zfs list as a regular user and received the response internal error: failed to initialize ZFS library. This was likely because ZFS was not enabled/running (I didn't verify this at the time).

Section 20.2, Quick Start Guide, in the FreeBSD Handbook says to add zfs_enable="YES" to /etc/rc.conf and then run service zfs start. After I added the line to /etc/rc.conf, service -e showed the zfs service as enabled without my having to run service zfs start. After editing the file to remove the line I had just added, service -e no longer showed the zfs service as enabled. The man page for service says -e "List services that are enabled."; reading the rest of that entry, it appears the system reads the rc script files to determine what's enabled, not necessarily what's running.
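For reference, a minimal sketch of the Quick Start steps (sysrc edits /etc/rc.conf for you, avoiding a hand edit; the grep just filters the service -e listing):

# add zfs_enable="YES" to /etc/rc.conf
sysrc zfs_enable="YES"

# start ZFS now, without rebooting
service zfs start

# confirm the zfs service shows up as enabled
service -e | grep zfs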

Connect Disk Drives to Computer

I used two 4 TB Western Digital Red Pro hard drives, each mounted in a Sabrent EC-DFFN USB 3.0 docking station plugged into a USB 3.0 port on my Intel NUC, which uses UFS on its main SSD.

Connect the docking stations to power and turn on their power switches.

Determine Names of Devices Associated with External Disk Drives

Run camcontrol devlist; my devices are at da0 and da1.
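The output looks something like the following (a sketch, not my actual output; the device strings are hypothetical, and what matters is the names in parentheses):

# camcontrol devlist
<Samsung SSD 860 EVO 500GB RVT02B6Q>  at scbus0 target 0 lun 0 (ada0,pass0)
<WDC WD4003FFBX-68MU3N0 0103>         at scbus1 target 0 lun 0 (da0,pass1)
<WDC WD4003FFBX-68MU3N0 0103>         at scbus2 target 0 lun 0 (da1,pass2)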

Other commands that could disclose the device names include:

Choose Name for ZFS Pool to Create

The zpool man page describes constraints involving naming pools.

The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), and period ("."). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are names beginning with the pattern "c[0-9]"

In addition, the zpool man page says:

Unless the -R option is specified, the default mount point is "/pool". The mount point must not exist or must be empty, or else the root dataset cannot be mounted. This can be overridden with the -m option.

I chose backups as the name of my pool. The root directory of the computer has no backups subdirectory.
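A quick way to confirm the default mount point is available (an optional check; the error is the desired outcome here):

# ls -ld /backups
ls: /backups: No such file or directory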

Create the Mirror (Pool and Dataset)

I chose to use entire drives without partitioning, and followed the instructions in Handbook section 20.3.1, Creating and Destroying Storage Pools.

I ran this command to set up a mirror pool for my backup:

zpool create backups mirror /dev/da0 /dev/da1

Running zpool status produced the following output:

  pool: backups
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        backups     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0

errors: No known data errors

Then, running df shows:

Filesystem     1K-blocks      Used      Avail Capacity  Mounted on
/dev/ada0p2     10143484    369884    8962124     4%    /
devfs                  1         1          0   100%    /dev
/dev/ada0p4     20307196   5857840   12824784    31%    /var
/dev/ada0p5     20307196     32828   18649796     0%    /tmp
/dev/ada0p6    101556508  19122664   74309324    20%    /usr
/dev/ada0p7    288001412 142944936  122016364    54%    /home
...
backups       3770679044        96 3770678948     0%    /backups

The backups pool is already mounted at the path I intended, based on the pool name. In addition, there's already a dataset for it:

# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
backups   288K  3.51T    96K  /backups

So, I didn't create a dataset by running a subsequent command.
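If you do want child datasets under the pool (say, one per machine being backed up), the command is a sketch like the following; the laptop name and the numbers in the output are hypothetical:

# zfs create backups/laptop
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
backups          432K  3.51T    96K  /backups
backups/laptop    96K  3.51T    96K  /backups/laptop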

If You Want to Use Partitions

I didn't do this, so the instructions here are of questionable value; the references to documentation should still be relevant enough to get you started.

The section on ZFS pool creation in Absolute FreeBSD says to use GPT-labeled partitions.

The section of Absolute FreeBSD named The MBR Partitioning Scheme says that disks larger than 2 TB must use GPT partitioning, not MBR partitioning.

Handbook section 19.3, RAID1 - Mirroring, in subsection 19.3.1, Metadata Issues, says: Many disk systems store metadata at the end of each disk. Old metadata should be erased before reusing the disk for a mirror. Most problems are caused by two particular types of leftover metadata: GPT partition tables and old metadata from a previous mirror.

The following set of commands creates a GPT partitioning scheme for each disk. The -s flag specifies the type of partitioning scheme (GPT).

  1. gpart create -s gpt da0
  2. gpart create -s gpt da1

The following commands add the only partition on each disk. The -t flag specifies the partition type (ZFS); the -a flag aligns the start of the partition on a 1 MB boundary, which ensures the filesystem starts at the beginning of a 4 KB sector; the -l flag assigns a GPT label to the partition (give each disk a distinct label so the entries under /dev/gpt don't collide). The absence of a size argument means the partition uses all the space remaining on the disk. A sketch of creating the pool from these labels follows the list.

  1. gpart add -t freebsd-zfs -a 1m -l backup0 da0
  2. gpart add -t freebsd-zfs -a 1m -l backup1 da1
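With the partitions in place, the pool would be created from the GPT labels rather than the raw devices (a sketch, assuming the hypothetical backup0 and backup1 labels used above):

zpool create backups mirror /dev/gpt/backup0 /dev/gpt/backup1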

Enable Compression on the Mirror

I ran zfs set compression=on backups to enable default compression on my mirror (compression is a dataset property, so it's set with the zfs command rather than zpool). Default compression is lz4, since the lz4_compress pool feature is enabled. I ran zfs get compression backups to verify that ZFS accepted my previous command and that my mirror would be compressed.
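The set/get pair looks like this; the zfs get output below is what a successful setting should show:

# zfs set compression=on backups
# zfs get compression backups
NAME     PROPERTY     VALUE     SOURCE
backups  compression  on        local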

Reconnecting Mirror After Reboot

After rebooting for an unrelated reason, ZFS was still enabled, but running zfs list resulted in:

no datasets available

and running zpool status resulted in:

  pool: backups
 state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        backups                  UNAVAIL      0     0     0
          mirror-0               UNAVAIL      0     0     0
            213899940312296193   UNAVAIL      0     0     0  was /dev/da0
            1324504314508520867  UNAVAIL      0     0     0  was /dev/da1

Nice how the command suggests how to resolve the problem.

I applied power to the external drive associated with /dev/da0, and after a while reran zpool status, which output:

  pool: backups
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 0 days 00:00:00 with 0 errors on Thu Dec 31 09:22:18 2020
config:

        NAME                     STATE     READ WRITE CKSUM
        backups                  DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            da0                  ONLINE       0     0     0
            1324504314508520867  UNAVAIL      0     0     0  was /dev/da1

errors: No known data errors

I applied power to the other docking station, containing the second disk, and ran zpool status. I got the same result; ZFS didn't recognize the second disk. I wondered whether ZFS needed some time to recognize it, so I waited and retried, but got the same result. Then I ran zpool online backups da1, which generated no output (following the convention of generating no output for a successful result). I then reran zpool status, which resulted in:

  pool: backups
 state: ONLINE
  scan: resilvered 36K in 0 days 00:00:00 with 0 errors on Sat Jan 2 09:43:06 2021
config:

        NAME        STATE     READ WRITE CKSUM
        backups     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0

errors: No known data errors

Using the zpool online command doesn't make the mirror show up in the output of the mount command. Run zfs mount backups to make it appear in the output of mount. Run zfs umount backups to remove the mount, as shown by its absence from the output of the mount and df commands.

Even after running zfs umount, the directory associated with the pool still exists.
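For next time, the whole post-reboot sequence condenses to a few commands (a sketch assembled from the steps above):

# power on both docking stations, then:
zpool status                # check pool state; may show DEGRADED or UNAVAIL
zpool online backups da1    # bring back any disk ZFS didn't pick up on its own
zfs mount backups           # mount the root dataset at /backups
# ... perform backups ...
zfs umount backups          # unmount when done; the /backups directory remains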

