Here's how I set up a backup system on FreeBSD. The sections below describe the system and the steps involved.
It was a challenge to figure out what I needed to do. I found good documentation, and it helped a lot, yet I still found it difficult to apply to my context.
The FreeBSD documentation (the Handbook and the man pages) was my primary resource and helped me the most.
Reading Absolute FreeBSD, 3rd edition, especially chapters 10 (Disks, Partitioning, and GEOM) and 12 (The Z File System) was also helpful.
I had read Absolute FreeBSD first and could not understand how I would be able to create a pool without an existing pool or dataset. The ZFS section of the Handbook cleared that up.
In retrospect, I wish I had read the following resources in the following order before attempting creation of the mirrored storage system:
Resources that I did not find useful included:
Without mentioning everything I learned through all the wrong turns and assumptions I made, here is how I set up the system:
The following sections provide details on these steps.
In addition, I describe how I brought the mirror online after rebooting.
Handbook section 20.6.2.1, ZFS on i386, although it applies to the 32-bit Intel platform, has recommendations on system memory that also apply to the AMD64 platform: 1 GB of RAM for every 1 TB of storage, and, if the deduplication feature is used, ... 5 GB RAM per TB of storage to be deduplicated.
I plan not to use the deduplication feature.
I plan to run backups at times when the computer isn't being used for other resource-intensive tasks, so the 16 GB RAM I have should be sufficient.
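The Handbook's rule of thumb can be worked out directly. This is just a sanity-check sketch; it assumes the usable capacity of a two-disk 4 TB mirror is 4 TB.

```shell
# Handbook rule of thumb: 1 GB RAM per 1 TB of storage (dedup off).
storage_tb=4                  # usable capacity of a two-disk 4 TB mirror
ram_gb=16                     # installed RAM
need_gb=$((storage_tb * 1))   # would be storage_tb * 5 if deduplicating
echo "need ${need_gb} GB RAM, have ${ram_gb} GB"
```

With deduplication off, 4 GB against 16 GB installed leaves plenty of headroom.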
You'll need root access to run the zfs and zpool subcommands that modify ZFS. Non-modifying queries can be run as a non-privileged user.
Absolute FreeBSD cautions the reader to know the sector size of the drives: performance decreases if the filesystem acts as if the sector size is 512 bytes when the actual sector size is 4 KB. The WD Red Pro 6 TB Review says the native sector size is 4 KB (I could not find the native sector size on the Western Digital web site). Based on that, run sysctl vfs.zfs.min_auto_ashift=12 and set this value in /etc/sysctl.conf. See the ZFS and Disk Block Size subsection of the Managing Pools section of Chapter 12 of Absolute FreeBSD, and the 20.6.1 Tuning section of the FreeBSD Handbook for the description of the vfs.zfs.min_auto_ashift tunable.
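Concretely, the two steps look like this (FreeBSD-specific; requires root):

```shell
# Set the minimum ashift so newly created vdevs use 4 KB (2^12) sectors.
sysctl vfs.zfs.min_auto_ashift=12

# Persist the setting across reboots.
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf
```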
I had run zfs list as a regular user and received the response internal error: failed to initialize ZFS library. This was likely because ZFS was not enabled and running (I didn't verify this at the time).
Section 20.2, Quick Start Guide, in the FreeBSD Handbook says to add zfs_enable="YES" to /etc/rc.conf and then run service zfs start. After I added the line to /etc/rc.conf, service -e showed the zfs service as enabled without me having to run service zfs start. After editing the file to remove the line, service -e no longer showed the zfs service as enabled. The man page for service says that -e lists services that are enabled; reading the rest of that entry, it appears the system reads the rc script files to determine what's enabled, not necessarily what's running.
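For reference, the enable-and-start sequence from the Quick Start Guide looks like this (run as root; using sysrc to edit /etc/rc.conf is my choice here, but editing the file by hand works too):

```shell
# Enable ZFS at boot (adds or updates zfs_enable in /etc/rc.conf).
sysrc zfs_enable="YES"

# Start the service now, without rebooting.
service zfs start

# Confirm it appears in the enabled-services list.
service -e | grep zfs
```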
I used two 4 TB Western Digital Red Pro hard drives, each mounted in a Sabrent EC-DFFN USB 3.0 docking station plugged into a USB 3.0 port on my Intel NUC, which uses UFS on its main SSD.
Connect docking stations to power, and turn on their power switches.
Run camcontrol devlist; my devices showed up as da0 and da1.
Other commands that could disclose the device names include:
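Commands along these lines can reveal the device names (FreeBSD-specific; which ones apply depends on your setup):

```shell
# CAM devices (SATA/SCSI/USB mass storage):
camcontrol devlist

# GEOM's view of the disks, including sector sizes:
geom disk list

# Kernel messages logged when the drives were attached:
dmesg | grep -i da0

# Device nodes present under /dev:
ls /dev/da*
```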
The zpool man page describes constraints involving naming pools.
In addition, the zpool man page says:
I chose backups as the name of my pool. The root directory of the computer has no backups subdirectory, so the pool's default mount point, /backups, would not conflict with an existing directory.
I chose to use the entire drives without partitioning, and followed the instructions in Handbook section 20.3.1, Creating and Destroying Storage Pools.
I ran this command to set up a mirror pool for my backup:
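Following Handbook 20.3.1, the create command for a two-disk mirror takes this shape, shown here with my pool name and device names (run as root; it will write over whatever is on da0 and da1):

```shell
# Create a mirrored pool named "backups" from the two whole disks.
zpool create backups mirror /dev/da0 /dev/da1
```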
Running zpool status created the following output:
Then, running df showed:
The backups pool is already mounted in the path I intended based on the pool name. In addition, there's already a dataset for it.
So, I didn't create a dataset by running a subsequent command.
I didn't do this, so the instructions here are of questionable value, but the references to the documentation should be enough to get you started.
The section on ZFS pool creation in Absolute FreeBSD says to use GPT-labeled partitions.
The section of Absolute FreeBSD named The MBR Partitioning Scheme says that disks larger than 2 TB must use GPT partitioning, not MBR partitioning.
Section 19.3, RAID1 - Mirroring, subsection 19.3.1, Metadata Issues, says: Many disk systems store metadata at the end of each disk. Old metadata should be erased before reusing the disk for a mirror. Most problems are caused by two particular types of leftover metadata: GPT partition tables and old metadata from a previous mirror.
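One way to erase leftover metadata before reuse is sketched below. I did not need to run this, since my disks were new; both commands are destructive, so check the device name twice.

```shell
# Remove any existing partition table from the disk (destructive).
gpart destroy -F da0

# Clear old ZFS labels left by a previous pool (destructive).
zpool labelclear -f /dev/da0
```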
The following commands create a GPT partitioning scheme on each disk. The -s flag specifies the type of partitioning scheme (GPT).
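Reconstructed from that description, assuming the same da0 and da1 device names (run as root):

```shell
# Create an empty GPT partition table on each disk.
gpart create -s gpt da0
gpart create -s gpt da1
```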
The following commands add the only partition on each disk. The -t flag specifies the partition type (ZFS), the -a flag aligns the start of the partition to a 1 MB boundary so that the filesystem starts at the beginning of a 4 KB sector, and the -l flag specifies the partition label. The absence of a size argument tells the command to use all space remaining on the disk.
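A sketch of those commands; the label names backup0 and backup1 are my own choice, and the labeled partitions would then appear under /dev/gpt/:

```shell
# One ZFS partition per disk, aligned to 1 MB, labeled for later reference.
gpart add -t freebsd-zfs -a 1m -l backup0 da0
gpart add -t freebsd-zfs -a 1m -l backup1 da1
```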
I ran zfs set compression=on backups to enable compression on my mirror (compression is a dataset property, so it is set with zfs rather than zpool). The default compression algorithm is lz4, since the lz4_compress feature is enabled. I ran zfs get compression backups to verify that ZFS accepted my previous command and that my mirror would be compressed.
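Since compression is a dataset property rather than a pool property, the zfs(8) forms are the ones to use:

```shell
# Enable compression on the top-level dataset (inherited by children).
zfs set compression=on backups

# Verify; with the lz4_compress pool feature active, "on" selects lz4.
zfs get compression backups
```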
After I rebooted for an unrelated reason, ZFS was still enabled. Running zfs list resulted in:
and running zpool status resulted in:
Nice how the command suggests how to resolve the problem.
I applied power to the external drive associated with /dev/da0, reran zfs list after a while, and got this output:
I applied power to the other docking station containing the second disk and ran zpool status. I got the same result; ZFS didn't recognize the second disk. I wondered if ZFS needed some time to recognize it, so I waited and retried, but got the same result. Then I ran zpool online backups da1, which generated no output (following the convention of producing no output on success). I then reran zpool status, which resulted in:
Using the zpool online command doesn't make the mirror show up in the output of the mount command; run zfs mount backups for that. Run zfs umount backups to unmount it, after which the mount no longer appears in the output of the mount and df commands.
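The full bring-back-online sequence, in order (run as root):

```shell
# Tell ZFS the device is available again.
zpool online backups da1

# Mount the pool's top-level dataset so it appears in mount/df output.
zfs mount backups

# Unmount it again when finished.
zfs umount backups
```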
Even after running zfs umount, the directory associated with the pool still exists.