ZEVO ZFS over Thunderbolt on a Mac

After years of threatening to do it, I finally set up a ZFS pool in a Mac OS X (10.8.5) environment. Today, I downloaded ZEVO Community Edition, which is self-described as "a momentous, much needed and long-overdue improvement over Apple’s status quo file system (HFS+) that was designed in the mid 1980s — before the Internet existed!" I totally agree. HFS+ is a turd that allows "bit-rot" to silently corrupt your data over time. I am a working photographer with many terabytes of data that I need to store securely. Although I keep many copies of the data (and versioned copies of important stuff), I have, on occasion, gone back to old pictures only to find them corrupted. This scares me. Luckily, ZFS is a file system designed to detect and prevent silent corruption. If you want to know more about ZFS, read its Wikipedia entry.

I've been warned that ZEVO may not be supported in the future, and that their version of ZFS for the Mac has the following limitations:

  • No GUI
  • No Deduplication
  • Limited storage capacity (16 TB)
  • Other natural limitations ("resource diet," they say)

Still, people seem to be successfully using ZEVO on a daily basis, and I'm told it's very stable, so here I am.

I'm not really a Unix person, and I hate configuring storage via the command line. But ZEVO makes ZFS brain-dead simple: anyone can get a ZFS volume up and running by following instructions carefully.

Here's what I did:

  • I bought an Areca ARC-8050 Thunderbolt RAID 8-Bay. I am pleasantly surprised by how quiet this box is. When you first power it on, it sounds like a jet, but it's apparently just doing a fan test. The fan is adaptive and typically runs quietly. I can hear the box, but it's not annoyingly loud.

  • I filled the box with 8 x 4TB Western Digital Red SATA NAS hard drives.

  • I revived an old-ish Mac Mini and upgraded it to 16GB of RAM and an inexpensive SSD. This is the Mac I am going to use with the Areca box because my Mac Pro doesn't have Thunderbolt. When the new Mac Pro comes out, I'll move the connection over, assuming that OS X Mavericks doesn't do something stupid like disallow such things.

  • I registered at the ZEVO website, downloaded ZEVO Community Edition, and installed it.

  • I downloaded the latest firmware for the ARC-8050, unzipped it, and applied the firmware updates using the Areca's web interface. The firmware update comes with 3 .bin files for the ARC1882 (which is correct), and you have to apply all of them. There is no feedback from the web GUI after you hit "Submit" until the update completes (or fails!). Scared yet? I was, when I got to this point. The documentation is poor.

  • I went to Physical Drives->Create Pass-Through Disk in the Areca configuration interface and created a pass-through disk for each of the 8 drives. Creating a pass-through disk allows Mac OS X to see the drive, but doesn't allow you to use the disk in a RAID set. This is fine because we are going to use ZFS to manage the RAID instead of the Areca's RAID controller.

  • I initialized and partitioned each disk with a GUID partition map and formatted it as Mac OS Extended (Journaled). I don't think it matters what format you choose, but ZEVO definitely wants the drives initialized and partitioned first.
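If you prefer to do the initialization from the command line, diskutil can create the GUID partition map and a Journaled HFS+ volume in one shot. This is just a sketch: the disk identifier and volume name are placeholders, and the command erases the target disk, so double-check the identifier with "diskutil list" first.

diskutil list
diskutil partitionDisk /dev/disk1 GPT JHFS+ Scratch 100%   # placeholder disk identifier and volume name; repeat for each drive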

  • At this point, running "zpool showdisks" returned:

DISK DEVICE      SIZE  CONNECTION    DESCRIPTION  
/dev/disk1    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media  
/dev/disk2    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media  
/dev/disk3    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media  
/dev/disk4    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media  
/dev/disk5    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media  
/dev/disk6    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media  
/dev/disk7    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media  
/dev/disk8    3.64TiB  SAS           WDC WD40EFRX-68WT0N0 Media

  • I created a RAID-Z2 pool (two-drive fault tolerance) by running:

sudo zpool create -f -o ashift=12 -O casesensitivity=insensitive copepodzfs raidz2 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5 /dev/disk6 /dev/disk7 /dev/disk8

I used "-o ashift=12" because pretty much every consumer drive is an "Advanced Format" (AF) drive, which means it has large 4K physical sectors but reports the old 512-byte logical sector size to the host. The ashift value is the base-2 logarithm of the block size ZFS aligns to, so ashift=12 tells ZFS to align its writes to 2^12 = 4,096-byte boundaries instead of trusting the reported 512 bytes (which would be ashift=9). This results in better performance.
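If you want to confirm the ashift the pool actually got, upstream ZFS ships a debugging tool called zdb that can dump the pool configuration. I'm assuming here that ZEVO includes it and reports things the same way, so treat this as an unverified sketch:

sudo zdb -C copepodzfs | grep ashift   # assumes ZEVO ships zdb; for this pool it should show "ashift: 12"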

I used "-O casesensitivity=insensitive" on the advice of Graham Perrin. Some applications in Mac OS X do not do well with case sensitivity, which is ZEVO's default setting. You cannot change this property after the file system has been created, so you have to decide at pool-creation time.
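You can read the property back after creation to make sure it took. zfs get is a standard ZFS command, and I'm assuming ZEVO exposes the casesensitivity property under the same name as upstream ZFS:

zfs get casesensitivity copepodzfs   # the VALUE column should read "insensitive"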

You can verify that a drive is telling the OS that it uses 512-byte blocks by running "diskutil info /dev/disk1" (assuming one of your drives is "/dev/disk1") and looking for "Device Block Size." Mine says, "Device Block Size: 512 Bytes."
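A quick one-liner pulls out just that field (substitute your own disk identifier):

diskutil info /dev/disk1 | grep "Device Block Size"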

Creating the RAID-Z2 was instantaneous. ZFS is amazing.

  • I checked my ZFS pool status by running "zpool status copepodzfs" (my pool is called "copepodzfs"):

  pool: copepodzfs
 state: ONLINE
  scan: none requested
config:

NAME                                           STATE     READ WRITE CKSUM
copepodzfs                                     ONLINE       0     0     0
  raidz2-0                                     ONLINE       0     0     0
    GPTE_BB07001A-8B58-4C54-AF77-D71CEE3BE391  ONLINE       0     0     0  at disk1s2
    GPTE_FF882147-9E69-4CD2-AD64-EE216275F239  ONLINE       0     0     0  at disk2s2
    GPTE_BE799326-E888-4EDE-9CFD-4D604FB728C5  ONLINE       0     0     0  at disk3s2
    GPTE_22475434-3E60-491A-BD9D-8BE9EDF3239D  ONLINE       0     0     0  at disk4s2
    GPTE_957351BC-43EC-4F2F-9120-1791090539EF  ONLINE       0     0     0  at disk5s2
    GPTE_03AB5A7A-BD0A-4EF1-8613-FAB64EFBBFE4  ONLINE       0     0     0  at disk6s2
    GPTE_EAD32B39-2FEA-4B62-BD7C-E0FA115706C5  ONLINE       0     0     0  at disk7s2
    GPTE_E66C7105-DF1B-4B4A-9C72-CB74E722C1B9  ONLINE       0     0     0  at disk8s2

errors: No known data errors
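
The "scan: none requested" line refers to scrubs and resilvers. A scrub reads every block in the pool and verifies its checksum, which is how ZFS catches bit-rot in data you haven't touched in years, so it's worth running one periodically. These are standard ZFS commands, and I'm assuming ZEVO behaves the same way:

sudo zpool scrub copepodzfs
zpool status copepodzfs   # the "scan:" line shows scrub progress and results
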
  • I claimed ownership of the new volume using "sudo chown echeng:staff /Volumes/copepodzfs".

Here's the volume (below). One thing that is strange (but consistent with what others have seen) is that it reports 22.4TB even though ZEVO Community Edition has a 16TB cap.

[image: volume]
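If you want to see how ZFS itself accounts for the space (as opposed to what the Finder shows), there are two standard views. These are stock ZFS commands, and I'm assuming ZEVO reports them the same way:

zpool list copepodzfs   # raw pool size, parity included
zfs list copepodzfs     # usable space after RAID-Z2 parity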

Speed Tests

Local speed test on the Mac Mini, using Blackmagic Disk Speed Test. This thing is FAST! It's a little freaky that writes are faster than reads.

[image: blackmagic speed test - local]

Speed when mounted over SMB / gigabit Ethernet:

[image: blackmagic speed test - NAS]

Note that this is 100% Mac Mini-limited, since running the same speed test to the Mac Mini's internal SSD yields similar results:

[image: blackmagic speed test - NAS to Mac Mini SSD]

I get 100MB/s over my wired network when talking to a Synology DS1812+ NAS box, so the network is capable of running at full speed. Hopefully, accessing the device over the network is temporary: if the new Mac Pro and OS X 10.9 work with ZEVO, I'll connect it directly. Fast, redundant, corruption-resistant, and rebuild-friendly? If this works, I'll be super happy!