ZFS Build Checklist

I’ve decided to replace the Windows Home Server Vail server with something capable of handling newer builds of ZFS and, in particular, its built-in deduplication.

Here’s a quick kit list and build diary I’ll try to keep up-to-date as I go along.

Kit:

  • Dell PERC 6/i – essentially a rebranded LSI SAS RAID controller. I scored it from eBay on the cheap, though it shipped from Israel, took a while to arrive, and came with neither cables nor a mounting bracket.
  • OCZ RevoDrive 120GB – the RAID controller on this card is not supported in Linux/Solaris, but the drives show up as two separate devices as long as you put it in the right PCIe slot. That makes it perfect for serving as both ZIL (log) and L2ARC (cache); see the layout sketch after this list.
  • 2x Intel 80GB X25-M SSDs – these will house the virtual machine files to be deduped. Very reliable drives, and while they might not be the fastest writers, their speeds stay relatively constant, which is quite handy compared to controllers like SandForce that attempt compression on the fly. ZFS will take care of that, thanks.
  • (IN TRANSIT) 2x dual-port 1Gbit Intel PCIe NICs – I’ll use these for the direct connection to the virtual machine host. Currently a single link is in use, and it is already saturated when reading from the SSDs.
  • (IN TRANSIT) “32 Pin SAS Controller To 4x SATA HDD Serial Cable Cord” – breakout cables that fan one SAS connector out to four SATA drives; two of them are needed to connect 8 drives to the LSI controller.
  • 5x 1.5TB Seagate hard drives – the bread-and-butter storage, running in RAID-Z2 (similar to RAID 6, with two drives’ worth of parity). The planned pool layout is sketched after this list.
  • 3x 3TB Seagate hard drives – these might simply be a large headache. The plan was to have an extra 3TB RAID-Z2 for backups in another machine, but there seem to be issues with drives that have 4K physical sectors presenting themselves as 512-byte ones, which can leave ZFS with a misaligned pool. I may be able to get around this with the alignment hack sketched below, or by waiting for support to improve as the drives become more popular. For now 2 of them are in software RAID 1 on a Windows 7 host, and the other remains in the external USB 3 case and is used as a backup drive.
  • NetGear GS108T switch – a cheap VLAN-capable switch, currently running the lab, in case I ever decide to use more than 2 bonded ports (I doubt it).
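
For reference, here’s roughly how I expect the pools to be laid out once everything arrives. This is only a sketch: the pool names and the da* device names are placeholders for whatever the OS actually enumerates, and whether the RevoDrive halves end up attached to the VM pool or the bulk pool is still an open question.

    # 5x 1.5TB drives as the bulk RAID-Z2 pool (survives any two failures):
    zpool create tank raidz2 da0 da1 da2 da3 da4

    # The two X25-Ms mirrored as the VM datastore, with ZFS handling dedup:
    zpool create vmpool mirror da5 da6
    zfs set dedup=on vmpool

    # The RevoDrive enumerates as two devices; split them between
    # the intent log (ZIL) and the read cache (L2ARC):
    zpool add vmpool log da7
    zpool add vmpool cache da8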
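
As for the 4K-sector drives, the usual workaround on FreeBSD (as of FreeBSD 9, anyway) is the gnop trick: wrap one member in a fake device that reports 4096-byte sectors so the pool is created with ashift=12, then re-import without it. Again, the pool and device names below are placeholders.

    # Force a 4K sector size on one member and build the pool on top of it;
    # ZFS sizes ashift to the largest sector size in the vdev:
    gnop create -S 4096 da9
    zpool create backup raidz2 da9.nop da10 da11

    # Export, drop the nop device, and re-import; the ashift stays with the pool:
    zpool export backup
    gnop destroy da9.nop
    zpool import backup
    zdb -C backup | grep ashift   # should report ashift: 12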

Comments

  1. Looks like you got stuck in a spam loop, Chris – sorry about that!

    The build went extremely well, but there were issues with InfiniBand and OFED. Namely, there is currently no OpenSM port for either Nexenta or ESXi, so neither box can run the subnet manager needed to keep both hosts online at the same time.
    To get around this, I ran two cards in a Windows 7 box that acted as a “man in the middle”. Not at all ideal.

    What I’ve been working on lately is building a FreeBSD server that can run OFED/OpenSM and ZFS v28 at the same time. This is mostly because I need two ESXi hosts to complete the VCP5 studies I’m in the midst of.

    I’ll be posting an update about the build shortly, along with some of the caveats involved in moving from Nexenta to FreeBSD 9.
