Rebuilding My NAS - What Are You Running?

On the back of the recent 12TB deals, I'm rebuilding my NAS and wondering what you guys are running…

Currently I have two Microservers.

N40L G7 - 4x 3TB raidz1, 200GB Intel S3700 partitioned for L2ARC and SLOG. Running NAS4Free 9.2.0.1 - Shigawire (revision 943)
G1610T Gen8 - 4x 3TB raidz1, no SSD for cache/SLOG. Running Solaris 11.3

The plan is to juggle the data (both are at/near capacity), build a new raidz1 pool, and then migrate all of the data onto it.

The new machine will be:
Microserver Gen8
E3-1225 v2 (already have)
16GB ECC (already have)
4x 12TB raidz1 (have 2, waiting for 2)
L2ARC?
SLOG?
OS?

I think I'll use a 16GB Optane that I already have for the L2ARC and the S3700 as the SLOG (ZIL).
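
For the curious, the rough parity maths on the new pool (a quick Python sketch, not gospel - it ignores ZFS metadata/padding overhead, and the cache/log devices add no capacity):

    # Back-of-the-envelope usable space for 4x 12TB raidz1.
    # Illustrative only - ignores ZFS metadata, padding and reservations.
    drives = 4
    drive_tb = 12                 # marketing terabytes (decimal)
    parity = 1                    # raidz1 = one drive's worth of parity

    raw_tb = drives * drive_tb
    usable_tb = (drives - parity) * drive_tb
    usable_tib = usable_tb * (1000**4) / (1024**4)   # TB -> TiB

    print(f"raw {raw_tb} TB, usable ~{usable_tb} TB (~{usable_tib:.1f} TiB)")
    # raw 48 TB, usable ~36 TB (~32.7 TiB)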

The OS is the part I'd like input on.

Would like to hear your thoughts.

Comments

  • HP N36L Microserver here too, with a low-profile HD5450 and Windows Home Server 2011. It's been on 24/7 for nearly a decade, working flawlessly.

    Bought another N54L recently and stuck a GT710 and a TX50E in it, plus Win10 for convenience. Brilliant machines that just work.

    • What are you using the GPUs for?

      • 4K video playback over HDMI with hardware acceleration.

        • That's what I was guessing. I use a separate Beelink box with a J5005 and use its iGPU for playback. I'm only playing back at 1080p, but I think it would handle 4K no problem too.

  • I'm using OpenMediaVault booting from a stubby USB drive in an old N40L. 5 HDDs of varying sizes in a single BTRFS array. BTRFS is simply amazing.

  • +1

    I have 1xN36L, 1xN40L and a custom built server:
    - Asus E3 Pro Gaming V5 motherboard
    - 64GB DDR4 ECC RAM (4x 16GB Samsung sticks)
    - Intel(R) Xeon(R) CPU E3-1235L v5
    - Antec 300 case
    - Antec 850W PS

    5x4TB NAS Drives
    3x12TB WD Elements (Helium white label)
    3x8TB WD Elements (Helium white label)

    I use FreeNAS on all three NAS machines. While I have a Solaris sysadmin background, I settled on FreeNAS since it offers jails (like Solaris Zones), a hypervisor (bhyve), ZFS and a beautiful UI. It's rock-solid stable, much like Solaris, and very easy to use.

  • Unraid.

    It's fantastic.

  • I'm using 4x ODROID HC2 units, each loaded with a 4TB CMR WD RED, using MooseFS to allow me to treat it as one big NAS.

    Hopefully in the next few weeks I'll be expanding that system with a Kobol Helios64.

    • Never heard of it but it looks pretty cool. Does it distribute files or blocks to the chunk servers?

      • Files are split into "chunks" that are replicated across the chunkservers; for each file or folder you set a "goal", which defines how many nodes the chunks should live on. Most of my files are goal=2, so they take up twice the space but can survive losing one of the nodes serving them.
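
        As a toy Python sketch of how goals behave (not real MooseFS code, just the replication idea in miniature - the node names are made up):

            # Toy model of MooseFS "goals": each chunk gets `goal` copies on
            # distinct chunkservers, so it survives (goal - 1) node failures.
            import random

            chunkservers = ["hc2-a", "hc2-b", "hc2-c", "hc2-d"]  # hypothetical names

            def place_chunk(goal):
                # Pick `goal` different servers to hold copies of one chunk.
                return random.sample(chunkservers, goal)

            def readable_after(placement, failed_node):
                # Still readable if at least one copy lives off the failed node.
                return any(node != failed_node for node in placement)

            copies = place_chunk(goal=2)
            print(copies, readable_after(copies, failed_node=copies[0]))
            # e.g. ['hc2-c', 'hc2-a'] True - goal=2 tolerates losing one node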

        First HC2 runs MooseFS Master and MooseFS Chunkserver
        Remaining 3 run MooseFS Chunkserver and MooseFS Metalogger

        All metadata operations happen on the master, all data storage operations happen on the chunkserver, and the metaloggers provide a backup of the master in case the master ever dies.

        Then I'm running a Samba server to allow Windows and Linux clients to store and access data, and a Netatalk server to allow Mac clients to store and access data. Also running a Plex server directly on one of the HC2s.

        Having 4 machines instead of 1 is nice, as you can have each do different roles and spread the load a bit.

        Other than that, "rebuilds" are automatic - I can lose a node and everything that was stored on it will be re-replicated and spread amongst the remaining nodes. As long as I have enough free space to absorb that data, the cluster is VERY resilient. In addition, my partner's stuff is stored with goal=4 and is therefore almost impossible to lose!

        There are other similar systems like LizardFS that do cool stuff like better master failover and erasure coding (which can both save space and improve resiliency) but MooseFS has a huge performance advantage over most.
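
        To put rough numbers on that space trade-off (illustrative only - the 4+2 layout below is just an example scheme, not necessarily what LizardFS would use):

            # Space overhead: goal=2 replication vs a sample 4+2 erasure code.
            file_tb = 1.0

            # Replication, goal=2: two full copies, survives 1 node failure.
            replicated_tb = file_tb * 2

            # Erasure coding, 4 data + 2 parity parts: survives 2 node failures.
            data_parts, parity_parts = 4, 2
            ec_tb = file_tb * (data_parts + parity_parts) / data_parts

            print(f"goal=2: {replicated_tb:.1f} TB stored per 1 TB of data")
            print(f"EC 4+2: {ec_tb:.1f} TB stored per 1 TB of data")
            # goal=2: 2.0 TB stored per 1 TB of data
            # EC 4+2: 1.5 TB stored per 1 TB of data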

        There are also folks running GlusterFS on the HC2, but I can't speak to that as I haven't done it and I don't trust GlusterFS.

        • I'm also going to check out OpenMediaVault at some point… might be able to make it work with Moose.

          I'm also aware it sounds very complicated, but it's fairly easy to do and requires basically zero maintenance once up and running.

          • @Zorlin: I do storage admin in my job. Doesn't sound difficult. It sounds like how LeftHand, or StoreVirtual as HP call it now, does its 'network RAID'.

            • @drew442: I mostly meant the complication of the solution overall as compared to a traditional NAS. If you're a storage admin you'll be fine.

              Hoping to write a blog post on how to do all this some day…

  • Had an N54L with 4x 2TB drives and a P410 controller running Windows Home Server 2011. I found it lacking in power (for Plex, among other things).

    Later I scored a QNAP TS-670 Pro with 4x4TB drives. I swapped out the i3 for a Xeon 1265Lv2 and upped the RAM to 16GB. Does everything I need it to and draws less than 60W.

    Sure, I have my share of 2U servers but I've grown sick of the noise and space they take up so they are packed away in the shed now.

  • I use a 32GB SanDisk USB thumb drive - no noise, and it probably draws 0.2W.
