Please Bash My Home Server Build

Hi, forum legends,

Here is my build:

CPU: Ryzen 5600G
RAM: Kingston Server Premier KSM26ED8/16HD (x1)
Mobo: ASUS PRIME B550M-A
SSD: Silicon Power P34A60 2TB NVME (x2)

Case: Deepcool CH370
Cooler: Thermalright Phantom Spirit 120
PSU: Cooler Master MWE 450

My usage scenario: home desktop & server.

I am thinking of slowly migrating from macOS to FreeBSD/Linux for productivity. There is no strict budget limit, although this is clearly a budget build. The two SSDs will be a ZFS mirror for a bit of redundancy. I want ECC RAM because I value my data. The machine will also be a home server, up 24/7 for Plex, Pi-hole, etc. It will live in the living room, so hopefully the fans are not noisy at all. The case has a height limit of 43 cm, which the CH370 satisfies.
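
For the curious, the mirror setup on FreeBSD would look roughly like this (a sketch; the pool name and the nda0/nda1 device names are assumptions, not something I've finalised):

```shell
# WARNING: destroys any existing data on the named devices.
# List NVMe devices first to confirm the names: nvmecontrol devlist
zpool create -o ashift=12 tank mirror /dev/nda0 /dev/nda1

# Verify both drives show ONLINE under mirror-0.
zpool status tank

# Run periodically (e.g. monthly) to catch silent corruption early.
zpool scrub tank
```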

Any feedback is appreciated.

EDIT: fixed a typo: the CPU is a Ryzen 5600G.

EDIT 2:

Thanks for all the useful feedback. I now have to abandon the idea of ECC, because it is hard to satisfy with consumer-grade hardware: notably, I would need the Ryzen PRO 5650G instead of the 5600G to get bit-flip protection.

Here is the revised build:

  • CPU: Core i5-12400 (I'll use the iGPU, as I don't play demanding games)

  • RAM: Corsair Vengeance LPX 32GB (2x16GB) DDR4 3200MHz

  • Motherboard: ASUS Prime B660M-A

  • Cooler: Thermalright Phantom Spirit 120

  • SSD: Kingston KC3000 M.2 NVMe Gen4 SSD 1TB x 2 (taking advantage of a deal; these will be RAID 1 for the OS and my essential data)

  • HDD: 4x 4TB 3.5", deferred to a later stage when I set up the media server.

  • Case: Thermaltake Core V21

  • PSU: Cooler Master MWE 80Plus White 450W (I will explore some 80+ Gold PSUs)

Comments

  • +6

    Please Bash My Home Server Build

    Won't that void warranty?

    • unless you do it orally :)

      • wouldn't that get it too wet?

        • It will be fine if it's water cooled.

      • Username checks out

    • +4

      No silly, not bash

      to FreeBSD/Linux

      bash.

      • -2

        korny….

      • s/bash/zsh/

  • Specs are overkill for just Plex / Pi-hole. Considered just running a Synology? Specs won't be as good, but managing it is a breeze and you can virtualise.

    • not a fan of the Synology OS (and not sure it can be run as a desktop?)

  • Looks fine to me.

  • +1

    Looks fine apart from the whole Plex part. Not sure if they even properly support AMD transcode on non-Windows platforms yet, and it's been quite a while since people first started asking. That and the whole "Hi we're banning the entirety of Linode" was a bad look. Making people pay for hardware transcode rubs me the wrong way too.

    I presume you are using TrueNAS Core/Scale or Unraid etc. if you are going ZFS? (You could also potentially throw in a couple of spinning-rust HDDs and use the SSDs as SLOG.)

    Also note that you're CPU limited to PCI-E 3.0 speeds for the NVMe drives, but you've also selected 3.0 drives so that's not an issue. (Unless you find a use-case for more speed and want 4.0 drives and speeds later)
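
    On Linux you can confirm the negotiated link yourself with something like this (a sketch; output depends on your controllers):

    ```shell
    # Show link capability vs. negotiated status for NVMe controllers (PCI class 0108).
    sudo lspci -d ::0108 -vv | grep -E 'LnkCap|LnkSta'
    # "Speed 8GT/s, Width x4" in LnkSta = PCIe 3.0 x4 (~3.5 GB/s ceiling);
    # "16GT/s" would mean the link is running at Gen4.
    ```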

    • I plan to use vanilla FreeBSD with ZFS, which is less messy and easier to use as a desktop OS.

      I'll use the two M.2 NVMe drives as a mirror; in the future I can add three (or so) SATA SSDs to the box for raidz, or even 3.5" HDDs.

      "CPU limited to PCIe 3.0": I hadn't noticed that. Thanks for letting me know.

    • Just found a typo in the original post: CPU should be 5600G (not 4600G!)

      Plex: I have no idea about its compatibility with this OS/hardware, but alternatives are fine

  • +1

    Don't think you'll be matching ECC RAM with a budget consumer motherboard; not that bit flips are likely to be the cause of any data loss anyway. It's much more likely those SSDs will fail on you - over the last 10 years, the only SSDs that have failed on me have been Silicon Power.

      ECC is essentially free here: it is currently on sale at a similar price to regular sticks, so why not have it? It does no harm, does it?

      Regarding the Silicon Power SSDs: I picked them because they appeared in Amazon deals. What are the recommended alternatives for PCIe 3.0 NVMe SSDs?

      • +1

        I don't think ECC will work with the 5600G, but it would work with a 5650G; ECC is PRO-only for the APUs, I think. I would personally just get normal RAM for this use case.
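
        If you do end up with an ECC-capable combo, it's worth checking the OS actually reports it as enabled (a sketch; exact output varies by board and firmware):

        ```shell
        # Does the firmware report ECC on the memory array?
        sudo dmidecode -t memory | grep -i 'error correction'
        # "Single-bit ECC" / "Multi-bit ECC" = enabled; "None" = not.

        # With the EDAC driver loaded, per-controller corrected-error
        # counters confirm ECC reporting is live end to end.
        grep . /sys/devices/system/edac/mc/mc*/ce_count
        ```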

        • +1

          A bit of searching and you are absolutely right. In that case, I may have to abandon the idea of ECC altogether and switch to an Intel build.

  • I always thought "server" implied lots of storage via internal drives.

    Is 2TB enough for fileserver purposes, or are you not using it for storage?

    • Well, my ideal setup is NVMe for the system and a bunch of 3.5" HDDs for the data. However, that setup costs significantly more, and those spinning disks are not friendly to my EnergyAustralia bill or my living-room housemates.

    • Depends, if it's a VM host then they have maybe two drives with the rest running off SAN.

      • Reading the original post, and the context provided by the thread, what do you think the chances are that OP is running a dedicated SAN box in the shared house?

        Even reading the OP's reply right above yours is a pretty big clue.

  • +1

    I would avoid Silicon Power SSDs; I had one fail within 4 months. Also, does that mobo/CPU combo support ECC?

  • +4

    Please Bash My Home Server Build

    am thinking of slowly migrating from MacOS to FreeBSD/Linux for productivity.

    Thankfully, both macOS and Linux support bash

  • +2

  • Why not an Intel build for iGPU transcoding, and use Jellyfin instead of Plex?

    • Alright. In my no-discrete-GPU setup, I would just need to change the motherboard (to an ASUS Prime H610M-A) and the CPU (to an i3-12100)? How does that sound?

      • Yeah, the 12100 will do fine; if you also want to run VMs, you may want more cores

    • Another vote for Intel iGPU. Not only will it support hardware transcoding, but many other *nix based tasks that you may want to run on a server (such as go2rtc camera stream ingestion, ffmpeg decoding, etc).
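
      As a quick sanity check of that path, a one-off VAAPI transcode on the iGPU looks roughly like this (a sketch; the render node path and file names are placeholders):

      ```shell
      # Confirm the iGPU and its VAAPI driver are visible.
      vainfo

      # H.264 hardware transcode via Quick Sync / VAAPI.
      ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
        -hwaccel_output_format vaapi -i input.mkv \
        -c:v h264_vaapi -b:v 4M -c:a copy output.mkv
      ```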

  • Can that CPU/mobo combo effectively use ECC? When I built mine, it was much harder to find an AMD ECC-compatible build that would function properly. I ended up going with an Intel Xeon W-1370 and a Supermicro board for ECC and IPMI.

    • It is officially supported. See https://www.asus.com/support/FAQ/1045186/

      However, others are concerned that the proposed budget build may not get full value out of the ECC RAM (?)

      • +1

        Your link doesn't mention the 5000 G series; for the 4000 G series, only the PRO versions were supported. You say there is no real budget, so I'm guessing you are after value rather than outright lowest cost. People value the benefits of ECC differently, but if you get it, make sure it actually functions as more than regular RAM, otherwise it's not good value. I'd suggest two sticks for bandwidth, but your use case will work OK without it.

  • For only 2TB of storage it seems a bit of a waste of money.

    8TB spinning rust is the cheapest per terabyte these days. Get 4+ of them and put them in RAID6 or an equivalent array.

    And you mentioned power usage — hard drives don't use as much power as you seem to expect.

    If you are using Linux, you can use a USB thumb drive as the OS drive. It doesn't need to be big, fast, or redundant, since it only holds OS files; once the system boots, the OS drive basically does nothing.

    • Seagate IronWolf 4TB is $140 on Amazon (raidz x4 = $560), which triples the cost of my dual-NVMe setup, never mind 8TB drives. I could, however, cut down to 2x 1TB NVMe (OS + data) to save further.

      • I'm really wondering what the point of your build is.

        1TB is basically nothing these days.

        If you're just after backup Microsoft 365 Personal is $109/yr, comes with Office and 1TB OneDrive storage.

        You could probably find something better than that, too; that was just the first thing I thought of.

        • Well, all my personal data (photos, documents, …) is roughly 500GB. I'll build the system and services first, and add the 3.5" drives at a later stage.

  • +2

    For DDR4, get two sticks to maximise memory bandwidth.

  • ZFS server with only 2x 3.5" bays? How's that supposed to work? Surely just two 22TB drives would not be enough, and it leaves no room for expansion.

    • I was thinking of adding the 3.5" drives at a later stage; for now, only a pair of NVMe drives. Admittedly, the chosen case may not leave room to expand. What are the alternatives? I can only fit a tower up to 43 cm tall, or an HTPC case.

      • +1

        HP MicroServers are excellent for 3.5" bays. IIRC you can also add an extra internal SATA disk connected directly to the motherboard.

        Downsides: no slots for NVMe, and the CPUs are pretty low-power. Almost certainly more expensive than DIY.
