The Elastic NAS - 56TB* of Storage for $1,800 AUD

Hey everyone!

I help people build MooseFS based "Elastic" Network Attached Storage devices (gratis).

You throw together 4x ODROID HC2 units (or equivalent) plus 4x 14TB shuckable hard drives (or equivalent) and bam, 56TB of very reliable and flexible - open source - storage.

Happy to help anyone who's interested. I've been running mine for 4+ years and love it.

EDIT: As requested, my association is just with my personal blog where I blog about using the Elastic NAS. I don't have any connection with any of the companies or components involved in building one of these things.

Related Stores

raptorswithhats.com

Comments

  • +1 vote

    More details available upon request, or I'm happy to post full details here depending on what people want or need.

  • This is Ozbargain. You better have some links for your drives.

    For anyone thinking it's a great idea, just grab an old computer, shove some hard drives in it, install whatever flavour of Linux you want and call it a day.

    • +1 vote

      Oof. I'm trying to be helpful here. I'm happy to provide links.

      How much power does that "old computer" burn? Guessing more than 30W…

      • I mean you're trying to get viewers onto your site. Otherwise you'd put all the info in your post.

        About 20W, less if I bothered to fiddle with it. Probably should disconnect that front panel LED. Also works as an actual server for hosting applications and media conversion/"acquisition", so probably a bit more when it's doing that sort of stuff.

        Be honest about the upsides and downsides of your approach compared to others, makes for a better blog. Also try a few other things so you can appropriately compare and contrast things that solve the problem.

        •  

          I didn't even link to my post and my site doesn't even have ads and never will.

          The HC2s can run anything Ubuntu for ARM can run.

          I have tried literally 10 different distributed storage systems, work on all kinds of enterprise storage systems etc. I've run QNAP, Synology, Windows Home Server. I've run Thecus. I've run lots and lots of different things.

          I'm happy to put as much detail into this post as anyone wants - it's the ODROID HC2, there's a 3D printable fan shroud on Thingiverse…

          • @Zorlin: After you were forced to delete the link. It's not exactly hard to notice.

            Each individual one can, sure, but cluster computing isn't super simple, so you're probably limited to the power of one. For example, you probably can't live transcode video on it.

            Write about that on your site. It's very very useful to compare things without just doing them yourself.

            •  

              @Zephyrus: I was not forced in any way, I simply edited it out about 10 seconds after posting it because I wanted to avoid exactly the kind of thing you seem to be accusing me of.

              No, cluster computing isn't super simple, but because this is based on Linux and MooseFS you're actually getting the power of all 4 in most tasks.

              Yes, you can transcode on these. I wouldn't do more than 1x 1080p transcode per unit though at a time.

          • @Zorlin:

            work on all kinds of enterprise storage systems etc

            Not trying to have a go at you but none of the above you mentioned are considered Enterprise Storage Systems - You are talking NetApp, Dell/EMC, HPE, IBM, etc., etc.

            •  

              @websterp: Fair, but I work with those…

              NetApp (at home, to be fair), Dell/EMC, HPE, IBM are literally all things I have dealt with, and half of those on a daily basis. I'm working on a Cray…

  • +1 vote

    These are the drives I use - https://www.ozbargain.com.au/node/591720 - 14TB for $315.

  • How exactly are you "associated"? At least disclose your connection to the product.

    • +1 vote

      I run a blog which posts about the solution, but have zero commercial interest in or association with the components involved etc. It's all open source and dirt cheap.

  • So you are not actually selling a product? You are simply wanting to give people computing advice?

    •  

      Yes.

      I'd love to sell something like this, but my time is not free and I'd much rather spend it teaching people how to do this stuff. I'm sharing this here because I really, really like my own one.

  • How does it compare to Freenas or unraid?

    • +1 vote

      Both FreeNAS and UnRAID are excellent and I'd recommend them. This is more hands-on, but I feel like it gives you a much cooler solution.

      You end up with something with a theoretical 4Gbps of throughput (1Gbps for each ODROID HC2) which can ACTUALLY push that much* (assuming you're using multiple physical clients!). And since MooseFS redundancy is set per file and folder, you have a lot more flexibility: you can decide something is not worth keeping safe (1 copy) or is SUPER IMPORTANT (4 copies).

      My partner's music collection? 4 copies…
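
      To make the per-file redundancy thing concrete: in MooseFS it's just a "goal" you set per path, using the standard client tools. A rough sketch (the paths and the /mnt/mfs mount point are made up):

      ```shell
      # Assumes a MooseFS cluster mounted at /mnt/mfs.
      mfssetgoal 1 /mnt/mfs/scratch           # scratch data: 1 copy, cheap but unsafe
      mfssetgoal -r 4 /mnt/mfs/music          # music: 4 copies, enough for one per HC2
      mfsgetgoal /mnt/mfs/music/album.flac    # check what a given file is set to
      ```

      These need a live cluster to actually run against, obviously.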

      • Ah yes of course because each ODROID would have their own gigabit ethernet port. Looks like I need to do some research into it as I've never used Elasticsearch or MooseFS before. I once looked at Kobol but this seems to be a more flexible solution.

        In my own homelab my work provided me with an out of box solution, an 8 bay Synology NAS so that I can provide extra offsite backup for our datacentre. Works well when you have a secondary ABB gigabit link.

        •  

          Kobol are great, I have one, but right now it kinda sucks. It'll get a LOT better once the firmware bugs are sorted.

          • @Zorlin: I love the appearance of the Kobol but the Rockchip RK3399 is a bit of a turn off.

            •  

              @Clear: The RK3399 is a lovely chipset, actually! The major issue with the H64 is just the unstable kernel. I'm really looking forward to their second batch and whatever they do next. Considering getting 3 more units if they can sort stability.

        So from what I can understand each HC2 has 2 disks and a 1gbit connection, and you pair these up to create an 8-disk, 4gbit-throughput array? Interesting solution; I've been on the lookout for low-wattage replacement storage for my ageing external RAID boxes for Linux distros.

        •  

          1x disk per HC2, 1x gigabit per HC2. There are other options for sure though.
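
          For the curious, the software side per node is tiny: each HC2 runs a MooseFS chunkserver that offers its single disk to the cluster. Roughly like this (the mount point and the master's address are my assumptions, not from my actual setup):

          ```
          # /etc/mfs/mfshdd.cfg on each HC2 - one line per disk this chunkserver offers
          /mnt/hdd

          # /etc/mfs/mfschunkserver.cfg - point the node at the master
          MASTER_HOST = 192.168.1.10
          ```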

      • Sure but 4 copies in the same room is as good as zero if there's a localised disaster (fire/flood/kids).

        2 local and one or 2 offsite is better.

        •  

          For sure, which is why my offsite is on another continent. Singapore. It's about as good as you can get before going to another planet.

    • +2 votes

      Synology or QNAP are, again, great solutions. I'm not saying otherwise.

      They're not budget Raspberry Pis, they're octa-core boards specifically made for this purpose. Hell, the Raspberry Pi didn't even have real gigabit ethernet until recently. That said, they're also BADLY in need of a refresh; a 2015 processor is a bit weak. A lot weak.

      As for cooling, if you read the darn blog I strap a 120mm fan to the metal casing the HC2s already come with, which provides more than adequate cooling.

      What do you mean "I act like"?

      Also, my previous version used decently-priced WD REDs, so you seem to be making assumptions here.

      The whole point of a NAS is convenience and simplicity. It should be fire-and-forget, take no longer than 30 minutes to set up and be completely comprehensible to a tech-illiterate.

      Your option is really aimed at those who work in IT and they would be springing for microservers or full-on rack mount arrays for this level of storage capacity.

      Sure. This takes about an hour, which is longer than 30 minutes. Yes, I've made it clear that this is a lot more DIY. But this is a LOT lower power, takes up less space, has faster networking and is just damn cooler. As for fire-and-forget, MooseFS provides a LOT more flexibility than a normal NAS, so overall it's worth it for anyone remotely technical.

      • I'm saying you act like you've split the atom in the field of home storage, yet your "elastic NAS" is basically a bunch of daisy-chained bare HDDs, slotted into terrible metal caddies, hooked up to some Raspberry Pi controllers running some FreeNAS equivalent.

        You don't seem to know of or even have a defined target audience; people buy NASs for simplicity and convenience, period. Not to tinker, not to faff around with open-source software and not to have a bundle of crap looking like a test bench at a PC hardware store, sitting in a corner somewhere.

        People that are competent with home storage and hardcore data-hoarders would be looking past this measly nonsense for something a lot more serious.

        Hell, the Raspberry Pi didn't even have real gigabit ethernet until recently.

        Neither do your ODROID HC2s from what I can see, so what's the difference?

        At least QNAP offer a large variety of fairly affordable, entry-level NASes with 2.5/4/5/10GbE connectivity.

        if you read the darn blog

        The blog at the link that you haven't posted in this thread? Good one.

        I strap a 120mm fan to the metal casing the HC2s already come with

        So, you have to BYO your own cooling for $1,800 dollars?

        This is a classic case of tech enthusiasts having zero concept of the average home computer user. Installing 120mm fans is beyond the capability of 75% of buyers. You've completely lost people at this point.

        Achieving incredibly high scores on storage benchmarks means nothing to people just looking to dump movies, music and photos into a central network share. More to the point, would they even notice the difference between your "Elastic NAS" and a run-of-the-mill 4-bay NAS? I think not.

        What do you mean "I act like"?

        Exactly what I said. "Just get some 4x 14TB shuckable hard drives". Where mate? Do you have a deal for 4 x 14TB shuckable drives for a good price? Are you on OzBargain a lot? They don't come up every other day.

        •  

          Oh wow, where to begin?

          I'm saying you act like you've split the atom in the field of home storage, yet your "elastic NAS" is basically a bunch of daisy-chained bare HDDs, slotted into terrible metal caddies, hooked up to some Raspberry Pi controllers running some FreeNAS equivalent.

          Where did I say I've split the atom or even imply that I'm doing anything significantly better? Also, they're not particularly terrible metal caddies, nor are they daisy chained.

          You don't seem to know of or even have a defined target audience; people buy NASs for simplicity and convenience, period. Not to tinker, not to faff around with open-source software and not to have a bundle of crap looking like a test bench at a PC hardware store, sitting in a corner somewhere.

          I'm doing this as a curiosity and to try to help people. Keep your comments in your pocket.

          People that are competent with home storage and hardcore data-hoarders would be looking past this measly nonsense for something a lot more serious.

          Again, smaller, cheaper and quieter.

          Hell, the Raspberry Pi didn't even have real gigabit ethernet until recently.

          They absolutely have gigabit, per unit.

          At least QNAP offer a large variety of fairly affordable, entry-level NASes with 2.5/4/5/10GbE connectivity.

          Sure, and I have a 10GbE chunkserver in the mix already.

          if you read the darn blog

          The blog at the link that you haven't posted in this thread? Good one.

          The blog link I didn't end up posting.

          I strap a 120mm fan to the metal casing the HC2s already come with

          So, you have to BYO your own cooling for $1,800 dollars?

          No, that's included in the $1,800.

          This is a classic case of tech enthusiasts having zero concept of the average home computer user. Installing 120mm fans is beyond the capability of 75% of buyers. You've completely lost people at this point.

          Strapping four screws in is beyond the capability of 75% of buyers? And you forget I am a home computer user…

          What do you mean "I act like"?

          Exactly what I said. "Just get some 4x 14TB shuckable hard drives". Where mate? Do you have a deal for 4 x 14TB shuckable drives for a good price? Are you on OzBargain a lot? They don't come up every other day.

          Yes, I am on OzBargain every day. No, they are not on every day. No, I never said they are.

          • @Zorlin:

            I'm doing this as a curiosity and to try to help people.

            You don't seem to have any particular "people" in mind and that's the problem.

            There exist dominant market players in the NAS space who've captured most of the consumer demographic with carefully thought-out and well-designed products that have all of the flexibility of your solution and more.

            I hate to burst your sanctimonius little bubble here but your knowledge of marketing is sorely lacking.

            The people that your product might appeal to don't bother reading personal storage blogs of nobodies on the Internet for advice; they either work first-hand in some hardware-oriented tech role or they have extensive knowledge from years of at-home tinkering.

            They absolutely have gigabit, per unit.

            What I meant to say was these ODROID HC2s only have Gigabit Ethernet, when for this level of investment you could get a QNAP with 2.5/4GbE connectivity.

            Strapping four screws in is beyond the capability of 75% of buyers? And you forget I am a home computer user…

            Have you ever worked as a Sysadmin or in a support department/help desk/service team for an ISP/MSP?

            People are by and large absolute morons when it comes to technology.

            You expecting soccer mums, boomers and iPad-raised millennials to be able to buy a bunch of bare HDDs, enclosures, power supplies, fans and slap them together and then tinker with software that has the UI and usability of a Unix shell from the 1990s is like expecting the average car owner to be able to do a major service on their car. You're dreaming.

            Yes, I am on OzBargain every day. No, they are not on every day. No, I never said they are.

            Precisely.

            So your entire claim of "56TB of Storage for $1,800 AUD"* is complete nonsense. It could be double that.

            You've just written a love letter to your storage setup and expected everyone else to be as gushing about it as you are. I really don't get the point of this thread.

            •  

              @Gnostikos:

              You don't seem to have any particular "people" in mind and that's the problem.

              Do I need to?

              There exist dominant market players in the NAS space who've captured most of the consumer demographic with carefully-thought out and well-designed products that have all of the flexibility of your solution and more.

              Find me a single NAS with per-file redundancy levels. I'll wait.

              I hate to burst your sanctimonius little bubble here but your knowledge of marketing is sorely lacking.

              I'll get in trouble for this, but you're misspelling the word "sanctimonious".

              The people that your product might appeal to don't bother reading personal storage blogs of nobodies on the Internet for advice; they either work first-hand in some hardware-oriented tech role or they have extensive knowledge from years of at-home tinkering.

              "Nobodies on the Internet"? Nice personal attack.

              What I meant to say was these ODROID HC2s only have Gigabit Ethernet, when for this level of investment you could get a QNAP with 2.5/4GbE connectivity.

              4x gigabit is just as nice depending on your use case.

              … Have you ever worked as a Sysadmin or in a support department/help desk/service team for an ISP/MSP?

              Yes.

              You expecting soccer mums, boomers and iPad-raised millennials to be able to buy a bunch of bare HDDs, enclosures, power supplies, fans and slap them together and then tinker with software that has the UI and usability of a Unix shell from the 1990s is like expecting the average car owner to be able to do a major service on their car. You're dreaming.

              No, I'm not expecting that at all. You seem to really be reaching here. As for a Unix shell from the 1990s, evidently you've never used one.

              So your entire claim of "56TB of Storage for $1,800 AUD"* is complete nonsense. It could be double that.

              [citation needed]

              You've just written a love letter to your storage setup and expected everyone else to be as gushing about it as you are. I really don't get the point of this thread.

              Nope, nope, nope.

              • @Zorlin: Far out this is like talking to a brick wall.

                Do I need to?

                Yes.

                In order to sell a product or service, one must first have a willing buyer to market to.

                Let me get an MS Paint diagram to help you out.

                Find me a single NAS with per-file redundancy levels. I'll wait.

                No one cares about per-file redundancy, I promise you. Once again you're off in your own little bubble. 99% of people want a network share to dump their life's data onto and to easily access from multiple devices. That's it.

                Classic nerd mentality; so far gone in their sea of obscura that they have no capacity to relate their needs or priorities to the average person's.

                I'll get in trouble for this, but you're misspelling the word "sanctimonious".

                The only point you've made that I agree with. Maybe linguistics might be your forte? Because marketing really doesn't seem like it.

                "Nobodies on the Internet"? Nice personal attack.

                Once again, perception =/= intention. We're all nobodies. I've never heard of "Zorlin's Famous Storage/NAS Blog" and chances are most people haven't either.

                No, I'm not expecting that at all. You seem to really be reaching here. As for a Unix shell from the 1990s, evidently you've never used one.

                I'm reaching?

                FFS, who on earth do you think the average home user is? They have difficulty power-cycling their devices to troubleshoot basic issues.

                You're really on another planet if you really believe your solution is at all intuitive or accessible to most people.

                As for a Unix shell from the 1990s, evidently you've never used one.

                This looks about as user-friendly as that screenshot you posted.

                [citation needed]

                Yes, from you. Post the goddamned source for 4 x 14TB HDDs + ODROID HC2s + PSUs for $1,800 AUD or else you have nothing to back up your claims with.

                Nope, nope, nope.

                Well that's convincing, I'm sold.

                Look, the lack of marketing 101 is perhaps your biggest problem but second to that is a severe lack of self-awareness and relatability to the average Joe.

                Take it from someone who's been on OzBargain since 2009 and has seen plenty of "Zorlins" come on here spouting off a bunch of technobabble nonsense from the layman's perspective, only to be met with crickets and indifference. No one really understands what they're trying to say or what the benefits of their particular solution are, because they're so far removed from the vast majority of tech consumers that they just have no conception of people that don't need or can't use their incredibly niche technology.

                You mean well but your approach really needs some work.

                • -3 votes

                  @Gnostikos:

                  Far out this is like talking to a brick wall.

                  You're telling me.

                  In order to sell a product or service, one must first have a willing buyer to market to.

                  Who am I selling to?

                  No one cares about per-file redundancy, I promise you. Once again you're off in your own little bubble. 99% of people want a network share to dump their life's data onto and to easily access from multiple devices. That's it.

                  Well, I know at least a dozen who care, so…

                  Classic nerd mentality; so far gone in their sea of obscura that they have no capacity to relate their needs or priorities to the average person's.

                  Not aiming at the average person. Ironic that you called me a brick wall.

                  You're really on another planet if you really believe your solution is at all intuitive or accessible to most people.

                  Did I ever claim this was accessible to most people? I've been saying over and over again that for 95% of people they should just buy a Synology.

                  This (upload.wikimedia.org) looks about as user-friendly as that screenshot you posted.

                  Sure, if you hate GUIs with a burning, fiery passion.

                  [citation needed]

                  Yes, from you. Post the goddamned source for 4 x 14TB HDDs plus the ODROID HC2s and PSUs required or else you have nothing to back up your claims with.

                  Sure, so…

                  https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two/

                  ODROID HC2 = $54 USD

                  Power supply = $5.50 USD

                  Drive = $330 AUD

                  Plus shipping, plus the fan, works out to about $1,800 AUD.
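
                  Back-of-envelope, if anyone wants to check the arithmetic (the 1.39 AUD/USD rate and $150 for shipping plus the fan are my assumptions):

                  ```shell
                  # Rough sanity check of the ~$1,800 AUD figure.
                  awk 'BEGIN {
                    boards_usd = 4 * (54 + 5.50)   # 4x HC2 + 4x power supply, USD
                    drives_aud = 4 * 330           # 4x 14TB shucked drives, AUD
                    extras_aud = 150               # shipping + 120mm fan, rough guess
                    printf "%.0f\n", boards_usd * 1.39 + drives_aud + extras_aud
                  }'
                  # prints 1801, i.e. roughly $1,800 AUD
                  ```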

                  Look, the lack of marketing 101 is perhaps your biggest problem but second to that is a severe lack of self-awareness and relatability to the average Joe.

                  Again, please point out where I was aiming at the average Joe.

                  You mean well but your approach really needs some work.

                  Please find the nearest mirror, and then take a good, hard long look into it.

                  Seriously, you've done nearly nothing but attack me in this thread.

                  • @Zorlin:

                    Seriously, you've done nearly nothing but attack me in this thread.

                    Or you're just being ridiculously thin-skinned and defensive of your precious "Elastic NAS".

                    Your thread is literally a Reddit TIL post. This is like the clichéd non-IT manager who finds some new shiny on a website or hears about it from a friend of a friend and then pesters the IT department about why they should get it, when it's completely redundant and just different-for-the-sake-of-different.

                    I think you'd be better served posting about this on Whirlpool and debating its merits there, as you'll get more interest and technical discussion.

                    I would actually sympathise more with you if this was an attempt at blatant self-promotion but instead it seems to be more of an invitation to an argument from someone incredibly set in their ways who gets quite passive-aggressive when defending their technological miracle from the slightest critique.

                    I've been saying over and over again that for 95% of people they should just buy a Synology.

                    You actually haven't said that once until I prompted you but whatever.

                    Please find the nearest mirror, and then take a good, hard long look into it.

                    Dude, relax. It's just some hard drives, you're the one taking someone's criticism of your favourite NAS setup as personal affront.

                    • +1 vote

                      @Gnostikos:

                      Or you're just being ridiculously thin-skinned and defensive of your precious "Elastic NAS".

                      Really?

                      Dude, relax. It's just some hard drives, you're the one taking someone's criticism of your favourite NAS setup as personal affront.

                      attacks someone dozens of times for an entire thread

                      "Dude, relax."

                      their technological miracle

                      If you can find a single place in this thread where I called my solution a technological miracle, I will send you a crisp $100. That is, if you can learn to be nice.

                      • @Zorlin:

                        attacks someone dozens of times for an entire thread

                        There you go again with the "I'm being attacked" hysteria.

                        If you can't handle anything other than out-and-out fanboy worship, I'm not sure why you're even here.

                        You honestly seem to be looking for a fight and yet curiously shocked when someone takes you up on your offer.

                        Every response of yours to all of the very mild criticisms about your Elastic NAS has been pretty hostile and passive-aggressive, so it's clear you're not really here to help but more to ram your opinion down other people's throats and force them to agree with you.

                        I get that you run the Perth Linux Users Group and you have a certain preferred way of doing things, but this kind of attitude is what creates the stereotype of people in IT as incredibly socially awkward and difficult to deal with, and that's coming from someone in the field themselves.

  •  

    To make things a bit more concrete, here's a screenshot from the web panel of my personal cluster (but keep in mind mine has an additional 4x 4TB drives in a Kobol Helios64 joined to the cluster). https://i.imgur.com/xgWiU9K.png

  • So, parallel to this, and for no particular reason other than to share: I'm just transitioning my home Plex media server to a Pi4.

    I had 4 shucked 10TB HDDs in a PC with Win10 (using Storage Spaces with 1-drive redundancy) and now want something as light and small as possible, as I might need to bring it with me if I move countries 2 or 3 times over the next year or two (yay for quarantines). So my solution is to connect:
    - RPi4 8GB (4GB probably more than plenty as it is)
    - two dual USB HDD docks (boards shucked to remove the stupidly large enclosures)
    - a 120mm fan hooked up to the 5V pins on the Pi, just to have some active airflow past the drives.
    Then I want to get a case 3D printed to keep it all nice and tidy.

    I'm tempted to continue running Win10 on the Pi4 'cos I think it's an absurd idea and that appeals to me in a stupid way.

    • +2 votes

      Linux (and optionally MooseFS) is a much better option than Windows for most data storage tasks.

      Other than that, that sounds like a solid plan to me :)

      As for absurdity, I ran a MooseFS chunkserver on WSL… on an NTFS filesystem… pretty sure the devs would have been HORRIFIED.

      • Oh I know it makes more sense, I first started running Linux in 1997 on home server and desktop.

        Hence why the Win10 ARM server appeals to me so much. You'd have to be an absolute madman to do it.

        • Hey if it works, it works. My HP Microserver has been running WHS2011 24/7 for years and it's been flawless.

  • What does the whole setup look like? Is it a clean single unit box like a DAS/NAS/Microserver or just jumble of parts?

    • just jumble of parts?

      That.

      It looks like utter crap from what I can find online.

      •  

        Your negativity is astonishing.

        https://raptorswithhats.com/img/2020-10-23-LACK-testfit.jpg

        Ignore the mess.

        • I'm being realistic mate. No one is calling that pretty or mistaking it for an off-the-shelf OEM product.

          You could 3D print enclosures that look better than those.

          •  

            @Gnostikos: That's the worst possible angle, since it's from the back.

            Can you honestly call this ugly? https://raptorswithhats.com/img/2020-10-23-hc2-finalfit.jpg

            • @Zorlin: Considering the number of cables coming out of that thing, yes. Not to mention there isn't sufficient cooling on those drives.

              •  

                @Trance N Dance: There's a fast Noctua 120mm on the front. The cables are a pity for sure, but provide 4x gigabit which is super handy.

                Due to the massive metal heatsinks, you've got a lot of ambient cooling - the fan simply wicks away heat from the heatsinks. You've got about 30 minutes before the hard drives get uncomfortably hot without the fan.

                I would NOT want to run 4 of these with no fan.

                • @Zorlin: If there is one (I can't see it), it doesn't look like it's cooling the top drive. Also, looks are subjective, so some (including me) will think that's ugly, as opposed to your thoughts.
                  4x gigabit is kinda pointless unless they're being routed through a multigig backbone, no? You're going to be limited by the lowest common denominator, and that's going to be the link to the accessing computer and/or that computer's drive speed.

                  •  

                    @Trance N Dance: It is cooling the top drive.

                    And yes, subjective is fine, but some people think it looks nice and it's basically just a big black monolith anyways so, who cares?

                    As for pointless gigabit, it's not pointless for me. I use this in my Proxmox cluster to provide fast storage to hundreds of VMs.

                    • @Zorlin: Correct me if I'm wrong, but isn't that modem/router bottlenecking you to gigabit anyway?

                      Those metal enclosures aren't acting as a heatsink for the drives, or did you put thermal pads between the drives and the enclosures?

                      •  

                        @Trance N Dance: They do actually heatsink the drives, they get damn hot if you don't use a fan on them.

                        No, I'm using a gigabit switch with a bunch of machines connected to it. I get the combined speed of all connected HC2s, just never over a single connection.

                        • @Zorlin: That's the metal enclosures heating up from convection, not actually acting like a heatsink. Unless the drives are directly touching the enclosures, they're being baked, as air is a good insulator.

                          I don't understand how the 4 x gigabit connection is not pointless for you then, you're being bottle necked by your switch and would only get gigabit throughput to the VMs.

                          •  

                            @Trance N Dance: The drives directly touch the enclosures at various points.

                            As for the switch, switches forward every port at full speed across the backplane… so my VMs each get a gigabit, and my overall cluster gets to use all the throughput.

                            Additionally, I get extra throughput between the nodes vs having a single gigabit.

                            • @Zorlin: Wait, are the VMs being run on the HC2s?

                            • @Zorlin: If the drives don't have a proper thermal interface, either paste or pads, they will not be able to transfer heat effectively to the enclosures.
                              Trance N Dance is correct in that the drives will be much hotter than the enclosures due to the poor thermal interface; simply touching the enclosures at a few points does not constitute a good thermal interface.

                              • +1 vote

                                @Cartman2530: Sure, and I don't disagree with any of that. What I'm saying is that the drives directly contact the chassis in a whole bunch of places, and I regularly check the temps of the drives and the chassis to make sure they're not going nuts.

Now that you bring it up, though, I would seriously consider adding some TIM to make it more trustworthy.

                                • @Zorlin: Some TIM would be for the best in terms of longevity and reliability and it doesn't cost very much either.

Another question: is the fan monitored for failure? Say the fan died while I was hammering the array on a hot summer's day; worst case scenario, would there be any notification and/or a thermal shutdown limit?

                                  •  

                                    @Cartman2530: Agreed.

                                    Yes… manually… by me ;)

I think the HC2s will shut down if there's a thermal issue, but since they can read the temps of the drives, it couldn't hurt to have them shut down if the drives go above 30 or 40°C.

Normally the drives sit at about ambient temp or a few degrees above.
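The "shut down if the drives go above 30 or 40°C" idea above could be scripted. Here's a hypothetical watchdog sketch: it parses SMART attribute 194 (Temperature_Celsius) out of `smartctl -A` output and halts the box above a threshold. The device path and the 40°C cut-off are made-up example values, and the column layout assumed is smartctl's standard ATA attribute table.

```python
import re
import subprocess
from typing import Optional

SHUTDOWN_TEMP_C = 40  # example threshold; drives normally sit near ambient


def parse_temp(smartctl_output: str) -> Optional[int]:
    """Pull the drive temperature out of `smartctl -A` text output.

    Looks for SMART attribute 194 (Temperature_Celsius); returns None
    if the attribute isn't present.
    """
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Standard attribute table: RAW_VALUE is the 10th column and can
        # look like "34" or "34 (Min/Max 21/46)".
        if len(fields) >= 10 and fields[1] == "Temperature_Celsius":
            match = re.match(r"\d+", fields[9])
            if match:
                return int(match.group())
    return None


def check_and_shutdown(device: str = "/dev/sda") -> None:
    """Halt the node if its drive is over the shutdown threshold."""
    out = subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True
    ).stdout
    temp = parse_temp(out)
    if temp is not None and temp >= SHUTDOWN_TEMP_C:
        subprocess.run(["shutdown", "-h", "now"])
```

Run from cron every few minutes and the "fan died on a hot day" scenario degrades to a clean halt rather than cooked drives.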

      • Hmm, that's not bad. Why can't a single board handle multiple drives?

        I'm not sure about having to get and setup/maintain separate $150-200 kits for each hard drive when you can smack them into a NAS enclosure and just pick a RAID config.

        •  

You buy a power supply with each HC2, but you can also buy a centralised power supply to run all of them if you're crazy like me and have 12 of the damn things (which I will soon). I've also got a NetApp 24 disk shelf, among many other toys.

          You can get other boards that can take more drives etc, but I like that these are stackable and that you can add and remove entire machines without your filesystem going down.

          It's a PITA compared to a normal Synology RAID in terms of initial setup, I'll happily admit. But I love the flexibility.

• @Zorlin: I can see that you enjoy the open-source tinkering and flexibility of it all, but I suppose there's a reason pre-made NAS units are more popular. I might check out MooseFS and see how it goes from there. Thanks.

            • +1 vote

              @Hybroid: For sure - I run Perth Linux Users Group.

              For 95% or more of users it makes WAY more sense to buy a QNAP or Synology and have a pre-made solution. For people who have specific needs, this might be a better option.

              Personally I've got (a licence for) 150TiB worth of erasure coded MooseFS Pro, with 2-disk redundancy that would normally cost me 450TiB. I'm planning on expanding to that full size, then filling it. That's enough storage for me for now.
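The 150TiB-vs-450TiB figure above works out like this. Goal-3 replication (three full copies, surviving two disk losses) needs 3x the raw disk; an erasure code such as 8+2, which also survives two disk losses, needs only 1.25x. The 8+2 layout here is an assumption for illustration, since the exact EC scheme isn't stated in the thread.

```python
USABLE_TIB = 150

# Goal-3 replication: three full copies to survive two disk failures.
raw_replicated = USABLE_TIB * 3       # 450 TiB, matching the figure above

# An assumed 8+2 erasure code: 10 parts stored for every 8 parts of data,
# also surviving two disk failures.
raw_ec_8_2 = USABLE_TIB * 10 / 8      # 187.5 TiB

savings_tib = raw_replicated - raw_ec_8_2
```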

              • @Zorlin:

                For 95% or more of users it makes WAY more sense to buy a QNAP or Synology and have a pre-made solution.

That's literally what I've been telling you several posts above and yet you keep bleating until you're blue in the face that I'm wrong.

                For people who have specific needs, this might be a better option.

                Might be?

From the incredibly glowing praise you heap on your "Elastic NAS" setup, you act as if this is the best solution for anyone.

                I've never heard of this jumble of things mentioned once on r/DataHoarders, Serverbuilds.net, StackExchange, et al.

                This honestly feels like an incredibly convoluted attempt at self-promotion.

                •  

                  @Gnostikos:

That's literally what I've been telling you several posts above and yet you keep bleating until you're blue in the face that I'm wrong.

                  I've been saying it throughout this entire thread that for 95% of users they should just buy a QNAP.

From the incredibly glowing praise you heap on your "Elastic NAS" setup, you act as if this is the best solution for anyone.

                  Where the hell did I say it was the best solution? And for some, it is the best solution for the price.

                  This honestly feels like an incredibly convoluted attempt at self-promotion.

                  Please point out the dozen places I've been promoting myself. Again, I'll wait <3

• @Zorlin: Thanks @zorlin for all the info on your website. It's compelled me to buy an ODROID HC4 to give it a test and finally upgrade my ageing DS2411+.

                Any reason you didn’t use the HC4? Cheaper than multiple HC2s.

                Do you run any other services on your setup? Do you run them on the master, slave or separate server?

E.g. I'm going to have to move away from Synology Photo Station; any good alternative on Linux?

Apache/Nginx will be easy, as will Plex. I think that's all I need!

                • @akunno:

E.g. I'm going to have to move away from Synology Photo Station; any good alternative on Linux?

Paid PhotoSync just works with almost anything. I tried SFTP and WebDAV; it works beautifully. The iOS/Android client app gallery is nice to use. The good thing about paidware is that it works.

Last month I was on Nextcloud; I just wasn't happy with the client software, it wasn't cutting it.

• @skillet: I'm also using PhotoSync, but I meant "what's the best way to 'display' the images when someone wants to find all the Christmas photos from 2 years ago?"

Appreciate the response; at least it confirms PhotoSync was a good purchase back in the day!

  • Hey @Zorlin, interesting concept - might be a bit ambitious for my humble needs but I did want to say thanks for sharing (love some good nerdery) and replying to all the comments no matter their tone.

    •  

      Thanks man! I really like the OzBargain community.

      I think it's important to address things even if they're negative. A lot of the points are really, really good! People perhaps need to learn how to phrase things more nicely, though.

      • I feel like you got treated like a war criminal in this thread. Please bring back the link so I can read your article.

        •  

          Sure thing.

          https://raptorswithhats.com/tags/elasticnas/

          Again, no ads :)

I'm hoping to have the next few posts done soon, but life has been getting in the way for months. I've got 3+ years' worth of running it; the posts cover the first 6 months…

          Also, if there's anything worth posting here from the posts, please feel free to do so or I can. I just didn't want to flood OzB with details.

  • Plot twist. @Zorlin and @Gnostikos are the same person! :D

• Ignore all the naysayers and QNAP/Synology fanboys. If you want to learn Linux and like to build a project from parts and pieces, this heresy is the way to go.
  You will learn about cluster storage, you will learn about disaster recovery, you will pull your hair out. You will do performance testing and hardware troubleshooting. You will learn how to admin network storage. You will tear it down every few months to try out new ideas. You will explore filesystems, backup, snapshots, caching, firewalls (if you go cloud), SMB shares, permissions; the scope is endless. You may get into trouble, and you will learn how to fix it.

I went with the RockPi4 because I have used it in other projects. The community is well supported, and it has PoE, two USB 3 ports and, importantly, an M.2 connector (NVMe cache). However, it has no SATA port, so the two full-speed USB 3 ports are fitted with two cheap JMS578 USB-SATA bridges. It's a mature chipset with upgradable firmware, and performance is good.

I run four RockPi4s, giving me 8 SATA ports and 4x M.2 NVMe as cache. I started with ext4 but am now rocking btrfs, with GlusterFS as the distributed storage system and Ubuntu Server 20.04 as the operating system, installed on SanDisk high-endurance microSD. Four Samsung 960 Evos were then added to the party to increase the challenge. LVM cache was set up initially, but I ended up with bcache due to performance issues.

In terms of powering the rig, the RockPi4 draws about 2-3 watts when idle but can suck up to 6 watts at full load, so two dual-port 30-watt USB power adapters are used for redundancy purposes. They are filtered by twin Eaton backup power units. The backup power supplies also power two MeanWell 12V 35W power supplies that spin the hard disk drives.

As you can see, almost everything is doubled (except for the NBN connection); it's set up for high availability. I never really measured the performance of the system, as I am pretty happy after adding the NVMe cache. I only have a few family members running iOS PhotoSync as clients over my NBN connection, plus a Samba share served to my desktop on my local network. So far, no complaints.

I am an IT admin at work with at least two decades of Linux experience, and this is still considered pretty advanced in my opinion (maybe I am behind). If you have an old machine and wish to turn it into a NAS, you can always try Xpenology, which turns your BYO hardware into a Synology server. When you get an idea of how it all works, you can dive into ARM stuff like Raspberry Pi clone NAS boards. These little computer boards are nothing new; they have been around for industrial use for ages. Many thanks to the Raspberry Pi Foundation for starting a cult.
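The dual-adapter power budget described above checks out as redundant. Here's the arithmetic as a sketch, using only the wattage figures quoted in the comment (4 boards, ~6 W peak each, two 30 W adapters):

```python
BOARDS = 4
PEAK_W_PER_BOARD = 6   # RockPi4 draws up to ~6 W at full load
ADAPTERS = 2
ADAPTER_W = 30         # dual-port 30 W USB power adapters

peak_draw = BOARDS * PEAK_W_PER_BOARD   # 24 W worst case
capacity = ADAPTERS * ADAPTER_W         # 60 W total

# Redundancy check: a single adapter alone can still carry the full peak.
single_adapter_ok = peak_draw <= ADAPTER_W
```

In other words, either adapter on its own covers the whole cluster's worst-case draw, which is what makes the pair genuinely redundant rather than just load-shared.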

    •  

      Thanks! I'll give your post a proper read and reply later, but - if you like GlusterFS, you need to give MooseFS a proper go. If you tried Moose a long time ago, give it another go.

      The RockPi4 looks super interesting. I've seen it before but might give it another look as I'm going to need another 6-12 nodes soon.

  • +1 vote

Thanks for the comments everyone; negative, positive and everything in between.

  • Hey OP,

    That Kobol Helios64 is a very cool product. Did you order direct from Kobol? How much did it cost you in total?

MooseFS - interesting FS. Does it have a similar architecture to Gluster? I wanted to try Gluster but decided against it because its data durability / bitrot protection seems to be an afterthought. I wonder if MooseFS is doing better in this department?

    Current crop of ODROID are limited to 1Gbps. Would 2.5Gbps / 10Gbps significantly improve performance in a MooseFS cluster? Also I wonder what the CPU load is like on the HC2?

    No erasure coding support for non-Pro version? Hmm that sounds bad. Limited to mirroring for data redundancy?

    • +1 vote

Hey! Yes, I did; it cost me something like $450 AUD. A LOT compared to a 5-bay RockPi 4 with the SATA hat, which is my next go-to.

      The catch with the H64 is that they haven't quite sorted stability yet, which is a damn shame.

      MooseFS is a fairly different architecture to Gluster. As far as bitrot, it scans chunks fairly continuously and corrects them as needed. Moose has never lost my data in 10 years of running it in anger.

      Yes, MooseFS scales horizontally with bandwidth, so having more is good. CPU load is very low, 10-30% when running a busy MooseFS chunkserver (data storage server) on the HC2.

      The lack of erasure coding sucks, but think of it as networked RAID10 basically. The fact that it's open source and damn solid makes up for it IMO but YMMV. And yes, basically it's just mirroring, but per-file mirroring levels.

      I'm about to get my 150TiB lifetime license with EC :D Very excited. Been on a trial and seen gains of ~0.6TiB a day for a while due to conversion to EC.

      EDIT: Let me qualify… I worked at a place with 250 million files, and we only ever lost a single file over that whole time due to Firefox having a bug with how it wrote user profiles. Even then the file was a temporary file that didn't matter. There are installations around 10PiB+ with hundreds of billions of files and no history of a single data loss incident :)

      • From my perspective as a home user, RAID10 would be prohibitively expensive. So it seems better suited for enterprise usage.

        Good info on the H64. Thanks for sharing!

        •  

          Fair! For me it's well worth paying for Pro, it actually saves me $8,000 or so vs buying the equivalent amount of drives and hardware to achieve the same amount of availability.

  • I need 90TB what do you reckon this will cost?
    Currently using Google Cloud Archive storage but costs $$$$

    •  

You can throw together 7 units with a 14TB drive each. So that's $120 * 7 for the boards, plus $330 to $420 * 7 for the drives.

Gives you $840 for the HC2s and $2310 to $2940 for the drives.

      This is a hell of a solution though, make sure you know what you're getting into. I'm very happy to help with that caveat in mind.

Perhaps a better solution, if you only need a year's worth of storage, is to rent big servers. I know a deal for 64TB of real disks for 75 euros a month (plus initial setup fees). Two of those and you've got a solid system.

  • +1 vote

    Just as an update, I'll be trying out the ODROID HC4 very soon, I've ordered two of them.

    That drops the price substantially to ~$500 AUD for the units themselves plus drive costs.

    Unfortunately I'd argue the HC4 is slightly uglier, but that's all subjective.