• expired

[Recertified] Seagate EXOS 18TB SATA ST18000NM000J $349, 16TB SAS ST16000NM004J $292 Delivered @ East Digital

660

It seems East Digital has slightly lowered its prices on the refurbished (factory recertified) Seagate drives compared to the last few months. The per-TB price on in-stock 18TB drives has finally fallen back below $20/TB again.

The Exos 16TB X18 SAS drives have the best price for large FR drives at the moment ($18.25/TB), but be aware these are SAS drives, not SATA!

Port  Model          Capacity  Price  Price/TB  Link
SAS   ST16000NM004J  16TB      $292   $18.25    Here
SATA  ST16000NM000J  16TB      $309   $19.31    Here
SATA  ST18000NM000J  18TB      $349   $19.39    Here
SAS   ST20000NM002D  20TB      $414   $20.70    Here
SATA  ST20000NE000   20TB      $420   $21.00    Here
SATA  ST22000NM001E  22TB      $572   $26.00    Here

Regarding used/pull drives, the HC550 16TB posted on OzB previously is now OOS. The HC550 18TB is now the lowest per-TB drive they sell, at $323 ($17.94/TB).

Update 18 Apr: It seems ED has jacked up almost every FR drive by around AU$100. Per-TB prices are now >$25 for the majority of models.

Related Stores

East Digital

Comments (closed)

  • +7

    Given the used HC550 18TB is very close in price to the FR Exos X18 18TB, you may want to consider:
    - Physically they may be of a similar age (some buyers say the HC550s they got have 20-30K hours on the clock; unless the FR drives are early RMAs, they could have 10-20K hours as well)
    - Some prefer WD drives, saying they're quieter and better built, and Backblaze agrees, at least on the failure-rate part (well, sort of)
    - Both the Exos and HC550 can be converted between 512e and 4Kn; consumer drives cannot (sketch below)
    - FR drives may come from more diversified sources, meaning different drives in an array may fail at a different pace (what you probably want), while pulled drives are likely from the same batch
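
    A rough sketch of that 512e/4Kn conversion, assuming Seagate's openSeaChest tools (the WD route differs) and that I have the flags right; the device handle is a placeholder and the operation is destructive, so only run it on an empty drive:

        sudo openSeaChest_Format --scan      # find your device handle first
        sudo openSeaChest_Format -d /dev/sg2 --setSectorSize 4096 --confirm this-will-erase-data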

    • +3

      Sorry, what are you saying here? Is the HC550 or the Exos X18 the better buy for most people?

      • +1

        Depends on how you use the drive (just one or two standalone, or in RAID, and how well you've implemented a 3-2-1 strategy) - unfortunately there's no one-size-fits-all answer.

        When the prices match, I would personally go with FR drives. However, pulls used to be way cheaper in comparison, which negated all their disadvantages. Among my 30+ drives over the past decades (WD vs Seagate, 50/50) I've had exactly one WD failure (~30K hours, out of warranty) and one Seagate failure (<100 hours, RMAed). See? Again, there's no clear answer.

        • +2

          Not to mention pulls from East Digital have gone from a 3-year to a 1-year warranty now.

          I just divide the FR price by three and figure that's my annual cost for that storage; anything beyond the warranty period is cream on top.
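
          (Worked through with this deal's numbers: the $349 18TB over a 3-year warranty is roughly $116 a year, or about $6.50 per TB per year.)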

          FR FTW

          (and raid and backup also FTW)

          • @bdl: I wouldn't do the math this way, but it makes sense!

    • +6

      1 year vs 3 year warranty too

      • That's very true. Depending on how "lucky" you are, the warranty may not make a difference LOL. But the more warranty the merrier.

        • +1

          I'd be wondering why, not really a great sign

  • I need 3 drives, but 8TB each is enough. If I wait, do you think I can get those at $15 per TB?

    • Unless it's a pull? I've never seen 8TB drives that cheap. The closest one is this?

      • That deal is over anyhow.

      • Hmm okay, probably just need to wait longer. I really don't need above 10TB

        • +1

          You might get lucky, but remember: they generally seem to pull at 3 years, and 8TBs have been around for much longer than that, which makes it likely that most pulls of 8TBs have already occurred.

          And if you're in the market for new: I bought 8TBs for $300ish in 2019 and they're pretty much the same price now.
          https://www.ozbargain.com.au/node/455640

    • +1

      @McMaferMur if you only need 8TB drives then just buy new IronWolfs or Reds - there's no point going for used drives at such a small capacity when you can just buy new drives for a few hundy.

      • Hmm, I decided to just delete some of my Linux ISOs to free up space. Kind of running out of budget lately. Can't afford it.

  • +2

    Look at the date of manufacture on the drives - the one in the picture is over 4 years old (Jan 2021)

    • -5

      If the seller says 30K hours on the clock, you could get a drive that's been spinning for all 4 of those years. I definitely would not recommend buying one above $50.

      • +4

        In reality the vast majority of drives can easily live past the 5-10 year mark running 24/7.

        Also there's no way you'll find a pulled modern enterprise drive above 12TB, in good condition, for less than $50.

        • +2

          Yep, got a 5TB WD drive from 2012 that's still running 24/7.

          • +1

            @SeanConnery: WDC WD30EZRX-00MMMB0 : 3000.5 GB
            110385 hours 12.6 years

            WD Green Drive still going strong.
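
            For anyone wanting to pull the same numbers on their own drives - a minimal sketch, assuming smartmontools is installed and /dev/sdX is a placeholder for your drive:

                sudo smartctl -a /dev/sdX | grep -i power_on_hours   # raw value is the hours count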

        • -1

          I'm happy to accept your 10-year warranty on the drives you are selling. From my side, I can assure you that I will not run them 24/7 for all 10 years. Deal?

          There's no seller saying they sell crap. But AliExpress is full of re-labelled drives with zeroed SMART that have definitely been running for years, and often were originally different models.

          • @Ozzster: I’m not selling these drives. I’m just posting a deal.

            • @xmagic: OK. That's not the point; it's a question to all vendors. The fact that some drives can live longer is just survivorship bias. Manufacturers would be glad to offer such a 5-10 year warranty if the percentage of such drives made any reasonable sense. Since that doesn't happen, it means that for every extra year one drive lives above average, another drive lives a year below average. When buying used, it means you bought a lucky winner and are hoping it stays lucky continuously for years - like winning a lottery again and again with the odds shrinking every time.

              • +1

                @Ozzster: Not sure why you're heading off on this weird tangent, but to address your original point:

                If the seller says 30K hours on the clock, you could get a drive that's been spinning for all 4 of those years.

                Of course, that's why they're quite a bit cheaper than brand new drives. Some of the lifespan has already been used.

                I definitely would not recommend buying one above $50.

                Why? A modern enterprise drive has way more than $50 of value left at just 4 years of use. Hence the drives in this very deal are priced much higher than $50.

                  @Nom: It's the value of your data vs the cost of disk space. Let's count the risks and what they'll cost you. I'm not sure about everyone's use cases, but I can only expect these drives to be used as backup drives at home. Considering these disks carry increased risks - instant death, non-instant death, data corruption that cannot be recovered because they are helium-based, reduced warranty and lifetime, plus the risk of common issues appearing simultaneously because the disks share the same history - and given standard redundancy requirements for backups, you would need 3 storage devices holding the same copies of the data. In normal circumstances, without so many risks, 2 drives would be sufficient; but here there are more risks, so an extra drive is needed to cover them. It means that for every $50 spent on the actual data you would need another $150 to be sure it lives long - a 1:3 ratio, or $200 overall including the initial $50 for the data source. So if you want a $300 disk, you would need $900 on top of it for backups.

                  Obviously, you don't have to store 3 copies of your data, or have backups at all. But this is not recommended. If you have just one copy and are super confident that nothing will happen, of course that's fine - but only for the kind of data I would delete with no remorse.

                  • @Ozzster: That's a really long way of saying "You always need a robust backup strategy".

                    It doesn't matter what drives you're using, or how likely they are to fail - you still need a robust backup strategy because all drives have some chance of instant failure.

                    The beauty of these used drives is that you can buy the drives and the backup drives for way way less than the cost of brand new drives and brand new backup drives - which still need exactly the same robust backup strategy, even though you paid way more for them.

                    • @Nom: I actually mentioned that used drives have more risks and would need one more disk. Instant failure is covered as well.

    • +7

      the manufacture date is not as important as how the drive was utilised during its deployment

      I had the fortune to learn from a Seagate rep at an industry event. Sharing observations from the manufacturer's side, one little tidbit was: cold starts and power cycles hurt drives more than keeping them running 24/7

    • +1

      Err I thought they just reused old photos for the listing?

      • I believe so.

  • will the SATA ST18000NM000J 18TB be suitable for NAS use?

    • +4

      You can consider all enterprise drives (WD DC, Seagate Exos) suitable for any use.

        • +15

          …wtf did i just read.

          • +4

            @Aarent: Watch the language mate :P

            But yes I agree - NAS-specific models are just something they do on the consumer line. Enterprise lines are a totally different beast.

            If you have to name one place enterprise drives may not be suitable, that would be the desktop…

            • @xmagic: One would suggest that any spinning hard drive hasn't been suitable for a desktop for at least 5-10 years

            • -1

              @xmagic:

              If you have to name one place enterprise drives may not be suitable, that would be the desktop…

              … and the bedroom

        • +4

          Is the seagate marketing koolaid really that delicious?

          The whole "NAS" firmware stuff is a rip. WD started it by removing features from the green drives then adding them back in on the red 'NAS' series. (TLER debacle anyone?)

          A NAS is a server. Network Attached Storage. The EXOS drives are enterprise drives. You may not want them for a super low-power NAS, but they are certainly fit for NAS duties.

        • +1

          Exos are literally enterprise drives. Designed to be part of an array in a server. Not enough downvotes for your 'advice'.

  • +5

    FYI they change their pricing frequently (even multiple times a day) based on AUD/USD - I've been tracking them the past week, pricing today is $1 more than Friday but around $10 less than Wednesday/Thursday orange shenanigans :)

    I bought a drive Friday because I needed it however going to see what the exchange rate does over the next few weeks before buying more.

    • oof, all the 12tb's just jumped 50 AUD :|

  • +2

    Yikes still about $50 over historical (6 months ago) prices. Coming down though, 16tb recerts/refurbs were around $330-$350 the past 3 months.

    • +3

      Yeah that's the stupid exchange rate, kicking myself I didn't get more storage last Sept but oh well, time machine is broken :|

  • -2

    Refurbed Seagate? Cool, I need more doorstops.

  • +3

    Anecdotally, I just had to replace 2 of 3 Neology refurbed 8TB Seagate Exos drives in my NAS that I bought a little over a year ago through an OzB deal. An 8TB WD Red and a 4TB WD Green are still going strong 4 years in.

    • Seagate was my go-to brand for hard disks when they had the clam/seashell enclosure design. They were rock solid in reliability. When speeds got faster it felt like they lost their way - drives would barely last a year after that.

      With HDDs I tend to stick to a brand until I experience multiple failures. Of course, when I finally jumped ship it was to IBM, right before their line became infamous…

  • +3

    I bought 2x 16TB about a year ago. Both had bad sectors upon arrival. ED replaced them no questions asked - shipped to Melbourne, with replacements within a week from Singapore. 1 year later, one drive has done it again. Shipped to Melb, replacement from Singapore… last tracking was in Hong Kong 2 weeks ago (not ED's fault), so still awaiting the replacement.

    Other people I know went with refurb/pulled WDs and have had no issues; not sure if that means anything…

    • +3

      Could be just bad luck. But if there’s a significant pattern (drive dies on specific port for example) then consider checking your power supply. Bad PSUs kill HDDs like it’s nothing.

    • I bought 3x 16TB drives 1 year ago without issues too. Hmm, interesting that some people ran into issues, but good to know the warranty service is solid.

  • +1

    Picked up 1x 18tb two weeks ago - I was shocked at how quickly it arrived.

    Testing well so far - no bad sectors, but it does struggle to provide SMART data to some apps but not others.

    • +3

      I ordered a NAS from Scorptec and 4x 18TB from East Digital on a Saturday.

      The hard drives were in my hands on the Thursday, shipped from China - before Scorptec had even shipped the NAS.

  • +2

    Noticed their "New" stock has diminished to basically nothing. They used to always have many drive options/sizes under "New" - or have they now reclassified those drives as "FR"?

    Bought a few times from ED so it's not a cop out - genuine question, as I noticed it yesterday when I had a look.

    • +7

      Bit hard to tell what you're actually asking, but in case you were unaware: there is a way to obtain a drive's actual used hours past the wiped SMART data, and when people started checking their "New" drives, sure enough, they weren't.

      https://www.ozbargain.com.au/node/891267
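
      If you want to check yours, a rough sketch assuming a recent smartmontools (7.4+, if I recall) and a Seagate drive - compare the FARM hours against the normal SMART attribute (/dev/sdX is a placeholder):

          sudo smartctl -x /dev/sdX | grep -i power_on          # hours as SMART reports them (can be wiped)
          sudo smartctl -l farm /dev/sdX | grep -i 'power on'   # hours from Seagate's FARM log

      If the two disagree wildly, the SMART data has likely been reset.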

      • My new drives from 2024 were properly new, so no that wasn't the case for all of them

        • +1

          Yes, because they were from 2024.

          But they were also selling "new" drives that were made back in 2021…

          @Nom: Yeah I know. That SMART reset issue was only really a thing near the end of last year, but I'm just clarifying that mine were advertised as "new" on the Shopify site and were new according to the FARM hours. So not all of East Digital's drives have been falsely listed as new, which one might assume from the message I replied to. Unless there's something I'm missing and my drives are also refurbs of course, but I don't believe that's the case.

  • +3

    I bought a ST16000NM000J 2 years ago for $308, sad that used prices have not come down in that time.

  • Ahh tempting, I'm running out of space on my NAS… but worried about the rebuild process for all 4 drives lol (Synology SHR-1)

    • +2

      Don't worry, if the rebuild fails you can just restore from your backup ( ͡° ͜ʖ ͡°)

      • Eh :P

        I do have a (technically untested) backup, but it's only partial, so at worst I'd lose some non-crucial data and need to spend some hours restoring services.

        I'm more worried that people are reporting some of these drives arriving faulty, which means I'd need to run tests on them first - I might pass and wait for a discount on new drives somewhere after all

        • You can run a few passes of 'badblocks' to preclear drives before you consider adding them to your array.
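
          Something like this, as a sketch (destructive - only on drives holding no data; /dev/sdX is a placeholder):

              sudo badblocks -wsv -b 4096 /dev/sdX   # write-mode test, 4 patterns, shows progress
              sudo smartctl -a /dev/sdX              # then re-check reallocated/pending sector counts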

          • @skittlebrau: Eh, while yes I could, with 20TB drives we're talking almost 24h per single pass (at ~250MB/s), and I don't really have a machine with enough drive slots and low enough power usage to run this a few times across 4 drives

            • @drasticmeasures1337: You don't need enough drive slots or low power usage - one drive at a time in any old random desktop or USB dock works just fine.
              There's no need to check all 4 drives simultaneously, nor is there any rush to test all 4 drives quickly. Take your time, and once you've established that you don't need to warranty-replace any of the 4 new drives, then start <whatever migration path you decide>.

              some of these drives arriving faulty which means I'd need to run tests on them first

              You always have to run the tests first anyway no matter where you get your drives, because absolutely any drive can arrive faulty!

    • +1

      For a full array replace you may want to use dd and gdisk instead. A lot faster, and no rebuild is needed (the original drives will mostly be doing read operations).

      Of course there will be downtime.
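
      Roughly like this - a sketch with hypothetical device names; triple-check them, dd will happily destroy the wrong disk:

          sudo dd if=/dev/sda of=/dev/sdb bs=64M status=progress   # whole-disk clone, old drive to new
          sudo sgdisk -e /dev/sdb                                  # move the backup GPT to the end of the larger drive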

      • Wait, why didn't I think of that? Have you done this? Does Synology gracefully handle partition size changes later on? Any potential issues?

        I'm okay with some downtime, as long as I can get it done in one day instead of weeks lol. My other option was buying a DX517 expansion but it basically costs as much as a new NAS

        I guess I'd need to buy some SATA cables though, and can then boot into Linux for the data migration. Ideally I'd want two drives being cloned at once (I should have 4x SATA ports on my main PC's mobo)

        • -1

          For cloning, you can use a GParted Live USB. Badblocks screening can be quite slow (my 8x 12TB took about 4 days to do all 4 patterns @ 1 pass), but you don't need to take down your RAID to do it. The actual mirroring part should be much faster - in my experience it can go at around 100-130MB/s per drive on an 8-port HBA (4 pairs of drives per run, 2 runs), and the whole process took another 2-3 days. Definitely faster than RAID rebuilds, as in my case I'd need to run 8x rebuilds (for 8 drives, RAID6). Though just rebuilding one drive at a time can be quite fast, too.

          Finally you'll need to use GParted or gdisk to fix the corrupted backup GPT partition table, as it's no longer at the end of the drive after cloning (the gdisk route is sketched below). DSM will be able to pick up the extra space once booted with the new set of drives. Also don't forget to run a full data scrub (where DSM does one pass of both mdadm resync and btrfs scrub) to ensure there's no data corruption.

          If you're upgrading to a set of 4 drives of the same model and capacity, better to take this opportunity to convert it to RAID 5 instead of SHR. SHR is good for mixed-and-matched drives but isn't really needed for same-model drives.

          Hopefully at the end of the journey everything works fine.
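
          If you'd rather do the GPT fix in gdisk than GParted, the expert menu has a relocate command (sketch, hypothetical device name):

              sudo gdisk /dev/sdb
              # inside gdisk: x (expert menu), then e (relocate backup data structures to end of disk), then w (write)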

          • @xmagic: Okay I have completed the migration. For those who may be interested:

            (Originally 8x12TB in RAID6, not SHR. Also I have an 8-bay rack server with H310 in IT-mode, so can saturate all 8 drives at full speed at the same time - this saves a lot of time)

            1. Badblocks and SMART scans

            For those HC550 18TB pulls, it would take around 30 hours to complete a full read and write pass in badblocks, and ~12 hours for a long SMART test. See details here.

            2. Data copy with dd and GParted/gdisk

            I did the copying in 3 batches - I copied 1 drive as a test, inserted it back into the original array in DSM, and everything worked with no array rebuild needed. Then I copied 4 drives, and finished up with the remaining 3.

            Each run took between 71,000-75,000s (19-21 hours) at an average speed of 200MB/s. Once done, just use either gdisk or GParted to fix the GPT table. GParted is probably easier as it identifies the issue, which can then be fixed with a button press.

            3. Verify by scrub

            After all drives are back in the NAS, run a full-volume scrub to verify the data. DSM does two scrubs - the btrfs scrub took about 19 hours for me (it depends on how much data you have stored) with no errors (yes!), and the md resync took a bit longer (21 hours) as it scans all blocks in the array. (A non-DSM equivalent is sketched at the end of this post.)

            4. Expand the storage

            Just follow whatever DSM Storage Manager says to expand the empty, available space into the array. This will probably take another day or so, as it's essentially what a scrub does but across all 6x18TB of usable storage.

            5. What to take away

            So this method evidently works. In the past you'd need to replace one or two drives at a time (two at once also risks an array crash on rebuild), which means 4-8 rebuilds of the array, each taking around a day to complete - and that's not counting the time spent scanning/clearing the drives. By just copying the data instead, you can save anywhere between 1/4 and 1/2 of the time to achieve the same thing. This matters more and more as RAID arrays get larger.

            Another method would be having a 2nd NAS and just migrating the data. This involves the time it takes to set up the new NAS, the initial resync on the newly created array, and copying the data over the network (which is quite a bit slower than direct copying if the array is almost full). Though it involves zero downtime, so it's probably a less economical but more convenient choice.
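
            For reference, outside DSM the two-part scrub in step 3 is roughly this (hypothetical md/volume names):

                echo check | sudo tee /sys/block/md2/md/sync_action   # mdadm consistency check/resync
                sudo btrfs scrub start -B /volume1                    # btrfs checksum scrub; -B waits for completion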

            • +1

              @xmagic: Some updates - when I upgraded two of my arrays, one expanded no problem, but the other didn't show any option to use the extra space. I don't know exactly what happened, but eventually I had to modify the partition table and use mdadm to manually expand the RAID6 (rough sketch below). No data loss though.

              So yeah, if someone is reading this: unless you're a power user who knows what you're doing, better to just replace drives and rebuild one by one. It's gonna take forever, but at least it works without a hitch.
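
              The manual expand was along these lines - a sketch with hypothetical partition/array numbers, only for those who understand their layout:

                  sudo parted /dev/sda resizepart 5 100%   # per member disk: grow the md data partition
                  sudo mdadm --grow /dev/md2 --size=max    # then expand the RAID6 into the new space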

    • I'm a little confused, so please forgive me if I am way off base here.

      With my Synology (also SHR-1), I simply pulled out my smallest HDD and stuck in a new (bigger) one. Waited 16, 18, 24 hours, whatever, while the array rebuilt itself, and in the meantime used the NAS as normal.

      Then repeated for each disk, 4 x 16 TB over a week. No downtime. No performance hit that I noticed.

      I was under the impression this was the 'authorised and recommended' Synology method? No fiddling around with Gparted, sticking disks into a PC, cloning, etc, etc.

      If the new disk throws up a fault while building, pull it out, mail back to vendor for replacement, and continue with the next disk anyway.

      Or am I completely missing something?

      • +1

        Rebuild is the recommended option in most cases. It essentially treats the drive as lost, and rebuilds from parity.

        The process runs in the background, but it also spins up all drives and uses a considerable amount of power + CPU.

        The reason you'd clone a drive is for speed - so you can get back to work quickly by prepping the drive data. (It's also how you can recover data)

        Because the Synology CPUs are usually ancient, i.e. 2012-level CPUs, a rebuild can run at 0.05% per hour on the 4-bay and 5-bay models. Enough to write 50GB files in realtime, but not really enough throughput to read >200MB/s across multiple drives and run CRC + ECC in RAM.

        To reduce the time spent 'offline', you pull a drive, clone it on another PC, rearrange the clone's partition data to where it's supposed to be on the larger drive, then put the cloned drive in place, and rebuild/test to restore the array.

        This can take half the time and half the power/CPU time on a 10+ TB drive.

        It's a fairly common option for upgrading a Synology, versus a rebuild/resilver (the ZFS term)

        Rebuild is a bit 'dangerous', so in some cases SHR-1 will put the RAID into a degraded, read-only 'parking' state until it finishes, then reload the MD into write mode. It's only really bad when there's just 1 set of parity and the parity data isn't accurate due to silent corruption, since the parity rebuild can then go wrong. The rebuild can be forced in some cases, as it can stop partway when the parity is in error.

        Degradation can also happen if there's a parity or hardware problem - parity is not permanent per se. It 'expires' occasionally and needs to be rebuilt / rescrubbed to ensure the drive data is valid. Which is why SHR-2 is recommended once you go over 10+ TB, as the rebuild can take 12+ hours - but it also opens up other issues, like needing a more expensive Synology product with more drive bays. Time and safety are usually the main factors, as SHR-2 is faster to rebuild and recheck, but it also absorbs 2 drives' worth of capacity to build parity.

        In a few alternative software-RAID options using COW (copy-on-write), this can be done invisibly, as the MD operates in a cache-type mode until it's available to write to. This allows for a hot/warm/cold storage buffer, where incoming writes are stored in RAM (hot), then cache (warm), then the MD/array of drives (cold). This is how (some) ZFS and btrfs setups operate, for instance. So when the array is offline or unavailable (busy moving data, resilvering, checking parity, health checking, et al.), data can still be read/written until the array is ready.

        If you use the Synology as your one drive for network use, it's "messy" when rebuilding.
        If it's used as backup or "cold storage", it's not an issue, as you can just wait for it to finish.
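
        If you do go the rebuild route and want to watch the progress over SSH, /proc/mdstat shows per-array resync percentage and speed:

            watch -n 60 cat /proc/mdstat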

  • +1

    Anyone know of any good nas deals right now?

    • +2

      do u have an old pc lying around? if yes then use that rather than getting a new nas. it's just not worth it if u have old hardware lying around. else u can probs grab something from AliExpress when the discount code drops later today

    • If you have an old computer, use that; otherwise go on FB Marketplace, buy an old PC for cheap with enough SATA ports, and then get Unraid.

      • Wait until October 14.

  • Refurbished + Seagate. Even James Bond doesn't live this dangerously.

  • FWIW, bought 4x WD 18TB 8 weeks ago. Arrived in 5 days in a well-packed box. Not taxed either, FWIW.
    3x had 25K hrs, one had 21K hrs.
    Did a 2-week torture test in my NAS and all passed.
    Ordered 1x on Friday as a spare for my raidz array.
    Can't complain for the price.

  • Is it useful for security camera recording?

    • NVR is write-intensive so I'm not sure. If your NVR has RAID, then it should be fine. Otherwise, low-mileage drives may (technically) last longer.

  • -2

    I don't trust a website that's still using a .myshopify.com domain.

    Still using the basic Shopify template with duplicate links (quicklinks and info have the same links), and unfilled placeholder fields that would typically make a business look much more trustworthy.

    i.e.
    Our mission
    Share contact information, store details, and brand content with your customers.

    That's an interesting mission

    • +1

      Nope you don’t have to. Many OzBers do, though.

      • -4

        Do those OzBers know that they don't even have an ABN on their website, and that there are no guarantees or protections in the way of warranty (both on their website and legally)? Might as well purchase second-hand off of FB Marketplace.

        • Nope. They’re operating from HK. I guess it’s just some level of trust.

          I personally don't really care about warranty on HDDs as long as they're not DOA. Probably many others are knowingly walking on thin ice, too.

          • -4

            @xmagic: That's the big problem… if it arrives DOA, and this "store" refuses to believe you or refund you, there is nothing you can do. The protections you have here are the equivalent of purchasing something off Facebook Marketplace from a guy called Gazza who says "trust me bro, it's all good"

            • +3

              @whitepuma: So far ED has a good track record. And your credit card company may be able to help with uncertainties to a degree.

              No you don’t need to trust them mate. Good thing it’s a free market and it’s easy to walk away.

            • +1

              @whitepuma: Have you somehow missed the bazillion other threads where many OzBargainers have purchased from this store, and experienced an excellent warranty service ?!

              this is the equivalent of purchasing something off of Facebook marketplace

              It's absolutely nothing like that, because these guys have a proven track record.

              if it arrives DOA

              Then you just contact the seller, and they'll replace the drive. No big deal.

                @Nom: A history of other sales with people here doesn't give you any legal protection. When push comes to shove, the ACCC can't help you here.

                Also, you need to remember that most of the previous deals from this seller were via eBay. Buying via eBay gives you other protections, whilst this and similar listings are done via their shoddy website.

                • @whitepuma:

                  A history of other sales with people here

                  is all you need.

                  But if you want to pay double the price for official-channel drives from authorised Australian retailers, then go right ahead. There's nothing stopping you.

                    @Nom: Using that logic, Heymix must be a great and reliable product… given how many people here have purchased them.

                    • @whitepuma: The OzBargainers history of warranty claims with East Digital has been great.

                      If OzBargainers' history with Heymix had been great and reliable, then yes, that's what we would think about Heymix… But the opposite is true 🤷

                      I don't really know what you're trying to say here ? Are you arguing for or against the OzBargainers experiences ??

                        @Nom: I'm arguing that people (not only OzBargainers) need to understand that purchasing from a place like this doesn't give the consumer the same (or any) legal protections they would have if they had purchased from a properly registered business.

                        From a consumer-protection standpoint, purchasing from here is the same as purchasing something "as-is" from Facebook Marketplace

  • What's the process to do a full check for bad sectors or blocks?

    thanks

    • +1
      • is this similar to doing a clear/preclear in unraid?

        • Possibly, but I don’t know how Unraid does it exactly.

          • @xmagic: np, preclear should do similar to badblocks, writing & reading 0 to all blocks

        • preclear is part of smartctl, which is what underlies both Synology and Unraid.

          It uses a series of SMART control functions to basically do a read, write, then read test on all blocks of the drive. Inherently destructive, but it's there to pick up weak/damaged sectors. Some variations do this twice to 'hash' erase, i.e. flip to 0, then flip to 1.

          The extended test just does a single read pass over the drive to verify the data is intact.

          badblocks does almost the same thing as preclear, but it skips a few of the steps that smartctl does.

          Any of the three is a bit of overkill,

          but on a used drive, preclear is the better option to validate, as spinning rust is prone to silent failure - especially as 10+ TB drives have higher-density platters, and thermal and magnetic weakness can develop over time too.

          • @toliman: thanks, started preclear last night, 16 hours in.

          • @toliman: After checking the Unraid Preclear plugin, I think these are three different things.

            • SMART tests/ATA secure erase are indeed handled internally by the HDD's firmware. Depending on how the HDD is set up, they may do things differently (actually erase the whole disk, throw away the encryption key, etc.).

            • The Unraid preclear plugin does a pre-read, a zero-write and an after-read, to determine if there's data corruption when the drive is zeroed out (at least with its default script). The goal is to both verify the drive and, importantly, zero it out for adding to an existing array. In addition, it conveniently tracks SMART data to check whether the drive is likely to be reliable.

            • badblocks, in comparison to the two above, is way more lengthy and puts much more stress on drives. Its original intention was to scan for bad blocks so the filesystem could selectively skip those areas. But in reality, in its default mode it writes then reads/compares 4 different patterns (0xaa (10101010), 0x55 (01010101), 0xff (11111111) and 0x00 (00000000) - that's 4 full runs) on the drive, compared to the 1-1.5 runs of the methods above. So one can say it's more thorough than the other methods, but you're right, it's definitely overkill - though its workload could total a drive on the brink of a crash, and you probably want that while there's warranty on the drive: bringing a drive to its knees within warranty is better than out of warranty.
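
            If the default four passes are overkill for you, badblocks can be limited to a single pattern for roughly a quarter of the runtime - and a zeros-only pass conveniently leaves the drive cleared, much like a preclear (sketch; /dev/sdX is a placeholder):

                sudo badblocks -wsv -b 4096 -t 0x00 /dev/sdX   # one write+verify pass of zeros only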

  • damn the 18tb ones are sold out already :(

    • Relax mate, I didn’t get enough drives for myself either 😭
