10TB WD My Book with Reallocated Sectors?

I just recently received the two WD My Book 10TB hard drives from the Amazon deal and have just started using them connected directly to my PC via SATA (I did the 3.3V pin mod).

I ran CrystalDiskInfo for the first time after around 2 hours of use and noticed that there are already 5 or 6 reallocated sectors on each drive.

Is that normal? Do hard disks generally come with reallocated sectors straight from the factory? Or are mine already showing signs of being duds?

I have limited experience.
Anyone know what I should do?

Any help would be appreciated.

Edit: Turns out I'm an idiot. Nothing to see here…

Comments

  • -1

    I think Australia Post is capable of half of them.

  • +1

    Every hard drive I've purchased has had zero reallocated sectors. I would contact WD and ask for a replacement. A reallocated sector means there's a fault on the drive, and the drive has retired that sector and moved the data elsewhere. There are only a limited number of spare sectors before you start losing data.

    The real problem occurs if you get sectors pending reallocation. That means the data there cannot be read successfully, and the drive is essentially useless, as any data in that sector may be unretrievable.

    Can you reverse the mod? If not, then you'll be out of luck warranty-wise.

    • Luckily, I chose to break the pins on the SATA connector on the power supply and not on the drive itself.
      I was also careful not to break any tabs on the enclosure (with help from a YouTube video).
      But all of this is completely irrelevant now because I realize I'm an idiot. See below.

      Thanks for your help

  • Sadly this is why everyone recommends running stress tests and health checks before cracking open external drives.

  • Is that normal? Do hard disks generally come with reallocated sectors straight from the factory?

    Absolutely not.

    If that count continues to increase within a very short span of time (a day or two), by any amount (especially a huge jump, from 6 to 50 for example), and especially after prolonged read/write operations, then the drive is headed for early failure and may begin corrupting data before it fails completely and becomes totally inaccessible.

    If that count stays at 6 for the next month or two and doesn't increase, then it will likely continue to operate until it reaches its mechanically-limited lifespan, albeit with perhaps slightly reduced performance.

    Very rarely, and I mean very rarely (I've seen it happen only a handful of times over 10 years in the IT industry), a drive will encounter a bad sector on a platter, reallocate that sector to one from its reserve area, and never encounter another bad sector for the remainder of its lifespan. (That reserve is what the 'Threshold' count for the Reallocated Sectors Count attribute relates to: how many sectors can be reallocated before the HDD has no reserve sectors left, at which point bad sectors remain on the platter and data gets corrupted.) Generally speaking, though, every single HDD I have seen develop reallocated sectors started off with a few, ballooned to a very high number within a short time frame, and then became a brick if it continued to operate.

    You would also be seeing non-zero values for the Reallocation Event Count and Current Pending Sector Count attributes if it were genuinely just a bad sector and not a dud HDD entirely.

    The other SMART attributes to keep an eye on are:

    • Reallocation Event Count
    • Current Pending Sector Count
    • Uncorrectable Sector Count
    • UltraDMA CRC Error Count

    Any value other than 0 (decimal) for any of those attributes is a bad sign, as are values that increase within a short period of time. A quick way to check them from the command line is sketched below.
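
    If you'd rather check these outside CrystalDiskInfo, here is a minimal sketch using smartmontools' smartctl. It assumes smartctl is installed and the drive sits at /dev/sda (adjust for your system); the attribute names are the ones smartctl reports, which can vary slightly between drives.

      # Minimal sketch: read the raw values of the SMART attributes listed above
      # via `smartctl -A`. Assumes smartmontools is installed and the drive is
      # /dev/sda -- adjust the device path for your system.
      import subprocess

      WATCHED = {
          "Reallocated_Sector_Ct",
          "Reallocated_Event_Count",
          "Current_Pending_Sector",
          "Offline_Uncorrectable",
          "UDMA_CRC_Error_Count",
      }

      def read_raw_values(device="/dev/sda"):
          out = subprocess.run(["smartctl", "-A", device],
                               capture_output=True, text=True).stdout
          values = {}
          for line in out.splitlines():
              parts = line.split()
              # Attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
              if len(parts) >= 10 and parts[1] in WATCHED:
                  values[parts[1]] = int(parts[9])
          return values

      if __name__ == "__main__":
          for name, raw in read_raw_values().items():
              print(f"{name}: {raw}" + ("" if raw == 0 else "  <-- non-zero, keep an eye on it"))

    Running something like this once a day for a week and comparing the output makes any upward trend in those counts obvious.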

    Anyone know what I should do?

    Return it immediately. It's just not worth the hassle wondering if this thing will die on you within such a short span of time.

    Generally speaking, factory-dud HDDs will fail hard and fail early; otherwise, for good drives, SMART health should stay green for the entirety of their mechanical lifespan. You do get cases where they die in their 3rd or 4th year of operation (and I refer to the power-on hours count, so 3 or 4 years of continuous operation), but most failures, if they do occur, happen in the first year of operation.

    • +1

      It turns out I'm an idiot.

      I mistook the threshold value for the decimal translation of the raw value (which is shown in hex by default).
      It turns out I actually have 0 reallocated sectors. Which, while embarrassing, is actually good news!

      I also got thrown off by my SSD reporting 'Good 100%' while the new HDD shows just 'Good'.

      Regardless, thank you for your help.
      The information you've written is very insightful and good to know.

      • It turns out I'm an idiot.

        Happens to the best of us.

        I mistook the threshold value for the decimal translation of the raw value (which is shown in hex by default).

        Yes, this is a rather confusing feature of CrystalDiskInfo: it displays all raw values in hexadecimal by default, while the other columns display decimal values.

        Open up CrystalDiskInfo and navigate to: Function > Advanced Feature > Raw Values > 10 [DEC]

        That makes the SMART values a lot more readily comprehensible.
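
        For example, a raw value that CrystalDiskInfo shows as 000000000010 in hex mode is actually 16 in decimal. A quick way to sanity-check a conversion (the value here is made up purely for illustration):

          # Hypothetical example: convert a raw value displayed in hex to decimal.
          raw_hex = "000000000010"   # what the hex Raw Values column might show
          print(int(raw_hex, 16))    # -> 16, the actual decimal count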

        I also got thrown off by my SSD reporting 'Good 100%' while the new HDD shows just 'Good'.

        That's because SSDs, being solid-state flash memory, have a finite number of program/erase cycles. The firmware's wear indicators can actually give you a hard figure for how much lifespan is left in an SSD's NAND blocks (e.g. 100% = brand new). That doesn't mean a brand-new SSD cannot fail or be a dud; it simply means that, barring catastrophic failure elsewhere on the PCB (such as the storage controller), an SSD's NAND blocks should last a long time.
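
        As a rough illustration of that wear indicator (with entirely made-up figures; real firmware uses its own internal counters), the percentage is essentially the fraction of rated program/erase cycles the NAND has left:

          # Illustration only (hypothetical figures): deriving a "life left"
          # percentage from rated program/erase cycles.
          rated_pe_cycles = 3000           # assumed P/E rating for the NAND
          average_erases_per_block = 150   # assumed average erase count so far
          life_left = 100 * (1 - average_erases_per_block / rated_pe_cycles)
          print(f"Estimated NAND life remaining: {life_left:.0f}%")   # -> 95%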

        Hard drives, being magnetic, mechanical storage and not solid state, have a lot more moving parts and thus complexity. Failures are difficult to predict given they can occur due to many extraneous reasons:

        • The head of the actuator arm that reads and writes data on the platters might come into contact with a sector and physically damage it, such as when an HDD is dropped or experiences a strong vibration.
        • Some air might enter a sealed area of the drive and dust may contaminate the platters.
        • Variations in the surface geometry of the platters due to manufacturing defects may cause some tracks to wear down more than others over time.
        • The bearings in the spindle motor or the actuator axis may suddenly fail, meaning the data on the platters will be intact but unreadable unless you have a data recovery company physically take the platters out of the HDD in a clean room and migrate them to an equivalent HDD with a working spindle motor/actuator arm (at great cost).
        • Exposure to high humidity levels or temperatures for prolonged periods of time can destroy the circuitry on the storage controller board and thus render the HDD inoperable.
        • Exposure to stronger than normal magnetic fields for a prolonged period of time can cause data on platters to become corrupted or destroyed.

        No one has yet figured out how to incorporate accurate wear indicators into HDDs, other than the power-on hours count, which is what most HDD manufacturers rate their HDDs' lifespans and MTBFs (Mean Time Between Failures) by. As a general rule in the industry, once the power-on hours count passes 3 years, any HDD in a critical or high-availability role should be replaced, regardless of its SMART health, brand or class (i.e. enterprise or data-centre grade), warranty length, manufacturer-stated MTBF, or otherwise.
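
        For reference, translating a drive's Power-On Hours raw value into that 3-year rule of thumb looks like this (the hours figure below is invented for the example):

          # Applying the "replace after ~3 years of power-on time" rule of thumb.
          power_on_hours = 18_500              # hypothetical SMART Power-On Hours raw value
          years = power_on_hours / (24 * 365)  # continuous operation, ignoring leap years
          print(f"{years:.1f} years of continuous operation")   # -> 2.1 years
          # 3 years of continuous operation = 24 * 365 * 3 = 26,280 power-on hours.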

  • Put it back into the case and run the WD HDD check utility (run the extended test multiple times); if it fails, it's perhaps worth returning it to them.
