276°
Posted 20 hours ago

Seagate IronWolf, 8 TB, NAS, Internal Hard Drive, CMR, 3.5 Inch, SATA 6 Gb/s, 7,200 RPM, 256 MB Cache, for RAID Network Attached Storage, 3-Year Rescue Services (ST8000VN004)

£94.48 (was £188.96) · Clearance
Shared by
ZTS2023
Joined in 2023

About this deal

IronWolf Health Management (IHM) is embedded software built on the tenets of prevention, intervention and recovery. It aims to manage the health of the drive through its useful life and to provide ease of data recovery should a catastrophic event damage the drive and render it non-functional. Tough. Ready. Scalable.

From a detailed write-up on these drives: I've been using Seagate IronWolf disks for a few years now and currently have about 20 in service; most are the 10TB (and 12TB) non-Pro (ST10000VN0004) variety. Most of my experience with them has been great, so when a new server build came along I bought a few more to run as my main ZFS pool. Sadly, things didn't go exactly as planned, but I think I was also able to fix it, so let's see what happened! (This is the full article about the issue, but I have also made a video about it, so you can choose what to read or watch.)

Before using any disk I subject it to a full verify pass on a separate PC: a 14-hour pass of HDAT2, which verified that each disk was 100% OK before going into service. I've actually never had one of the Seagate IronWolf 10TB or 12TB disks fail this test or fail in service; QC at the factory must be really good (I've had different experiences with other brands).
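HDAT2 is a bootable tool, but the core of a verify pass is just reading every sector and noting failures. A minimal Python sketch of that idea, assuming a Linux raw block device (the path /dev/sdX is a placeholder) and root privileges; HDAT2 itself does far more than this:

```python
import os

DEVICE = "/dev/sdX"        # placeholder; point at the disk under test
CHUNK = 1024 * 1024        # read in 1 MiB chunks

def read_verify(device: str) -> int:
    """Sequentially read the whole device, counting unreadable chunks."""
    errors = 0
    fd = os.open(device, os.O_RDONLY)   # raw device access needs root
    try:
        offset = 0
        while True:
            try:
                data = os.pread(fd, CHUNK, offset)
            except OSError:
                errors += 1             # unreadable region (e.g. pending sector)
                offset += CHUNK         # skip past it and keep going
                continue
            if not data:                # end of device
                break
            offset += len(data)
    finally:
        os.close(fd)
    return errors

if __name__ == "__main__":
    print(f"{DEVICE}: {read_verify(DEVICE)} unreadable chunk(s)")
```

A clean pass here only proves the surface reads back correctly; it says nothing about the flush-cache behaviour that turned out to be the real culprit below.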

The problem: while copying data onto the new pool I got write errors on several disks. These didn't all happen at the same time, but accumulated over time while I was copying. Now, if one disk had errors, OK, that can happen. But this is 3 disks showing errors, which is highly unlikely, so what's going on?

[Sat Jan 1 21:51:17 2022] sd 0:0:6:0: [sdg] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[Sat Jan 1 21:51:17 2022] sd 0:0:6:0: [sdg] tag#0 CDB: Read(16) 88 00 00 00 00 01 9c 00 28 40 00 00 00 08 00 00

The problem then got worse. I performed several scrubs, and while no data was lost or corrupted, each time one or more disks would generate some amount of CRC errors, just like my friend had been seeing! What is going on here? Is it LSI/Avago controller related? A per-disk chance? What I do know for certain is that the errors occur while the disks are connected to these controllers, even though these controllers are otherwise held in high regard as functioning well.
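If you want to watch for the same symptom on your own pool, the per-device CKSUM column of zpool status is where ZFS reports these checksum errors. A rough sketch (the pool name tank is a placeholder) that shells out to zpool status and flags non-zero counts; the column layout matches what current OpenZFS prints, but the parsing is an assumption, not a stable interface:

```python
import subprocess

POOL = "tank"  # placeholder pool name

def cksum_errors(pool: str) -> dict:
    """Per-device CKSUM error counts parsed from `zpool status`."""
    out = subprocess.run(
        ["zpool", "status", pool],
        capture_output=True, text=True, check=True,
    ).stdout
    errors = {}
    for line in out.splitlines():
        parts = line.split()
        # Device rows look like: NAME STATE READ WRITE CKSUM
        if len(parts) >= 5 and parts[2].isdigit() and parts[4].isdigit():
            errors[parts[0]] = int(parts[4])
    return errors

if __name__ == "__main__":
    for dev, count in cksum_errors(POOL).items():
        if count:
            print(f"{dev}: {count} checksum error(s)")
```

Run it after each scrub; non-zero counts that keep reappearing across multiple otherwise-healthy disks is exactly the pattern described above.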

More hints appeared. The exact issue was described as a "flush cache timing out bug that was discovered during routine testing". Hmm, that sounds a lot like the same issue several members on the ixsystems forums and on reddit were describing, and also like what I'm seeing in my logs!

Potential workaround / fix: upgrading your own drives. A workaround had been suggested; I didn't get around to testing it, but it did help in getting more information. For myself, I have now been running for about a month with the new SC61 firmware, and having done lots and lots of tests during that period, not a single error has occurred any more, so I believe the new firmware fixes this issue for good. Also important: I have noticed no negative side effects with this new firmware; speed and everything else is still great! During the first scrub ZFS found some CRC errors, but I believe those were caused by the earlier issue and simply hadn't been repaired yet; after fixing those I was able to run the two clean scrubs mentioned above. Working for me! As far as I know right now, this issue only occurs with the 10TB variant of these drives, but if you have a different experience, please make sure to comment!
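Before and after flashing, it's worth confirming which firmware revision each drive actually reports. A small sketch, assuming smartctl from smartmontools is installed and the device list is adjusted to your system; it simply picks the "Firmware Version" line out of smartctl -i:

```python
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # adjust to your drives

def firmware_version(device: str):
    """Return the firmware revision reported by `smartctl -i`."""
    out = subprocess.run(
        ["smartctl", "-i", device],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("Firmware Version:"):
            return line.split(":", 1)[1].strip()
    return None

if __name__ == "__main__":
    for dev in DEVICES:
        # After a successful update the affected 10TB drives should report SC61
        print(dev, firmware_version(dev))
```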

Notably, one commenter reports the same problem with a completely new Seagate IronWolf NAS 8TB drive (ST8000VN004), the exact model in this deal, dropping out of ZFS on a QNAP TS-h973AX NAS: "I just created a RAID 5 pool with 5 disks and started to copy data; this happened within the first 8 hours of the drives working. I am running SeaTools for Windows on another computer and everything looks good, but the NAS marked the drive with a warning: too many S.M.A.R.T. errors (Uncorrectable sector count)."
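That QNAP warning maps to standard SMART attributes. A hedged sketch that pulls the raw values of the usual uncorrectable-sector attributes from smartctl -A (attribute names follow smartmontools' conventional table; the exact set and raw-value format can vary by drive, and the device path is a placeholder):

```python
import subprocess

DEVICE = "/dev/sdX"  # placeholder; point at the drive to check
WATCH = {"Current_Pending_Sector", "Offline_Uncorrectable", "Reported_Uncorrect"}

def smart_attributes(device: str) -> dict:
    """Raw values of selected SMART attributes from `smartctl -A`."""
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True,
    ).stdout
    values = {}
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in WATCH:
            values[parts[1]] = int(parts[9])
    return values

if __name__ == "__main__":
    for name, raw in smart_attributes(DEVICE).items():
        print(f"{name}: {raw}")
```

Non-zero raw values here are what trip the NAS warning; given the firmware story above, it may be worth checking the reported firmware revision before assuming the drive itself is bad.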

Asda Great Deal

Free UK shipping. 15-day free returns.