

In brief: In its latest quarterly HDD reliability stats for 2019, cloud storage provider Backblaze has released figures that seem to continue the trend of bigger drives being better.

Given that all hard drives currently in use are subject to deterioration over time and usage and will fail one day, it's a good idea to keep an eye on which manufacturer is putting the most reliable models on the market for when it's time to upgrade.

For the second quarter of 2019, Backblaze has published its latest HDD stats, and it looks like Seagate drives suffered the most failures. With a total of 108,461 drives tested, 474 reported failures across different storage capacities. Out of these failed disks, a staggering 94 percent (446 drives) belong to Seagate, with its 12TB ST12000NM0007 model coming out on top with 247 drive failures alone. Admittedly, the manufacturer also has the most samples in the test - 6 of the 13 models in total, and generally more units per capacity - as well as the longest exposure, with more than 3 million drive days for its 12TB model alone.

Of all the drives currently in use at Backblaze's data centers, the company highlighted the impressive performance of Toshiba drives, though it noted that the sample size for the 4TB models was too small to make for a reliable statistic. Nonetheless, both the 14TB and 4TB Toshiba models in use reported zero failures in this quarter. Backblaze also notes that the 14TB model "got off to a bit of a rocky start, with six failures in the first three months of being deployed." Since then, there has been only one additional failure, with no failures reported in Q2 2019.

"There were 199 drives (108,660 minus 108,461) that were not included in the list above because they were used as testing drives or we did not have at least 60 of a given drive model. We now use 60 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics as there are 60 drives in all newly deployed Storage Pods - older Storage Pod models had a minimum of 45," the company said in its blog post.

For this quarter, Backblaze also bid farewell to the 6TB models by Western Digital, which had an average age of 50 months, and added more than 4,700 HGST-branded (Western Digital-owned) disks of 12TB capacity to its data centers.

In terms of lifetime performance (spanning six years of testing), the 12TB HGST comes out as the least failure-prone disk with a 0.37 percent annualized failure rate, while Seagate's 4TB model had the highest at 2.72 percent. Backblaze has also provided a web page for users who wish to view the complete data set used to create these statistics.

The post also drew a response in the comments from a Backblaze employee on how the company handles aging drives:

> I have drives up to 25 years old. I do think for the average consumer or the more neglectful business, that we will see far more issues related to data corruption and loss during the transition from hard disks to SSDs.

I definitely don't think these issues will affect Backblaze. For pure cost reasons, we (Backblaze) tend to migrate off of the smaller drives after about 5 years anyway. Right now as we speak we are planning on moving data off of 4 TByte drives to 12 TByte drives. The rough rule of thumb is that it is worth us moving to more dense drives when we can get that factor of three density increase. Basically any drive takes the same amount of electricity (and physical rental space) as any other drive, so when we can "shrink" the footprint to 1/3 the former footprint, we save 2/3 on space rental and 2/3 on the electricity, so it is worth doing. Now, if hard drives stop increasing in density, or take 8 years to increase by a factor of three, then we will need to re-evaluate based on failure rates and the like. But for now, we retire old equipment for cost reasons before it fails, so all the drives have enough longevity for us.
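The annualized failure rates Backblaze reports are derived from failures and drive-days of operation. A minimal sketch of that calculation follows; the function mirrors the failures-per-drive-year definition Backblaze publishes with its stats, while the example numbers are made up for illustration (the real per-model drive-day counts live in the full data set):

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """Failures per drive-year of operation, expressed as a percentage."""
    drive_years = drive_days / 365
    return 100 * failures / drive_years

# Illustrative numbers only: 10 failures over 1,000,000 drive-days
# works out to roughly a 0.37 percent annualized failure rate.
afr = annualized_failure_rate(10, 1_000_000)
print(afr)
```

This is why sample size matters so much in the quarterly tables: a model with few drive-days can swing from a 0 percent to a multi-percent AFR on a single failure.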

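The commenter's factor-of-three rule is simple arithmetic: if every drive draws roughly the same power and occupies the same rack space regardless of capacity, then tripling per-drive capacity cuts the drive count, and with it the electricity and rental bill, to about a third. A quick sketch of the 4TB-to-12TB migration mentioned above (the 120 TB workload size is an arbitrary example):

```python
import math

def drives_needed(total_tb: float, per_drive_tb: float) -> int:
    """Number of drives required to hold total_tb of data."""
    return math.ceil(total_tb / per_drive_tb)

# Arbitrary example: the same 120 TB stored on 4 TB vs. 12 TB drives.
old_count = drives_needed(120, 4)    # 30 drives
new_count = drives_needed(120, 12)   # 10 drives

# Per-drive power and space costs are assumed constant, so cost
# scales with drive count: the footprint drops to 1/3, a 2/3 saving.
saving = 1 - new_count / old_count
print(old_count, new_count, saving)
```

Under those assumptions the break-even question reduces to whether the 2/3 operating-cost saving outweighs the capital cost of the new drives and the migration work.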