Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
- No low-effort posts. This is subjective and will largely be determined by community member reports.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
thanks! As you say, because of the 5 vs 136 years it does not really matter in our environment, but it probably starts to matter when you have lots of disks.
I don't actually know if this is the right way to calculate it, but if you count each disk's runtime separately and add it all together, that comes to 20 drive-years out of the 136-year MTBF.
But with 30 drives that would be 150 drive-years, which exceeds the 136-year MTBF and suggests you would likely see at least one error of some kind from using SATA.
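The back-of-the-envelope math above can be sketched as follows. This is my own illustration, assuming independent drives that fail at a constant rate of 1/MTBF (the exponential model that MTBF figures implicitly assume), and using the 4-drive/5-year scenario implied by the 20 drive-years mentioned above; the function names are mine, not from any library.

```python
import math

def expected_failures(num_drives, years_in_service, mtbf_years):
    """Expected number of failures across a fleet: accumulated
    drive-years divided by the single-drive MTBF."""
    drive_years = num_drives * years_in_service
    return drive_years / mtbf_years

def prob_at_least_one_failure(num_drives, years_in_service, mtbf_years):
    """P(>=1 failure) under the same constant-failure-rate model."""
    lam = expected_failures(num_drives, years_in_service, mtbf_years)
    return 1 - math.exp(-lam)

# 4 drives for 5 years against a 136-year MTBF: 20/136 expected failures
print(round(expected_failures(4, 5, 136), 2))            # ~0.15
# 30 drives for 5 years: 150 drive-years > 136-year MTBF
print(round(prob_at_least_one_failure(30, 5, 136), 2))   # ~0.67
```

So "150 drive-years against a 136-year MTBF" works out to roughly a two-in-three chance of seeing at least one failure, not a certainty.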
Hey, I'm not sure where you got your factor of 5 years, but if it was from me, it was a number I pulled out of my ass. At a repair depot I typically didn't see drives that lived much longer than 17k hours (just under 2 years). That didn't mean that they always failed at that age, only that the systems that came through had at most about that much time on them.
Regarding the 136 vs. 150 million numbers, those numbers are pure bullshit. MTBF is a raw calculation of how long it will take these devices to fail, based on operational runtime divided by how many failures were experienced in the field. They most likely applied a small number of warranty failures over a massive number of manufacturing runs and projected that it would take that long for about half of their drives to fail.
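To make that concrete, here is a sketch of how a small warranty-failure count over a huge fleet produces a million-hour MTBF. The fleet size and failure count below are hypothetical numbers I made up for illustration, not vendor data.

```python
def mtbf_hours(total_drive_hours, failures):
    """Field MTBF: total operational hours accumulated across the
    whole fleet, divided by the number of observed failures."""
    return total_drive_hours / failures

# Hypothetical: 100,000 drives each run for one year (8766 hours)
# with 600 warranty failures. The fleet racks up ~877 million
# drive-hours, so the quoted MTBF comes out around 1.46 million
# hours -- even though no individual drive ran anywhere near that long.
print(round(mtbf_hours(100_000 * 8766, 600)))  # 1461000
```

That is why a million-hour MTBF says nothing about how long any one drive will actually survive.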
In reality, you will see failure spikes over the lifetime of a product: the initial failures spike and then drop off. I recall reading either the data behind this article or something similar when they realized that the bathtub curve may not be the full picture. They just updated it again with numbers from up to last year, and you can see that it would be difficult to project an average lifetime of 20 years, much less 150.
My last thought on this is that when Backblaze mentions consumer vs enterprise drives they are possibly discussing SATA vs SAS. This comes from the realization that enterprise workstation drives are still just consumer drives with a part number label on them (seen in Dell and HP Enterprise equipment). Now, they could be referring to more expensive SATA drives, but I can't imagine that they are using anything but SAS at this point in their lifecycle.
I have a bunch of working drives with 2+ years on them, and in my area almost everyone still has their system installed on old hard drives.
I did not mean an average timeline of 20 years
there are plenty of enterprise SATA drives
That's workstation drives. Obviously if your workplace buys 2 TB WD Blue drives, they won't become enterprise drives. Enterprise drives include the likes of WD Red Pro, Ultrastar, etc., which do use the SATA interface.
Yeah. I was tempering that statement with the fact that I was getting computers for repair, often with bad drives, that had 2 years of use. Now that I really think about it, we were seeing them up to about 5 years. I recall that we were discussing whether to proactively replace the drives with that much time on there. At the time I wanted to ship them back out, and others were saying that 5 years was end of life. Our job was just to get them running again vs. performing full repairs.
Then I was not sure what you meant by this:
..
Those weren't really on my radar, TBH. I took a look at the Ultrastar spec sheet and have to concede that the drive interface itself doesn't seem to affect the lifecycle of the drive itself. I do have to say that the spec sheet does say at the bottom: "MTBF and AFR specifications are based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions for this drive model," which is what I was guessing before for those million-hour numbers.
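The MTBF and AFR figures on a spec sheet are two views of the same assumed constant failure rate, which is why the footnote lumps them together. A quick sketch of the conversion (the 2.5-million-hour figure below is illustrative, in the range enterprise spec sheets commonly quote, not taken from a specific data sheet):

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days

def afr_from_mtbf(mtbf_hours):
    """Annualized failure rate implied by a quoted MTBF, assuming a
    constant (exponential) failure rate -- the same statistical
    assumption the spec-sheet footnote describes."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# Illustrative: a 2.5-million-hour MTBF corresponds to roughly a
# 0.35% annualized failure rate.
print(f"{afr_from_mtbf(2_500_000):.2%}")  # 0.35%
```

In other words, a million-hour MTBF is really a statement about the expected fraction of a large fleet failing per year, not a lifetime prediction for any single drive.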
All in all, I am at this point only trying to track down and relay what I'm seeing about SAS vs. SATA. From what I can tell, they are mostly the same, but SAS has more features (higher transfer rate, hot-swap capability, etc.). HP says that SAS is more reliable, but I don't see anything backing that up other than the features I just mentioned. Lenovo seems to agree with that take, saying that the reliability of SAS and SATA is comparable.