this post was submitted on 08 Apr 2026
Selfhosted

I'm sketching the idea of building a NAS in my home, using a USB RAID enclosure (which may eventually turn into a proper NAS enclosure).

I haven't got the enclosure yet, but that's not a big deal. Right now I'm deciding whether to buy HDDs for the storage (I currently have none) to set up RAID, but I cannot find good deals on HDDs.

I found on reddit that people were buying high-capacity drives for as low as $15/TB, e.g. paying $100 for 10/12TB drives, but nowadays it's just impossible to find drives at a bargain price. Thanks to AI datacenters, I guess.

In Europe I've heard of datablocks.dev, where you can buy white-label or recertified Seagate disks, and sometimes you can find refurbished drives on eBay, but I can't find the bargain deals everyone seemed to get up until last year.

For example, is 134 EUR for a 6TB refurbished Toshiba HDD a good price, considering the price hikes? What price per TB should I be looking for to consider the drives cheap? Where else can I search for these cheap drives?
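For what it's worth, the price-per-TB arithmetic is easy to sketch. The 134 EUR / 6 TB drive is the figure from the question; the ~15 EUR/TB "bargain" threshold below is just an assumption based on the old ~$15/TB deals mentioned above:

```python
# Price-per-TB check for HDD deals.
# 134 EUR for 6 TB is the example from the post; the threshold of
# ~15 EUR/TB is an assumption, mirroring the old $15/TB bargains.

def price_per_tb(price: float, capacity_tb: float) -> float:
    """Return price per terabyte."""
    return price / capacity_tb

deal = price_per_tb(134, 6)
print(f"{deal:.2f} EUR/TB")  # 22.33 EUR/TB

BARGAIN_THRESHOLD = 15.0
print("bargain" if deal <= BARGAIN_THRESHOLD else "not a bargain")  # not a bargain
```

So the quoted drive is roughly 50% above the old bargain rate, whatever threshold you pick.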

[–] WhyJiffie@sh.itjust.works 1 points 2 days ago (1 children)

> Supposedly SATA controllers are also not built for the abuse I have been throwing them in my machines, and I don't want to push it.

what makes you say that?

[–] SpikesOtherDog@ani.social 3 points 2 days ago* (last edited 2 days ago) (1 children)

I just read that recently. Let me see if I can run that source back down.

Edit: *CompTIA Server+ Certification All-in-One Exam Guide, Second Edition (Exam SK0-005)*, Daniel Lachance, McGraw-Hill, 2021, page 138. The table there says that SATA is not designed for constant use.

Edit 2:

https://www.hp.com/us-en/shop/tech-takes/sas-vs-sata

> Reliability:
>
> - SAS: Designed for 24/7 operation with higher mean time between failures (MTBF), often 1.6 million hours or more
> - SATA: Suitable for regular use but not as robust as SAS for constant, heavy workloads, with MTBF typically around 1.2 million hours

They are saying that SAS is a better option with a longer MTBF, but I don't expect my drives to last 5 years, much less 136.
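The year figures follow directly from the MTBF hours quoted above; a quick conversion sketch:

```python
# Convert the quoted MTBF figures from hours to years.
HOURS_PER_YEAR = 24 * 365  # 8760

sata_mtbf_hours = 1_200_000  # SATA figure from the HP article
sas_mtbf_hours = 1_600_000   # SAS figure from the HP article

print(f"SATA: {sata_mtbf_hours / HOURS_PER_YEAR:.0f} years")  # SATA: 137 years
print(f"SAS:  {sas_mtbf_hours / HOURS_PER_YEAR:.0f} years")   # SAS:  183 years
```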

My own two cents here is that you probably don't want to use SATA ZFS JBOD in an enterprise environment, but that's more based on enterprise lifecycle management than utility.

[–] WhyJiffie@sh.itjust.works 1 points 1 day ago (1 children)

thanks! as you say, because of the 5 vs 136 years it does not really matter in our environment, but it probably starts mattering when you have lots of disks.

I don't actually know if this is the right way to calculate it, but if you count each disk's runtime separately and add it all together into a combined total, that comes to 20 of the 136 MTBF years.
But with 30 drives that would be 150, which suggests you will likely see at least one failure of some kind because of using SATA.
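One way to formalize that intuition (a sketch, not gospel): treat failures as a Poisson process, so the expected number of failures across a fleet is total drive-years divided by MTBF. The 5-year service life and the 30-drive count are the assumptions from this thread:

```python
import math

MTBF_YEARS = 1_200_000 / (24 * 365)  # SATA figure from the HP article, ~137 years

def expected_failures(n_drives: int, years_each: float,
                      mtbf_years: float = MTBF_YEARS) -> float:
    """Expected number of failures: total drive-years / MTBF (in years)."""
    return n_drives * years_each / mtbf_years

def prob_at_least_one_failure(n_drives: int, years_each: float,
                              mtbf_years: float = MTBF_YEARS) -> float:
    """P(>=1 failure), assuming independent drives with a constant failure rate."""
    return 1 - math.exp(-expected_failures(n_drives, years_each, mtbf_years))

# A handful of drives over 5 years vs a 30-drive fleet:
print(f"30 drives, 5 years: {expected_failures(30, 5):.2f} expected failures")
print(f"P(>=1 failure): {prob_at_least_one_failure(30, 5):.0%}")
```

Under those assumptions, crossing the "150 drive-years vs 136-year MTBF" line roughly corresponds to the expected failure count passing 1, though the probability of at least one failure is already substantial well before that.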

[–] SpikesOtherDog@ani.social 1 points 22 hours ago

Hey, I'm not sure where you got your factor of 5 years, but it was a number I pulled out of my ass. At a repair depot I typically didn't see drives that lived much longer than 17k hours (just under 2 years). That didn't mean that they always fail at that age, only that the systems that came through had at most about that much time on them.

Regarding the 136 vs 150 year numbers, those numbers are pure bullshit. MTBF is a raw calculation of how long it would take these devices to fail, based on operational runtime divided by how many failures were experienced in the field. They most likely applied a small number of warranty failures over a massive number of manufacturing runs and projected that it would take that long for about half their drives to fail.
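The "raw calculation" described there is just fleet hours over field failures. A sketch with invented numbers, purely to show how a small warranty failure count over a huge production run inflates the projection:

```python
# MTBF = total operational hours observed / number of failures observed.
# All numbers below are made up to illustrate the point.

drives_shipped = 100_000
hours_each_under_warranty = 8_760  # roughly one year of runtime per drive
warranty_failures = 730            # a small field-failure count

mtbf_hours = drives_shipped * hours_each_under_warranty / warranty_failures
print(f"projected MTBF: {mtbf_hours:,.0f} hours")  # projected MTBF: 1,200,000 hours
```

No drive in that fleet was ever observed running anywhere near 1.2 million hours; the figure is an extrapolation from one year of field data.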

In reality, you will see failure spikes over the lifetime of a product. Initial failures will spike and then drop off. I recall reading either the data surrounding this article or something similar when they realized that the bathtub curve may not be the full picture. They updated it again with numbers from up to last year, and you can see that it would be difficult to project an average lifetime of 20 years, much less 150.

My last thought on this is that when Backblaze mentions consumer vs enterprise drives, they are possibly discussing SATA vs SAS. This comes from the realization that enterprise workstation drives are often just consumer drives with a different part-number label on them (seen in Dell and HP Enterprise equipment). Now, they could be referring to more expensive SATA drives, but I can't imagine they are using anything but SAS at this point in their lifecycle.