this post was submitted on 11 Apr 2026
Selfhosted


I recently decided to rebuild my homelab after a nasty double hard drive failure (no important files were lost, thanks to ddrescue). The new setup uses one SSD as the PVE root drive, and two Ironwolf HDDs in a RAID 1 MD array (which I'll probably expand to RAID 5 in the near future).

Previously the storage array had a simple ext4 filesystem mounted at /mnt/storage, which was then bind-mounted into the LXC containers running my services. It worked well enough, but figuring out permissions between the host, the container, and potentially nested containers was a bit of a challenge. Now that I'm starting over on brand-new drives, I want to get the first steps right.
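For context, the bind-mount setup looked roughly like this (container ID 101 and the paths are just examples); the uid shifting is the fiddly part:

```shell
# Bind-mount a host directory into LXC container 101 (ID and paths are examples)
pct set 101 -mp0 /mnt/storage,mp=/mnt/storage

# An unprivileged container maps container uid N to host uid 100000+N by default,
# so for a service running as uid 1000 *inside* the container, the host side
# needs to be owned by the shifted uid:
chown -R 101000:101000 /mnt/storage
```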

The host is an old PC living a new life: i3-4160 with 8 GB DDR3 non-ECC memory.

  • Option 1 would be to do what I did before: format the array as an ext4 volume, mount on the host, and bind mount to the containers. I don't use VMs much because the system is memory constrained, but if I did, I'd probably have to use NFS or something similar to give the VMs access to the disk.

  • Option 2 is to create an LVM volume group on the RAID array, then let Proxmox manage LVs on it. This would be my preferred option from an administration perspective, since privileges would become a non-issue and I could attach the LVs directly to VMs, but I have some concerns:

    • If the host were to break irrecoverably, is it possible to open LVs created by Proxmox on a different system? If I need to back up some LVM config files to make that happen, which files are those? I've tried following several guides to mount the LVs, but never been successful.
    • I'm planning to put things on the server that will grow over time, like game installers, media files, and Git LFS storage. Is it better to use thinpools or should I just allocate some appropriately huge LVs to those services?
  • Option 3 is to forget mdadm and use Proxmox's built-in ZFS support for redundancy. My main concern here, in addition to everything in option 2, is that ZFS wants a lot of memory for caching. Right now I can dedicate 4 GB to it, which is less than the usual recommendation. Is it responsible to run a ZFS pool with that?
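On the thin-pool question in option 2: a thin pool would let the growing datasets share one chunk of space instead of me guessing a huge fixed size per LV up front. A sketch of what I have in mind (the VG name and sizes are made up):

```shell
# Carve a thin pool out of the volume group, then create thin LVs inside it.
# Thin LVs have a *virtual* size; real space is only allocated as data is written.
lvcreate --type thin-pool -L 800G -n tpool vgdata
lvcreate --type thin -V 400G --thinpool tpool -n media  vgdata
lvcreate --type thin -V 200G --thinpool tpool -n gitlfs vgdata
```

The catch, as I understand it, is that thin LVs can overcommit the pool, so you have to watch the Data% column of `lvs` yourself; a completely full thin pool is painful to recover from.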
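For option 3, I gather the ARC can be capped explicitly instead of letting it take up to half of RAM by default. Something like this (the 2 GiB cap is just an example value):

```shell
# Cap the ZFS ARC at 2 GiB; the value is in bytes (2 * 1024^3 = 2147483648)
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # persist the setting for the next boot

# Or apply it to a running system without rebooting:
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```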

My primary objective is data resilience above all. Obviously nothing can replace a good backup solution, but that's not something I can afford at the moment. I want to be able to reassemble and mount the array on a different system if the server falls to pieces. Option 1 seems the most conducive to that (I've had to do it once), but if LVM on RAID or ZFS can offer the same resilience without major drawbacks (like the trouble I've had mounting LVs), I'd like to know what others use or recommend.
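From what I've read so far, opening PVE-created LVs on another machine shouldn't actually need any config files from the old host, since LVM stores its metadata on the PVs themselves (the copies under /etc/lvm/backup are only a convenience for `vgcfgrestore`). The rescue sequence would go something like this (VG/LV names are placeholders for whatever Proxmox created):

```shell
# On a rescue system with both disks attached (run as root):
mdadm --assemble --scan                  # rebuild /dev/md0 from the on-disk superblocks
vgscan                                   # detect volume groups on the assembled array
vgchange -ay                             # activate every LV that was found
lvs                                      # list the LVs and note the VG/LV names
mount -o ro /dev/<vg>/<lv> /mnt/rescue   # mount one read-only to inspect

# Caveat: an LV that backed a VM disk holds the guest's entire disk image,
# partition table included, so its partitions have to be exposed first:
kpartx -av /dev/<vg>/<lv>                # creates /dev/mapper/... partition devices
```

If anyone spots a step I'm missing here, that may well be why my earlier attempts at following guides failed.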

[–] kalleboo@lemmy.world 5 points 3 days ago* (last edited 3 days ago) (1 children)

That whole "1 GB of RAM per TB of capacity" thing is a generic rule of thumb someone made up once, and nothing really backs it up. It depends entirely on your use case. If it's mostly media storage that is rarely accessed, I'm sure 4 GB is plenty.

I run a beefy TrueNAS server for a friend's video production company with a 170 TB ZFS array. Right now ARC is using 40 GB of RAM with 34 GB free that it's not even bothering to touch, and I'm sure most of the ARC space is wasted as well. That's just one example of how 1 GB per TB makes no sense.

[–] toebert@piefed.social 3 points 3 days ago* (last edited 3 days ago) (1 children)

Does it have deduplication enabled? Afaik that's the feature for which the high memory footprint is usually quoted.

(Also, was that meant to be 170TB?)

[–] kalleboo@lemmy.world 2 points 3 days ago

No deduplication. Before replying I tried to find where the 1 GB per TB rule originally came from, but couldn't locate any original source, and everything I found said it applies without deduplication; for dedup it's supposed to be more like 5 GB per TB (no idea how true that is either).

Yeah, TB, oops. Edited, thanks!