Selfhosted

58417 readers
1107 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues on the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1
 
 

Due to the large number of reports we've received about recent posts, we've added Rule 7 stating "No low-effort posts. This is subjective and will largely be determined by the community member reports."

In general, we allow a post's fate to be determined by the number of downvotes it receives. Sometimes, a post is so offensive to the community that removal seems appropriate. This new rule allows such action to be taken.

We expect to fine-tune this approach as time goes on. Your patience is appreciated.

2
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
 
 

I am a Certified Public Tax Accountant (Zeirishi) and financial planner in Japan, specializing in international taxation and transfer pricing. I am also a member of IFA (International Fiscal Association). I have a parallel background in IT — from early microcomputer programming through enterprise ERP implementations.

Last year I sat down and added up what my small practice was paying for SaaS: cloud storage, document collaboration, AI assistants, calendar, email, remote desktop, monitoring. The number was $163 per user per month. I was paying for convenience — but I was also paying for dependence. I could not verify the security architecture. I could not audit the data flow. And every year, the invoices went up while the control went down.

I decided to see whether I could build a self-hosted, zero-trust replacement that I actually understood and controlled — and that any solo practitioner or small firm with 3 to 10 employees could deploy by following a guide.

This is what I ended up with. It runs in production on real client work every day.


The Stack

  • VPS: Vultr, $24/month, Ubuntu 24.04 LTS
  • Zero-trust access: Cloudflare Zero Trust (free tier) — 2 open ports only (80/443), no VPN, no exposed SSH, no third-party tunnels
  • Private cloud + real-time editing: Nextcloud + Collabora Online
  • Four AI secretaries: A unified proxy routing to ChatGPT, Claude, Gemini, and Perplexity — each selected for a distinct strength. Claude for contracts and editorial precision. Perplexity for source-cited research. ChatGPT for general reasoning and coding. Gemini for structured data and integration. One authenticated portal, four specialized capabilities. Additional providers can be added by extending a single configuration file.
  • An AI butler: OpenClaw — an agentic automation layer that does not merely answer questions but executes multi-step tasks on instruction. Morning briefings, email-to-task conversion, weekly summaries, file organization. It operates under strict standing rules: all email actions produce drafts only, filesystem access is compartmentalized, and no action is taken without human confirmation.
  • Remote desktop: Apache Guacamole — browser-based RDP through 5 authentication layers (WARP encryption → Cloudflare Access OTP → TLS tunnel → Guacamole auth → Windows login)
  • Monitoring + alerting: Prometheus + Grafana + Alertmanager — the system watches itself and notifies you before problems become incidents
  • Triple-redundant backups: Nightly DB to Supabase (PostgreSQL-to-PostgreSQL, zero format conversion) + weekly AES-256 encrypted full config archive + 30-day retention with documented 2-hour restore procedure

8 security layers: WARP encryption → Cloudflare Access (OTP) → TLS tunnel → UFW (80/443 only) → fail2ban → sysctl hardening → localhost-only service binding → application-level authentication


Who This Is For

This stack is designed for solo practitioners and small firms — accountants, lawyers, consultants, advisors — with 3 to 10 employees. It scales within that range without architectural changes. If you are comfortable following step-by-step instructions in a terminal, you can build this. No DevOps background is required.


The Migration: SaaS → Zero Trust Self-Hosted

What you gain:

  • Cost control. No per-user pricing that compounds as you grow. The VPS cost is fixed. AI costs are usage-based and capped at your discretion.
  • Data sovereignty. Client data resides on infrastructure you control. It does not pass through third-party SaaS pipelines you cannot audit.
  • Architectural transparency. Every configuration file, every security layer, every network rule — you can read it, verify it, and change it.
  • Independence. No vendor can alter your pricing, discontinue your plan, or change terms of service beneath you.

What you accept:

  • Operational responsibility. There is no vendor to call at 2 AM. You maintain the system. The monthly checklist (13 items, ~30 minutes) and the emergency runbook (7 scenarios) exist precisely for this reason.
  • Initial time investment. The full build takes approximately 16–24 hours spread across two weekends. This is a one-time cost. After that, monthly maintenance is under one hour.
  • A learning curve. You will work in a terminal. The guide explains every command and every expected result, but you must be willing to follow it carefully.

The Cost Comparison

Initial investment:

  • VPS setup: $0 (hourly billing, cancel anytime)
  • Cloudflare Zero Trust: $0 (free tier)
  • All software: $0 (open source)
  • Domain name: ~$12/year
  • Your time: 16–24 hours (one-time)

Monthly running cost (3–8 person team):

Component Cost
VPS (Vultr) $12 (starter) / $24 (recommended) / $48 (growth)
Cloudflare $0
Supabase backup $0 (free tier)
All software $0
AI API usage (moderate, 3 users) $15–35
Total $35–50/month

Equivalent SaaS for 3 users:

Component Cost
Cloud storage + collaboration (Google Workspace) $36/month
AI subscriptions (4 providers) $240+/month
Remote desktop (TeamViewer) $45/month
VPN / zero-trust access $30+/month
Monitoring (Datadog/UptimeRobot) $45+/month
Total $400+/month

5-year savings estimate: $36,900–$48,900.
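The estimate is easy to reproduce. A straight-line version using the tables' own figures gives a floor; the post's higher $36,900–$48,900 range evidently assumes SaaS spend above the $400/month minimum shown (price increases and per-user growth). A sketch:

```python
def five_year_savings(saas_monthly: float, selfhosted_monthly: float,
                      months: int = 60) -> float:
    # Straight-line model: constant prices, constant headcount.
    return (saas_monthly - selfhosted_monthly) * months

# Floor from the tables above: $400+/month SaaS vs $50/month self-hosted.
floor = five_year_savings(400, 50)   # 21000
```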


OpenClaw: The Butler — Used Safely

OpenClaw deserves specific discussion because it is both the most powerful and the most carefully constrained component in this stack.

CVE-2026-25253 (CVSS 8.8, High) and the ClawJacked attack class are real. Over 42,000 public instances exist, and approximately 36% (15,200) remain vulnerable. This stack specifies OpenClaw ≥2026.1.29 (patched) and adds three architectural defenses:

  1. Localhost-only binding. OpenClaw listens on 127.0.0.1 only. It is never reachable from the internet.
  2. Cloudflare Tunnel authentication. Even reaching localhost requires passing through Cloudflare Access OTP — an attacker would need to compromise your email account first.
  3. UFW port restriction. Only ports 80 and 443 are open. There is no path to OpenClaw from the outside.
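The first of the three defenses above is easy to sanity-check from a script. A small standard-library helper, with example addresses (this is an illustration, not part of the published guide):

```python
import ipaddress

def local_only(bind_addr: str) -> bool:
    """True if a service bound to this address is unreachable from the network."""
    return ipaddress.ip_address(bind_addr).is_loopback
```

`local_only("127.0.0.1")` is True; a service bound to `"0.0.0.0"` would be listening on every interface, relying solely on the firewall in front of it.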

The standing rules enforce behavioral constraints: all email actions produce drafts only (never autonomous sending), filesystem access is restricted to designated working directories, and every action requires human confirmation before execution.

The question is not whether the tool has risk. Every tool with real capability has risk. The question is whether the architecture contains that risk. This one does.


Four Secretaries, One Portal

The AI proxy is approximately 100 lines of Node.js. It routes requests to four providers through a single authenticated endpoint. API keys live in a .env file on the server and never reach the browser.

Each provider was selected for a distinct role:

  • Claude — contracts, editorial review, nuanced prose
  • Perplexity — source-cited real-time research
  • ChatGPT — general reasoning, coding assistance, analysis
  • Gemini — structured data, spreadsheet logic, integration tasks

This is not a limitation. It is a deliberate design. Four specialists outperform one generalist. And if a fifth provider emerges that serves your needs, adding it requires extending a single route in the proxy — fewer than 20 lines of code.
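The post's actual proxy is ~100 lines of Node.js; purely to illustrate the routing idea, here is a Python sketch. The endpoint URLs below are assumptions for illustration; check each provider's API documentation before relying on them.

```python
# Illustrative provider table (URLs are assumptions, not verified endpoints).
PROVIDERS = {
    "claude":     "https://api.anthropic.com/v1/messages",
    "chatgpt":    "https://api.openai.com/v1/chat/completions",
    "gemini":     "https://generativelanguage.googleapis.com/v1beta",
    "perplexity": "https://api.perplexity.ai/chat/completions",
}

def route(provider: str) -> str:
    """Map a provider name to its upstream endpoint.

    Adding a fifth provider is one more dict entry here, plus its API key
    in the server-side .env file.
    """
    try:
        return PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider}")
```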

The spending rule: set a hard cap per provider before your first API request. $20/month each. Total maximum exposure: $80/month. Realistic spend for a 3-person team: $15–35/month.
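That cap can also be enforced in the proxy itself rather than trusting provider dashboards. A minimal sketch (the class and figures are illustrative, not part of the published guide):

```python
class SpendCap:
    """Refuse further API calls once a monthly budget is exhausted."""

    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> None:
        # Reject before recording, so a refused call costs nothing.
        if self.spent + cost_usd > self.cap:
            raise RuntimeError("monthly cap reached; request refused")
        self.spent += cost_usd
```

One instance per provider gives the $20-each, $80-total structure described above.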


The Guide: DIY from Start to Finish

I wrote a free five-part series that covers the entire build. Every command. Every configuration file. Every decision point. Every place where I made a mistake, so you do not have to.

If you follow Parts 1 through 5 and the operational appendices in sequence, you will finish with a complete, production-grade system — without needing to consult external documentation or fill in gaps from other sources.

Part What You Build
Part 1 Architecture overview, cost analysis, security model, threat assessment
Part 2 VPS provisioning, Cloudflare Zero Trust, UFW, fail2ban, sysctl hardening
Part 3 Docker, Nextcloud, Collabora, AI proxy, OpenClaw, CalDAV, email, backups
Part 4 Guacamole, accounting API integration, Prometheus, Grafana, Alertmanager, AES-256 encrypted backups
Part 5 Full operations manual: LLM proxy code, OpenClaw workflow templates, monthly/annual checklists, emergency runbook (7 scenarios), AI spending audit

Build time: approximately 16–24 hours across two weekends.

All five parts are published and free. No paywall. No signup. No follow-up sequence.


A Few Things I Learned

  1. Cloudflare Tunnel eliminated the need for a VPN entirely. Two ports open, everything else invisible. This was the single biggest simplification.
  2. The hardest integration was not the AI proxy — it was getting Collabora's aliasgroup configuration to work correctly with Cloudflare's TLS termination.
  3. OpenClaw's CVE is a serious concern, but the architectural defense — localhost-only binding plus tunnel authentication — neutralizes it structurally. Do not deploy it without understanding the risk.
  4. The most underrated component is Supabase as a backup target. PostgreSQL-to-PostgreSQL with zero format conversion.
  5. The real transformation was not technical. It was organizational. Four AI secretaries with defined roles and one butler with strict standing rules changed how I work every day. The system stopped being infrastructure and became a team.

I would be grateful for any feedback from this community. If you see something I could improve, or a better approach to any part of this stack, I would genuinely like to hear it.

4
28
submitted 6 hours ago* (last edited 3 hours ago) by tanka@lemmy.ml to c/selfhosted@lemmy.world
 
 

Hey :) For a while now I've been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I'm not so up to date with the current self-hosted LLMs, and since the model I'm using was released at the beginning of August 2025 (from an LLM development perspective, that feels like an eternity to me), I just wanted to tap the collective wisdom of Lemmy to maybe replace my model with something better out there.

Edit:

Specs:

GPU: RTX 3060 (12GB vRAM)

RAM: 64 GB

gpt-oss-20b does not fit into VRAM completely, but it is partially offloaded and reasonably fast (enough for me)
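For anyone sizing a replacement model against the same card, a back-of-the-envelope VRAM estimate helps explain the partial offload. The bits-per-weight and overhead figures below are rough assumptions, not measurements:

```python
def approx_vram_gb(params_billion: float, bits_per_weight: float,
                   overhead_gb: float = 2.0) -> float:
    # Weights take params * bits / 8 bytes; the overhead term is a rough
    # allowance for KV cache and runtime buffers.
    return params_billion * bits_per_weight / 8 + overhead_gb

# ~20B params at ~4-bit quantization lands just over a 12 GB card,
# which is why some layers spill into system RAM:
estimate = approx_vram_gb(20, 4.25)   # ≈ 12.6 GB
```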

5
 
 

Hey guys! After over 2 years of me asking how to take the first steps in self-hosting, I think I've finally got most of the things I need set up (except for a mailcow server proxied through a VPS, but that's for another day). I've been seeing a bunch of posts here about the *arr stack, and only recently did it pique my interest enough to warrant a serious look. But I'll be honest, it's a bit confusing. For now, I'm just thinking of starting up the whole suite on my machine, then slowly exposing the parts I find useful to the internet (and shutting down the parts I don't). But I really can't find any good...tutorial(?) on how to quickly get the whole stack running, and I'm a bit worried about launching individual apps since I don't know if/how they communicate with each other. So I'll try to summarize my, quite naïve, questions here:

  • how exactly do I set up a quick stack? Is that possible? And more importantly, is that recommended?
  • most of the tutorials/stacks I see online use plex for video streaming, but seeing a lot of negativity around plex and its pricing, I reckon using jellyfin would be better. Does it just plug into the ecosystem as easily as plex apparently does?
  • I've already set up a hack-ish navidrome instance to stream music, but managing files is a real hassle with it. Does sonarr(?) do it any better?

I know most of these questions can be easily answered through some LLM (which I don't want to rely on) or by scouring documentation (which honestly looks a bit daunting from where I stand right now), so I figured it'd be best to ask here. Thanks for any help!
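On the first question: one common way to bring the core pieces up together is a single Docker Compose file. The sketch below is illustrative rather than a canonical stack; image tags, ports, and paths are placeholders to adapt (the linuxserver.io images are widely used for these apps). Keeping all media under one shared mount keeps cross-app paths simple.

```yaml
# Hypothetical minimal stack; tags, ports, and paths are placeholders to adapt.
services:
  prowlarr:                    # indexer manager; the other *arrs sync from it
    image: lscr.io/linuxserver/prowlarr:latest
    ports: ["9696:9696"]
    volumes: ["./config/prowlarr:/config"]
  sonarr:                      # TV
    image: lscr.io/linuxserver/sonarr:latest
    ports: ["8989:8989"]
    volumes: ["./config/sonarr:/config", "./media:/media"]
  radarr:                      # movies
    image: lscr.io/linuxserver/radarr:latest
    ports: ["7878:7878"]
    volumes: ["./config/radarr:/config", "./media:/media"]
  qbittorrent:                 # download client
    image: lscr.io/linuxserver/qbittorrent:latest
    ports: ["8080:8080"]
    volumes: ["./config/qbittorrent:/config", "./media:/media"]
  jellyfin:                    # playback; fills the role Plex usually does
    image: lscr.io/linuxserver/jellyfin:latest
    ports: ["8096:8096"]
    volumes: ["./config/jellyfin:/config", "./media:/media:ro"]
```

Containers on the same Compose network reach each other by service name (e.g. Prowlarr talks to `sonarr:8989`), so the inter-app communication is mostly configuration you enter in each app's settings UI. Jellyfin plugs in the same way Plex does; the *arrs just drop files into the library folders it watches. And for music, the relevant *arr is Lidarr rather than Sonarr (Sonarr handles TV); it can organize the files a Navidrome instance serves.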

6
 
 

Original project release post here for context: https://www.reddit.com/r/selfhosted/s/H4uS9GlwRJ

As a senior software engineer who's working hard to build a tool for the self-hosting community, I'm hoping my post will be received better here, given r/selfhosted's new megathread tar-pit rule for young projects… I'm reposting the Libre Closet feature update announcement here as I had planned to today on r/selfhosted.

I hope this finds some of the people who expressed interest in the project as we worked really hard to deliver on the requested features.

The intended release post read as the following:

——

First I would like to share gratitude for the warm and supportive reception Libre Closet received when I initially posted about it about a month ago. This really motivated me to take the project seriously.

I’d like to introduce ShoshannaTM who’s joined the project as one of the core maintainers.

Since the first post, we’ve gotten 88 github stars, over 3.1k docker image pulls, 2 community PRs contributed, and many helpful issues filed. We’ve made a point to try to respond to everyone in a timely manner and stay engaged with the budding community growing around this project.

We’ve focused a lot on quality and have taken the time to address bugs with numerous minor version releases.

The features and quality improvements have been made with considerable time, engineering, and intention. While Shoshanna and I are using Copilot to accelerate development, substantial deliberate design was done up front before any feature development, and nothing is merged without thorough iteration and human review. This project is not vibe coded. It is engineered, and we take much pride in our craft and the quality of our work.

Please feel free to ask any questions you may have, whether about the development choices we’ve made, or about the product itself. Both of us are excited to continue to build a community around this project. 

Key changes in this version include:

  • Background removal for garment images
  • Outfit scheduling calendar, with the ability to plan multiple outfits per day, and mark outfits as worn (future versions could use worn outfit data to tell you which garments you wear most, or perhaps track laundry)
  • An improved outfit builder, which allows you to include as many or as few garments as you'd like in a single outfit
  • Total customization flexibility for garment categories

As before, we've maintained the ease of self-hosting, and only one Docker command is needed to deploy! For everyone already hosting Libre Closet, you simply need to pull the fresh `latest` image to update to the v0.2.0 release.

`docker run -p 3000:3000 -v wardrobe-data:/data ghcr.io/lazztech/libre-closet:latest`

We can't wait for everyone to try it out, and we hope you enjoy v0.2.0 of Libre Closet!

Public: https://librecloset.lazz.tech/

GitHub: https://github.com/lazztech/Libre-Closet

7
41
submitted 18 hours ago* (last edited 17 hours ago) by egg82@lemmy.world to c/selfhosted@lemmy.world
 
 

It's been a month since Fetcharr released as a human-developed (I think we're sticking with that for now) replacement for Huntarr. So, I wanted to take a look at how that landscape has changed - or not changed - since then. I know this is a small part of an arr stack, which is a small part of a homelab, which is a small part of a small number of people's lives, but since I've been living in it almost every weekend for the last month or so I've gotten to see more of what happens there.

So, where are we at?

Let's start with Fetcharr itself:

  • ChatGPT contributions jumped from 4 to 17 instances, with 8 of those being "almost entirely" to "100%" written by LLM. 5 of those are github template files
    • An interesting note is that there are no Claude contributions, except for a vibe-coded PR for a plugin which I haven't reviewed or merged, and is unlikely to be merged at this stage because I don't want a bunch of plugins in the main codebase
  • Plugins is a new thing. I wanted to have my cake and eat it, too. I liked the idea of being able to support odd requests or extensible systems but I wanted to make sure the core of Fetcharr did one thing and did it well. I added a plugin API and system, and an example webhook plugin so folks could make their own thing without adding complexity to the main system
    • I may make my own plugins for things at some point but they won't be in the main Fetcharr repo. I want to keep that as clean and focused as possible
  • Fetcharr went from supporting only Radarr, Sonarr, and Whisparr to including Lidarr and Readarr (Bookshelf) in the lineup. This was always the plan, of course, but it took time to add them since the API docs are.. shaky at best
  • There were no existing Java libraries for handling *arr APIs so I made one and released it as arr-lib if anyone wants to use it for other projects in the future. No Fetcharr components, just API to Java objects. They're missing quite a few things but I needed an MVP for Fetcharr and PRs are always welcome.
  • The Fetcharr icon is still LLM-generated. I haven't reached out to any other artists since the previous post since I've been busy with other things like the actual codebase. Now that's winding down so I'll poke around a bit more

What about feedback Fetcharr has received?

The most common question I got was "but why?" and I had a hard time initially answering it. Not because I didn't think Fetcharr needed to exist, but because I couldn't adequately explain why it needed to exist. After a lot of back-and-forth some helpful folks came in with the answer. So, allow me to break down how these *arr apps work for a moment.

When you use, say, Radarr to get a movie using the automatic search / magnifying glass icon it will search all of your configured indexers and find the highest quality version of that movie based on your profiles (you are using configarr with the TRaSH guides, right?)

After a movie is downloaded, Radarr will continue to periodically look for newly-released versions of that movie via RSS feeds, which is much faster than using the automated search. The issue with this system is that not all indexers support RSS feeds, the feeds don't get older releases of that same movie, and the RSS search is pretty simplistic compared to a "full" search and may not catch everything. Additionally, if your quality profiles change it likely won't find an upgrade. The solution would be running the auto-search on every movie periodically, which is doable by hand, but projects like Upgradinatorr and Huntarr automated it while keeping the number of searches and the interval reasonably low so as to avoid overloading the *arr and the attached indexer and download client. Fetcharr follows that same idea.
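The idea in that last step can be sketched in a few lines (this illustrates the technique, not Fetcharr's actual Java code):

```python
import itertools

def search_batches(item_ids, batch_size=5):
    """Endlessly cycle through a library, yielding small batches to search.

    The caller sleeps between batches; batch size and interval are the two
    knobs that keep *arr and indexer load low (the numbers here are arbitrary).
    """
    pool = itertools.cycle(item_ids)
    while True:
        yield [next(pool) for _ in range(batch_size)]
```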

The second largest bit of feedback I've gotten (or, rather, question) is "why use an LLM at all?" - buckle up, because this one gets long. One of the main selling points of Fetcharr is that it's developed by a human with the skills and understanding of what they're doing and how their system works, so it's worth discussing.

The "why?" is a fair question, I think. We've seen distrust of LLMs and the impacts of their usage across left-leaning social media for a while now. Some of it is overblown rage-bait or catharsis, but there do seem to be tangible if not-yet-well-studied impacts on a societal as well as an ecological level, and there are more than a few good moral and ethical questions around their training and usage.

I have (and share) a fair number of opinions on this thread but ultimately it all boils down to this:

  • I used the ChatGPT web interface occasionally as a rubber-duck for high-level design and some implementation of the plugin system, as well as a few other things
  • I also used it to actually implement a few features. The few times I used it are documented in the codebase and it was a "manual" copy/paste from the web UI and often with tweaks or full rewrites to get the code working the way I wanted
  • I, personally, currently have no issue with individuals using LLMs or even using vibe-coding tools to create projects and sharing them with the world, as long as they're clearly documented as vibe-coded projects or LLM usage has been documented in some way
  • We, as users of free software, are owed nothing by the creators of said free software. The inverse is also true: the creators are owed nothing by its users, not even continued use. What I mean to say is, you are just as entitled to not use a piece of software as the creator is to do whatever they want with the software they've made, however they've made it
    • If you don't like how something is done, you don't need me to tell you that it's perfectly okay to not like it, trust it, or use it. Conversely, you are not owed an explanation or re-write of a system you would otherwise enjoy. I have no issues explaining why I made the choices I did but others may not be as comfortable doing so
    • The rise of LLMs and vibe-coding tools has given the average user with an idea the ability to implement that idea. I think that's an amazing thing; seeing people with an idea, some hope, and a few dollars create something from nothing. I thought it was great seeing people learn software dev as a kid, creating useful tools, operating systems, or entire playable worlds from an empty text document and I still think it's great today, even if I don't like some aspects of what a vibe-coded project means. Hell, I prefer human-developed projects over their vibe-coded counterparts when I can find them

Finally, Fetcharr has had a few issues opened and subsequently closed with resolutions. Some are more creative exploitation of how Fetcharr's internal systems work, and others had re-writes of other internal systems before they worked properly. And then there were the frustrating mistakes after a long day of frustrating mistakes. Such is the way of software development.

The new landscape

Since the initial 1.0.0 release of Fetcharr, there have been some changes in other projects and new insights into how this all fits together. Most notably, Cleanuparr got its own replacement, called Seeker, which is enabled by default. If you run Cleanuparr you may consider replacing or removing Fetcharr from your stack. Try both and see if it's worth running yet another thing.

Additionally, the developer of Unpackerr has mentioned that they're looking into a web UI for configuring their project so that's exciting for those that enjoy a web UI config.

It also seems like there's been a few other vibe-coded Huntarr replacements such as Houndarr if you're into those. Looks like a neat little web app and system.

So, where are we at?

Well, let's take an honest look at things:

  • It seems like Cleanuparr may very well have a clean Fetcharr replacement. As much as I love seeing folks use tools I've built it's hard to say that Fetcharr is any better than Seeker. Admittedly, I haven't yet tried Seeker, but because it ties directly into Cleanuparr it may very well have Fetcharr beat if you already use the system
  • Again, this is a small portion of a stack that a small portion of people use which in itself is a small portion of the general population. Does any of this really matter on a grand scale? No. It's just interesting and I've been living in it for a month, so it's worth sharing some insights which might apply to other, larger conversations.
  • The statement-piece of Fetcharr is the (lack of) LLM/AI usage. This is where a large portion of the conversation landed and it's a conversation worth having.
  • Web UI config or some sort of stats is a bigger deal to more folks than I originally assumed. It's not a deal-breaker for most but it's interesting to see how important it is to have some sort of pretty web UI. See: the number of stars Fetcharr has vs other similar projects. If you're ever creating your own project that's worth keeping in mind.
8
 
 

I just finished making a site. It stores emails in multiple ways, and I made an n8n workflow that fetches the stored emails, checks whether each one has been added to a spreadsheet before, deduplicates them, then adds them to the spreadsheet. The workflow runs any time a new email is submitted. This is my first time using n8n. Is it dangerous, in the sense that a workflow users can trigger directly could easily cause massive spikes in memory usage and so on, depending on what a user does? Even with rate limiting I'm unsure, because I don't know the overhead n8n has, and the workflow also takes about a minute to finish.
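One way to reduce the blast radius is to make the dedup step idempotent, so a replayed or retried trigger can never double-insert; rate limiting then only has to bound resource use, not correctness. A sketch in Python of the check-before-append logic (field names are made up):

```python
def append_unique(rows: list, email: str) -> bool:
    """Idempotent append: normalize the address, skip it if already recorded."""
    key = email.strip().lower()
    if any(row["email"] == key for row in rows):
        return False              # duplicate: nothing to do
    rows.append({"email": key})
    return True
```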

9
 
 

I'm still on my little adventure of pulling my crap off the cloud and realized my calendar is still blowing around out there. What do people use for their personal calendars nowadays?

10
11
 
 

Hey there selfhosted community.

I had big plans when I moved last year to finally set up my homelab with proper VLAN separation. Well, a stressful move later, I simply had no energy left and just threw my whole homelab and all my services into my main LAN with no separation whatsoever.

How much of a world of pain am I in now that I want to move my homelab services over to a separate VLAN? Any recommendations or pointers to documentation to go through before I decide whether this is something I want to do right now?

Currently this would impact a Proxmox host with 3 VMs and 1 LXC, and around 20 Docker containers.

12
 
 

I have a great music library on my homelab, but I hardly ever listen to it because I mostly listen to music on my TV.

I occasionally use the Jellyfin app, which is the best player I've found for TV, but it is not really built for audio and is cumbersome for searching, on-the-fly playlists, and continuous play.

Other apps I've tried are either worse in terms of features, or an absolute labyrinth to navigate with a remote. Most apps are designed for mobile and are somewhat to completely broken on TV, if they run at all.

I don't mind if the app is not free as long as it works well enough for my family to use it, and it connects to the server. I don't mind migrating my library to another service to get it working well on the TV.

Has anyone found an actually good Android TV app for hosted music?

13
 
 

The political and gundam memes my coworkers sent me got face recognized but it won't do it for my family photos...

14
 
 

As this will be a very open question (thanks to me being quite clueless), I will start with the setup:

One nginx server on an old Raspi, getting ports 80 and 443 routed from the access point and serving several pages as well as some reverse proxies for other services.

So a (very simplified) nginx server-block that looks like this:

# serve stuff internally (without a hostname) via http
server {
	listen 80 default_server;
	http2 on;
	server_name _;
	location / {
		proxy_pass http://localhost:5555/;
		# that's where all actual stuff is located
	}
}
# reroute http traffic with hostname to https
server {
	listen 80;
	http2 on;
	server_name server_a.bla;
	location / {
		return 301 https://$host$request_uri;
	}
}
server {
	listen 443 ssl default_server;
	http2 on;
	server_name server_a.bla;
	ssl_certificate     A_fullchain.pem;
	ssl_certificate_key A_privkey.pem;
	location / {
		proxy_pass http://localhost:5555/;
	}
}
# actual content here...
server {
	listen 5555;
	http2 on;
	root /srv/http;
	location / {
		index index.html;
	}
	location = /page1 {
		return 301 page1.html;
	}
	location = /page2 {
		return 301 page2.html;
	}
	# reverse proxy for an example webdav server
	location /dav/ {
		proxy_pass http://localhost:6666/;
	}
}

Which works well.

And intuitively it looked like putting Anubis into the chain should be simple. Just point the proxy_pass (and the required headers) in the "port 443"-section to Anubis and set it to pass along to localhost:5555 again.

Which really worked just as expected... but only for server_a.bla, server_a.bla/page1 or server_a.bla/page2.

server_a.bla/dav just hangs and hangs, then times out, seemingly trying to open server_a.bla:6666/dav.

So long story short...

How does proxy_pass actually work, such that the first setup works yet the second breaks? How does a call to localhost:6666 (already behind earlier proxy passes in both cases) somehow end up querying the hostname instead?

And what do I need to configure, or what information/headers do I need to pass on, to keep the internal communication intact?
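Not a definitive diagnosis, but one common cause fits the symptom: nginx only rewrites a backend's `Location:` header when it matches the `proxy_pass` URL (the default `proxy_redirect` behavior), so a WebDAV backend that issues an absolute redirect built from the host and port it sees (`server_a.bla:6666/dav`) can leak that URL to the browser once an extra hop like Anubis sits in between. A sketch of explicit directives to try in the `listen 5555` server's `/dav/` location (hostnames and ports taken from the config above; adjust as needed):

```nginx
location /dav/ {
	proxy_pass       http://localhost:6666/;   # trailing slash strips the /dav/ prefix
	proxy_set_header Host $host;               # backend sees the public hostname
	# The default proxy_redirect only rewrites Location: http://localhost:6666/...;
	# a redirect like http://server_a.bla:6666/... needs an explicit rule:
	proxy_redirect   http://$host:6666/ /dav/;
}
```

Note that declaring any explicit `proxy_redirect` disables the built-in default, so add `proxy_redirect default;` alongside it if the automatic rewrite is still wanted.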

15
 
 

I was wondering if I could get some suggestions from the community about the best way to go about migrating from Windows Server on my homelab host to Proxmox.

Background: Currently running Server 2022 Datacenter on my host because that's what I was familiar with. Using Hyper-V to manage multiple VMs like HomeAssistant and TrueNAS among other things. I have multiple container apps running within TrueNAS. There's an HBA passed through Hyper-V to the TrueNAS VM that has all of my HDDs connected to it. I'm just looking to migrate away from anything Microsoft/Windows.

Way Forward: The only procedure I could come up with was moving everything off of the NAS to my personal PC if I have the space, or maybe to multiple PCs in the house. Back up all of my configs. Install Proxmox on the host, build out the VMs again, then restore the configs and just hope everything works the same. Or: just back up my configs, build the new VMs on Proxmox, restore, and hope the new TrueNAS VM sees the data on the HDDs? I don't know if that would work.

If someone has a better idea, I'd like to hear it. Is there any way to dual boot Proxmox on the host maybe and slowly migrate things 1 by 1? Can you convert the VMs from Hyper-V before I install Proxmox?

Thanks everyone!

16
 
 

Thought I might share this for anyone running into the same problem as me, since the Beelink S12 Pro is a popular choice in this community, especially if you are running video transcoding (media servers).

After one and a half years of 24/7 activity I've started having issues with my Proxmox VMs and LXCs running my selfhosted services. First I've only noticed some services were becoming unreliable, but after a while it spread to different VMs, so it looked like the host must be the issue.

After reading through the logs it became apparent that my M.2 SSD had I/O errors and was about to fail. S.M.A.R.T. reported high temps: 53°C idle, up to 80°C when running overnight backups. So the issue was pretty clear: my SSD was constantly overheating, which caused it to fail in the end.

New SSD, fresh Proxmox install (finally did the upgrade from 8 to 9). Restored my backups and some settings and everything was running again in no time (❤️ Proxmox backups). Instantly checked the SMART temp values: same issue as before, temps way too high.

Did some reading and tweaked some settings to reduce the I/O load, but nope, still way too hot.

Next I removed the empty 2.5-inch disk bay taking up space above the M.2 and installed a 10mm-tall heatsink, which made it slightly better, but not by much. Temps dropped to 49°C idle / 71°C when running backups.

This made me rethink the problem. Reducing I/O and adding a heatsink didn't seem to reduce the heat much. Why? Some additional reading and pondering helped me find the issue: the CPU sits right on the other side of the board from the SSD. The CPU was cooking my SSD from below... I even tried tweaking the fan settings in the BIOS to cool the CPU down so my SSD wouldn't melt, but that didn't really help either. My Beelink just got louder.

Last resort was using a sledgehammer to crack the nut: bought an 80mm Noctua 5V PWM fan, 3D printed a new bottom plate for the Beelink, and am now blasting my SSD with air at full speed. Even running at full speed 24/7, the additional power cost is extremely low.

Result: 28°C idle / 35°C when running backups.

My Beelink now looks like some Frankenstein creation, but I'm just happy my servers and backups are stable again. Maybe I'll add a cover to make it a bit easier on the eyes.

FYI: I'm currently running 6 LXCs and 1 VM on this poor thing. Two of which are running >20 docker containers.
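For anyone wanting to catch this before drives start throwing I/O errors, a small trend-logging sketch (assumes smartmontools is installed; /dev/nvme0 and the log path are examples):

```shell
# Extract the Celsius value from `smartctl -A /dev/nvme0` output,
# which for NVMe drives contains a line like "Temperature: 49 Celsius".
parse_temp() { awk '/^Temperature:/ {print $2; exit}'; }

# Log one sample; run from cron or a systemd timer every few minutes:
# smartctl -A /dev/nvme0 | parse_temp >> /var/log/nvme-temp.log
```

A few days of samples makes it obvious whether the drive only spikes during backups or is cooking at idle too.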

17
115
submitted 2 days ago* (last edited 1 day ago) by Landless2029@lemmy.world to c/selfhosted@lemmy.world

So I've been playing Icarus with the wife and the optimization is hot garbage. Wife is hosting and pulling 10 fps with an Nvidia 3070 Ti.

We enjoy the game, so I started doing research. Turns out once you've played enough, the database on the host just gets too big and chokes the CPU, since the game can't use more than 2 cores.

The answer is to migrate your world to a self-hosted dedicated server. Say no more.

So now I've got an excuse (wife approved) to set up a computer as a server and keep it running. I have an old HP SFF i5 with 16GB RAM and an SSD that I've reimaged a few times for use as a home server.

Flashed it with Debian and set up the Icarus server in Docker. Runs like a champ.

~~Bonus points. I hooked up a wattage meter and it idles at 1~2 watts!
I used to run an old gaming computer as a home server and it felt like $30 a month in electricity.~~

Edit: System idles at 19 watts. I had the meter plugged into the wrong device...

Now I can start throwing more stuff on there once I figure out backups for the game world in case I bork it.
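For the world backup, a minimal rotation sketch along these lines might do (the save and backup paths are guesses; point them at wherever your container's data volume actually lives):

```shell
SAVE_DIR="/opt/icarus/data"      # hypothetical container volume path
BACKUP_DIR="/opt/icarus/backups" # where snapshots accumulate
KEEP=14                          # number of snapshots to retain

# timestamped archive name, e.g. icarus-2025-01-31-0300.tar.gz
backup_name() { date "+icarus-%F-%H%M.tar.gz"; }

# run nightly from cron, then prune anything beyond the newest $KEEP:
# tar -czf "${BACKUP_DIR}/$(backup_name)" -C "${SAVE_DIR}" .
# ls -1t "${BACKUP_DIR}"/icarus-*.tar.gz | tail -n +$((KEEP + 1)) | xargs -r rm --
```

Worth stopping (or pausing) the server container while the tar runs so you don't snapshot a half-written database.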

18

Here's my beautiful unemployed-for-too-long-have-no-money-dont-care-about-looks lab :)

picture of a raspberrypi, switch, HP elite desk, KVM and mess of cables on a desk

Hey, it's more than good enough to run all this ¯\_(ツ)_/¯

screenshot showing list of hosted apps and resources usage of servers

19

To be honest, I've seen commercial 7' racks in data centres and computer rooms that were worse than the worst ones here!

I was once tasked with rejigging 3 racks in a remote computer room. The racks were arranged in an "L" pattern due to the constraints of the room. None of the doors, front or back, could close because of cables running between servers and switches. Some cables actually ran diagonally across the L shape. A lot of cables were jammed between the mounting rails, and 3 metre cables were used where a 50cm one would have done, or 2 metre ones where 3 metres or more was needed. Almost nothing was labelled, and where it was, it was wrong. The cable colour coding scheme was ignored, and nothing was recorded.

There were servers racked on a slant (TWO nuts off on one side) and even mounted back-to-front. Others were literally sat directly on top of other kit, not bolted in at all. RAID arrays for critical servers were mounted in adjacent racks, with the cables running around the opened rear rack door. And there were a number of suspicious, unmarked servers of odd brands hooked into the main switch that nobody could identify. One turned out to be an abandoned Nagios server, but another was never identified; nothing broke and nobody screamed when I turned it off.

Just about all the horrible things you have seen or heard about were in that room. It took weeks to sort it out.

20

I'm trying to setup my VPN and I'm a bit confused here.

I have a commercial VPN subscription that I'm using on my phone and laptop. Now I've set up WireGuard on my OpenWRT router to access my home network remotely. I can connect to it from my phone, but from what I can see there's no way to have both the commercial VPN and my local-network WG active at the same time (both use WG, so I tried creating a WG config with two peers, but I'm not sure that's possible).

So what do people actually do? From what I see I have 3 options:

  1. Don't use the commercial VPN on my phone; only use WG to access my network.
  2. Switch between the VPNs manually whenever I want to access my network.
  3. Set up the commercial VPN on my router, route all of my network's traffic through that VPN, and route all traffic from my phone through my home network.

Am I missing something? What's the typical approach here? I thought what I'm trying to do was a basic scenario, but it looks like it's not that simple, if it's possible at all.
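For what it's worth, WireGuard's cryptokey routing picks the peer with the most specific matching AllowedIPs, so a two-peer config is not impossible in principle: the commercial peer takes the 0.0.0.0/0 default route and the home peer takes only the LAN subnet. A heavily hedged sketch (keys, endpoints, and subnets are placeholders; this only works if your provider tolerates their peer in a shared interface, and your home server must accept whatever tunnel address the phone uses):

```ini
[Interface]
PrivateKey = <phone-private-key>
Address = 10.64.0.2/32        ; address assigned by the commercial provider

[Peer]                        ; commercial VPN: default route
PublicKey = <provider-public-key>
Endpoint = vpn.provider.example:51820
AllowedIPs = 0.0.0.0/0

[Peer]                        ; home router: only the LAN subnet
PublicKey = <home-router-public-key>
Endpoint = home.example.org:51820
AllowedIPs = 192.168.1.0/24   ; more specific, wins over 0.0.0.0/0
```

If the provider's setup won't cooperate, your option 3 (commercial VPN on the router, phone tunnels home) is the common fallback.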

21

I've got Immich working great on Unraid, but when I'm on my own network I can't really use it: the DNS name just fails to resolve. I looked it up and the problem is that my router doesn't support hairpin NAT. It's an Aginet HB810. I found a workaround in the Immich client where you can add a second, network-specific server URL, but it doesn't seem to work very reliably.

What are my options?
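The standard workaround when a router can't do hairpin NAT is split-horizon DNS: run a local resolver on the LAN (Pi-hole, AdGuard Home, or plain dnsmasq) that answers your Immich hostname with the server's LAN address, and hand that resolver out via DHCP. A dnsmasq-style fragment (hostname and IP are placeholders for yours):

```conf
# /etc/dnsmasq.conf - answer the public name with the LAN address
address=/immich.example.com/192.168.1.50
```

With that in place, the same URL works inside and outside the network, so the client's second-entry workaround becomes unnecessary.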

22

A huge upside to using a laptop is that you get a built-in UPS and KVM.

23

I'm sketching the idea of building a NAS in my home, using a USB RAID enclosure (which may eventually turn into a proper NAS enclosure).

I haven't got the enclosure yet, but that's not that big of a deal. Right now I'm deciding whether to buy HDDs for the storage (I currently have none) to set up RAID, but I can't find good deals on HDDs.

I found on Reddit that people were buying high-capacity drives for as low as $15/TB, e.g. paying $100 for 10 or 12TB drives, but nowadays it's just impossible to find drives at a bargain price, thanks to AI datacenters, I guess.

In Europe I've heard of datablocks.dev, where you can buy white-label or recertified Seagate disks, and sometimes you can find refurbished drives on eBay, but I can't find the bargain deals everyone seemed to get up until last year.

For example, is 134 EUR for a refurbished 6TB Toshiba HDD a good price, considering the price hikes? What price per TB should I be aiming for to consider a drive cheap? And where else can I search for cheap drives?
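As a quick sanity check on those figures (using the numbers from the post):

```python
def price_per_tb(price_eur: float, capacity_tb: float) -> float:
    """Cost per terabyte, the usual way to compare drive deals."""
    return price_eur / capacity_tb

print(round(price_per_tb(134, 6), 2))   # the refurbished 6 TB Toshiba: 22.33 EUR/TB
print(round(price_per_tb(100, 12), 2))  # the old Reddit deals: ~8.33 EUR/TB
```

So that Toshiba is nearly 3x the old bargain rate per TB, which at least tells you how far the market has moved, even if it doesn't answer whether better is currently available.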

24

I have a OnePlus 5T that runs postmarketOS (console only, no GUI). I use pmbootstrap to flash the image, and it serves me well for the most part. To make the internet connection more stable, I want to connect the phone to the router using an Ethernet adapter.

I have borrowed a Portronics 8-in-1 Ethernet adapter, and it works well on other stock Android phones: Ethernet comes up and I can browse sites on them without issue.

I can't seem to do this with postmarketOS, though. Based on what I have read, I think the device kernel needs to be tweaked so the adapter can connect and work with the router. Does anyone know how to do this?
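If it helps narrow things down: most multiport hubs use a Realtek or ASIX USB Ethernet chip, and the device kernel needs the matching driver enabled. Running `lsusb` on the phone should show which chip yours has; these are the usual kernel config options to check (which chip applies is an assumption until you've looked):

```conf
# common USB Ethernet drivers in the kernel config
CONFIG_USB_USBNET=y
CONFIG_USB_RTL8152=y            # Realtek RTL8152/RTL8153 (very common in hubs)
CONFIG_USB_NET_AX88179_178A=y   # ASIX AX88179/178A
CONFIG_USB_NET_CDC_NCM=y        # generic CDC NCM adapters
```

pmbootstrap's `pmbootstrap kconfig edit` workflow is the usual route for changing these on a device kernel, if that's where the problem turns out to be.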

(This is my second post on this topic. Apologies).

25

Any ideas for a self-hosted home maintenance reminder system? Essentially something that holds reminders for things that recur regularly, e.g. yearly or monthly. Ideally it would have some way of checking a task off to show it was completed, after which it would recur at the preset time next month/year.
