Selfhosted

58417 readers
719 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues in the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
1

Due to the large number of reports we've received about recent posts, we've added Rule 7 stating "No low-effort posts. This is subjective and will largely be determined by the community member reports."

In general, we allow a post's fate to be determined by the number of downvotes it receives. Sometimes, however, a post is so offensive to the community that removal seems appropriate. This new rule allows such action to be taken.

We expect to fine-tune this approach as time goes on. Your patience is appreciated.

2

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3

Yesterday's update to Vaultwarden paved the way for yesterday's second update to Vaultwarden (by causing an issue)

4

The goal is to allow easily moving away from an email provider, e.g. from Protonmail to Tutanota or Fastmail or whatever. How do people achieve this?

I just want to have myname@mydomain, have the emails go to whichever managed email service allows it, and then grab everything from that service with POP and self-host a proxy that multiple devices can connect to. SMTP can go either through my hosted server or the managed host; it doesn't matter.

The idea is explicitly not to do the job of a managed email service. No DKIM, no SPF, no DMARC, none of that.

Distro is NixOS, but I can adapt any instructions given. Mentioning it just in case somebody already has a Nix configuration with this setup.
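The pull side of this is small enough to sketch with Python's stdlib `poplib`. This is only a rough illustration of the idea, not a ready-made proxy: the host, credentials, and the `store_locally` storage hook are all hypothetical, and deduplication relies on POP3's UIDL command so re-polling doesn't re-download messages.

```python
import poplib

def new_message_ids(server_uidls, seen_uidls):
    """Return (msg_num, uidl) pairs not yet downloaded.

    server_uidls: lines like b"1 abc123" as returned by POP3 UIDL.
    seen_uidls: set of UIDL strings already stored locally.
    """
    new = []
    for line in server_uidls:
        num, uidl = line.decode().split(" ", 1)
        if uidl not in seen_uidls:
            new.append((int(num), uidl))
    return new

def pull_mail(host, user, password, seen_uidls, store_locally):
    # Hypothetical host/credentials; any POP3-capable managed provider works.
    conn = poplib.POP3_SSL(host)
    conn.user(user)
    conn.pass_(password)
    _, uidl_lines, _ = conn.uidl()
    for num, uidl in new_message_ids(uidl_lines, seen_uidls):
        _, lines, _ = conn.retr(num)
        store_locally(b"\r\n".join(lines), uidl)  # your own storage layer
        seen_uidls.add(uidl)
    conn.quit()
```

The local IMAP/proxy layer your devices connect to would sit on top of whatever `store_locally` writes (Maildir being the obvious choice).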

5

Quick post about a change I made that's worked out well.

I was using OpenAI API for automations in n8n — email summaries, content drafts, that kind of thing. Was spending ~$40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don't need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.
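For reference, the request an automation ends up sending to Ollama looks roughly like this; `/api/generate` is Ollama's completion endpoint, `localhost:11434` is its default port, and the model name here is just an example (adjust to whatever you've pulled):

```python
import json
import urllib.request

def build_request(prompt, model="llama3:8b", host="http://localhost:11434"):
    """Build the HTTP request n8n-style automations send to Ollama.

    stream=False returns one JSON object instead of a stream of chunks,
    which is easier to handle in a workflow node.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Summarize this email: ...")
    with urllib.request.urlopen(req) as resp:  # requires a running Ollama
        print(json.loads(resp.read())["response"])
```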

6

Vaultwarden update out as of ~15 minutes ago, includes security updates.

It says "unconfirmed owner can purge entire organization vault". That seems decidedly not great, so updating promptly is probably a good idea.

7

Hi there,

recently there was a post here about Colota, and I thought you might be interested in a short summary.

I have been tracking my position for several years now, mainly with OwnTracks (and now Colota) and a simple Postgres DB/table.

I am a fan of the IndieWeb and of "eat what you cook", and with some million location points already collected I recognized some patterns in existing GPS trackers I wasn't happy about:

  1. Battery consumption
  2. Duplicate points while staying in the same location for a long time

So I decided to build my own GPS tracker and called it Custom Location Tracker.

Improved battery consumption should come from disabling GPS entirely in so-called "geofences", which are basically circles you draw on a map in the app. With GPS disabled inside these, you also won't get duplicate points while staying at, e.g., home or work.
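The geofence check itself is simple in principle: a point is inside a fence if its great-circle distance to the fence center is under the radius. A minimal sketch (not the app's actual code; fences are assumed to be `(center_lat, center_lon, radius_m)` tuples):

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def gps_should_be_on(lat, lon, geofences):
    """Disable GPS while inside any fence (center_lat, center_lon, radius_m)."""
    return not any(
        haversine_m(lat, lon, clat, clon) <= r for clat, clon, r in geofences
    )
```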

The app is still quite new (actively developed since early 2026) but already has quite a lot of features, basically all of which came from user feedback. E.g.:

  • Automatic tracking profiles, which apply different tracking settings while, e.g., connected to Android Auto, moving slower than 6 km/h, or charging.
  • The app works fully offline (the map will not be visible then), but you can predownload map tiles from a tile server I self-host or use your own tile server.
  • You can define how locations are synced to your backend, e.g. only on a specific Wi-Fi SSID, every 15 min, once a day, or with every location update.
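The sync rules in the last bullet boil down to a small policy check. A hypothetical model (field names invented, not the app's actual configuration format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SyncPolicy:
    """Hypothetical model of the app's sync rules."""
    required_ssid: Optional[str] = None  # only sync on this Wi-Fi; None = any network
    min_interval_s: int = 0              # 0 = sync on every location update

def should_sync(policy, current_ssid, seconds_since_last_sync):
    """Decide whether queued locations should be pushed to the backend now."""
    if policy.required_ssid is not None and current_ssid != policy.required_ssid:
        return False
    return seconds_since_last_sync >= policy.min_interval_s
```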

Overall, the app's focus will shift toward being a mobile location history app: basically Google Timeline as a mobile app that also supports self-hosted backends (as backup).

The app is fully open source (AGPL-3.0), has no ads, analytics, or telemetry, and only sends data to your own server (if you want it to).

You can download two versions.

  1. Google Play Store version, which uses the Fused Location Provider and therefore Google APIs. It also works with the sandboxed Play Services on GrapheneOS and with microG.
  2. FOSS version, which uses Android's native GPS provider with a network location fallback. Available on IzzyOnDroid and hopefully someday on F-Droid.

Both can also be downloaded directly from the repo.

8
Spam blocking in 2026 (downonthestreet.eu)
submitted 1 day ago* (last edited 1 day ago) by Shimitar@downonthestreet.eu to c/selfhosted@lemmy.world

I have self-hosted my email for 20+ years, always with the postfix + dovecot stack with the works (DKIM, DMARC, the DNS stuff, etc.).

In the last few years I removed all spam filters, as they were a chore and didn't provide much benefit (3-5 spam emails per week), even though my main email address has been out and about for at least two decades.

Recently, in the last few weeks, spam has picked up to the point that I receive some 10+ spam emails per day, which is pretty annoying, obviously.

So, what are you doing for spam filtering at the server level nowadays? Is it still the SpamAssassin circus? Anything better or more efficient?

9

Back in the day it was nice: apt-get update && apt-get upgrade and you were done.

But today every tool/service has its own way of being installed and updated:

  • docker:latest
  • docker:v1.2.3
  • custom script
  • git checkout v1.2.3
  • same but with custom migration commands afterwards
  • custom commands change from release to release
  • expects updates to be run as a specific user
  • update nginx config
  • update your own default config, where the service depends on the config changes
  • expects new versions of supporting tools
  • etc.

I selfhost around 20 services like PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc. And all of them have some dependencies which need to be updated too.

And nowadays you can't really keep running an older version, especially when it's internet-facing.

So anyway, what are your strategies for keeping your sanity while keeping all your self-hosted services up to date?
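For the Docker-based subset, one common pattern is keeping each stack in its own directory with a compose file and looping over them. A rough sketch (the `/srv/stacks` layout is hypothetical, and `docker compose pull` / `up -d` only helps for `latest`-style tags, not pinned versions with migrations):

```python
import pathlib
import subprocess

COMPOSE_FILES = ("compose.yaml", "compose.yml",
                 "docker-compose.yml", "docker-compose.yaml")

def find_stacks(root):
    """Return directories directly under root that contain a compose file."""
    root = pathlib.Path(root)
    return sorted(
        d for d in root.iterdir()
        if d.is_dir() and any((d / f).is_file() for f in COMPOSE_FILES)
    )

def update_stack(stack_dir):
    # Pull new images, then recreate only the containers whose image changed.
    subprocess.run(["docker", "compose", "pull"], cwd=stack_dir, check=True)
    subprocess.run(["docker", "compose", "up", "-d"], cwd=stack_dir, check=True)

if __name__ == "__main__":
    for stack in find_stacks("/srv/stacks"):  # hypothetical layout
        update_stack(stack)
```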

10

Anyone using MX Linux to self host?

11

Not sure if this is the exact right terminology, but essentially I’m looking for recommendations for a self-hosted solution (Linux) allowing creation of records for things like car, house, computer etc. with the ability to add notes for things like servicing, upgrades, processes and procedures. A nice to have, but not essential feature, could be a calendar with reminders on upcoming events relating to each asset. I’ve considered setting up a wiki, static site CMS or even just a note taker like Obsidian or LogSeq, but figure there are probably more dedicated tools out there. Also I have little to no experience with those listed, other than static site CMS which I use for work in web development. Thanks in advance for recommendations that are working for you.
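If nothing dedicated fits and you end up building or scripting something yourself, the underlying data model is simple. A hypothetical sketch of "asset + service log + reminders" (all names invented, just to make the requirements concrete):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LogEntry:
    when: date
    note: str            # e.g. "oil change", "RAM upgrade to 32 GB"

@dataclass
class Asset:
    name: str            # "car", "house", "NAS", ...
    log: list = field(default_factory=list)
    reminders: dict = field(default_factory=dict)  # label -> due date

    def due(self, today):
        """Reminders whose due date has arrived."""
        return [label for label, d in self.reminders.items() if d <= today]
```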

12
submitted 1 day ago* (last edited 1 day ago) by vk6flab@lemmy.radio to c/selfhosted@lemmy.world

I've been attempting to get audio to work across all guests in Proxmox. While I can probably physically attach a soundcard to each guest, I'd need multiple and I'd need to keep fiddling each time I started a guest.

Apparently with a command line argument, qemu can provide audio support through the host, so I figured that using pipewire to co-ordinate this would be the easiest way forward, except it turns out that it's a (GUI) user space process, which doesn't make sense on a server with no GUI users.

There's a Debian server package apparently specifically for this, but it doesn't appear to work and my (linked) bug report doesn't seem to have been noticed, let alone fixed.

How do you do audio across multiple guests?

Note, new Proxmox user, ancient Debian user. Use case: single user virtual workstation environment to replace the VMware system I've been using since 2009. (The hardware failed and VMware is now owned by people who are not interested in me as a customer.)

Edit:

Just to be explicit, any working audio would be great, preferably a system that doesn't require special drivers in the guest, since I'm also running ancient operating systems on this host, not to mention, experimental ones alongside my main virtual workstation.

13

I recently decided to rebuild my homelab after a nasty double hard drive failure (no important files were lost, thanks to ddrescue). The new setup uses one SSD as the PVE root drive, and two Ironwolf HDDs in a RAID 1 MD array (which I'll probably expand to RAID 5 in the near future).

Previously the storage array had a simple ext4 filesystem mounted to /mnt/storage, which was then bind-mounted to LXC containers running my services. It worked well enough, but figuring out permissions between the host, the container, and potentially nested containers was a bit of a challenge. Now I'm using brand new hard drives and I want to do the first steps right.

The host is an old PC living a new life: i3-4160 with 8 GB DDR3 non-ECC memory.

  • Option 1 would be to do what I did before: format the array as an ext4 volume, mount on the host, and bind mount to the containers. I don't use VMs much because the system is memory constrained, but if I did, I'd probably have to use NFS or something similar to give the VMs access to the disk.

  • Option 2 is to create an LVM volume group on the RAID array, then use Proxmox to manage LVs. This would be my preferred option from an administration perspective since privileges would become a non-issue and I could mount the LVs directly to VMs, but I have some concerns:

    • If the host were to break irrecoverably, is it possible to open LVs created by Proxmox on a different system? If I need to back up some LVM config files to make that happen, which files are those? I've tried following several guides to mount the LVs, but never been successful.
    • I'm planning to put things on the server that will grow over time, like game installers, media files, and Git LFS storage. Is it better to use thinpools or should I just allocate some appropriately huge LVs to those services?
  • Option 3 is to forget mdadm and use Proxmox's ZFS to set up redundancy. My main concern here, in addition to everything in option 2, is that ZFS needs a lot of memory for caching. Right now I can dedicate 4 GB to it, which is less than the recommendation -- is it responsible to run a ZFS pool with that?

My primary objective is data resilience above all. Obviously nothing can replace a good backup solution, but that's not something I can afford at the moment. I want to be able to reassemble and mount the array on a different system if the server falls to pieces. Option 1 seems the most conducive for that (I've had to do it once), but if LVM on RAID or ZFS can offer the same resilience without any major drawbacks (like difficulty mounting LVs or other issues I might encounter)... I'd like to know what others use or recommend.

14

I just wanted to share it. I'm some random guy. It looks good.

  • It supports shake to reset sleep timer!
15

Hey all, I hope I'm on topic. I host a bunch of self-hosted services at home; however, with the way things are going in the UK, I'm looking to get a VPS set up, initially to use as a proxy and WireGuard PoP, and probably move more stuff there to avoid censorship later on (the use case is a little fuzzy just yet).

So, the primary question is: any suggestions for good VPS providers that aren't the big 3 tech bros, in Western Europe, preferably France, the Netherlands, Belgium, or Spain?

Secondary question: my ISP throttled all VPN traffic the other week. We have 3 different VPN providers (2 mainstream, 1 small player) across about a dozen devices, and they were all throttled to 250K. If you turned the VPN off or split-tunneled, it went back to 100 Mb plus (I have a 1 Gb connection).

When I asked on reddit for advice the reddit bots immediately jumped in with "oh it's just your VPN provider" however if I dropped phones off the wifi and connected to mobile telephony the VPN'd connections were fine - similar speed to split tunnel less some overhead. Lasted for 12 hours and then went back to normal. I assume I was being sin-binned for too much sailing of the seven seas.

Any idea what settings I can tweak to make it harder for them to throttle me? I tried changing the Mullvad one to use port 443, but it didn't affect the throttling. Maybe they'd already put the throttle on for anything encrypted by that point?

Edit to fix poor grammar

Edit 2 - thank you all so much for the rapid replies, I'm going with OVHCloud as the cheapest option at my desired spec, with Ionos as the fallback if I have any issues with it.

The list you guys gave was brilliant though, so many options. Really appreciated

16

Hello selfhosters, I'm currently doing recon for hardware upgrades in my homelab (bad timing, I know). It's been some time since I last did this, and every hardware website I've checked so far either doesn't have any mini PCs, NUCs, or similar, or has zero information about the hardware.

Are there any good sites for selfhosted hardware? I prefer European sites if there are any, but at this point I'll take anything I can get.

17
submitted 2 days ago* (last edited 2 days ago) by tanka@lemmy.ml to c/selfhosted@lemmy.world

Hey :) For a while now I've been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I'm not up to date with current self-hosted LLMs, and since the model I'm using was released at the beginning of August 2025 (from an LLM development perspective, that feels like an eternity), I wanted to use the collective wisdom of Lemmy to maybe replace my model with something better out there.

Edit:

Specs:

GPU: RTX 3060 (12GB vRAM)

RAM: 64 GB

gpt-oss-20b does not fit into the vRAM completely, but it is partially offloaded and reasonably fast (enough for me)

18

cross-posted from: https://lemmy.world/post/45470185

Hi everyone!

Dograh is an open-source, self-hostable voice AI agent platform. It lets you build phone call agents with a drag-and-drop workflow builder. Think n8n but for voice calls. It's an alternative to Vapi, Retell, etc.

https://github.com/dograh-hq/dograh

(Any star would mean a lot - super appreciated ⭐️)

We've been building this for about 6 months now, just me and my co-founder. Fully bootstrapped, and we have said no to inbound VC money to stay that way. It's been exciting but also exhausting; we're getting 5-10 tickets a day now and still figuring out how to keep up with that while shipping new stuff.

Here's what's new in v1.20:

  • Speech-to-Speech via Gemini 3.1 Flash Live. Instead of stitching together separate STT, LLM, and TTS services, this collapses the whole pipeline into a single connection. The calls sound noticeably more natural.
  • Pre-recorded voice mixing. You use actual human voice recordings for predictable parts of the conversation (greetings, confirmations, hold messages), and TTS only kicks in when the agent needs to say something dynamic (using a cloned or similar voice as fallback). This saves a lot on TTS costs, but most importantly it makes the bot sound human (because it is an actual human voice) and lowers latency.
  • Post-call QA with sentiment analysis and miscommunication detection.
  • Full call traces via Langfuse.

Apart from the above key highlights, we have all the basic stuff: tool calls, call transfer, knowledge base, etc. Docker setup takes about 2 minutes. Bring your own API keys, no vendor lock-in.
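The pre-recorded voice mixing idea reduces to a lookup with a synthesis fallback. A hypothetical sketch (not Dograh's actual code; clip names and paths invented):

```python
# Fixed, predictable phrases map to studio recordings; anything else
# falls back to TTS in a cloned/similar voice.
RECORDED_CLIPS = {
    "greeting": "audio/greeting.wav",
    "hold": "audio/hold.wav",
}

def render_utterance(kind, text, tts):
    """Return a path to a recorded clip, or synthesize text with tts()."""
    clip = RECORDED_CLIPS.get(kind)
    if clip is not None:
        return clip
    return tts(text)
```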

**What's on the roadmap (happy to hear more requests):**

  • Full support for self-hostable open-source AI models (LLM, TTS, STT, S2S)
  • More integrations. Happy to take suggestions here.

One clarification, since I know this community cares about this stuff:

  • Dograh is BSD-2 licensed and always will be. No bait and switch. Everything we build goes into the open source.

Just trying to build something useful and keep the lights on. If you can check it out and give us a star it would be a blessing ❤️ , but if not, I love you anyway :)

https://github.com/dograh-hq/dograh

Docs: https://docs.dograh.com/

Tech stack if anyone's curious: FastAPI, Next.js, forked Pipecat, Langfuse.

19

cross-posted from: https://lemmy.ml/post/45783448

Hey! I shared NAS Monitor here a while back – figured it's time for an update since the project has grown quite a bit.

If you want a quick overview first: 📺 https://youtu.be/IGdEm8DbXmg

What's new:

  • Real-time WebSocket push & SSE streaming
  • Traffic charts with Download/Upload in MiB/s
  • Temperature history, threshold alerts
  • Docker container controls (start/stop with toast/confirm UI)
  • Container logs viewer
  • Home Assistant iframe embedding
  • Downtime tracking & storage forecast
  • Secrets via Docker Compose instead of env vars
  • Frontend split into 8 modular JS files (might be interesting if you want to contribute)

Plus a bunch of fixes around disk health parsing, Docker 500 errors, container stats latency and SSE cache bypass.

Still looking for contributors – the codebase is a lot cleaner now and easier to get into.

🔗 Source + API Docs: https://gitlab.com/K-22/nas-monitor-interface

📖 Setup: https://nas-monitor-interface-cc7f40.gitlab.io/

📄 UGOS Pro API (reverse-engineered): https://gitlab.com/K-22/nas-monitor-interface/-/blob/main/API.md

20

Original project release post here for context: https://www.reddit.com/r/selfhosted/s/H4uS9GlwRJ

As a senior software engineer who's working hard to build a tool for the self-hosting community, I'm hoping my post will be received better here, given r/selfhosted's new megathread tar-pit rule for young projects. I'm reposting the Libre Closet feature update announcement here as I had planned to post it today on r/selfhosted.

I hope this finds some of the people who expressed interest in the project as we worked really hard to deliver on the requested features.

The intended release post read as the following:

——

First, I would like to share my gratitude for the warm and supportive reception Libre Closet received when I initially posted about it a month ago. This really motivated me to take the project seriously.

I’d like to introduce ShoshannaTM who’s joined the project as one of the core maintainers.

Since the first post, we’ve gotten 88 github stars, over 3.1k docker image pulls, 2 community PRs contributed, and many helpful issues filed. We’ve made a point to try to respond to everyone in a timely manner and stay engaged with the budding community growing around this project.

We’ve focused a lot on quality and have taken the time to address bugs with numerous minor version releases.

The features and quality improvements have been made with considerable time, engineering, and intention. While Shoshanna and I are using Copilot to accelerate development, substantial deliberate design was done on our part before any feature development. Additionally, nothing is merged without thorough iteration and human review. This project is not vibe-coded. It is engineered, and we take much pride in our craft and the quality of our work.

Please feel free to ask any questions you may have, whether about the development choices we’ve made, or about the product itself. Both of us are excited to continue to build a community around this project. 

Key changes in this version include:

  • Background removal for garment images
  • Outfit scheduling calendar, with the ability to plan multiple outfits per day, and mark outfits as worn (future versions could use worn outfit data to tell you which garments you wear most, or perhaps track laundry)
  • An improved outfit builder, which allows you to include as many or as few garments as you'd like in a single outfit
  • Total customization flexibility for garment categories

As before, we've maintained the ease of self-hosting: only one docker command is needed to deploy! For everyone already hosting Libre Closet, simply pull the fresh `latest` image to update to the v0.2.0 release.

`docker run -p 3000:3000 -v wardrobe-data:/data ghcr.io/lazztech/libre-closet:latest`

We can’t wait for everyone to try it out, and we hope you enjoy V0.2.0 of Libre Closet! 

Public: https://librecloset.lazz.tech/

GitHub: https://github.com/lazztech/Libre-Closet

21

Hey guys! After over 2 years of asking how to take the first steps in self-hosting, I think I've finally got most of the things I need set up (except for a mailcow server proxied through a VPS, but that's for another day). I've been seeing a bunch of posts here about the *arr stack, and only recently has it piqued my interest enough to warrant a serious look. But I'll be honest, it's a bit confusing. For now, I'm just thinking of starting up the whole suite on my machine, then slowly exposing to the internet the parts I find useful (and shutting down the parts I don't). But I really can't find any good tutorial(?) on how to quickly get the whole stack running, and I'm a bit worried about launching individual apps since I don't know if/how they communicate with each other. So I'll try to summarize my, quite naïve, questions here:

  • how exactly do I set up a quick stack? Is that possible? And more importantly, is that recommended?
  • most of the tutorials/stacks I see online use plex for video streaming, but seeing a lot of negativity around plex and its pricing, I reckon using jellyfin would be better. Does it just plug into the ecosystem as easily as plex apparently does?
  • I've already set up a hack-ish navidrome instance to stream music, but managing files is a real hassle with it. Does sonarr(?) do it any better?

I know most of these questions can be easily answered through some LLM (which I don't wanna rely on) or scouring documentation (which honestly look a bit daunting from my point right now), so I figured it'd be best to ask here. Thanks for any help!

22
submitted 3 days ago* (last edited 3 days ago) by egg82@lemmy.world to c/selfhosted@lemmy.world

It's been a month since Fetcharr released as a human-developed (I think we're sticking with that for now) replacement for Huntarr. So, I wanted to take a look at how that landscape has changed - or not changed - since then. I know this is a small part of an arr stack, which is a small part of a homelab, which is a small part of a small number of people's lives, but since I've been living in it almost every weekend for the last month or so I've gotten to see more of what happens there.

So, where are we at?

Let's start with Fetcharr itself:

  • ChatGPT contributions jumped from 4 to 17 instances, with 8 of those being "almost entirely" to "100%" written by LLM. 5 of those are github template files
    • An interesting note is that there are no Claude contributions, except for a vibe-coded PR for a plugin which I haven't reviewed or merged, and is unlikely to be merged at this stage because I don't want a bunch of plugins in the main codebase
  • Plugins is a new thing. I wanted to have my cake and eat it, too. I liked the idea of being able to support odd requests or extensible systems but I wanted to make sure the core of Fetcharr did one thing and did it well. I added a plugin API and system, and an example webhook plugin so folks could make their own thing without adding complexity to the main system
    • I may make my own plugins for things at some point but they won't be in the main Fetcharr repo. I want to keep that as clean and focused as possible
  • Fetcharr went from supporting only Radarr, Sonarr, and Whisparr to including Lidarr and Readarr (Bookshelf) in the lineup. This was always the plan, of course, but it took time to add them since the API docs are.. shaky at best
  • There were no existing Java libraries for handling *arr APIs so I made one and released it as arr-lib if anyone wants to use it for other projects in the future. No Fetcharr components, just API to Java objects. They're missing quite a few things but I needed an MVP for Fetcharr and PRs are always welcome.
  • The Fetcharr icon is still LLM-generated. I haven't reached out to any other artists since the previous post since I've been busy with other things like the actual codebase. Now that's winding down so I'll poke around a bit more

What about feedback Fetcharr has received?

The most common question I got was "but why?" and I had a hard time initially answering it. Not because I didn't think Fetcharr needed to exist, but because I couldn't adequately explain why it needed to exist. After a lot of back-and-forth, some helpful folks came in with the answer. So, allow me to break down how these *arr apps work for a moment.

When you use, say, Radarr to get a movie using the automatic search / magnifying glass icon it will search all of your configured indexers and find the highest quality version of that movie based on your profiles (you are using configarr with the TRaSH guides, right?)

After a movie is downloaded, Radarr will continue to periodically pick up newly released versions of that movie via RSS feeds, which is much faster than using the automated search. The issue with this system is that not all indexers support RSS feeds, the feeds don't include older releases of that same movie, and the RSS search is pretty simplistic compared to a "full" search and may not catch everything. Additionally, if your quality profiles change, it likely won't find an upgrade. The solution is to run the auto-search on every movie periodically, which is doable by hand, but projects like Upgradinatorr and Huntarr automated it while keeping the number of searches and the search period reasonably low, so as to avoid overloading the *arr and the attached indexer and download client. Fetcharr follows that same idea.
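The core of that loop is small: take the library's movie IDs, trigger a search for a few at a time, and space the batches out. A rough sketch against what I believe is Radarr's v3 command endpoint (verify the route and payload for your version; this is not Fetcharr's actual code, which is Java):

```python
import itertools
import json
import urllib.request

def batches(ids, size):
    """Yield fixed-size batches so each cycle only triggers a few searches."""
    it = iter(ids)
    while chunk := list(itertools.islice(it, size)):
        yield chunk

def search_batch(base_url, api_key, movie_ids):
    # POST /api/v3/command with a MoviesSearch payload; Radarr queues the
    # full indexer search for just these movies.
    req = urllib.request.Request(
        f"{base_url}/api/v3/command",
        data=json.dumps({"name": "MoviesSearch", "movieIds": movie_ids}).encode(),
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
    )
    return urllib.request.urlopen(req)
```

A scheduler then calls `search_batch` for one batch per interval (say, a handful of movies every 15 minutes) so neither the *arr nor the indexers get hammered.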

The second largest bit of feedback I've gotten (or, rather, question) is "why use an LLM at all?" - buckle up, because this one gets long. One of the main selling points of Fetcharr is that it's developed by a human with the skills and understanding of what they're doing and how their system works, so it's worth discussing.

The "why?" is a fair question, I think. We've seen distrust of LLMs and the impacts of their usage across left-leaning social media for a while now. Some of it is overblown rage-bait or catharsis, but there do seem to be tangible if not-yet-well-studied impacts on a societal as well as an ecological level, and there are more than a few good moral and ethical questions around their training and usage.

I have (and share) a fair number of opinions on this topic, but ultimately it all boils down to this:

  • I used the ChatGPT web interface occasionally as a rubber-duck for high-level design and some implementation of the plugin system, as well as a few other things
  • I also used it to actually implement a few features. The few times I used it are documented in the codebase and it was a "manual" copy/paste from the web UI and often with tweaks or full rewrites to get the code working the way I wanted
  • I, personally, currently have no issue with individuals using LLMs or even using vibe-coding tools to create projects and sharing them with the world, as long as they're clearly documented as vibe-coded projects or LLM usage has been documented in some way
  • We, as users of free software, have no obligation from the creators of said free software for anything at all. The inverse is true: the creators of the software have no obligation from its users to continue using it. What I mean to say is, you are just as entitled to not use a piece of software as the creator is to do whatever they want with the software they've made, however they've made it
    • If you don't like how something is done, you don't need me to tell you that it's perfectly okay to not like it, trust it, or use it. Conversely, you are not owed an explanation or re-write of a system you would otherwise enjoy. I have no issues explaining why I made the choices I did but others may not be as comfortable doing so
    • The rise of LLMs and vibe-coding tools has given the average user with an idea the ability to implement that idea. I think that's an amazing thing; seeing people with an idea, some hope, and a few dollars create something from nothing. I thought it was great seeing people learn software dev as a kid, creating useful tools, operating systems, or entire playable worlds from an empty text document and I still think it's great today, even if I don't like some aspects of what a vibe-coded project means. Hell, I prefer human-developed projects over their vibe-coded counterparts when I can find them

Finally, Fetcharr has had a few issues opened and subsequently closed with resolutions. Some are more creative exploitation of how Fetcharr's internal systems work, and others had re-writes of other internal systems before they worked properly. And then there were the frustrating mistakes after a long day of frustrating mistakes. Such is the way of software development.

The new landscape

Since the initial 1.0.0 release of Fetcharr, there have been some changes in other projects and new insights into how this all fits together. Most notably, Cleanuparr got its own replacement called Seeker, which is enabled by default. If you run Cleanuparr you may consider replacing or removing Fetcharr from your stack. Try both; see if it's worth running yet-another-thing.

Additionally, the developer of Unpackerr has mentioned that they're looking into a web UI for configuring their project so that's exciting for those that enjoy a web UI config.

It also seems like there have been a few other vibe-coded Huntarr replacements, such as Houndarr, if you're into those. Looks like a neat little web app and system.

So, where are we at?

Well, let's take an honest look at things:

  • It seems like Cleanuparr may very well have a clean Fetcharr replacement. As much as I love seeing folks use tools I've built it's hard to say that Fetcharr is any better than Seeker. Admittedly, I haven't yet tried Seeker, but because it ties directly into Cleanuparr it may very well have Fetcharr beat if you already use the system
  • Again, this is a small portion of a stack that a small portion of people use which in itself is a small portion of the general population. Does any of this really matter on a grand scale? No. It's just interesting and I've been living in it for a month, so it's worth sharing some insights which might apply to other, larger conversations.
  • The statement-piece of Fetcharr is the (lack of) LLM/AI usage. This is where a large portion of the conversation landed and it's a conversation worth having.
  • Web UI config or some sort of stats is a bigger deal to more folks than I originally assumed. It's not a deal-breaker for most but it's interesting to see how important it is to have some sort of pretty web UI. See: the number of stars Fetcharr has vs other similar projects. If you're ever creating your own project that's worth keeping in mind.
23

I'm still on my little adventure of pulling my crap off the cloud and realized my calendar is still blowing around out there. What do people use for their personal calendars nowadays?

25

Hey there selfhosted community.

I had big plans when I moved last year to finally set up my homelab with proper VLAN separation. Well, a stressful move later, I simply had no energy left and just threw my whole homelab and all my services into my main LAN with no separation whatsoever.

How much of a world of pain am I in for when I want to move my homelab services into a separate VLAN? Any recommendations or pointers to documentation to go through before I decide if this is something I want to do right now?

Currently this would impact a Proxmox host with 3 VMs, 1 LXC, and around 20 Docker containers.
