egg82

joined 2 years ago
[–] egg82@lemmy.world 3 points 14 hours ago (1 children)

From the above post:

The most common question I got was “but why?” and I had a hard time initially answering that. Not because I didn’t think Fetcharr needed to exist, but because I couldn’t adequately explain why it needed to exist. After a lot of back-and-forth some helpful folks came in with the answer. So, allow me to break down how these *arr apps work for a moment.

When you use, say, Radarr to get a movie using the automatic search / magnifying glass icon it will search all of your configured indexers and find the highest quality version of that movie based on your profiles (you are using configarr with the TRaSH guides, right?)

After a movie is downloaded, Radarr will continue to periodically check for newly-released versions of that movie via RSS feeds, which is much faster than using the automated search. The issue with this system is that not all indexers support RSS feeds, the feeds don’t include older releases of that same movie, and the RSS search is pretty simplistic compared to a “full” search and may not catch everything. Additionally, if your quality profiles change it likely won’t find an upgrade. The solution to this is running the auto-search on every movie periodically, which is doable by hand, but projects like Upgradinatorr and Huntarr automated it while keeping the number of searches low and the interval between them long enough to avoid overloading the *arr app and the attached indexer and download client. Fetcharr follows that same idea.

So, if the RSS systems work just fine for you, then that's great! This is a tool made for the people who have found the RSS searches have failed them for one reason or another.

[–] egg82@lemmy.world 3 points 14 hours ago

it's definitely something. Not sure how we wound up on this trend; I'm just here, along for the ride.

[–] egg82@lemmy.world 4 points 14 hours ago (1 children)

That would be pretty awesome! I personally like giving money to folks as compensation for borrowing their skills, but free is a nice add. The current icon is actually pretty nice and works well, but since it's LLM-generated I wanted to replace it with a human creation.

[–] egg82@lemmy.world 2 points 15 hours ago* (last edited 15 hours ago)

For anyone interested in the configarr config I use, here you go. It's somewhat customized to my taste (especially dubs > subs for anime) and there's likely an issue or inconsistency or two in it that someone more familiar might be able to spot, but it works pretty well and I'd say it's a good starting point if you just want to get going.

Note that it's a kubernetes ConfigMap but it's not hard to pull the relevant info into docker for your own needs.
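For anyone not on kubernetes, the docker-compose shape of the same thing is roughly the sketch below. The image path and volume layout here are from memory and may not match current configarr releases, so check its README before copying anything.

```yaml
# Sketch of running configarr as a compose service instead of a
# kubernetes ConfigMap. Image path and mount points are assumptions.
services:
  configarr:
    image: ghcr.io/raydak-labs/configarr:latest
    volumes:
      - ./configarr/config:/app/config   # config.yml + secrets.yml go here
      - ./configarr/repos:/app/repos     # cached TRaSH-guides / template repos
    environment:
      - TZ=Etc/UTC
    restart: "no"   # configarr is one-shot; re-run it after profile changes,
                    # e.g. from cron: docker compose run --rm configarr
```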

[–] egg82@lemmy.world 9 points 15 hours ago* (last edited 15 hours ago) (3 children)

as always, the answer is "it depends" - everyone has their own unique flavor of *arr stack with different components. Breaking it down, everything revolves around the core apps:

  • Radarr, for movies
  • Sonarr, for TV shows / anime
  • Lidarr, for music
  • Readarr (now Bookshelf), for books/audiobooks
  • Whisparr, for porn

These apps do the majority of the hard work of going from, e.g., "I want this movie" to "this movie file is now downloaded and placed into a subdirectory on my NAS or storage somewhere".

Realistically, all you need to get started is a download client (usenet, torrent client, whatever - the most popular choice is qbittorrent-nox or an equivalent docker container), your *arr app(s) of choice, and a way to consume and share the media you've now downloaded to your NAS or server (plex, jellyfin, stash, audiobookshelf, VLC, etc)

For consuming media, here's a non-comprehensive list that most people will recommend at least one thing from:

  • Plex or Jellyfin for audiovisual media. TV shows, anime, movies, porn, audiobooks, and music
  • Stash for porn-specific media, if you prefer. Significantly better metadata handling and management designed specifically and only for porn
  • Audiobookshelf specifically for books and audiobooks. Again, better metadata handling and management designed specifically for books/audiobooks
  • VLC or an equivalent if you prefer mounting your media share to your PC and just playing the raw files

The rest of the *arr ecosystem serves as a way to automate this core idea or fix issues with that automation. An example from my own homelab:

  • I have every *arr app listed as the core for finding/downloading whatever media
    • I have two instances of Sonarr and Bookshelf. One Sonarr for TV shows and one for anime, and similarly one Bookshelf for regular books and one for audiobooks. Given the way data management is handled in these apps, it's significantly easier to set up two instances of each rather than trying to force everything into one app
  • I use Prowlarr as an indexer manager. You can add indexers to each app but it's easier to set up Prowlarr and let it do the handling and search caching
  • I use qBittorrent for the actual downloading and Plex for sharing. I've found that friends and family have a much easier time both finding and using Plex, so I stuck with that over Jellyfin
  • I set up Unpackerr because imports for the *arr apps often fail when the downloads are compressed in some way. This just automates finding and decompressing those files so they can import successfully without needing me to go in and do things myself
  • I use configarr to automate the application of the TRaSH guides to each *arr which significantly increases the odds of getting a good quality version of whatever it is you're looking for when doing an automatic search
  • I have Seerr set up so friends and family can request movies, TV, or anime on their own without needing to message me all the time
  • The *arr apps do an okay-ish job of constantly looking for upgrades for existing media but they fail in a lot of unexpected ways so I used to run Huntarr. After that imploded I created and now run Fetcharr. If a better version of something I have is ever released it'll nab it automatically
  • Since I'm a filthy dub watcher (I just can't do subtitles, sorry) I have Taggarr to tag anime series as "not the dubbed version" which works well enough
  • I just set up dispatcharr for live TV which was a fun little side-project and maybe could be useful later. This was one of those "ooh pretty" set-it-up-and-see-how-it-goes things.
  • Because automated requests from Seerr and Fetcharr can clog up your queues with failed downloads pretty quickly (stalled, bad releases or naming, etc), I set up Cleanuparr to deal with that whole mess. Works pretty well, no need to check and clear things myself any more
  • My wife can't do any media without subtitles so I also have Bazarr running to download those for any media that's missing them
  • I also set up Maintainerr because I've realized my friends and family have a habit of requesting stuff and then never watching it, so this prevents media from completely filling up the NAS. It deletes media based on rulesets. Mine is customized to delete unwatched stuff after X days
  • I also have Mixarr set up which I have mixed (hah) feelings on. Just takes my music I listen to and grabs artists I don't already have. Very obviously vibe-coded which makes me nervous because of the type of people who vibe-code popular apps and the thick skin required to publish popular apps to the internet. So far I haven't found anything better
  • I also recently set up audiobookshelf for books and audiobooks. The metadata handling and management is ehh so I may look into LazyLibrarian to clean up and properly tag downloaded media before audiobookshelf pulls it so it can actually get the correct books and authors
  • I also have Stash running for an interface to Whisparr, since adding porn to Plex would be a terrible idea. My friends have kids and they watch a lot on the Plex. It would be super unfortunate to have porn as a recommended video
  • Finally, I run Tautulli for stats upon stats upon stats. And because Maintainerr can make use of it
  • FileFlows and Tdarr are also popular for compression, health checks, etc of existing media. I ran them previously but don't any longer

Not all of these will be useful to you, and you'll likely find others that are more useful for your situation. Like I mentioned, everyone's *arr stack is different and unique.

My recommendation: start with an *arr or two, configarr (optional but really recommended - hard to set up but once you do you're good forever), prowlarr (optional but you'll thank yourself later if you ever get into this and end up with more *arrs), and unpackerr (really do recommend this one) and go from there.
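To make that starting point concrete, a minimal stack might look something like the compose sketch below. These images are common community picks rather than the only options, and every path, port, and ID is an example to adjust to your own setup.

```yaml
# Minimal starter-stack sketch: one download client, one *arr,
# an indexer manager, Unpackerr, and a media server.
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WEBUI_PORT=8080
    volumes:
      - ./config/qbittorrent:/config
      - ./downloads:/downloads
    ports:
      - "8080:8080"

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config/radarr:/config
      - ./movies:/movies
      - ./downloads:/downloads
    ports:
      - "7878:7878"

  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config/prowlarr:/config
    ports:
      - "9696:9696"

  unpackerr:
    image: golift/unpackerr:latest
    environment:
      - UN_RADARR_0_URL=http://radarr:7878
      - UN_RADARR_0_API_KEY=changeme   # from Radarr > Settings > General
    volumes:
      - ./downloads:/downloads

  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - ./config/jellyfin:/config
      - ./movies:/movies:ro
    ports:
      - "8096:8096"
```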

42
submitted 20 hours ago* (last edited 20 hours ago) by egg82@lemmy.world to c/selfhosted@lemmy.world
 

It's been a month since Fetcharr released as a human-developed (I think we're sticking with that for now) replacement for Huntarr. So, I wanted to take a look at how that landscape has changed - or not changed - since then. I know this is a small part of an arr stack, which is a small part of a homelab, which is a small part of a small number of people's lives, but since I've been living in it almost every weekend for the last month or so I've gotten to see more of what happens there.

So, where are we at?

Let's start with Fetcharr itself:

  • ChatGPT contributions jumped from 4 to 17 instances, with 8 of those being "almost entirely" to "100%" written by LLM. 5 of those are GitHub template files
    • An interesting note is that there are no Claude contributions, except for a vibe-coded PR for a plugin which I haven't reviewed or merged, and is unlikely to be merged at this stage because I don't want a bunch of plugins in the main codebase
  • Plugins is a new thing. I wanted to have my cake and eat it, too. I liked the idea of being able to support odd requests or extensible systems but I wanted to make sure the core of Fetcharr did one thing and did it well. I added a plugin API and system, and an example webhook plugin so folks could make their own thing without adding complexity to the main system
    • I may make my own plugins for things at some point but they won't be in the main Fetcharr repo. I want to keep that as clean and focused as possible
  • Fetcharr went from supporting only Radarr, Sonarr, and Whisparr to including Lidarr and Readarr (Bookshelf) in the lineup. This was always the plan, of course, but it took time to add them since the API docs are.. shaky at best
  • There were no existing Java libraries for handling *arr APIs, so I made one and released it as arr-lib if anyone wants to use it for other projects in the future. No Fetcharr components, just API to Java objects. It's missing quite a few things, but I needed an MVP for Fetcharr and PRs are always welcome.
  • The Fetcharr icon is still LLM-generated. I haven't reached out to any other artists since the previous post because I've been busy with other things, like the actual codebase. Now that that's winding down, I'll poke around a bit more

What about feedback Fetcharr has received?

The most common question I got was "but why?" and I had a hard time initially answering that. Not because I didn't think Fetcharr needed to exist, but because I couldn't adequately explain why it needed to exist. After a lot of back-and-forth some helpful folks came in with the answer. So, allow me to break down how these *arr apps work for a moment.

When you use, say, Radarr to get a movie using the automatic search / magnifying glass icon it will search all of your configured indexers and find the highest quality version of that movie based on your profiles (you are using configarr with the TRaSH guides, right?)

After a movie is downloaded, Radarr will continue to periodically check for newly-released versions of that movie via RSS feeds, which is much faster than using the automated search. The issue with this system is that not all indexers support RSS feeds, the feeds don't include older releases of that same movie, and the RSS search is pretty simplistic compared to a "full" search and may not catch everything. Additionally, if your quality profiles change it likely won't find an upgrade. The solution to this is running the auto-search on every movie periodically, which is doable by hand, but projects like Upgradinatorr and Huntarr automated it while keeping the number of searches low and the interval between them long enough to avoid overloading the *arr app and the attached indexer and download client. Fetcharr follows that same idea.
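That throttled, round-robin search idea can be sketched in a few lines. To be clear, this is not Fetcharr's (or Huntarr's) actual code, just an illustration of the concept with made-up names:

```python
import time

def pick_batch(item_ids, cursor, batch_size):
    """Pick the next bounded slice of library IDs to auto-search,
    wrapping around so every item eventually gets a turn."""
    if not item_ids:
        return [], cursor
    count = min(batch_size, len(item_ids))
    batch = [item_ids[(cursor + i) % len(item_ids)] for i in range(count)]
    return batch, (cursor + count) % len(item_ids)

def run_cycle(item_ids, cursor, batch_size, search_fn, delay_s=0):
    """Run one search cycle: a small batch, spaced out in time, so the
    *arr app, indexers, and download client never get slammed at once."""
    batch, cursor = pick_batch(item_ids, cursor, batch_size)
    for item_id in batch:
        search_fn(item_id)  # e.g. trigger an automatic search via the *arr API
        if delay_s:
            time.sleep(delay_s)
    return cursor
```

Run one cycle every N hours from a scheduler and the whole library gets a periodic full search without any burst of traffic.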

The second most common bit of feedback I've gotten (or, rather, question) is "why use an LLM at all?" - buckle up, because this one gets long. One of the main selling points of Fetcharr is that it's developed by a human with the skills and understanding of what they're doing and how their system works, so it's worth discussing.

The "why?" is a fair question, I think. We've seen distrust of LLMs and the impacts of their usage across left-leaning social media for a while now. Some of it is overblown rage-bait or catharsis, but there do seem to be tangible if not-yet-well-studied impacts on a societal as well as an ecological level, and there are more than a few good moral and ethical questions around their training and usage.

I have (and share) a fair number of opinions on this topic but ultimately it all boils down to this:

  • I used the ChatGPT web interface occasionally as a rubber-duck for high-level design and some implementation of the plugin system, as well as a few other things
  • I also used it to actually implement a few features. The few times I used it are documented in the codebase and it was a "manual" copy/paste from the web UI and often with tweaks or full rewrites to get the code working the way I wanted
  • I, personally, currently have no issue with individuals using LLMs or even using vibe-coding tools to create projects and sharing them with the world, as long as they're clearly documented as vibe-coded projects or LLM usage has been documented in some way
  • We, as users of free software, are owed nothing by the creators of said free software. The inverse is also true: the creators are owed nothing by its users, including continued use. What I mean to say is, you are just as entitled to not use a piece of software as the creator is to do whatever they want with the software they've made, however they've made it
    • If you don't like how something is done, you don't need me to tell you that it's perfectly okay to not like it, trust it, or use it. Conversely, you are not owed an explanation or re-write of a system you would otherwise enjoy. I have no issues explaining why I made the choices I did but others may not be as comfortable doing so
    • The rise of LLMs and vibe-coding tools has given the average user with an idea the ability to implement that idea. I think that's an amazing thing; seeing people with an idea, some hope, and a few dollars create something from nothing. I thought it was great seeing people learn software dev as a kid, creating useful tools, operating systems, or entire playable worlds from an empty text document and I still think it's great today, even if I don't like some aspects of what a vibe-coded project means. Hell, I prefer human-developed projects over their vibe-coded counterparts when I can find them

Finally, Fetcharr has had a few issues opened and subsequently closed with resolutions. Some were creative exploitations of how Fetcharr's internal systems work, and others required re-writes of other internal systems before they worked properly. And then there were the frustrating mistakes after a long day of frustrating mistakes. Such is the way of software development.

The new landscape

Since the initial 1.0.0 release of Fetcharr, there have been some changes in other projects and new insights on how this all goes together. Most notably, Cleanuparr shipped its own Huntarr replacement, called Seeker, which is enabled by default. If you run Cleanuparr you may consider replacing or removing Fetcharr from your stack. Try both, see if it's worth running yet-another-thing.

Additionally, the developer of Unpackerr has mentioned that they're looking into a web UI for configuring their project so that's exciting for those that enjoy a web UI config.

It also seems like there have been a few other vibe-coded Huntarr replacements, such as Houndarr, if you're into those. Looks like a neat little web app and system.

So, where are we at?

Well, let's take an honest look at things:

  • It seems like Cleanuparr's Seeker may very well be a clean Fetcharr replacement. As much as I love seeing folks use tools I've built, it's hard to say that Fetcharr is any better than Seeker. Admittedly, I haven't yet tried Seeker, but because it ties directly into Cleanuparr it may very well have Fetcharr beat if you already use the system
  • Again, this is a small portion of a stack that a small portion of people use which in itself is a small portion of the general population. Does any of this really matter on a grand scale? No. It's just interesting and I've been living in it for a month, so it's worth sharing some insights which might apply to other, larger conversations.
  • The statement-piece of Fetcharr is the (lack of) LLM/AI usage. This is where a large portion of the conversation landed and it's a conversation worth having.
  • Web UI config or some sort of stats is a bigger deal to more folks than I originally assumed. It's not a deal-breaker for most but it's interesting to see how important it is to have some sort of pretty web UI. See: the number of stars Fetcharr has vs other similar projects. If you're ever creating your own project that's worth keeping in mind.
[–] egg82@lemmy.world 1 points 1 week ago

Harry Potter and the Sorcerer's Stone was 2001, 25 years ago. Deathly Hallows was 2011, 15 years ago.

[–] egg82@lemmy.world 1 points 4 weeks ago* (last edited 4 weeks ago)

glad to hear it! Thanks for checking it out.

[–] egg82@lemmy.world 2 points 4 weeks ago* (last edited 4 weeks ago)

Honestly, I should have figured these kinds of questions would come up around a project that is specifically designed to not use LLMs as much as possible. It's a fair (and hard) series of questions, so here's where I currently stand:

I don't particularly like the profit-driven nature of the companies or NPOs behind the popular LLMs. Capitalism (Communism, Socialism, Anarchism, etc) in their purest forms are all terrible for different reasons, and you can see the issues with Capitalism reflected in the decisions these orgs make affecting their products and stakeholders.

I do, however, like the idea of an LLM as a secondary and more customized option to a search engine. There are questions I've had for years that weren't easily Google-able but were answered within a few seconds and easily verifiable with more conventional search techniques. Usually this is because I'm missing terminology, or the current terminology is generic enough and the concept specific enough that any information is drowned in pages of results for other things. The vector-based nature of LLMs means you can get to specific concepts quite quickly.

They're also pretty decent at stuff I am terrible at, like quick bits of math I would spend hours or days figuring out. This is a me problem, but my math skills are roughly around pre-college with patches of understanding around geometry and trig and I had to spend enormous effort getting just-barely-passing math grades for my degree. It's not fun for me but it's usually necessary for software development somewhere. A heavy-math portion is a good way to kill my motivation for a project. Similarly, there are languages which just aren't fun and are repetitive and iterative in nature. Bash is a good example; I'm a Linux sysadmin and DevOps engineer by trade but a bash script with fancy flags and features just sucks to write. An LLM can do them easily and quickly and they're easy enough to check, modify, and criticize.

There are also things LLMs are terrible at, and LLMs aren't "excellent" at any particular thing. I'm remembering a clipped-to-death meme of someone in college where one professor says LLMs can't do their particular subject very well but can do others fine. Another professor says the same thing shortly after, but for their particular subject. It highlights a problem tangentially related to the Dunning-Kruger effect with the same basis: people underestimate the depth of fields they do not understand. That said, LLMs can't be trusted blindly and need to be verified. You can't use an LLM to develop an understanding of a thing without a lot of learning on the side from more traditional media sources. You can, however, often use it to fill in gaps of understanding.

There are moral and ethical issues with current LLMs, and because of those the acronyms LLM and AI are likely forever tainted - or at least for the next decade or so. The popular phrase "plagiarism machine" is a good example of that. The phrase is accurate enough and hits on an emotional level, which makes it easy to parrot and remember, and those kinds of things tend to stick around in the collective subconscious long after the phrases themselves die.

One of the main issues today is over-reliance on LLMs for doing-your-work-for-you which is where vibe-coding comes in. Obviously it's terrible for the reasons I explained above (a lack of understanding your own project and learning) but after trying it a bit myself I can see that it's fun to do. I use opencode on home projects occasionally to keep on top of the understanding of these tools and to try out new things. It's never directly saved me any time, but often it frees me up to do something else for a while and then I come back to a mostly-what-I-wanted thing that required minimal editing. I've never created a full project from start to finish with these tools, however; only to change bits of existing projects and fix issues. My plan was to try this out at some point but after using them for a while and seeing vibe-coded projects online I don't think I need to in order to get a decent understanding of what will happen.

I can't say for sure that the current generation and use of LLMs is "killing the planet" because there's not enough research on it yet. There are preliminary studies that largely point to "yes", but usually in strange and unexpected ways that could be solved. A few of those have been refuted, and all of them need reproducible results and peer review at the very least. So, I mean, yeah, it's probably not wrong but, unfortunately, we just need to wait and see. There are, of course, obvious dangers to wait-and-see, but these are difficult society-level issues and I don't have any answers here. I'm not going to get hung up on problems I can't solve.

LLMs in their current state remind me of 3D printers. I also use a 3D printer because it's fun and a useful tool. Over-reliance on 3D-printed products is problematic and they're not the tool for every job. There are second-order effects of 3D printers that are, maybe surprisingly, not talked about frequently among average users: plastic waste and energy consumption. I'm sure oil companies love 3D printers because they're a great way to sell plastic and it's not in the collective consciousness yet. There are a number of parallels to be drawn between these and current LLMs (mostly the ones in web browsers owned and hosted by companies, but also self-hosted ones). That said, I still use a 3D printer occasionally for the things that I can 3D print effectively. I use LLMs occasionally for the things that save me time, energy, and/or sanity.

The question I have to ask myself is "do I believe I am a terrible person if I use X thing that I know causes harm?" - the answer is often "no" but it changes based on new information and where I'm at in life. I worry about what I can change and sometimes about what I can't change. There's the concept of "voting with your wallet", but that has so far largely proven to be more of a moral-high-ground thing than a hurt-the-company thing. That's fine; everyone is entitled to their own opinion and everyone has to do what's best for them.

The only thing I felt was "unfair" about your statement was the idea that any use of an LLM constitutes a vibe-coded project. I disagreed with that idea and thought it was a disingenuous take. It was also not cool to tell someone you think their hard work and time is effectively worthless. I see where you're coming from and I respect that you know what you want and what you're going to do. I also think that words have meaning, and maybe more nuance in your take would have been a good thing to share.

[–] egg82@lemmy.world 1 points 1 month ago

That's great! A cronjob can be effective if your indexer doesn't mind the extra strain or you have a small library.

[–] egg82@lemmy.world 6 points 1 month ago* (last edited 1 month ago) (2 children)

Not sure what you mean by that. I occasionally use the web UI as the tool that it is and I've played around with opencode, cursor, etc previously on other home projects to get a sense for where things are and what the limits of these things are. That said, I take pride in my own work and this project is no exception. Is there something in this project that makes you think I threw a prompt into cursor and am passing that off as my own? Or are you against the idea of using an LLM, and consider any person or project using one at all to be vibe-coded?

As a quick edit, I'll note that, since I documented any use of ChatGPT reasonably well in this project, you can see the number of times it was used and what it provided. I feel the contributions were largely inconsequential and really just time-saving on my end. I also vetted (and understood!) the output and modified it according to what I wanted. Personally, I don't consider that to be "vibe-coding" but I suppose everyone has their own definition.

Edit again: ugh, it's far too easy to focus on negative feedback and let that consume you. I am not going to defend my use of ChatGPT but I personally think that someone seeing the word ChatGPT and saying "oh so this is vibe-coded" is disingenuous to the project and my skills as a developer. I spent years learning and mastering Java and this is a lot of my experience and several weekends of my free time. Look, if you feel that the four uses of ChatGPT, much of which was modified by my own hand and all of which was inconsequential, constitute a vibe-coded system then that's your take - but I don't think it's a fair take. There are many things to be said about the ethics of modern LLMs and over-reliance on them but personally I think understanding and effectively using tools at your disposal is a skill. If you want something completely free of LLMs these days you may very well have to invent the universe.

Phew. Okay, I'm off my soap-box. Consider me got. I'll try not to think about this too hard but it definitely feels bad pouring your time and skills into a thing and seeing that one comment saying "nah this isn't worth anything"

[–] egg82@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

in Media Management (click Advanced) there's an "Analyze Video Files" option to get more data about your actual files. If I remember correctly this also re-tags downloaded media with your profiles if it was mislabeled. If you already have quality profiles set up and gated (you can add profiles that look for these attributes, like 7.1 or 5.1) then you can simply hit the search button on your media and rely on the *arr app to do the rest. If you don't want to upgrade stuff that's already satisfactory to you then you can do the same thing with the "Cutoff Unmet" filter. Fetcharr allows you to do either of these with the new USE_CUTOFF environment variable.

If you're looking for ffmpeg media analysis and health checks you can also check out something like tdarr.
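For reference, both the "Cutoff Unmet" and missing lists are exposed through Radarr's v3 API, which is roughly what tools in this space poll. A quick sketch (the endpoint paths come from the public v3 API; the URL and key are placeholders, so double-check against your own instance):

```python
from urllib.parse import urlencode

def wanted_endpoint(base_url, api_key, cutoff_unmet=True, page=1, page_size=50):
    """Build a request for Radarr's "wanted" lists (v3 API).

    /api/v3/wanted/cutoff  -> monitored movies below their profile cutoff
    /api/v3/wanted/missing -> monitored movies with no file at all
    """
    path = "wanted/cutoff" if cutoff_unmet else "wanted/missing"
    query = urlencode({"page": page, "pageSize": page_size})
    url = f"{base_url}/api/v3/{path}?{query}"
    headers = {"X-Api-Key": api_key}
    return url, headers

# Usage (needs the `requests` package and a live Radarr instance):
# import requests
# url, headers = wanted_endpoint("http://localhost:7878", "your-api-key")
# records = requests.get(url, headers=headers).json()["records"]
```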

[–] egg82@lemmy.world 4 points 1 month ago

good catch! I forgot that existed.

 

https://github.com/egg82/fetcharr

Disclaimer: I am the developer

Long story short, after Huntarr exploded I still wanted an app that did the core of Huntarr's job: find and fetch missing or upgradable media. I looked around for some solutions but didn't like them for various reasons. So, I made my own.

No web UI, configured via environment variables in a similar manner to Unpackerr. It does one job and it does it (a little too) well. Even when trying a few different solutions for a few days each, Fetcharr caught a bunch of stuff they all missed almost immediately. This is likely due to the way it weights media for search.
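For the curious, the weighting idea is conceptually "score everything, search from the top of the list." The sketch below is a toy with arbitrary weights and made-up inputs, not Fetcharr's actual scoring:

```python
def search_priority(days_since_release, days_since_last_search, is_missing):
    """Toy scoring: prefer items that were never grabbed, were recently
    released, and haven't been searched lately. Weights are illustrative."""
    score = 0.0
    if is_missing:
        score += 100.0                                  # missing beats upgrades
    score += max(0.0, 30.0 - days_since_release)        # newer releases bubble up
    score += min(days_since_last_search, 30.0)          # don't re-search the same items
    return score

def order_queue(items):
    """Sort (id, days_since_release, days_since_last_search, is_missing)
    tuples so the highest-priority searches run first."""
    return sorted(items, key=lambda it: search_priority(*it[1:]), reverse=True)
```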

Since you made it this far, a few notes:

  1. I did still use ChatGPT on a couple of occasions. They're documented and entirely web UI - no agents. Anything it gave me was vetted and noted in the code before publishing.
  2. The current icon is temporary and LLM-generated. I've put out some feelers to pay an artist to create an icon. Waiting to hear back.
  3. It's written in Java because that's the language I'm most familiar with. SSL certs in Java containers can be painful, but I added some code to make handling them as easy as it is with Python's requests or Node
  4. While it still has a skip-if-tagged-with-X feature, it doesn't create or apply any tags. I didn't find that portion necessary, despite other popular *arrs using it. Not sure why they do, even after developing this.
  5. Caution is advised when first using it on a large media collection. It'll very likely pick up quite a number of things initially if you weren't on top of things beforehand. Just make sure your pipeline is set up well, or you limit the number of searches or lengthen the amount of time between searches using the environment variables.