Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's make this about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise with no real value, it will be removed. If you do this too often, you will get a vacation away from this community to touch grass for one or more days. Repeat offenses will result in a perma-ban.
6. Defend your opinion
This is a bit of a mix of rules 4 and 5 to help foster higher quality posts. You are expected to defend your unpopular opinion in the post body. We don't expect a whole manifesto (please, no manifestos), but you should at least provide some details as to why you hold the position you do.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
If the tool works by just having a text field where you write the instructions for it, then yeah, it would have the same issues as other LLMs - what you ask for isn't always what you get. There would definitely need to be some kind of logger so you could review it later and see how it's actually performing.
You can never get rid of false positives - at least not before we reach AGI - but that doesn't really matter. I'm already getting a huge number of them from relying on keyword filtering and blocking, which are broad, blunt, and inaccurate tools. It doesn't need to be perfect - just better. It's not like I'm going to miss some life-changing information just because it falsely flagged some content. I'm already missing ALL the content on Reddit, Instagram, TikTok, and Facebook. A few Lemmy posts I might've potentially found interesting don't weigh much on that scale.
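To illustrate why keyword filtering is so blunt: it matches strings, not meaning, so any post merely mentioning a blocked word gets caught. A minimal sketch (the blocked words and example posts are made up for illustration):

```python
# Naive keyword filter: flags any post containing a blocked word,
# regardless of context - which is exactly where false positives come from.
BLOCKED_WORDS = {"election", "senator"}

def is_flagged(post: str) -> bool:
    """Flag a post if any blocked keyword appears anywhere in it."""
    words = post.lower().split()
    return any(w.strip(".,!?()") in BLOCKED_WORDS for w in words)

posts = [
    "The senator pushed a disastrous new bill",    # intended match
    "I rewatched Election (1999) last night",      # false positive: a film title
    "Cats are better than dogs",                   # clean
]
flags = [is_flagged(p) for p in posts]  # [True, True, False]
```

The second post is flagged even though it has nothing to do with politics; an LLM evaluating meaning rather than substrings would let it through.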
Here's a good example of a thread where about 80% of the comments are pure noise. In the ideal case I would open that thread and only see the few civil comments written in good faith with a "truth-seeking" attitude. The rest of it is just dunking.
As I said, something along these lines should be possible for consumers as well - AI sentiment analysis certainly exists (for example, for social media managers in companies).
However, to my knowledge, no such out-of-the-box "content filtering" tool exists. That is likely because it would consume a great deal of energy (or tokens in cloud models, which of course also equates to energy): every single post and comment would have to be evaluated by an LLM, and likely separately for every user of such a tool, since each user can specify their own criteria for filtering.
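A rough back-of-envelope sketch of why per-post, per-user evaluation gets expensive. Every number here is an assumption for illustration (feed volume, tokens per post, and price are placeholders, not measured values):

```python
# Back-of-envelope cost of classifying every post in one user's feed
# with a cloud LLM. All inputs are assumed placeholder values.
posts_per_day = 2_000          # assumed feed volume for one active user
tokens_per_post = 300          # assumed: post text + classification prompt
price_per_1k_tokens = 0.001    # assumed cloud price in dollars

daily_tokens = posts_per_day * tokens_per_post          # 600,000 tokens/day
daily_cost = daily_tokens / 1_000 * price_per_1k_tokens # $0.60/day
monthly_cost = daily_cost * 30                          # $18.00/month
```

Even with cheap per-token pricing, the cost scales linearly with users and feed volume, which is why a free hosted version seems unlikely.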
A service like this, which could be easily deployed as a kind of SaaS solution without technical knowledge, would therefore have to be a paid service if it were to function effectively for the user.
I don’t think such services exist, though - perhaps for mainstream social media platforms or as browser extensions for some use cases, but likely not outside the mainstream.
You could build something like this yourself, though - using a locally operated model or by utilizing an API. But you’d have to develop it yourself, I’m afraid.
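The DIY route could look something like this: ask a locally running model whether each comment matches your own filtering criteria. This is a hypothetical sketch assuming a model served by Ollama's HTTP API on its default port; the model name and criteria string are made-up examples, not a real setup:

```python
import json
import urllib.request

# Assumed example of user-defined filtering criteria.
CRITERIA = "hostile dunking, bad-faith snark, or personal attacks"

def build_prompt(comment: str) -> str:
    """Turn a comment into a yes/no classification prompt."""
    return (
        f"Does the following comment contain any of: {CRITERIA}?\n"
        f"Answer only YES or NO.\n\nComment: {comment}"
    )

def parse_verdict(model_reply: str) -> bool:
    """True means 'filter this comment out'."""
    return model_reply.strip().upper().startswith("YES")

def classify(comment: str, model: str = "llama3") -> bool:
    """Send one comment to a local Ollama server (must be running)."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(comment),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["response"]
    return parse_verdict(reply)
```

Running every comment through `classify` before rendering a thread would give you the personalized filter described above - at the cost of one model call per comment, which is the energy problem mentioned earlier. Logging each verdict alongside the comment would also cover the review/logger idea from the start of the thread.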