this post was submitted on 29 Apr 2026
694 points (98.9% liked)
Microblog Memes

You do not want to know how good current LLMs would be if you removed the thousands of negative prompts, a.k.a. guard rails.
Narrator: They would still be garbage.
Anthropic actually developed a system which, in the hands of the most capable users… in narrow domains, used conscientiously, in a limited fashion, with tremendous and constant risk mitigation… is reportedly not garbage.
Narrator: they ruined it
I doubt that. What evidence do you have?
Well, they'd be able to say how to make a bomb, or how to kill yourself effectively. AI CEOs don't even care what their systems can do. If some customers die, that's okay to them; it shows how intelligent their AI is. And that's a statement from one of the big AI CEOs.
I don’t think those are the categories where most people are finding LLMs frustrating. We keep being told human white-collar work is on the precipice of being replaced, but LLMs continue to be really inconsistent. Failing to parrot easily retrievable info, like how to build a legally restricted thing or off yourself, isn’t what people are finding lacking. It's that half the time it does something sort of correctly, and the other half of the time it lies, fucks up, or fucks up and then lies about it.
I'm just parroting what John Oliver said on his last episode on Sunday.
This is demonstrably false, given you can download your own models and change the system prompts yourself.
That’s not how it works, as the guard rails are not just simple prompts that you can just delete.
Even with “abliteration”, you are essentially modifying the model without a full retraining, but you also lose many capabilities in the process.
So much for “demonstrably false”, when you have obviously never tried to uncensor any LLM.
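For context on why abliteration can cost capabilities: the technique, roughly, identifies a "refusal direction" in the model's activation space and projects it out of weight matrices, which also removes whatever legitimate computation used that direction. A minimal NumPy sketch of the projection step (the matrix and direction here are random toys, not real model weights):

```python
import numpy as np

def ablate_direction(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component of W's output that lies along direction v,
    so the layer can no longer write anything onto that direction."""
    v = v / np.linalg.norm(v)        # normalize the "refusal direction"
    return W - np.outer(W @ v, v)    # subtract the projection onto v

# toy example: a random weight matrix and a random direction
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
v = rng.normal(size=4)

W_ablated = ablate_direction(W, v)
# after ablation, W maps everything to zero along that direction:
print(np.allclose(W_ablated @ (v / np.linalg.norm(v)), 0))  # True
```

The trade-off the comment describes falls out of the math: the projection is applied to the whole matrix, so any useful behavior that happened to share that direction is degraded too.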
The thread was literally about the prompt text.
The prompts are part of the training, you realize that? They end up inside the weights; they are not just text files you can delete and be done with.
Just because an LLM reveals those negative prompts does not mean you can simply remove them.
Do you genuinely know what you are talking about, or are you just here to ragebait?
No, they're not. They're injected into every input that you enter into the system.
Are you suggesting that there is a conspiracy to keep AI down?
How would that work? AI is barely regulated.
AI is more regulated than you might think, or else they would not censor their models. One thing is improving quality in a cosmetic way, since they have not fixed the issue at the core yet (scaling is currently more important). The other thing is safety. Or did you not hear what Grok did in the past months? So tell me again that it is not regulated.
It literally tells people to kill themselves some of the time; it's definitely not regulated.
I would love to know where you're getting your information from.
Your mom told me that yesterday
Thank you for demonstrating to everybody in the thread that you have absolutely no idea what you're talking about, because you have now resorted to attempting to be insulting rather than defending your argument.