this post was submitted on 01 Apr 2026
801 points (99.1% liked)
Fuck AI
My capacity for apocalyptic news has to be carefully metered to prevent burnout and stress. (It's a balancing act.) Humanity seems hell-bent on heading down a dark road with a bunch of ugly possible turns.
The potential to stifle FOSS is bad enough. What I find even more concerning is the point where an LLM gets good enough to reliably fool developers and inject very complicated exploits drawn from multiple sources.
I'm a programmer by trade, but in a highly specialized niche. Otherwise, I'm just a power user/script kiddie. My crystal ball is pinging alarm bells about some of this. It's hard for me to make an accurate assessment of the risk; I lack the knowledge. I expect unspecified bad things and am in a reactionary mode on this.
I hear you. I'm very much the same, both in trying not to pay too much attention for the same reasons, and in the trade, though perhaps not all that specialised.
Once the economic side of this reaches the conclusion we already know (it isn't sustainable), I think we might start to see a more sensible approach to LLM usage.
The current situation is as if people were asking an LLM whether a mushroom they picked is safe to eat, then serving it to the whole family. A more sensible approach would be to get a name suggestion from the LLM and use that as an entry point for verifying it manually.
The LLM user should always be the expert, i.e., don't serve something potentially poisonous. Let it come with suggestions, by all means. But if you don't know enough to verify the correctness of what it says, then you've already lost. Unfortunately, this is how most people use it now, followed by shock that "it lied".
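To make that concrete, here's a minimal Python sketch of the pattern: the LLM only narrows the search space, and the actual decision rests on an independent, trusted source. The function names and the toy lookup table are invented for illustration; neither stands for any real LLM or database API.

```python
# Sketch of "LLM as entry point, independent verification as gatekeeper".
# Both functions below are stand-ins (assumptions), not real APIs.

def llm_suggest_species(description: str) -> str:
    """Stand-in for an LLM call; returns a *suggested* species name."""
    return "Agaricus campestris"  # could just as easily be wrong

def authoritative_lookup(species: str) -> dict | None:
    """Stand-in for a trusted source: a field guide, expert, or curated DB."""
    trusted_db = {
        "Agaricus campestris": {"edible": True},
        "Amanita phalloides": {"edible": False},  # death cap
    }
    return trusted_db.get(species)

def is_safe_to_serve(description: str) -> bool:
    # Step 1: the LLM suggestion is only a lead, never the answer.
    suggestion = llm_suggest_species(description)
    # Step 2: the decision comes from independent verification.
    record = authoritative_lookup(suggestion)
    if record is None:
        return False  # unverifiable suggestion: treat as unsafe
    return record["edible"]

print(is_safe_to_serve("white cap, pink gills, found in a meadow"))
```

The key design point is that an unverifiable suggestion defaults to "unsafe": the LLM's output never reaches the family's dinner table without passing through a source the user actually trusts.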