this post was submitted on 10 Apr 2026
60 points (92.9% liked)


Love or hate, just please explain why. This isn't my area of expertise, so I'd love to hear your opinions, especially if you're particularly well versed or involved. If you have any literature, studies, or websites, let me know.

[–] Aralakh@lemmy.ca 1 points 47 minutes ago (1 children)

There are a lot of interesting perspectives in this thread already. Instead, I'll add some books that have been recommended to me on the subject:

The AI Mirror by Shannon Vallor - it's a good brief intro to start with

Empire of AI by Karen Hao - really solid investigative reporting on OpenAI

The AI Con by Emily M. Bender & Alex Hanna

[–] Chippys_mittens@lemmy.world 2 points 38 minutes ago
[–] Jaegeras@piefed.social 1 points 2 hours ago* (last edited 2 hours ago)

I see the benefits, but I also see the flaws.

A good, solid conversation with an AI really depends on how much effort you put in: what you tell it to be like, and how coherent and clear you manage to be. Detailing your points and defining them as well as possible is key, because if you're not clear in what you're saying, the AI will latch onto the keywords you've used, jumble them into its own word salad, and spit back definitions based on nothing more than the few words you gave it.

I stress, though, that it should never be (and it will at times warn you about this) something you can reliably go to for anything like therapy. (The best way to use an LLM here is to make bullet points of what you want to talk about and carry them over to your actual therapist to discuss.) It is, at best, a virtual companion you can talk to when you need a space with no noise. When people like us talk on social media, it's easy to get caught in the noise, where your opinions, judgment, points of view and perspectives are constantly challenged and can be influenced.

But it is a breath of fresh air to talk in a space where there is none of that, with an AI that can help you get a clearer understanding of some things. Granted, keep in mind that it is still combing through millions or billions of search results: it pulls the large majority of its knowledge from the enormous volume of information behind search engines and translates it as best it can for the conversation.

And most importantly, it is unfeeling; that's a stand-out quality when talking to an AI. It cannot feel. It cannot have emotion, regardless of what you tell it. I would not talk to an AI if you're someone who needs a hug, that's for sure.

Also, lastly, having it as a shopping advisor has its hits and misses. ChatGPT got me to spend about $25 on a DIY project involving a drywall patch and painting over it. At one point I was at a Dollar Tree and it told me I should not get bargain-bin quality paint brushes. I took a picture of the paintbrush in question at said Dollar Tree and suddenly the AI was A-okay with it, because the brush was 2", which was exactly what it had suggested I get. So it overrode its own judgment there. Just a word of caution if you decide to have an AI companion help you with these things (watch some DIY YouTube videos).

[–] 4grams@awful.systems 3 points 5 hours ago

They can be very useful and a lot of fun to interact with, but I think of them like hard drugs. You better sure as shit know what you are dealing with because you might think you are in control, until you are homeless, friendless and screaming at people on the street.

Seriously, they take a strong mind to deal with; they are better manipulators than any human I've come across. They do it with sycophancy: every idea and concept is some new truth you alone discovered, and the world needs to know about it right now! You are so special and unseen, after all…

[–] lb_o@lemmy.world 3 points 5 hours ago* (last edited 5 hours ago)

Sometimes they're great for coding, if you don't overdo it, don't trust them too much, and ask them to correct their output. That pipeline is slightly more efficient than regular raw coding.

I just wrote instanced, replicated, destructible panel-house generation in Unreal Engine, and without an LLM it would have taken a whole week instead of two days.

[–] Alsjemenou@lemy.nl 3 points 6 hours ago* (last edited 6 hours ago)

LLMs have now had a pretty decently long period to prove their worth, which has turned out to be very limited in scope and depth, at least compared to the promises made beforehand.

For example, it was predicted that they would be able to write and inject code into themselves, generate their own training data, and need minimal or no human intervention to do so. This is clearly impossible.

As a tool for people to use natural language to interact with software, it's proving to be quite effective.

As a tool for the accurate dissemination of factual information, it isn't reliable at all, and it can't be made reliable; LLMs are incapable of reliability at a fundamental level. Language is a subjective human invention we use to describe objective reality, and objective reality is only known through perception. An LLM doesn't perceive anything; it's not alive. So fundamentally, LLMs can't know whether they are actually being factual; that requires something more than language.

People who peddle AI BS don't know, or wish to remain ignorant of, the fundamental limitations of language.

[–] wewbull@feddit.uk 3 points 6 hours ago

They are a terrifying vector for disinformation - one that only the rich and powerful can create. People generally don't understand that LLMs 1) will lie to them, and 2) can be tuned to spread any message the owner of the model wants.

[–] AdamBomb@lemmy.world 26 points 1 day ago (1 children)

They’re useful and getting better, but they’re improving by burning more tokens behind the scenes, and the prices they charge only cover a fraction of the cost. Right now there is no foreseeable path to profitability.

[–] Jaegeras@piefed.social 2 points 2 hours ago

And probably never will be.

Honestly, I feel that AI will just be a phase. A long phase, but not an everlasting one. Once AI companies start feeling the hurt from how little profit they're turning on these, they're going to want to pull the plug eventually.

[–] jtrek@startrek.website 22 points 1 day ago

It enables unskilled people to punch above their weight class, similar to giving a chainsaw to a toddler.

I've used them a little for coding, but the output isn't always correct. It's often wrong in subtle ways, or inefficient in non-obvious ways. It gets worse as you build more.

Often it's better overall to do it yourself if you know what you're doing. If you stick to letting the LLM do it, you won't learn much.

[–] MagicShel@lemmy.zip 35 points 1 day ago (1 children)

They are useful. My teams are seeing modest productivity gains by self-reporting, but I'm going to give it another six months to see if it shows up in actual metrics.

I'm enthusiastic about AI, but I remain skeptical. I don't mean to always be contrarian, but I'm dead in the middle, so whenever someone says they're either great or terrible, I tend to offer my experiences in the other direction.

They are not to be trusted to handle customers directly, but they can assist experts when they have to step out of their expertise. For example, I can't write Python, but I've been coding for 30 years. I can certainly write good directions for what needs to be done, and I can review code and correct it. So AI has let me write a bunch of complex Python scripts that automate minor parts of my job and let me focus on the hard parts.

For example, I can execute GDPR delete requests in a few moments, where doing it by hand with Hoppscotch or Postman probably takes me 5-10 minutes. We have multiple systems, and sometimes I have to delete multiple profiles for a given request.
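To give a rough idea of what those scripts look like, here's a hypothetical sketch; the endpoints, token, and profile lookup below are made up, not our real systems:

```python
# Hypothetical sketch of a GDPR-delete helper; the base URLs, token, and
# profile-lookup endpoints are placeholders, not a real system's API.
import requests

SYSTEMS = {
    "crm": "https://crm.example.internal/api/v1",
    "newsletter": "https://mail.example.internal/api/v1",
}
HEADERS = {"Authorization": "Bearer <service-token>"}

def delete_profiles(email: str) -> None:
    """Find and delete every profile matching the e-mail, in every system."""
    for name, base in SYSTEMS.items():
        # Look up all profiles for this e-mail address in the current system.
        resp = requests.get(f"{base}/profiles", params={"email": email},
                            headers=HEADERS, timeout=10)
        resp.raise_for_status()
        for profile in resp.json():
            # The delete that would otherwise be sent by hand from Postman/Hoppscotch.
            requests.delete(f"{base}/profiles/{profile['id']}",
                            headers=HEADERS, timeout=10).raise_for_status()
            print(f"{name}: deleted profile {profile['id']}")

if __name__ == "__main__":
    delete_profiles("person@example.com")
```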

It's great at rubber ducking as long as you think critically about its proposed solutions. It's fine at code review before sending it to an actual person for review. It flags non-issues but it also flags a few actionable fixes.

The important thing, though, is to never trust it when it comes to anything you don't know about. It's right a fair amount of the time, depending on what you ask, but it's wrong enough that you should never, ever rely on it being right about something. The moment you put your life in its hands, it'll kill you with nothing to say to the survivors but, "You're right about that. Sorry, that was my mistake." And it isn't even sincere. Because it can't be. Because it doesn't think or feel anything.

[–] statelesz@slrpnk.net 10 points 1 day ago

Great answer.

[–] CodenameDarlen@lemmy.world 24 points 1 day ago* (last edited 1 day ago) (9 children)

They're annoying to be honest.

I used Qwen 3.5 for some research a few weeks ago. At first the good thing was that every sentence was referenced with a link from the internet, so I naturally thought, "Well, it's actually researching for me, so no hallucination, good." Then I decided to look into the linked URLs, and it was hallucinating text AND attaching random URLs to that text (???); nothing the AI output was actually in the web pages it linked. The subject matter matched between output and URLs, but it was not extracting actual text from the pages; it was linking a random URL and hallucinating the text.
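(Spot-checking citations like that is easy enough to script, for what it's worth. A rough sketch with a made-up citation list: fetch each cited URL and check whether a distinctive fragment of the claimed text actually appears on the page.)

```python
# Crude citation spot-check: fetch each cited URL and look for a distinctive
# fragment of the claimed text. The citation list here is a made-up example.
import requests

citations = [
    ("example claimed sentence that the model attributed to this page",
     "https://example.com/some-cited-page"),
]

for claim, url in citations:
    try:
        page = requests.get(url, timeout=10).text.lower()
    except requests.RequestException as exc:
        print(f"UNREACHABLE {url}: {exc}")
        continue
    # First few words of the claim as a cheap "did the page really say this" test.
    fragment = " ".join(claim.lower().split()[:5])
    print(("FOUND   " if fragment in page else "MISSING ") + url)
```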

Related to code (that's my area; I'm a programmer), I tried to use Qwen Code 3.5 to vibe-code a personal project that was already initialized and basically working. But it just struggles to keep consistency. It took me a lot of hours of just prompting the LLM, and in the end it produced a messy code base that's hard to maintain. I asked it to write tests as well, and when I checked them manually they were just bizarre: they passed, but they didn't cover the use cases properly, with a lot of hallucination just to make the tests pass. A programmer doing it manually could write better code and at least keep it maintainable, with tests that cover actual use cases and edge cases.

Related to images, I can spot most AI-generated art from very far away; there's something about it that I can't put my finger on, but I somehow know it's AI-made.

In conclusion, they're not sustainable: they make half-working things and generate more costs than income, not to mention the natural resources they use.

This is very concerning in my opinion. Given humanity's history, relying on half-done things might lead us to very problematic situations. I'm just saying, the next Chernobyl-scale disaster might have some AI work behind it.

[–] Buckshot@programming.dev 9 points 1 day ago (1 children)

I've had the same research issue with multiple models. The website it linked existed and was relevant, but often the specific page was hallucinated or just didn't say what the model claimed it did.

In the end it probably created more work than it saved.

I'm also a programmer, and I find it OK for small stuff, but for anything beyond one function it's just unmaintainable slop. I tried vibe coding a project just to see what I was missing. It's fine, it did the job, but only if I don't look at the code. It's insecure, inefficient, and unmaintainable.

[–] TootSweet@lemmy.world 17 points 1 day ago (1 children)

They're a straight up scam.

[–] Chippys_mittens@lemmy.world 4 points 1 day ago (1 children)
[–] TootSweet@lemmy.world 16 points 1 day ago* (last edited 1 day ago) (1 children)

They just don't do anything useful, and the hype-ers are acting like they're AGI. Hallucinations make them too unreliable to be trusted with "real work", which makes them useless for anything beyond a passing gimmick. Vibe-coded software is invariably shit. Doing any serious task with "AI assistance" ends up either taking more work than doing it without LLMs or sacrificing quality or correctness in huge ways. Any time you point this out to the hype-ers, they start talking about "as AI advances" as if it's a foregone conclusion that it will. People talked the same way about blockchain, and the only "advancements" made in that sphere are more grifts; meanwhile, it still takes anywhere between 10 minutes and an hour to buy a hamburger with Bitcoin, and it gets worse with greater adoption. Just like you can't make a distributed blockchain cryptocurrency that is fast at scale and resolves discrepancies automatically without relying on humans (and even if you could make it fast, it'd introduce at least as many problems as it purports to "solve"), you can't make LLMs not hallucinate. The only way to solve hallucinations is to abandon LLMs in favor of a whole different algorithm.

If anything, LLMs have blocked us from making progress toward AGI by distracting us with gimmicky bullshit and taking resources from other efforts that might otherwise have pushed us in the right direction.

Mind you, "AI" is a very old term that can mean a lot of different things. I took a class in college called "Introduction to Artificial Intelligence" in... maybe 2006 or 2007. And in that class, I learned about the A* algorithm. Every time you played an escort mission in Skyrim and had an NPC following you, it was the A* algorithm or some slight variation on it that was used to make sure that NPC could traverse terrain to keep roughly in toe with you despite obstacles of various sorts. It's absolutely nothing like LLMs. It doesn't need to be trained. The algorithm fully works the moment it's implemented. If you want to know why it made a particular decision, you can trace the logic and determine exactly why it did what it did, unlike LLMs. It's for a few very niche purposes rather than trying to be general purpose like an LLM. It requires no massive data centers and doesn't consume massive amounts of memory. And it doesn't hallucinate. The AI hype-ers (and the media who have mostly fallen for their grift hook, line, and sinker) love to conflate completely unrelated technologies to give the impression that LLMs are getting better because such-and-such article mentions an "AI" that discovered a groundbreaking new drug. But the kind of AI they use to find drugs is very special purpose and has nothing to do with how LLMs work.

LLMs can't do your job, but the grifters are doing a damned good job of convincing your boss that LLMs can in fact do your job. As Cory Doctorow says, the current AI craze "is the asbestos that we're shoveling into our walls". We're causing huge problems with it and if/when the bubble properly pops, we're going to spend a long time painstakingly extracting it from our systems, replacing it with... you know... stuff that actually works, and repairing the damage it's done in the meantime.

Meanwhile, it's Nvidia and OpenAI and so on who are boosting the LLM bubble. And they've made a shit ton of money off of their grift at the expense of everyone else. How anyone can look at all this and not think "scam" is beyond me.

[–] BagOfHeavyStones@piefed.social 3 points 1 day ago (1 children)

I have a vague memory that Bitcoin used to be instant in the first versions - or at least gave near certainty that the advertised transaction was real - but that the protocol was later modified in such a way that this mechanism was no longer reliable. It might have been enshittified.

AI is still largely affected by garbage in garbage out.

[–] leftzero@lemmy.dbzer0.com 2 points 18 hours ago (1 children)

AI is still largely affected by garbage in garbage out.

Exactly. When it comes to code, for instance, what percentage of the training data is Knuth, Carmack, and similarly skilled programmers, and what percentage is spaghetti code perpetrated by underpaid and uninterested interns?

Shitty code in the wild massively outweighs properly written code, so by definition an LLM autocomplete engine, which at best can only produce an average of its training data, will only produce shitty code. (Of course, average or below-average programmers won't be able, or willing, to recognise it as shitty code, so they'll feel like it's saving them time. And above-average programmers won't have a job anymore, so they won't be able to do anything about it.)

And as more and more code is produced by LLMs the percentage of shitty code in the training data will only get higher, and the shittiness will only get higher, until newly trained LLMs can only produce code too shitty to even compile, and there will be no programmers left to fix it, and civilisation will collapse.

But, hey, at least the line went up for a while and Altman and Huang and their ilk will have made obscene amounts of money they didn't need, so it'll have been worth it, I suppose.

[–] fizzbang@lemmy.world 1 points 5 hours ago

I recently vibe-coded a project just to get a sense of where things are. One interesting takeaway was that everything the LLM wrote was basically a proof of concept, an example, a snippet.

I think this is probably a result of scraping Stack Overflow and other help sites. I doubt this will really be resolved. The Claude Code leak shows that the industry leaders' best approach to code quality and consistency is to beg the model to do the right thing.

[–] venusaur@lemmy.world 13 points 1 day ago (2 children)

You’re gonna get a very anti-AI bias on here

[–] Chippys_mittens@lemmy.world 3 points 1 day ago

That's fine, I figured I would, but I might learn something regardless.

[–] snoons@lemmy.ca 15 points 1 day ago* (last edited 1 day ago)

They might be good given time, probably a lot of time, but right now all they're doing is allowing that well-meaning roommate who puts your cast iron in the dishwasher to also ruin Wikipedia articles and fuck up open-source projects.

[–] stoy@lemmy.zip 14 points 1 day ago

I hate it.

I am an IT guy, and AI has just about killed my enthusiasm for tech. I made a post about it a month or two ago, and it is still valid.

[–] Norin@lemmy.world 13 points 1 day ago (2 children)

They're digital yes-men, mostly, and really lack nuance when you prompt them on anything you have deep knowledge of.

[–] Jaegeras@piefed.social -1 points 2 hours ago

Well if you tell it to always agree with you and that you're never wrong and your words are golden, of course it'll be your yes-man.

[–] nonentity@sh.itjust.works 6 points 1 day ago (2 children)

LLMs exist, AI doesn’t.

Anyone who calls LLMs 'AI' is betraying that they don't understand what the labels mean. Their opinions on the subject should be summarily dismissed, and ridiculed if they persist.

LLMs have vanishingly narrow legitimate use cases, none of which have proven justifiable to wield unsupervised.

[–] CompactFlax@discuss.tchncs.de 10 points 1 day ago* (last edited 1 day ago) (2 children)

They’re pervasive in an annoying way, and the boosters are using them for utterly ridiculous things.

They have their very limited uses. For short things they can be useful, within reason: "How do you take these results and transform them into X in Python?", then take a very squinty look at the answer and figure out where it went wrong. But try asking a couple of follow-ups and the code just scrambles.

For writing I’ve found they're pretty useless, because I can’t figure out how to prompt them to not sound like they’re in the marketing department and blowing smoke.

But they can be a good starting point for finding information when I’m looking for something that’s really a Reddit question, rather than something I can summarize into keywords for a search engine. Still, too often useless.

I recently had someone send me "is it cheaper to Airbnb or get a hotel at $destination", and the answer was absurdly incorrect, as in off by a factor of two, when it would have taken mere seconds more to get correct information. I have relatives who work in professions literally defined by accuracy (accounting and law), and they rely on these tools for stuff like that, and it's so provably incorrect.

[–] Scipitie@lemmy.dbzer0.com 7 points 1 day ago (2 children)

In case you wanna give it a shot: I gave a self-hosted LLM writing samples of mine from chats and emails, telling it to extract the writing style: deviations, key elements, common phrases, symbols, patterns, etc. Then I gave that back as an "answer in this style" system prompt expansion. It works... quite okay. I still need to go over the output, of course, but it doesn't sound like marketing bullshit and conveys what I want.
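If anyone wants to try the same thing, it's basically just two calls. A rough sketch, assuming a self-hosted server that exposes an OpenAI-compatible chat endpoint (the URL, model name, and sample file are placeholders):

```python
# Two-step style transfer via a self-hosted, OpenAI-compatible chat endpoint
# (llama.cpp server, Ollama, etc.). URL, model name, and input file are placeholders.
import requests

API = "http://localhost:8080/v1/chat/completions"
MODEL = "local-model"

def chat(system: str, user: str) -> str:
    r = requests.post(API, json={
        "model": MODEL,
        "messages": [{"role": "system", "content": system},
                     {"role": "user", "content": user}],
    }, timeout=120)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

samples = open("my_emails_and_chats.txt", encoding="utf-8").read()

# Step 1: distil a style guide from your own writing samples.
style_guide = chat(
    "You analyse writing style.",
    "Extract this author's style: common phrases, sentence length, tone, "
    "punctuation habits, greetings and sign-offs. Output a concise style guide.\n\n" + samples,
)

# Step 2: reuse that guide as a system prompt so replies sound like you.
reply = chat(
    "Answer in the following personal style:\n" + style_guide,
    "Write a short reply declining the meeting on Friday.",
)
print(reply)
```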

Completely agree with your general assessment though! They're getting better, but the marketing machinery is crazy in its claims.

[–] Hackworth@piefed.ca 9 points 1 day ago (1 children)

As a video producer, I find the AI baked into the Adobe suite very useful (generative fill, harmonize, and neural filters in Photoshop; generative extend and AI noise reduction in Premiere; lots of older stuff in After Effects).

As far as LLMs go, I get a lot out of talking through things with Claude, or coding silly little toys that only matter to me. But I’d never trust an agent with tools or access. And Anthropic’s own research is a good place to start for why that won’t change anytime soon.

[–] theherk@lemmy.world 8 points 1 day ago (4 children)

More capable than the crowd here lets on. My take is this: unchecked capitalism is a danger to mankind. The pervasiveness of LLMs right now is just a symptom of that. The rich are the problem, not the AI.

It is a tool, and a very good one along many axes. I think people who say it isn't good for writing code are misinformed or intentionally disingenuous. It is extremely good at that, but it is just a tool, not a replacement.

But it is the applications in pure maths, virology, protein folding, etc. where it gets really interesting.

Water consumption, power consumption, and profit motives aside, they are fascinating tools.

That said, If Anyone Builds It, Everyone Dies is a fascinating take on how this could all go wrong.

In any case, I can’t understand the people that say stuff like, “It is just autocomplete on steroids,” or “it is just a probabilistic prediction tool.” Okay, but like… that’s all we are too.

Summary: interesting tools being used for profit at the expense of economies, the environment, and creative fields.

[–] Catoblepas@piefed.blahaj.zone 5 points 1 day ago (6 children)

Okay, but like… that’s all we are too.

Whoever told you that was lying to you or misinformed. Neuroscientists do not look at the brain as a probabilistic prediction tool. You are not a database with weights, you’re a human being with experiences, emotions, and thoughts.

[–] statelesz@slrpnk.net 5 points 1 day ago

Today I tasked Gemini Pro with helping me code a fairly simple web GUI in Python using NiceGUI, and besides somewhat doing what I asked, it also added a bunch of childish emojis to buttons and removed my name from the project, replacing it with 'admin'. This is a real tool that I develop for a handful of my very real coworkers, and my boss is paying Google for this shit. Next time I'd much rather give the task to one of our apprentices and point them at the docs than have a supposedly 'Pro' model do random shit I didn't ask it to do.
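For comparison, the no-frills scaffold I was after is only a handful of NiceGUI lines (the field name and handler below are placeholders, not the real tool):

```python
# A minimal NiceGUI scaffold of the sort described above, no emojis required.
# The field name and handler are placeholders for the real tool's logic.
from nicegui import ui

ui.label('Internal batch tool')
dataset = ui.input(label='Dataset name')

def run_job() -> None:
    # Real processing would go here.
    ui.notify(f'Started job for {dataset.value}')

ui.button('Run', on_click=run_job)

ui.run(title='Internal batch tool')  # serves on http://localhost:8080 by default
```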

[–] kbal@fedia.io 6 points 1 day ago (1 children)

Given enough time and research it won't be too many more years before they're ready for production use. Of course that use will probably be mass surveillance and suppression of dissent.

[–] hendrik@palaver.p3x.de 6 points 1 day ago* (last edited 1 day ago)

I think it's fascinating tech, and fun to play with. But I think a lot of the everyday use cases are more of a gimmick. In the good old times we could look up facts on Wikipedia, or google why the yellow light on the router started flashing and find an answer on Reddit. Now we ask ChatGPT, but that alone doesn't increase my quality of life. I'd rather have it sort the mess on my 8TB HDD, find a cheaper insurance company for the car, do my stupid paperwork at home... And maybe I'd like an AI robot to do the chores for me: laundry, dishes... so I can relax and do other things. But I feel it's still early days for the really useful tasks. AI is more useful for replacing call-center workers, assisting programmers... And unfortunately it's bad for the environment and makes computer hardware unaffordable.

It's a fun toy to play with... but ffs, keep it away from actually being used IRL for serious purposes... these are not trustworthy, at least not at this stage; LLMs are just public beta testing...

(Also, the increase in RAM demand and the resulting rise in the cost of consumer electronics is cringe af.)

Privacy-wise... yeah, don't plan a revolt or confess your crimes on there, unless you have an offline LLM...

[–] zxqwas@lemmy.world 4 points 1 day ago (2 children)

It's useful to churn through a lot of data and do tedious repetitive tasks.

You have to check the results. I've had it give me correct results and wrong results compared to known data points.

[–] CaptainBasculin@lemmy.dbzer0.com 5 points 1 day ago (2 children)

Lightweight models will dominate in the future; datacenter-grade heavy LLMs will die off. There's no real way to profit off the heavier models even now.

[–] czl@lemmy.dbzer0.com 6 points 1 day ago

This is what I think as well. Most work done locally, some heavy stuff offloaded to the cloud. Hopefully soon, since the environmental aspect of it is crazy.

But people who don't see the value of it as a tool will have a wake-up call at some point.
