this post was submitted on 30 Mar 2026
43 points (80.3% liked)

Asklemmy

54223 readers
242 users here now

A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


Icon by @Double_A@discuss.tchncs.de

founded 7 years ago

AI can't be all that bad. The problem I'm always seeing with AI is that it's a double-edged sword. You have corporations shoving AI into just about everything, treating it like it's a cure for cancer, and that really rubs people the wrong way. Then, on more of a societal level, you've got people who use AI for an assortment of things: from making art with AI and still crediting themselves as artists, to treating AI like a therapist when that's not advised.

However, I've found some benefits with AI. For example, I'm chatting with ChatGPT about credit cards, because it's something I may lean towards getting into. It's helping me understand better than most people who've tried explaining it to me, simply because it gives me a more streamlined response instead of people beating around the bush.

[–] TribblesBestFriend@startrek.website 36 points 1 month ago (5 children)

And if ChatGPT made a mistake? How would you know before it's too late?

[–] aReallyCrunchyLeaf@lemmy.ml 25 points 1 month ago* (last edited 1 month ago) (1 children)

The technology itself is novel and cool. It's the complete and utter meltdown of all tech companies into brainless hype machines that is harmful, which, of course, is a function of capitalist incentives and the need for the tech industry to come out with some new paradigm-shifting innovation every decade. A normal, healthy society would have been able to leverage machine learning and LLM technology where it's most useful, like parsing large amounts of data, or running a local instance on your computer to ask a few questions, etc. We wouldn't see LLMs in every text editor, pencil case and pair of sneakers, but these snake oil salesmen who run the US economy are absolutely desperate for a new paradigm shift so they can keep making exponentially more money.

The thing is, we don't need to build these datacenters siphoning comically evil amounts of energy from the grid and making personal compute a thing of the past. The average everyday person doesn't need cloud compute; they can run a local 4B-parameter (very, very small) model on their laptop or phone if they need to ask ChatGPT to make them a workout routine or tell them who won the 1918 World Series. But these fucking cretins don't care, that's not the point; they are in this because it's a golden ticket to growth city, and once they cash their check they don't give one hot fuck about the human-spirit-stealing machine they built.

TLDR: our society is broken and that's why we keep getting the shittiest, lowest-common-denominator version of everything. everything has to suck by definition because that's the only version that the system we built will allow.

[–] logos@sh.itjust.works 19 points 1 month ago (3 children)

I have a friend at work that does a lot of video. He films weddings, music videos etc. and is making a pilot for Netflix. He uses AI to go through all his footage and tag it according to content. E.g. if he needs a clip of birds, he can just search ‘birds’ and it will pull up all relevant footage. Incredibly useful.
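Under the hood, this kind of footage search typically pairs a CLIP-style image/text encoder with a similarity search over per-clip embeddings. A minimal sketch of the search half, with made-up embedding vectors standing in for a real encoder:

```python
def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def search(clips, query_vec, threshold=0.8):
    """Return names of clips whose embedding is close to the query embedding.

    In a real pipeline, clips maps filenames to vectors produced by an
    image encoder, and query_vec comes from encoding text like "birds".
    """
    return [name for name, vec in clips.items()
            if cosine(vec, query_vec) >= threshold]
```

The vectors and threshold here are illustrative; a production setup would use an approximate nearest-neighbour index rather than a linear scan.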

[–] Jimmycrackcrack@lemmy.ml 3 points 1 month ago (6 children)

This could come in pretty handy for me. What editing software does he use that does this?

[–] DarrinBrunner@lemmy.world 14 points 1 month ago* (last edited 1 month ago) (1 children)

For every small benefit, there are disastrous mistakes. We shouldn't discuss one without the other:

https://tech.co/news/list-ai-failures-mistakes-errors

March 2026

  • Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she’s never visited

February 2026

  • Health advice given by AI chatbots is frequently wrong, says new study

January 2026

  • Study reveals that fixing AI mistakes takes up to 40% of the time that it saves

  • An AI tool used by ICE to identify applicants with previous law enforcement experience falsely flagged applicants with no such experience, leading to the placement of unqualified recruits in field offices.

December 2025

  • AI mistakes clarinet for gun at Florida school

November 2025

  • Google Antigravity deletes entire content of user’s computer drive

  • Report finds AI hallucinations in 490 court filings from the past six months

October 2025

  • Teenager handcuffed after AI mistakes Doritos packet for gun

  • Lawyer submits AI-assisted court filing with fake citations

  • Man follows ChatGPT advice to stop eating salt, develops rare condition. The man was hospitalized, sectioned, and eventually treated for psychosis. He tried to escape the hospital within 24 hours of being admitted.

  • ChatGPT-5 jailbroken within 24 hours of release

July 2025

  • AI Coding app deletes entire company database

  • McDonald’s AI chatbot error exposes data of 64 million job applicants

  • AI program is tasked with running a small shop, goes insane, claims to be human

  • Apple Intelligence falsely presents BBC headline

... and it just keeps going.

[–] HobbitFoot@thelemmy.club 3 points 1 month ago* (last edited 1 month ago) (1 children)

So don't put AI in front of anything mission-critical, or without a human reviewing its output.

[–] GiorgioPerlasca@lemmy.ml 3 points 1 month ago* (last edited 1 month ago) (1 children)

So LLMs in agentic mode are a disaster waiting to happen.

[–] HobbitFoot@thelemmy.club 3 points 1 month ago
[–] Lumidaub@feddit.org 12 points 1 month ago (2 children)

If we're strictly talking about LLMs: Certain accessibility services - MAYBE. Writing closed captions / transcription for the most part requires little "human" touch. If we ASSUME that AI will be able to do it reliably one day - because it really can't yet - that's one thing that would benefit society.

Image descriptions are another thing I might see done by AI one day, but that still requires an understanding of what's actually important about the image.

[–] MerrySkeptic@sh.itjust.works 11 points 1 month ago (11 children)

I'm a therapist. I use HIPAA compliant AI to generate my (editable) case notes for my sessions now. Not only is it a huge time saver to simply edit a generated note as opposed to making one from scratch, but in many cases it takes more detailed notes, including quotes from clients.

I have heard of other therapists and medical doctors also using AI to help with diagnosing.

The danger is when therapists don't review the content to check for accuracy, because occasionally it will generate something not really reflective of what the therapist was doing, or it might lack detail the therapist would otherwise have included. But more often the stuff it comes up with is surprisingly accurate. And editing is even easier when you can just tell the AI something like, "include more details about how the client noticed their pattern of putting their own feelings last," and it just does what you asked. You don't necessarily have to edit manually, though you can.

[–] WoodScientist@lemmy.world 10 points 1 month ago

Running automated hacking and blackmail campaigns against AI companies.

[–] MorkofOrk@lemmy.world 7 points 1 month ago (6 children)

An amazing use for it in audio engineering is for feedback suppression. The old way to give yourself more headroom required you to sit there and turn up the gain until feedback happens and cut that frequency. Now you just turn on the feedback suppression and it does all that for you on the fly. It's game changing for live sound, every major venue has it now.
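A feedback suppressor's first job is spotting the ringing frequency so it can drop a narrow notch filter on it. A toy sketch of just the detection step, using a naive DFT (a real unit does this continuously and far more efficiently):

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest bin in a naive DFT.

    Feedback shows up as a single tone that dominates the spectrum;
    a suppressor would then place a narrow notch filter at this frequency.
    """
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n
```

Feeding it a pure 100 Hz tone sampled at 1 kHz recovers 100 Hz; real suppressors use FFTs and track several candidate frequencies at once.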

[–] Jobe@feddit.org 4 points 1 month ago (1 children)

Great for film sound too. You're filming a rainy scene and the rain is way too loud? You used to have to get the actors into the studio for voiceover; now you can often just filter it out.

[–] rossman@lemmy.zip 6 points 1 month ago

Rubberducking for those with social anxiety. It also reduces the friction of getting surface-level answers that used to take digging through multiple sources.

It's a study monster that initially wiped out Chegg, Duolingo, SparkNotes, etc. The double edge is that people forget how to take notes and learn the fundamentals needed to handle complex problems.

[–] CanadaPlus@lemmy.sdf.org 6 points 1 month ago* (last edited 1 month ago)

Anything that's fuzzy and impossible to automate with traditional algorithms, but that also has a reasonably high tolerance for error. It just makes up stuff a good portion of the time, you see.

However, I’ve found some benefits with AI. For example, I’m chatting with ChatGPT on credit cards, because it is something I may lean towards getting into. It’s helping me better understand than most people have tried explaining to me. Simply because it is giving me a more stream-lined response than people just beating the bush.

Watch out, personal finance is not one of those things.

[–] Danitos@reddthat.com 6 points 1 month ago

Accessibility.

[–] shellington@piefed.zip 6 points 1 month ago

I agree there is a lot of annoying hype. However, I also agree there are some specific use cases where it can be helpful.

I for one find it handy sometimes when I am writing bash scripts to do things on my system. I obviously check them before running, but it does save time.

Although I do recommend running models locally if possible, as it is obviously preferable from a privacy and cost standpoint.

[–] lattrommi@lemmy.ml 5 points 1 month ago

I went to my local neighborhood association because I wanted to improve where I live. I was elected president of the association a couple months later, mostly because no one else wanted to do it. It's a fairly poor part of a medium sized city in the U.S.

I've been using AI (running locally on a computer I built that isn't connected to the internet, to reduce harm to the environment) to apply for grants, plan events and help me run the meetings.

It is actually perfect for the job. Saying that as someone who thinks AI is mostly hype and useless for the majority of its current common uses these days. I feed it the text from city grant applications or ask it to make a poster to increase attendance and it's saved me a lot of time. Without it, being someone diagnosed ADHD, I would not have been able to do most of the stuff I have accomplished so far.

[–] damnthefilibuster@lemmy.world 5 points 1 month ago (3 children)

I was sitting in a restaurant the other day and staring at the menu. It was Italian and none of the things made sense. Too wordy and not clear what was meat and what was fancy cheese. The waiter was utterly useless - too busy to help and when present, not answering my questions about what would be a good simple pasta in white sauce.

I took a photo and asked Claude what’s a good white sauce pasta which would be like Alfredo.

It found two options I hadn’t even looked at. AI is good at sorting through complexity. But I don’t just mean AI as in LLMs. It needs a lot more tools and knowledge to be useful. So what you need is a smart system which may or may not have AI as a component.

[–] Techlos@lemmy.dbzer0.com 5 points 1 month ago* (last edited 1 month ago)

Curating massive music libraries. I've been using a small embedding model to organise my music for DJing, and being able to generate a t-sne plot clustered on perceptual similarity has been wonderfully useful.

I've also found CLIP models useful for searching videos, just embed a screenshot every couple of min of footage and query with a description of the scene.
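The curation side of this is mostly embedding arithmetic. A toy sketch of grouping tracks by perceptual similarity (the vectors are made up; a real pipeline would get them from an audio embedding model and then project them with t-SNE for plotting):

```python
def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def cluster_by_similarity(tracks, threshold=0.9):
    """Greedy clustering: each track joins the first cluster whose
    representative embedding is similar enough, else starts a new one."""
    clusters = []  # list of (representative_vec, [track names])
    for name, vec in tracks.items():
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((vec, [name]))
    return [members for _, members in clusters]
```

Greedy single-pass clustering is crude but cheap; for a DJ library it mainly matters that perceptually similar tracks end up near each other.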

And as bad as generated subtitles can be, when the only other option is nothing at all they are pretty nice to have.

[–] verdigris@lemmy.ml 5 points 1 month ago* (last edited 1 month ago)

Chatbots? Basically nothing. Any interaction I have with one leads to spending more time verifying its output, inevitably finding many mistakes, and eventually finding a primary source for what I'm actually looking for. The best actual impact it has is forcing me to narrow down my nebulous question into what I actually specifically want, but the bot itself is contributing very little to that.

Neural nets in general have limited real usefulness in analyzing large batches of data when other purpose-built analysis software doesn't exist.

"AI" is a misnomer and there is absolutely zero evidence to suggest that we're even on a path toward actual AI, sometimes called AGI, though they're also changing that to just mean a profitable LLM which is fucking hilarious.

Any task you use a bot to do, you will become worse at that task. For mass data analysis, that's fine, poring over reams of data is already a skill that other technology has largely obsoleted. But using it to do research, to read or write for you, or god forbid to make actual decisions and think for you, are very slippery slopes that are already causing a lot of the general public to seriously erode their basic mental capabilities.

[–] Tenderizer78@lemmy.ml 5 points 1 month ago* (last edited 1 month ago) (1 children)
  • Searching a large dataset with a vague search criteria.
  • Real-time feedback when studying a foreign language (since accuracy is less important than quantity).
  • Apparently in medicine they're using generative AI for something meaningful, but I'm not entirely convinced it is actually generative AI and I'd need to do more research.
  • Sometimes it can help in learning to program and in sanity-checking code security.
[–] thatsTheCatch@lemmy.nz 4 points 1 month ago

Most of my qualms with AI aren't in the usage of AI, but in its creation (water usage, mass layoffs, etc.—you've heard it all before).

To me it's like asking "What are some good uses for slaves?" (An extreme example to show the point, I'm not trying to say AI is the same as slavery).

Like yeah I could find good uses for it, but should it exist in the first place?

[–] whotookkarl@lemmy.dbzer0.com 4 points 1 month ago (1 children)
[–] racoon@lemmy.ml 4 points 1 month ago* (last edited 1 month ago)

Converting PDFs into HTML or RTF/TXT docs without OCR typos. Until recently, it was almost impossible to turn a scanned book from PDF into DOC or TXT, because the output of copying and pasting or converting with PDF tools was illegible. AI can now do a "deep AI seek" (look it up) into the texts.

I am converting a textbook into an audiobook in HTML (with paragraph highlighting, manually synced), with an integrated popup glossary for every word (grammar and meaning) and dictionary lookup on click.

Besides, as an appendix to each chapter, I add all the explanations from the book.

I took the ~4,500 words of the book and asked for a grammar analysis and meaning lookup to create a glossary. The AI joyfully skipped many terms, but that is something I will fix when each chapter is finished. Now I am being punished with waiting despite having paid $20.

[–] makingStuffForFun@lemmy.ml 3 points 1 month ago

I had a project of markdown files. About 400 of them, with about 1200 plus links in them.

The original filenames were changed. The links no longer worked.

The LLM went through each link, and found the new one, based on filename and file content, using its ability to recognise patterns, words, etc etc.

Absolutely saved me maybe a couple of days of painful manual labour, and it was all done in about 10 minutes.

This is the kind of thing I use it for. Horrible repetitive processes.
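For what it's worth, the easy cases of that task can be sketched without an LLM at all: fuzzy filename matching catches most renames, and the model earns its keep on links where file content has to be read. A rough stdlib sketch (function names are mine, not from the comment):

```python
import difflib
import re

def relink(markdown, new_files):
    """Rewrite [text](old-target) links to the closest-matching
    current filename; leave the link untouched if nothing is close."""
    def fix(match):
        text, old = match.group(1), match.group(2)
        best = difflib.get_close_matches(old, new_files, n=1, cutoff=0.5)
        return f"[{text}]({best[0]})" if best else match.group(0)
    # naive markdown link pattern: [label](target)
    return re.sub(r"\[([^\]]+)\]\(([^)]+)\)", fix, markdown)
```

The cutoff is a guess; too low and unrelated files get linked, too high and legitimate renames are missed, which is exactly where an LLM reading file contents helps.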

[–] WolfLink@sh.itjust.works 3 points 1 month ago

LLMs tend to be a “jack of all trades, master of none”. You are likely to find them useful for helping you with something you are inexperienced at, but not at something you are an expert in. However, because they lie a lot, it’s best to double-check your information, but the LLM can still be helpful with the ”you don’t know what you don’t know” issue.

[–] tomiant@piefed.social 3 points 1 month ago

Learning, exploring concepts and ideas.

[–] Jobe@feddit.org 3 points 1 month ago

In engineering/manufacturing, machine learning can be used to monitor performance and predict part failures of machines so you only do maintenance when it's actually required. Parts are usually replaced when the warranty runs out, but they will often still be good for a while.
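The simplest version of that kind of condition monitoring is anomaly detection on a sensor stream; trained failure-prediction models come later. A toy sketch flagging readings that deviate sharply from their recent history (window size and threshold are made up):

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, z_thresh=3.0):
    """Return indices of readings that deviate strongly from the trailing
    window: a common first pass before scheduling condition-based
    maintenance on the part producing the signal."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_thresh:
            flagged.append(i)
    return flagged
```

A vibration spike like this would trigger an inspection instead of waiting for a fixed replacement interval, which is the whole point of predictive maintenance.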

[–] Oisteink@lemmy.world 3 points 1 month ago

Reading TOS and clicking off all the privacy options on the cookie popup

[–] Azrael@reddthat.com 3 points 1 month ago

Some hospitals use AI to scan patients and find signs of illness before it becomes a problem. I'd say that's a pretty good use of AI.

[–] semi@lemmy.ml 3 points 1 month ago

In computational biology / biotechnology, LLMs are being trained on biological sequences and can then be used to generate new genes or genetic variants. These genes can be placed into bacteria who are then fed with e.g. sugar to make them produce various valuable molecules from renewable resources instead of from crude oil using conventional chemistry. There is also work on enabling plastic biodegradation this way.

[–] aceshigh@lemmy.world 3 points 1 month ago (3 children)

It’s very helpful for neurodivergent people - helps you figure out who you are and what you want, how you think, learn and work best, identify your obstacles and help you overcome them, understand your neurodivergency and compare it to how neurotypical people think. It’s fantastic at generating ideas that you then test out. The ideas that it gives you are based on how you actually function, so often times they’re valid.

[–] hexagonwin@lemmy.today 3 points 1 month ago (1 children)

seems to be decent for OCR, maybe also speech recognition. i hear it's okay for finding some concept you can explain abstractly but don't know the exact word for, but haven't tried this personally.

[–] moakley@lemmy.world 3 points 1 month ago

Honestly, Google Search has been better the last couple years after spending the previous twenty years getting consistently worse.

Most of what I use Google for is trivial. Like how old is a certain actor, or why was this author canceled, or what does this item do in a video game?

It's great for those things. Especially the video game stuff. I don't want to watch a 10 minute video just to get a discrete answer, and now I don't have to.

I can even ask it for spoiler-free hints on a particular puzzle, and most of the time it gives me something useful.

[–] lepinkainen@lemmy.world 3 points 1 month ago

I have a script that uses yt-dlp to get subtitles off a YouTube video and summarises the main points for me with a language model so that I don’t have to watch a 20 minute top10 list video that could’ve been a buzzfeed article.

The whole thing is fully vibe engineered too.
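The fetch step is typically something like `yt-dlp --skip-download --write-auto-sub <url>`, which leaves a `.vtt` subtitle file; the fiddly part is flattening that into clean text before handing it to the model. A sketch of that cleanup step (auto-generated subs repeat lines heavily, hence the dedupe; real YouTube VTT files have a few extra header lines this ignores):

```python
def vtt_to_text(vtt):
    """Strip WEBVTT header, cue numbers and timestamps from a VTT
    subtitle file, returning the caption text as one string."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line.startswith("WEBVTT")
                or "-->" in line or line.isdigit()):
            continue
        kept.append(line)
    # collapse consecutive duplicate lines, common in auto-subs
    out = []
    for line in kept:
        if not out or out[-1] != line:
            out.append(line)
    return " ".join(out)
```

The resulting plain text is what gets sent to the language model with a "summarise the main points" prompt.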
