this post was submitted on 04 Apr 2026


When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. That research also provides experimental evidence on when and why people are willing to outsource their critical thinking to AI, and on how factors like time pressure and external incentives affect that decision.

NigelFrobisher@aussie.zone 0 points 6 days ago

I’m seeing this, even in intelligent people. They expect they can just keep prompting and reach a 100% correct answer that needs no human verification. Looks like an earlier phase of AI Psychosis to me.