this post was submitted on 04 Apr 2026
Public Health

When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. The research also offers experimental evidence on when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

[–] Asetru@feddit.org 0 points 1 week ago (2 children)

> When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

This is the first paragraph of the article and I'm already up in arms about the writing... Painting it as a "two sides" situation, with people who like AI on one side and people who like AI differently on the other, is just too off-putting. Did an AI write this?

[–] Mothra@mander.xyz 0 points 1 week ago (1 children)

Well, it says "generally" and "broad categories", which leaves room for other cases not accounted for, apparently because they are a minority. One category is people who don't question it; the other remains open to the possibility of AI making mistakes.

I'm curious, in your opinion, which are the other big groups of people that the article failed to mention?

[–] Asetru@feddit.org 0 points 1 week ago (1 children)

People that actively avoid using AIs whenever possible.

[–] Mothra@mander.xyz 0 points 1 week ago (1 children)

It's talking about AI users. Users.

[–] Asetru@feddit.org 0 points 1 week ago (1 children)

Yes. "Whenever possible". It doesn't mean "always".

[–] Mothra@mander.xyz 0 points 1 week ago

Frequency of use doesn't interfere with what they are trying to measure: whether users consider the possibility of inaccurate answers, or whether they don't.

If frequency of use is taken into account, and they are only considering people who regularly use AI, then people who try to avoid using AI aren't part of the data pool. Those people belong to the minority we established as irrelevant to the study.

If, however, they are surveying people who rarely use AI as well as frequent users, those people can still belong to either of the two categories being studied: those who generally consider the possibility of receiving inaccurate answers, and those who don't.

Previously you said there are more groups of people that prove the dichotomy false, but I fail to see it that way.

[–] Skua@kbin.earth 0 points 1 week ago

Is that not just the article reflecting the study it's talking about? The study has users either accept or override the chatbot's answer.