this post was submitted on 04 Apr 2026

Public Health

1704 readers

For issues concerning:


🩺 This community has a broader scope so please feel free to discuss. When it may not be clear, leave a comment talking about why something is important.



Related Communities

See the pinned post in the Medical Community Hub for links and descriptions. link (!medicine@lemmy.world)


Rules

Given the inherent intersection that these topics have with politics, we encourage thoughtful discussions while also adhering to the mander.xyz instance guidelines.

Try to focus on the scientific aspects and refrain from making overly partisan or inflammatory content

Our aim is to foster a respectful environment where we can delve into the scientific foundations of these topics. Thank you!

founded 2 years ago

When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way to forming a new psychological framework for that second group, which regularly engages in β€œcognitive surrender” to AI’s seemingly authoritative answers. That research also provides some experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

top 38 comments
[–] floofloof@lemmy.ca 0 points 5 days ago (1 children)

In general, β€œfluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation,” they write.

People have always conflated confidence with ability and knowledge. That's why so many positions of power are occupied by confident bullshitters. It seems like that tendency transfers over to people's interactions with LLMs.

It would be interesting to experiment with an LLM trained to sound less confident and more tentative or self-deprecatory. Maybe the results would be different.
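A minimal sketch of the prompt-side setup such an experiment might use, assuming a chat-style API that takes a list of role-tagged messages (the prompt wording and the `build_messages` helper are hypothetical illustrations, not from any cited study):

```python
# Sketch: steering a chat model toward tentative, self-questioning answers
# purely via the system prompt, for comparison against a default
# confident-sounding condition. Prompt text and helper are hypothetical.

HEDGING_SYSTEM_PROMPT = (
    "Answer the user's question, but express calibrated uncertainty: "
    "flag guesses as guesses, note what you might be wrong about, "
    "and invite the user to verify key claims independently."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble a chat-completion-style message list with the hedging prompt."""
    return [
        {"role": "system", "content": HEDGING_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("What year was the transistor invented?")
```

One could then measure whether participants scrutinize or override answers more often in the hedged condition than with a default, confident system prompt.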

[–] Sergio@piefed.social 0 points 5 days ago

experiment with an LLM trained to sound less confident and more tentative or self-deprecatory

yeah! I think that's an active area of research. from a quick search, here's an example:

https://dl.acm.org/doi/full/10.1145/3613904.3642122

[–] Matriks404@lemmy.world 0 points 6 days ago (1 children)

So like with any other tech or tool, there are two groups of people... A tale as old as the world.

[–] supersquirrel@sopuli.xyz 0 points 6 days ago

No, normal tech and tools don't send people into psychosis.

[–] NigelFrobisher@aussie.zone 0 points 6 days ago

I’m seeing this, even in intelligent people. They expect they can just keep prompting and reach a 100% correct answer that needs no human verification. Looks like an earlier phase of AI Psychosis to me.

[–] mycodesucks@lemmy.world 0 points 1 week ago (1 children)

"ChatGPT, how should I feel about this?"

[–] BeMoreCareful@lemmy.world 0 points 1 week ago

Oh, so we've amalgamated all the Facebook conspiracy theories with 4chan conspiracy theories, along with whatever percentage of garbage political messaging, everyone's major religious texts, and basically the sum of all art, knowledge, and advertising that was available on the Internet at the time.

If I were really honest, I'd say that last one is the one that really bothers me. The vast majority of our modern media is dumb ads. Really, since the Victorian era. My unscientific guess is that the bulk of modern media is designed to wheedle past your logic to make your emotions want to buy various petroleum products.

And out of all that mess we're expecting what?

[–] TammyTobacco@sh.itjust.works 0 points 1 week ago (2 children)

Religion has been doing this for centuries.

[–] Mothra@mander.xyz 0 points 6 days ago

Maybe... But I guess so does branding of many sorts. People rarely question the efficacy and/or safety (or the moral integrity of the manufacturing process) of a lot of products. Foods, cosmetics, and medicines would be the first categories that spring to mind which are regularly abused and misused by the population at large.

So yes, my point being: perhaps religion has been doing this for centuries, but it's not like there weren't other cases.

[–] supersquirrel@sopuli.xyz 0 points 1 week ago (1 children)

I think AI Obsession is basically a religious belief system.

[–] Lemminary@lemmy.world 0 points 6 days ago

It must be pressing some buttons that both have in common for sure.

[–] Tarambor@lemmy.world 0 points 1 week ago

Reddit comments are a shining example of this.

I think a good number of AI users are like this....

[–] TheTechnician27@lemmy.world 0 points 1 week ago* (last edited 1 week ago) (2 children)

Yada yada here's the open-access paper.

(I usually provide these links neutrally, but I'll make a point here: in a public health community, it may be worth requiring linking to a paper on top of the news article covering it – especially if it's open-access. Ars here is mercifully concerned with methodology; many outlets don't give a shit.)

Conclusion is as follows (for expedience; I encourage reading other parts):

As AI becomes ubiquitous in society, understanding how it reshapes human thought is essential. Tri-System Theory [author's note: introduced in this paper; tenuous to call it a "theory" on that basis] offers a new framework for this cognitive frontier. By introducing System 3 (Artificial) as a distinct and external reasoning process, we move beyond the classical architecture of dual-process theories and chart a new decision-making paradigm: one where intuition, deliberation, and artificial cognition coexist, compete, or converge. We show that people not only use System 3 to assist with reasoning, but often surrender to its outputs whether correct or flawed. This cognitive surrender illustrates the value and integration of System 3, but also highlights the vulnerability of System 3 usage. Similar to how System 1-driven heuristics lead to systematic biases, System 3 has differential cognitive shortcomings that will challenge decision-makers and society at large.

Tri-System Theory is not a warning about AI’s dangers but a recognition of System 3’s psychological presence. We do not merely use AI; we think with it. [author's note] In doing so, we must ask new questions: What happens when our judgments are shaped by minds not our own? What becomes of intuition and effort when a generative, artificial partner stands ready to answer? How do we preserve agency, reflection, and autonomy in a world where users engage in cognitive surrender? We offer Tri-System Theory as a conceptual foundation for understanding these challenges. It is a theory for an age of human-AI algorithmic cognition, and for the decision-makers, researchers, and designers shaping that future.

[–] mfed1122@discuss.tchncs.de 0 points 6 days ago

Yuck. This petty observation is unworthy of being called System 3. Stealing valor from Kahneman and Tversky. Keep their terminology out of your mouths, trend chasers

[–] Tiresia@slrpnk.net 0 points 1 week ago (1 children)

I think this paper is overly exoticizing AI. People have always been externalizing deliberation to others, be they parents, friends, bosses, partners, gods, spirits, journalists, advertisers, superstitions, tarot cards, or rubber ducks.

Perhaps it is worth calling all of these "system 3", but I see no reason to separate LLMs from them. Our judgment has never been our own entirely, and even if there is nobody else to defer to we can defer to "what they would do".

We accept that these external sources are flawed and can give us bad advice that we follow, but we keep listening as long as we think that is made up for by good advice or other factors.

[–] OpenStars@piefed.social 0 points 1 week ago

People have been using "argument by authority" since before language was invented.

Otoh, this article has to sell its clicks so... all-new terminology it is then.

[–] danh2os@piefed.social 0 points 1 week ago

"those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses" Yep.

[–] okwhateverdude@lemmy.world 0 points 1 week ago

I dunno, I find this entirely unsurprising. And I bet this also correlates strongly with political identity too: authoritarians love gullible idiots that vote for them. Humanity is fucking stupid in aggregate

[–] BillyClark@piefed.social 0 points 1 week ago (1 children)

If you're willing to abandon your thinking to AI, I'm guessing you weren't too attached to it in the first place.

If we let people freely continue in this manner, we're going to evolve into two separate species, one of which we might as well call the Eloi, completely unable to think or perform any tasks for themselves.

[–] yakko@feddit.uk 0 points 1 week ago

I get you, but it's important nuance that cognitive surrender is closely associated with external incentives and time pressure. If you're being paid to do a boring task, e.g., and you don't have enough time to do it, using AI is just the path of least resistance. I don't condone it, but I can clearly see how it happens. It's fun to talk shit on idiots though, as well.

[–] Paragone@lemmy.world 0 points 1 week ago (1 children)

This makes it clearer that critical-thinking can't be a mere add-on; it has to be the bedrock of people's capability, ideally of 4/5ths of the population as a minimum-standard, XOR social-hijacking by ideologies & AI is certain.

Unless someone's identity includes critical-thinking, they're prone to cognitive abdication, either to ideology or to LLM; both are the same fundamental abdication.

I didn't know that LLMs were attracting the same abdication that ideologies have been, but now that the paper presents some evidence, it looks clear.

( "religions" are another hijacker of minds... all religions which displace critical-thinking do the same thing.

For anybody who claims that religion always displaces critical-thinking, you've obviously no experience with Vajrayana style ruthlessly-correct reasoning.

I say Vajrayana style, but it could well be pervasive throughout the different south Asian branches of AwakeSoulism/Buddhism: I don't know.

Western philosophy is mushy-as-hell, by Vajrayana's standards for objectivity & correctness )

_ /\ _

[–] mimavox@piefed.social 0 points 1 week ago

Sorry, but you obviously know nothing about Western philosophy if you think it is "mushy" and contains no logical thinking. It is the exact opposite. Religion is another story, though. Don't conflate the two.

[–] SatansMaggotyCumFart@piefed.world 0 points 1 week ago (1 children)

I wonder if being religious is a factor in which group you belong in.

[–] supersquirrel@sopuli.xyz 0 points 1 week ago (1 children)

The vast majority of cults are not religious, so I doubt it would be a strong correlation.

[–] SatansMaggotyCumFart@piefed.world 0 points 1 week ago* (last edited 1 week ago) (1 children)

I don’t know if I can name even one irreligious cult but I can name dozens of religious ones.

[–] supersquirrel@sopuli.xyz 0 points 6 days ago

That says more about how you are only open to seeing religious cults as cults and do not see other thought-terminating, ideologically charged movements as cults.

A worship of capitalism as "natural" is a cult, fad diets are 10000% all cults, multi-level-marketing schemes are cults. Cults are EVERYWHERE, identifying only the outwardly religious as cults is to only see the tip of the iceberg.

[–] Asetru@feddit.org 0 points 1 week ago (2 children)

When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

This is the first paragraph of the article and I'm already up in arms against the writing... Painting it as a "two sides" situation with people that like AI on one side and people that like AI differently on the other side is just too off-putting. Did an AI write this?

[–] Skua@kbin.earth 0 points 1 week ago

Is that not just the article reflecting the study it's talking about? It has users either accept or override the chatbot's answer

[–] Mothra@mander.xyz 0 points 1 week ago (1 children)

Well, it says "generally" and "broad categories", which leaves room for other cases not accounted for, apparently because they are a minority: one category of people who don't question it, and another which remains open to the possibility of AI making mistakes.

I'm curious, in your opinion, which are the other big groups of people that the article failed to mention?

[–] Asetru@feddit.org 0 points 1 week ago (1 children)

People that actively avoid using AIs whenever possible.

[–] Mothra@mander.xyz 0 points 1 week ago (1 children)

It's talking about AI users. Users

[–] Asetru@feddit.org 0 points 1 week ago (1 children)

Yes. "Whenever possible". It doesn't mean "always".

[–] Mothra@mander.xyz 0 points 1 week ago

Frequency of use doesn't interfere with what they are trying to measure: whether users consider the possibility of inaccurate answers, or whether they don't.

If frequency of use is taken into account, and they are only considering users who regularly use AI, then people who try to avoid using AI aren't part of the data pool. Those people belong to the minority we established is irrelevant to the study.

If, however, they are still surveying people who rarely use ai as well as frequent users, these people can still belong to either of the two categories they are studying: those who generally consider the possibility of receiving inaccurate answers, and those who don't.

Previously you said there are more groups of people which prove the dichotomy to be false, but I fail to see it that way.

[–] NaibofTabr@infosec.pub 0 points 1 week ago

Wouldn't they... have to have some logical thinking in the first place... ?

The whole point of AI is to train people out of critical thinking. It started with shitting all over / underfunding the arts, then turning schools into employee training camps, and now, to remove any last residue of free thought, AI to fill the gaps.