this post was submitted on 04 Apr 2026
Public Health
This is the first paragraph of the article and I'm already up in arms against the writing... Painting it as a "two sides" situation with people that like AI on one side and people that like AI differently on the other side is just too off-putting. Did an AI write this?
Well, it says "generally" and "broad categories", which leaves room for other cases not accounted for, apparently because they are a minority. One category is people who don't question it; the other remains open to the possibility of AI making mistakes.
I'm curious, in your opinion, which are the other big groups of people that the article failed to mention?
People who actively avoid using AI whenever possible.
It's talking about AI users. Users.
Yes. "Whenever possible". It doesn't mean "always".
Frequency of use doesn't interfere with what they are trying to measure: whether users consider the possibility of inaccurate answers, or whether they don't.
If frequency of use is taken into account, and they are only considering users who regularly use AI, then people who try to avoid using AI aren't part of the data pool. These people belong to the minority we established as irrelevant to the study.
If, however, they are still surveying people who rarely use AI as well as frequent users, these people can still belong to either of the two categories being studied: those who generally consider the possibility of receiving inaccurate answers, and those who don't.
Previously you said there are more groups of people that prove the dichotomy false, but I fail to see it that way.
Is that not just the article reflecting the study it's talking about? The study has users either accept or override the chatbot's answer.