this post was submitted on 09 Apr 2026
1001 points (99.1% liked)
Science Memes
you are viewing a single comment's thread
I'm failing to see how this is different from making up a fact and then spreading it to news outlets. If you are the authority, and you say something is true, you don't get to point and laugh when people believe your lies. That's a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine. It only regurgitates what's been said. It isn't going to suddenly start doing science on its own to determine if what you've said is true or not. That isn't its job. Its job is to tell you what color the sky is based on what you told it the color of the sky was.
Hang on. Are you suggesting it's unethical/immoral to lie to a machine?
Additionally, the authors didn't submit the articles to a magazine as factual. They posted them on a preprint server, which can be very questionable anyway as there is no peer review. The machine chose to ignore rigor and treat them as fact.
What you may as well have said:
I really don't understand why people think that LLMs are GOFAI. They aren't making the hard choices. They aren't giving novel solutions to the energy crisis. They aren't solving the trolley problem. They are shitting out what you feed them. If you feed them garbage, you get garbage in return. No one is surprised when the dog gets worms after eating poop it found in the yard. Why are we shocked that an AI that doesn't know fact from fiction treats everything the same?
I think that's the problem, though; I think the poop in the yard is a better example. The key is that the researchers put that information out as speculation. That's like if Anderson Cooper made up a fake news story, posted it in an anonymous tweet to analyze how far it would spread, and then Fox News picked it up and ran with the story all day.
That's the key problem: people are trusting LLMs to do their research for them, when LLMs just mindlessly gather all the information they can get their hands on.
That's the key problem. If they send a misinformative article to a place for untested, unproven, random speculation with a very low bar for who can submit... they can determine that LLMs are looking there. The key thing to note is that it's not their fake disease that's the threat. It's that if the LLM found their fake article, it probably also scooped up a ton of other misinformed or dubious things.
Let's look at it this way: say it was a cake, but we threw it in the garbage. Two weeks later we find the same cake... at Jim's bakery, same ID, same distinct marker we put on it.
What does that tell us? It tells us that Jim's bakery is clearly, at least sometimes, dumpster diving and putting up things that are clearly dangerous.
That isn't a fault in the LLM, though; that is a fault in the general make-up of human skepticism, or lack thereof. We didn't invent the word 'Propaganda' without having a sentence to use it in. Those that don't practice skepticism, critical thinking, and even mild reasoning are the ones that will get led astray. That didn't just start happening when LLMs came around; it's been here since we first started talking to each other. It's only more visible now because everything is more visible now. The world is much more connected than it ever has been, and that grows with every literal day. All these fucking idiots that don't double-check what they are being told are the problem, regardless of whether it came from an LLM or a human, because I guarantee you they are being led astray by both. They don't trust the machine because it's a machine; they trust what they are told because they are lazy. That isn't the LLM's fault.
I mean, it's a problem with the marketing and common usage of LLMs. That's exactly it, though: LLM companies and people describe LLMs as a way to do research.
I.e., you could make these criticisms of things like Wikipedia too: anyone can write what they want, but what does Wikipedia require? Right, every single claim has to be cited. So if you go to Wikipedia and find misinformation, you click on the number and see the source.
If you ask ChatGPT "What diseases should I be concerned about in Africa?", it lists you a few. You can then... Google it, find the Wikipedia page, and look at what's there. It's a tool without a purpose at that point, because it literally doesn't save you any steps. It doesn't guide you to the source to check its facts, and when it tells you its sources it may or may not be making them up. At which point, it has no factual use, nor any use in even directing you to the facts.
Arguably it is a problem with the LLMs, because they are being trained on an unknowable amount of garbage data. It's a garbage-in, garbage-out problem: if the people training the LLMs are not vetting the data being input, then you have to assume that any data output by the LLM contains some level of garbage.
The solution is to only use them for non-critical use cases and vet everything they output.
This is the missing conceptual understanding that probably 90% of LLM users lack. They really don't know how LLMs work, and treat them like AGI. Sadly this includes adult policy makers in our society too. Efforts like those of these researchers act to educate the public. I'm hopeful this will spark some critical thinking on the part of regular, otherwise ignorant, LLM users.
Thanks for saying this in a nicer way than I would've.
It's known that AI companies will harvest content without care for its veracity and train LLMs on it. These LLMs will then regurgitate that content as fact.
This isn't a particularly novel finding but the experiment illustrates it rather well.
The researchers you consider to have acted so immorally did add useless information to the knowledge pool – but it was unadvertised, immediately recognizable useless information that any sane reviewer would've flagged. They included subtle clues like thanking someone at Starfleet Academy for letting them use a lab aboard the USS Enterprise. They claimed to have gotten funding from the Sideshow Bob Foundation. Subtle.
By providing this easily traceable nonsense, they were able to turn the generally-but-informally known understanding that LLMs will repeat bullshit into a hard scientific data point that others can build on. Nothing world-changing but still valuable. They basically did what Alan Sokal did.
Instead of worrying about this experiment you should worry about all the misinformation in LLMs that wasn't provided (and diligently documented) by well-meaning researchers.
Seems like the logical conclusion, then, is that people who train LLMs should be responsible for curating the data, not expecting that the data will just be sound. People have been lying on the internet since it was invented; the advent of LLMs isn't suddenly going to create an internet where that doesn't occur.
And people have been launching products without thought to the ramifications since the dawn of time. I don't think that will change, either. What we need to do is educate ourselves better when it comes to identifying potential fraud. Taking anything at face value, regardless of its source, is dumb. If it's worth knowing, it's worth verifying.
Edit: The ratio on this post is a monument to band-wagoning.
The studies contain parts like
and
as well as
Any human actively reading those studies would notice something off.
Besides, the author didn't feed it to the AI himself, he just published the study as a preprint, not even officially. Everything after that was done by the crawlers. This specific study was an experiment to see how far these crawlers go and if anything gets reviewed, but it could just as well have been a satirical paper published on April 1st and the crawlers would still see it as truth.
This should be the top comment; the researchers did such a good job of making sure anyone with even the slightest reading comprehension would realise this is parody.
Regardless of that, the internet has always been full of lies and we cannot expect bad actors to not exploit this.
I admire your optimism, but you severely underestimate the power of stupidity.
For normal people who just read stuff on the internet, my expectations of reading comprehension are not that high.
For peer scientists and magazines that would publish science, though, they are.
A school teacher would catch all of these during grading.
I thought the author used she/her pronouns?
Yeah, seems like it, my bad.
In the article she is called Osmanovic Thunström twice, which definitely sounds male, but further up they also wrote her first name, Almira. Kinda skimmed over that part.
News outlets are liable for what they publish. LLM vendors should be as well.
"Liable" means they might post a correction later that nobody will see because corrections aren't sexy to algorithms. Big deal. LLM vendors are liable in practically the same way.
LLM companies can just say it's for entertainment purposes only, kinda like Fox News.
Corrections are the piece that the public sees, but liability has more to do with being able to prove in court that you took reasonable steps to make sure you were providing accurate information.
They even have the same fix - just post somewhere quietly that it's "entertainment"
This is about the untraceability of AI slop and the tendency of people to blindly believe stuff that LLMs put out. These news outlets just publish LLM outputs as facts without checking sources. Anyone could poison these LLMs so this is more of a threat model demonstration.
They uploaded the papers to a single preprint server. That's important.
Preprints are papers that predate any sort of peer review; as such, there's a lot of junk mixed in. That's no big deal if you know the field, but a preprint server is certainly not a source of reliable information, nor should it be treated as such. On the other hand, news outlets are expected to provide you with reliable information, curated and researched by journalists.
And peer review is a big fucking deal in science, because it's what sorts all that junk out. Only muppets who don't fucking care about misinformation would send bots to crawl preprints and feed the resulting data into a large model, or use the potential misinfo from the bot as if it were reliable. (Those two sets of muppets are the ones violating ethical and moral principles, by the way.)
So no, your comparison is not even remotely accurate. What they did is more like writing bullshit on a piece of paper, gluing it to a random phone pole, and checking if someone would repeat that bullshit.
They also went to the trouble of making sure that no reasonably literate human being would ever confuse that thing with an actual scientific paper. As the text says:
Yes, it is different, because the large token model won't simply "repeat" things; it'll mix and match them and form all sorts of bullshit, even if you didn't feed it any bullshit.
Here's an example of that, fresh from the oven. I don't reasonably expect people to be feeding misinfo regarding Latin pronunciation into bots, and yet a lot of this table is nonsense:
Compare the table above with this table and this one and you'll notice the obvious errors:
All it had to do was copy info from Wiktionary, as it includes even phonetic and phonemic info. But since the bot is not just "regurgitating" info (it's basically predicting what should come next, and doing so with no regard to truth value), it's mixing and matching shit into nonsense.
If you actually read the bloody article instead of assuming, you'd know why the researchers did this: they don't expect the bot to do science on its own, they expect people to treat info from those bots as potentially incorrect.
And your job is to not trust it if it tells you "Yes, you are completely right! The colour of the sky is always purple. Do you need further information on other naturally purple things?"
[Replying to myself as this is a tangent]
I think the "bots can generate misinfo even if you just feed them correct info" point deserves its own example.
Let's say you're making a model. It looks at the preceding word, and tries to predict the next. And you feed it the following sentences, both true:
1. Humans are apes.
2. Cats are felines.
From both, the bot "learnt" five words, and also how to connect them; for example, "are" can be followed by either "apes" or "felines", both having the same weight. Then, as you ask the bot to generate sentences, it generates the following:
3. Humans are felines.
4. Cats are apes.
And you got bullshit!
What large models do is a way more complex version of the above, looking at way more than just the immediately preceding word, but it's still the same in spirit.
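If anyone wants to poke at this, here's a minimal sketch of that toy word-predictor in Python. It's my own illustration (the sentences and variable names are just for this example, not anything from the article): it records which word follows which in the two true training sentences, then generates text by picking any recorded follower at random, which is enough to produce the false "humans are felines".

```python
from collections import defaultdict
import random

# Two true training sentences, as in the comment above.
training_sentences = ["humans are apes", "cats are felines"]

# Bigram table: for each word, every word observed immediately after it.
next_words = defaultdict(list)
for sentence in training_sentences:
    words = sentence.split() + ["<end>"]
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)

def generate(start_word):
    """Walk the table, picking any recorded follower with equal weight."""
    word, output = start_word, [start_word]
    while word in next_words:
        word = random.choice(next_words[word])
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

# "are" is followed by both "apes" and "felines" in the table, so the model
# happily emits "humans are felines" and "cats are apes" alongside the truths.
for _ in range(5):
    print(generate("humans"), "|", generate("cats"))
```

Every sentence it was trained on was true, yet about half of what it generates is false, purely because it only models which words tend to follow which.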
Did you even read the article? They say all over the paper that it is fake. And they didn't feed it to an LLM, they posted it online, where an LLM trying to scrape the entire internet sucked it up.