this post was submitted on 27 Apr 2026
1404 points (98.0% liked)

[–] Oriion@jlai.lu 57 points 1 week ago (11 children)

And without hallucinations??? That sounds freaking awesome

[–] a_non_monotonic_function@lemmy.world 145 points 1 week ago (1 children)
[–] OfCourseNot@fedia.io 43 points 1 week ago (1 children)
[–] WhyIHateTheInternet@lemmy.world 20 points 1 week ago (1 children)

You're them! You're the person! Holy shit!!

[–] msage@programming.dev 8 points 1 week ago (1 children)

That's why you hate the internet???

[–] Klear@quokk.au 7 points 1 week ago (1 children)
[–] 0ops@piefed.zip 4 points 1 week ago

Sorry 'bout that

[–] Madrigal@lemmy.world 100 points 1 week ago (1 children)

Yeah they added “Don’t hallucinate” to the prompt.

[–] fartographer@lemmy.world 8 points 1 week ago

Seems like the kind of prompt a hallucination would say

[–] morto@piefed.social 82 points 1 week ago

And without hallucinations???

Likely not

[–] FiskFisk33@startrek.website 49 points 1 week ago

Have they solved the huge unsolved problem no one else has solved?

yeah, no.

[–] iceberg314@slrpnk.net 47 points 1 week ago (2 children)

It probably uses Retrieval Augmented Generation (RAG), which can still hallucinate, but it usually does a better job on niche questions, and it can even provide a source sometimes, depending on how you set it up.
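
For anyone curious, the flow being described looks roughly like this. A toy sketch only: the corpus, the bag-of-words "embedding", and the prompt assembly are made-up stand-ins, not whatever Sci-Hub actually runs.

```python
# Toy RAG sketch (hypothetical names throughout; not Sci-Hub's pipeline).
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words count vector.
    # Real systems use a neural embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank every document by similarity to the query, keep the top k.
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Paste the retrieved passages into the prompt. The model is *asked*
    # to stick to them, but nothing forces it to, which is why RAG
    # reduces hallucination rather than eliminating it.
    passages = retrieve(query, corpus)
    return "Answer using only these sources:\n" + "\n".join(passages) + f"\n\nQ: {query}"

corpus = [
    "Tardigrades survive desiccation by synthesizing protective sugars.",
    "CRISPR-Cas9 enables targeted genome editing.",
    "Mitochondria are the powerhouse of the cell.",
]
print(build_prompt("How do tardigrades survive drying out?", corpus))
# A real system would now send this prompt to an LLM and return its answer,
# with each retrieved passage linked back to its source paper.
```

The key point: the model only ever sees what retrieve() hands it, which is both why RAG can cite real papers and why a bad retrieval step still produces a bad (or hallucinated) answer.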

[–] expr@piefed.social 20 points 1 week ago

Obviously not, because that's not possible.

[–] DarrinBrunner@lemmy.world 11 points 1 week ago

What fun would that be?

[–] Atelopus-zeteki@fedia.io 10 points 1 week ago (1 children)

I'll keep the hallucinations for myself, tyvm.

Per sci-hub.ru this has been available since March 6th.

"Hear the good news: recent advances in artificial intelligence enabled Sci-Hub to launch a robot that gives scientifically-grounded responses to questions. The robot starts with searching for relevant literature in Sci-Hub database, then turns to selecting and reading most recent studies, and composes the answer based on this information. The answer includes all the references, and each referenced article can be read on Sci-Hub with one click.

Unlike question-answering robots that were based upon the early generation of neural networks, Sci-Hub bot does not hallucinate and is not making up scientific facts and does not cite sources that do not exist. To support its statements, Sci-Bot uses articles from Sci-Hub database. Questions can be asked in any language, and answers can be saved on server and shared.

The alpha version only supports answerig one question, and a more advanced variation that supports conversation mode is coming soon. Right column displays example questions that has been answered by robot - push the question to see the generated answer."

[–] Oriion@jlai.lu 9 points 1 week ago (1 children)

Thanks for doing what I should have done. I actually read that and thought it sounded great. The claim of "no hallucination" should of course be taken with a grain of salt, as other comments have pointed out.

[–] Atelopus-zeteki@fedia.io 2 points 1 week ago (1 children)

Sci-hub has been an invaluable resource. I posted a question yesterday at work. There was a queue, and it was time to leave, so I'll see what the result was when I get over there. I've avoided using AI, but this was too tempting. My question was in an area where I have some knowledge, so I'm hoping I'll be able to spot any problems in the reply.

[–] Oriion@jlai.lu 1 points 1 week ago

I'd be interested in hearing your feedback!!

[–] IrateAnteater@sh.itjust.works 4 points 1 week ago (1 children)

From what I understand of the sales brochure, these types of "AI" built on highly curated data are far less prone to hallucinations.

[–] sobchak@programming.dev 3 points 1 week ago

I doubt it's fine-tuned; it's likely just one of the open-weight LLMs with RAG. I've done similar things, and they don't really work as well as I'd like (the most relevant chunks of text aren't always the ones ranked highest / with the smallest embedding distance, and the models still hallucinate sometimes).
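
That ranking failure is easy to demonstrate even with a toy scorer. Here plain word overlap stands in for embedding similarity (a made-up example, not any particular system), and it confidently puts the wrong chunk first:

```python
# Toy demo of the ranking problem above. Plain word overlap stands in
# for embedding similarity; everything here is a made-up example.
def overlap_score(query: str, chunk: str) -> int:
    return len(set(query.lower().split()) & set(chunk.lower().split()))

query = "do language models cite sources that do not exist"
chunks = [
    "LLMs sometimes fabricate plausible-looking references.",     # the relevant one
    "Our sources do not exist in languages other than English.",  # the red herring
]
for chunk in sorted(chunks, key=lambda c: overlap_score(query, c), reverse=True):
    print(overlap_score(query, chunk), chunk)
# Prints the red herring first (4 shared words vs. 0). Real embeddings do
# far better than word overlap, but the same failure mode survives at the
# margins, which is why reranking stages exist.
```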

[–] takeda@lemmy.dbzer0.com 1 points 1 week ago

LOL, of course not.

Speaking of hallucinations, I think the best way to see them is to go to Google Gemini (Reddit sells its posts to Google) and start a conversation about a Reddit account you have, acting as if you don't know anything about it. It usually starts out fine, but as it progresses you can see it making shit up. The more you ask, the more insane it gets.

And that's with it supposedly having all the comments at its disposal.

I also tried Lemmy, since I'm sure they're indexing it too. It told me that I'm actually the admin who created Lemmy.dbzer0.com