jackr

joined 11 months ago
[–] jackr@lemmy.dbzer0.com 1 points 14 hours ago (1 children)

I think homestucks figured out how to prevent this ages ago

[–] jackr@lemmy.dbzer0.com 1 points 1 month ago (1 children)

> what text are you reading that has a 0% error rate?

as I said, the text has a 0% error rate about the contents of the text, which is what the LLM is summarising, and to which it adds its own error rate. Then you read that and add your own error rate on top.
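
to put numbers on that compounding (the rates are made-up placeholders, purely for illustration, and assume the errors are independent):

```python
# Toy arithmetic for the compounding argument above.
# Both rates are hypothetical, not measurements.
e_llm = 0.05     # assumed LLM summarisation error rate
e_reader = 0.10  # assumed human reading error rate

direct = e_reader                            # reading the original text
via_llm = 1 - (1 - e_llm) * (1 - e_reader)   # reading an LLM summary of it

print(f"direct: {direct:.1%}, via LLM summary: {via_llm:.1%}")
# direct: 10.0%, via LLM summary: 14.5%
```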

> the question is can we make a system that has an error rate that is close to or lower than a person's

can we???

> could you read and summarize 75 novels with a 0% error rate?

why… would I want that? I read novels because I like reading novels? I also think LLMs are especially bad at summaries specifically, since nothing in the architecture distinguishes "important" from "unimportant". The point of a summary is to keep only the important points, so the task clashes directly with the architecture.

> provide a page reference to all of the passages written in iambic pentameter?

no LLM can do this. LLMs are notoriously bad at analysing this kind of formal style element, because their tokenisation hides exactly the syllable and stress information you would need to detect meter. why would you pick this example?

> Meanwhile an LLM could produce a summary, with citations generated and tracked by non-AI systems, with an error rate comparable to a human (assuming the human was given a few months to work on the problem) in seconds.

I still have not seen any evidence for this, and it still does not address the point that the summary would be pretty much unreadable.

[–] jackr@lemmy.dbzer0.com 2 points 1 month ago (3 children)

> The study of this in academia

you are linking to an arxiv preprint. I do not know these researchers. there is nothing that indicates to me that this source is any more credible than a blog post.

> has found that LLM hallucination rate can be dropped to almost nothing

where? It doesn't seem to be in this preprint, which is mostly a history of RAG and mentions hallucinations only as a problem affecting certain types of RAG more than other types. It makes some relative claims about accuracy that suggest including irrelevant data might make models more accurate. It doesn't mention anything about “hallucination rate being dropped to almost nothing”.

> (less than a human)

you know what has a 0% hallucination rate about the contents of a text? the text

> You can see in the images I posted that it both answered the question and also correctly cited the source which was the entire point of contention.

this is anecdotal evidence, and also not the only point of contention. Another point was, for example, that ai text is horrible to read. I don't think RAG (or any other tacked-on tool they've been trying for the past few years) fixes that.
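
for reference, a minimal sketch of the kind of pipeline being claimed up-thread, so it's clear what's being argued about: the retrieval and citation bookkeeping happen in ordinary code, and only the summary text comes from the model. every name here is a hypothetical stand-in (call_llm especially), not any particular product:

```python
# Sketch of "citations tracked by non-AI systems": sources are attached
# programmatically from the retrieval step, not generated by the model.
# All functions are made-up placeholders.

def retrieve(query: str, corpus: dict[str, str], k: int = 3) -> list[tuple[str, str]]:
    """Naive keyword retrieval: return (doc_id, text) pairs mentioning the query."""
    hits = [(doc_id, text) for doc_id, text in corpus.items()
            if query.lower() in text.lower()]
    return hits[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return "summary of the retrieved passages goes here"

def answer_with_citations(query: str, corpus: dict[str, str]) -> str:
    hits = retrieve(query, corpus)
    context = "\n".join(text for _, text in hits)
    summary = call_llm(f"Summarise, for the question {query!r}:\n{context}")
    # The citations come from retrieval, so they can't be hallucinated,
    # though they can still be irrelevant (and the summary can still be wrong).
    cites = ", ".join(doc_id for doc_id, _ in hits)
    return f"{summary}\n[sources: {cites}]"

docs = {"doc-1": "the text being summarised", "doc-2": "another text entirely"}
print(answer_with_citations("text", docs))
```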

[–] jackr@lemmy.dbzer0.com 2 points 1 month ago (5 children)

see, the problem is that I am not going to be reading that text, because I know it is unreliable and ai text makes my eyes glaze over, so I will be clicking on all those links anyway until I find something that is reliable. On a search engine I can just click through every link, or refine my search with something like site:reddit.com OR site:wikipedia.org, or filetype:pdf, or something similar. With a chatbot, I need to write out the entire question, look at the four or so links it provided, and then reprompt it if it doesn't contain what I'm looking for. I also get a limited number of searches per day because I am not paying for a chatbot subscription. This is completely pointless to me.
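
for what it's worth, that refine-and-retry loop is trivial to script against any SearXNG-style instance too. a sketch, with a made-up instance URL and the standard /search?q= endpoint:

```python
# Sketch: building refined search URLs with the usual operators.
# The instance URL is a hypothetical placeholder.
from urllib.parse import urlencode

BASE = "https://searx.example.org/search"  # made-up SearXNG instance

def refined(query: str, site: str | None = None, filetype: str | None = None) -> str:
    """Append site:/filetype: operators to a plain query and build the URL."""
    if site:
        query += f" site:{site}"
    if filetype:
        query += f" filetype:{filetype}"
    return f"{BASE}?{urlencode({'q': query})}"

print(refined("stubsack federation", site="awful.systems"))
print(refined("rag hallucination study", filetype="pdf"))
```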

[–] jackr@lemmy.dbzer0.com 3 points 1 month ago

good basilisk save us all. so they built a script for their chatbot that allows it to purchase more chatbots? seems like a great use of money. Also, what's with the insanely placed em dashes? did conway write this for him, or has his brain been rotted so much that he writes like an LLM? large parts seem human-written at least…

[–] jackr@lemmy.dbzer0.com 5 points 2 months ago (3 children)

the screenshot is very very clearly LLM generated right? This is so insanely stupid

[–] jackr@lemmy.dbzer0.com 4 points 2 months ago

hate it when you are working on a major featue for the next release but tim keeps continvoucly morging

[–] jackr@lemmy.dbzer0.com 16 points 2 months ago (9 children)

so I fail to see why I should be using an LLM at all then. If I am going to the webpages anyway, why shouldn't I just use startpage/searx/yacy/whatever?

[–] jackr@lemmy.dbzer0.com 15 points 2 months ago (17 children)

so we are taking the "regular search which has always given you garbage", automatically feeding that garbage to the hallucinator to be summarised, and we are supposed to trust the output somehow?

[–] jackr@lemmy.dbzer0.com 3 points 2 months ago (2 children)

unfortunately, Julia has been adding "agentic code" to their codebase for a while now.

[–] jackr@lemmy.dbzer0.com 3 points 2 months ago* (last edited 2 months ago)

bluemonday1984@awful.systems update: quokka reached out to me, and apparently you had been banned on another instance for report abuse, and that ban had synchronised to quokk.au. You should be unbanned now, which means the next stubsack should federate again.

E: I do not know how tags work
E2: why does that format to a mailto link?

[–] jackr@lemmy.dbzer0.com 5 points 2 months ago (1 children)

OT: I tried switching away from my instance to quokk.au, because it's becoming harder and harder to justify being on a pro-ai instance, but it seems that your posts stopped federating recently, which makes that a bit harder to do. Any clue why?


cross-posted from: https://sh.itjust.works/post/36201155

We're sorry we created the Torment Nexus
