what text are you reading that has a 0% error rate?
as I said, the text has a 0% error rate about its own contents, which is what the LLM is summarising and to which it adds its own error rate. Then you read that and add your own error rate on top.
the question is can we make a system that has an error rate that is close to or lower than a person's
can we???
could you read and summarize 75 novels with a 0% error rate?
why… would I want that? I read novels because I like reading novels? I also think LLMs are especially bad at summaries, since the architecture makes no distinction between "important" and "unimportant". The whole point of a summary is to keep only the important points, so the two clash.
provide a page reference to all of the passages written in iambic pentameter?
no LLM can do this. LLMs are notoriously bad at analysing this kind of stylistic element because of their architecture (tokenisation obscures the syllable and stress patterns that metre depends on). why would you pick this example
Meanwhile, an LLM could produce a summary, with citations generated and tracked by non-AI systems, with an error rate comparable to a human's (assuming the human was given a few months to work on the problem), in seconds.
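For what it's worth, the "citations tracked by non-AI systems" part of that claim can at least be made concrete: the LLM writes the summary, but a plain deterministic checker validates every citation against the source. A minimal sketch, where the passage numbering, `[n]` citation format, and word-overlap heuristic are all my own assumptions rather than anything described in this thread:

```python
import re

def split_passages(text):
    """Split source text into numbered passages (here: one per non-empty line)."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    return {i + 1: ln for i, ln in enumerate(lines)}

def verify_citations(summary, passages, min_overlap=0.2):
    """Non-AI citation check: every [n] in the summary must point at a real
    passage and share some vocabulary with it. Returns a list of problems."""
    problems = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary):
        for ref in re.findall(r"\[(\d+)\]", sentence):
            pid = int(ref)
            if pid not in passages:
                problems.append((pid, "no such passage"))
                continue
            sent_words = set(re.findall(r"\w+", sentence.lower()))
            pass_words = set(re.findall(r"\w+", passages[pid].lower()))
            overlap = len(sent_words & pass_words) / max(len(sent_words), 1)
            if overlap < min_overlap:
                problems.append((pid, "low overlap with cited passage"))
    return problems

source = """The whale was first sighted off Nantucket.
Ahab swore revenge against the white whale.
The Pequod sailed at dawn."""

passages = split_passages(source)
summary = "Ahab vowed revenge on the white whale [2]. The ship left at dawn [3]."
print(verify_citations(summary, passages))  # → [] (every citation checks out)
```

This doesn't make the summary itself more accurate, of course; it only guarantees that each claim points at a passage that actually exists and loosely matches, which is the part a non-AI system can track.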
I still have not seen any evidence for this, and it still does not address the point that the summary would be pretty much unreadable


Which makes that a bit harder to do. Any clue why?
I think Homestucks figured out how to prevent this ages ago