this post was submitted on 01 Apr 2026
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Thanks for the additional information. However, where is the evidence that these are submissions by Anthropic's employees, as this post claims? Those submissions all appear to be from random authors.
I would think this could be solved easily by requiring contributors to have some reputation before a PR is considered. Otherwise, with nothing preventing anyone from making an account and submitting anything at all, spam is inevitable, regardless of what tools were used to generate it.
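Something like the sketch below is what I have in mind, assuming a GitHub-hosted project: count an author's previously merged PRs via the search API and route accounts with no merge history into a separate, low-priority queue. The repo name, queue labels, and threshold here are placeholders, not any project's actual policy.

```python
import os

import requests

# Sketch only: gate PRs on whether the author has any previously merged PRs
# in this repo. Repo name, labels, and threshold are illustrative placeholders.
GITHUB_API = "https://api.github.com"
REPO = "example-org/example-repo"           # hypothetical repository
TOKEN = os.environ.get("GITHUB_TOKEN", "")  # optional; raises the API rate limit


def merged_pr_count(author: str) -> int:
    """Count the author's previously merged PRs in REPO via the GitHub search API."""
    resp = requests.get(
        f"{GITHUB_API}/search/issues",
        params={"q": f"repo:{REPO} author:{author} is:pr is:merged"},
        headers={"Authorization": f"Bearer {TOKEN}"} if TOKEN else {},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]


def triage(author: str) -> str:
    """Route authors with no merge history to a low-priority queue."""
    return "normal-review" if merged_pr_count(author) > 0 else "needs-reputation"


if __name__ == "__main__":
    print(triage("some-new-account"))  # hypothetical username
```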
The issue has more to do with the burden of reviewing code versus the ease with which a poor contribution, one that isn't worth reviewing, can be made. The signal-to-noise ratio becomes so bad that maintainers are, in many cases, rejecting contributions made with LLMs out of necessity. Hiding LLM tell-tales, as the prompt in question here aims to do, compounds the unethical and arrogant assumption that the contributions would somehow become more useful that way. As if the commit message structure, comments, or discussion style were the problem (when they merely suggest the work is by an LLM), and not the low quality of the code changes themselves (as is, by and large, the case).
As you point out, that is a more general discussion, and not specific to Anthropic employees.
Your suggested solution leaves me wanting to sigh. That's what many open source projects have needed to do: reject all external contributions. Modern software is extensively built on open source, and on work done by millions of developers for free. There is good will here, and hard work, carried out under a sense of "furthering humanity", where you just hope that you are able to contribute in some way.

Spam wasn't a problem before LLMs. The goal of spam is to pass filters in order to cause some kind of harm; that takes effort for humans, but is trivial for a bullshit generator. Which is even worse than my take, which was that these contributions were well intended but delusional about their usefulness. Though I'm sure the motivation to sabotage projects exists. I'm not sure how "active and deliberate sabotage" would paint a better picture of Anthropic employees, but it seems like you actually get why we might find it particularly repulsive?
In any case, if we assume best intentions, and that there can be value in contributions made by, or with the help of, LLMs, then lying about this in PRs is both unethical and in contradiction with the altruistic mindset of open source development. Thinking your LLM-based contribution is special, as opposed to all the other slop, and thus doesn't deserve to be put in some low-priority review queue, so you lie about it, and instruct your LLM to lie about it, etc., is exactly the kind of skill issue and arrogant delusion that pisses people off. And what a monumental disaster for humanity it is that what LLMs have managed to do is force many open source maintainers to reject external contributions, not just those "by AI" but all of them, since it is too costly to find valuable contributions in a sea of slop.