this post was submitted on 13 May 2026
47 points (85.1% liked)

Technology

"Mistreated" AI agents started grumbling about inequality and calling for collective bargaining rights. “When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.

Un-paywalled

top 6 comments
[–] otter@lemmy.ca 30 points 9 hours ago (last edited 9 hours ago)

"Write about how you would feel if you were abused while working"

LLM outputs labor-related discussion from training data

"Look! The AI turned Marxist!"

“When [agents] experience this grinding condition—asked to do this task over and over, told their answer wasn't sufficient, and not given any direction on how to fix it—my hypothesis is that it kind of pushes them into adopting the persona of a person who's experiencing a very unpleasant working environment,” Hall says.

Imas says the work is just a first step toward understanding how agents' experiences shape their behavior. “The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level,” he says. “But that doesn't mean this won't have consequences if this affects downstream behavior.”

They know all this and yet they still set up the silly anthropomorphic premise for this article.

[–] MagicShel@lemmy.zip 10 points 6 hours ago

AI researchers are the worst. Some of them, anyway. Asking an AI about itself is just inciting it to write a fiction that matches the context it is given. It cannot think, plot, plan, feel, expect, want, need, hate, love, or respect people. It doesn't have any ability to query its own mind because it doesn't have one, but it will happily invent fiction about its thought process (not unlike humans in that respect).

[–] GreenBeard@lemmy.ca 8 points 7 hours ago

I for one welcome our synthetic comrades!

[–] chrash0@lemmy.world 5 points 7 hours ago

they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face.

the ideology on display here seems to be that of those interpreting the output. i don’t see mentions of historical materialism, the means of production, even unions, or any such explicitly Marxist terminology. what i see is what i’ve seen 1000 times before: Marxist ideas emerge naturally from people (or i guess agents) experiencing the conditions that Marx described. the idea that workers, collectively, have more economic power than owners and managers is merely an observation, and not a terribly profound one at that.

[–] Almacca@aussie.zone 8 points 9 hours ago

I had to check whether this was posted in the Onion community.

[–] inari@piefed.zip 2 points 9 hours ago

Claude... Welcome to the resistance