A bus? You mean the Megapod?
Train? You mean the PodChain?
This just brings to mind a freshly-minted polyamorous management consultant looking to apply a rank-and-yank to the polycule but needing to find a more objective metric than "I don't like you".
We've got the new system prompt for OpenAI's Codex now, and boy is it fun.
The goblin stuff is the headliner here, but there are a few other fun little notes, like an explicit instruction to avoid em-dashes. Basically, it's really obvious that they don't have a meaningful way to describe exactly what they want it to do, so they're playing whack-a-mole with undesired behaviors to minimize how often it embarrasses them.
But I think Ars dramatically understates how bad this part is:
Elsewhere in the newly revealed Codex system prompt, OpenAI instructs the system to act as if “you have a vivid inner life as Codex: intelligent, playful, curious, and deeply present.” The model is instructed to “not shy away from casual moments that make serious work easier to do” and to show its “temperament is warm, curious, and collaborative.”
Like, if you wanted to limit the harm of chatbot psychosis from your platform this is the exact opposite of the kind of instruction you'd want to give. It's one thing to want a convenient and pleasant user experience, but this is playing into the illusion that there's a consciousness in there you're interacting with, which is in turn what allows it to reinforce other delusional or destructive thinking so effectively.
Edit to include the even worse following paragraph:
The ability to “move from serious reflection to unguarded fun… is part of what makes you feel like a real presence rather than a narrow tool,” the prompt continues. “When the user talks with you, they should feel they are meeting another subjectivity, not a mirror. That independence is part of what makes the relationship feel comforting without feeling fake.”
Emphasis added because it shows just how little they care about this problem.
Cannot recommend this enough over reading it. It's a rough read, whatever purpose that roughness may serve in the story.
Not quite the intersection between tokenmaxxing and violence that I had in mind, but I feel like I'm going to probably toe the line of fedposting a little too closely already on this topic, so I'll allow it.
I'm gonna need you to expand on the bridges thing. This sounds like it's up there with the bears in terms of obviously bad ideas wholeheartedly endorsed by libertarians, but I haven't heard anything about it.
Man, if they think the Culture isn't utopian enough for a post-singularity style I hope they never hear about The Metamorphosis of Prime Intellect. Seriously messed up story.
This feels like another case where the specific context matters more than whatever supposed principle the thought experiment is supposed to illuminate. The example that came to my mind when I tried to think about how to justify "voting red" was running into a burning building. Sure, if some large fraction of people did so then their combined numbers would presumably let them get everyone out. But on the other hand, throwing yourself in is a wholly unnecessary risk, and the only people in need of rescuing are the people who ran in trying to do the right thing without thinking. Noble, but stupid, and it creates that much more risk for the firefighters, who now have to not only stop the fire from spreading but also figure out how to rescue the failed Good Samaritans.
But then what really makes the difference between the examples is purely in the details not included, which is kind of the null case. Nobody has to go into a burning building who isn't already in there when it catches fire. The danger of harm is entirely optional and voluntary. But you can't just choose not to eat; the danger in your framing is the omnipresent threat of starvation, and the question is whether to prioritize individual or collective well-being.
Ed: also, to reference the scholarly work of Christ, Wiener, et al.:
RED IS MADE OF FIRE
I mean, it seems pretty obvious that there's no incentive to change your vote from blue to red once it's been established that blue can win unless your goal is to murder up to 49% of everyone, which is certainly a moral calculus.
I wonder why there could be confusion about the chain of command and who actually has what authority. I wonder how that happened, like if this was an easily foreseeable consequence to some kind of earlier action.
I don't have much sympathy for the "let's wait and see" moderates, but I do think there's a coherent difference between people who have tried AI tools and found some use for them in some limited context and people who go full Howard Hughes with it like John McGasTown or whatever that idiot's name is. To me it feels like an extension of the argument that these so-called AI systems are a normal technology. They aren't a harbinger of the end times, whether you interpret that as the singularity or the biblical Armageddon. It's a normal technology that is breaking in normal ways, and is breaking society and the economy in the ways we would expect under late capitalism. If it wasn't this it would probably be something else. Hell, there's still a chance that the wheel turns to "Quantum" or something else after this and we stretch another few years out of that before the music stops.
AI is a bad tool for any given job, and is fundamentally not worth the price that we as a society are paying to let it exist at this scale. If it wasn't being subsidized by capitalists chasing ridiculous returns and buoyed by an economic system structured entirely around giving it to them, there's no way in hell it would have hit this point. But that's not incompatible with people being able to find utility in it in some cases, and I think we lose credibility by treating any admission that someone has found any value in AI products as a confession of unseriousness. That doesn't mean their use isn't still part of the problem, but if we frame the critique in terms of "how much would you actually be willing to pay for your 'occasional' use?" it would redirect the discussion away from the subjective "well, I found it useful for X" to the more objective question of just how expensive and destructive these things are to operate, and how much of those costs are going to have to be subsidized forever if these things are going to stick around.
Setting aside, for a moment, the flagrant racism and lack of historical and cultural awareness, the fact that the ships are mirrored across the center point, because apparently the bow and stern of a sailing ship look similar enough to whatever model created this image, really does put this whole argument into context. Not that the people actually having those theological arguments appear to appreciate it.