lagrangeinterpolator

joined 11 months ago
[–] lagrangeinterpolator@awful.systems 9 points 1 day ago (1 children)

I attended a town hall hosted by the department at my university, supposedly for general discussion about department affairs. Considering the university had recently made moves such as adding "AI" into the very name of the department, I suspected that much of the discussion would be about AI. (I realize I'm doxxing myself, but whatever.) I mostly came for the free food, but I was also interested in seeing what people thought about AI.

The event started with a talk by a prominent professor with major administrative power in the department, and indeed the talk was mostly about AI. His view was that he personally didn't like AI, but he believed that it had changed the world (particularly in programming) and that it was here to stay. One of his justifications for pivoting the department to AI was ensuring that universities had some say in AI, rather than letting all the control go to unaccountable corporations.

The reaction from the audience was a pleasant surprise to me. He asked how many people were excited about AI (hardly anyone) and how many were worried (most of the audience). By far the most amusing moment was when someone asked, "What if the assumption that AI is inevitable is wrong? What if AI does not live up to its promises?" (Sadly, I don't remember the person's exact words.) The professor's response was that by this point, so many trustworthy, smart, prominent people who definitely wouldn't fall for scams have adopted AI. He trusts those people, so he trusts that AI is genuine. I don't know if the audience member accepted this explanation, but I hope not. Apparently, our modus operandi is FOMO.

The pizza was only ok, not really worth a 90-minute event.

This really goes to show how much they need to rely on the LLMentalist effect, even as the AI boosters insist that the AI is totally different now and that everything changed in the last few months. They do not care about creating a useful, reliable tool. That concept doesn't even occur to them; why do that when AI is magic?

In any case, they are incapable of creating a useful, reliable tool. Deep down, the only thing the AI companies have at their disposal is the ELIZA effect. OpenAI has every incentive not to truly eliminate AI psychosis, because they need engagement. They only want to mitigate the extreme cases where people go insane and cause bad PR for them. But mild AI psychosis is totally fine; it's great when people are addicted to your product and make the numbers go up!

[–] lagrangeinterpolator@awful.systems 5 points 1 day ago (1 children)

Somehow this is no worse than his usual fare, such as a thumbnail that is just a bunch of colored lines resembling a line chart without representing any actual data, with a few random points marked "Dark Farms" and "Human Zoo".

No, I'm not kidding.

Unfortunately, our problem right now is not Donna the below-average Democrat but Donald the fascist. And when it comes to fascists I do not ask if they are above or below average.

[–] lagrangeinterpolator@awful.systems 15 points 1 week ago* (last edited 1 week ago) (3 children)

The fire code thing really is an excellent example of LessWrong Brain. Fire truck drivers insist on needlessly large trucks (no citation), which makes roads 30% wider than they would otherwise be (no citation), which has "probably" "non-trivially" contributed to larger cars (no citation), leading to enough additional road fatalities to cancel out the lives saved by stricter fire codes (no citation).

The LessWrong Brain argument starts with a deliberately contrarian conclusion and proves it with a Rube Goldberg chain of logical syllogisms. Of course, citations are strictly optional, and they are free to misinterpret the ones they do have as they see fit. The only real standard for each claim is "looks good to me", but you are supposed to be impressed that they managed to string a dozen of them together to reveal some shocking, deep truth about the world that nobody else knows. The AI 2027 nonsense is an infamous example of this.

He uses the word "fermi", which is cult jargon based on Fermi estimation, a.k.a. guessing shit with back-of-the-envelope calculations. Not exactly what you need if you want to convince people to reform fire codes, especially when you have zero citations for anything.

I guess people just aren't rational enough, and the only reason the fire codes are so irrational is because people are emotional about fire codes. Firefighters are apparently revered as heroes, when it is the LWers who should be the heroes. After all, firefighters merely save people from fires, while LWers buy multimillion dollar mansions to talk about saving quadrillions of hypothetical people from hypothetical basilisks!

[–] lagrangeinterpolator@awful.systems 10 points 1 week ago (1 children)

It's fine, spyware is only a risk when it's bad people's spyware. It's totally fine when it's Anthropic™-approved spyware!

As for Mythos catching things, maybe they should have used Mythos on their very own Claude Code, considering that it has hilariously obvious security exploits, such as this one, which interpolates an arbitrary string into a shell command. Actually, never mind, I don't see anything wrong here; maybe we should burn another $20k in electricity running Mythos on it again to find out.
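(For anyone unfamiliar with the bug class: this is roughly what string-into-shell injection looks like. A toy Python sketch of the general pattern, not the actual Claude Code bug; the function names and the git example are mine.)

```python
import subprocess

def clone_repo_unsafe(url: str) -> None:
    # BAD: url is pasted straight into a shell string, so
    # url = "x; rm -rf ~" would run the rm as well.
    subprocess.run(f"git clone {url}", shell=True, check=True)

def clone_repo_safe(url: str) -> None:
    # OK: argument vector, no shell; url arrives as a single argv entry.
    subprocess.run(["git", "clone", url], check=True)
```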

[–] lagrangeinterpolator@awful.systems 9 points 1 week ago* (last edited 1 week ago) (1 children)

In basically every case in history where people decided to kill a bad king, there was a period of chaos and violence that followed it. The killing of Charles I happened during the English Civil War, and the killing of Louis XVI happened during the French Revolution. This has happened many times in Chinese history, with the fall of an imperial dynasty leading to several decades of civil war (most recently in the early 1900s). But I guess if you have a big clever brain with big clever thoughts, you don't need to look at history.

If the only way to get rid of a bad king is to kill him, he will do anything he can to defend his power, including using as much violence as necessary. (People generally do not like being killed.) Even if you successfully get rid of him, good luck establishing a proper government afterwards with all the violence you've caused. And who knows if the new king is gonna be better or worse? A better system would instead have a mechanism that replaces officials on a regular basis, say every few years, and ensure that these replacements are peaceful. Oh wait, that's liberal democracy. If we do something boring like support democracy, how will people ever think of us as special, clever thinkers with bold, contrarian thoughts?

It’s still One Person. A mortal, fleshy person. Their defence is that they’re inoffensive, things are stable, nothing is directly their fault and people are bound by law and oath.

Bro, your system involves giving all the power to one person. You cannot then say they have no responsibility or that they're "inoffensive" when they abuse it.

I've seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well on one out of 50 attempts at an easy task, so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software became far more buggy and insecure.

Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I'm not using AI, and also saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering, and what makes this any different?

[–] lagrangeinterpolator@awful.systems 0 points 3 weeks ago (1 children)

I think the main difference here is that breaking RSA now just requires scaling up existing approaches, while breaking LWE (learning with errors) or anything like that would need a major conceptual breakthrough. The former possibility is much more likely, and in any case, cryptographers are the most paranoid people on the planet for a reason.

Unfortunately, one can never be sure about much in cryptography until P vs NP is solved (and then some).

(Of course, just because some people say that scaling up is enough doesn't mean it's actually true. For breaking RSA, we now have Shor's algorithm, while the only evidence the AI bros have for superintelligence coming from scaling is "trust me bro.")

[–] lagrangeinterpolator@awful.systems 9 points 4 weeks ago* (last edited 4 weeks ago) (5 children)

This is what happens when your worldview is based on anime.

(A lot of anime has heavy themes, but most people understand that it's not real life, just like all such art. Unlike Yud, most people's worldviews on coding and math are based on actual coding and math.)

We can see that one 9 of availability is 90% = 0.9, two 9s is 99% = 0.99, three 9s is 99.9% = 0.999, etc. In general, for a positive integer n, n 9s of availability is 1 - (1/10)^n, and we can extrapolate that to non-integer values of n. The value γ needed for 87.5% availability is the solution to 1 - (1/10)^γ = 7/8, i.e. γ = log_10(8) ≈ 0.903089987. γ is transcendental by Gelfond-Schneider (see this for a reference proof).
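(A quick numerical sanity check, as a throwaway Python sketch; the variable names are mine:)

```python
import math

# n nines of availability = 1 - 10**(-n); solve 1 - 10**(-gamma) = 7/8.
gamma = math.log10(8)        # = 3 * log10(2)
print(gamma)                 # 0.9030899869919435
print(1 - 10 ** (-gamma))    # ~0.875, i.e. 87.5% availability (up to float rounding)
```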

Right now, Sora is at zero 9s of availability.

[–] lagrangeinterpolator@awful.systems 9 points 1 month ago* (last edited 1 month ago) (3 children)

By far the dumbest "feature" in the codebase is this thing called "Buddy" (described in a few places, such as here). Honestly, I don't really know what it's for or what the point is.

BUDDY - A Tamagotchi Inside Your Terminal

I am not making this up.

Claude Code has a full Tamagotchi-style companion pet system called "Buddy." A deterministic gacha system with species rarity, shiny variants, procedurally generated stats, and a soul description written by Claude on first hatch like OpenClaw.

...

On top of that, there's a 1% shiny chance completely independent of rarity. So a Shiny Legendary Nebulynx has a 0.01% chance of being rolled. Dang.

Great, so they were planning on a gacha system where you can get an ASCII virtual pet that, uhh, occasionally makes comments? Truly a serious feature for a serious tool for the serious discipline of software engineering. Imagine if IntelliJ decided to pull this bullshit.
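(The quoted 0.01% only works out if the Legendary rate itself is 1%, since the shiny roll is independent: 0.01 × 0.01 = 0.0001. A toy Python sketch of that kind of roll; the rarity table is my guess, not the actual Claude Code source:)

```python
import random

# Guessed rarity table; the 1% shiny chance is from the quote above.
RARITY_WEIGHTS = {"common": 0.70, "uncommon": 0.29, "legendary": 0.01}
SHINY_CHANCE = 0.01

def roll_buddy(rng: random.Random) -> tuple[str, bool]:
    rarity = rng.choices(list(RARITY_WEIGHTS), weights=list(RARITY_WEIGHTS.values()))[0]
    shiny = rng.random() < SHINY_CHANCE  # rolled independently of rarity
    return rarity, shiny

# P(shiny legendary) = 0.01 * 0.01 = 0.0001, i.e. the quoted 0.01%
print(roll_buddy(random.Random(42)))
```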

But also, Claude Code is leaning hard into gambling addiction — the “Hooked” model. You reward the user with an intermittent, variable reward. This keeps them coming back in the hope of the big win. And it turns them into gambling addicts.

The Onion could not have come up with a better way to illustrate this very point.
