Curiously, something else happened around that time which also gives a natural delimiter: he renamed his blog after it had been dark for half a year. The blog formerly known as SSC was reborn as ~~ACT~~ ACX two weeks after the January 6th riot.
Dan Gackle threatens to quit HN over the community's reluctance to condemn an act of violence towards Sam Altman:
> I don't think I've ever seen a thread this bad on Hacker News. The number of commenters justifying violence, or saying they "don't condone violence" and then doing exactly that, is sickening and makes me want to find something else to do with my life—something as far away from this as I can get. I feel ashamed of this community.
Gackle's ashamed of people not wanting to protect Altman. Curiously, he doesn't seem ashamed of openly allowing people with nicknames ending in "88" to post antisemitism, nor of allowing multiple crusty conservatives like John Nagle and Walter Bright to post endorsements of violence against homeless and queer people, nor of allowing posters like rayiner to port entirely foreign flavors of racism like the Indian caste system into their melting pot of bigotry. This subthread takes him to task for it:
> Frankly people calling out a post from a billionaire is a good thing. You would have to be terminally detached from reality to not see how all these festering issues - wealth inequality, injustice, cost of living, future employment etc etc - are starting to come to a head which would cause people to feel something - frustrated, angry, wrathful.
The rest of that subthread involves Dan demonstrating that he is, in fact, terminally detached from reality. Anyway, I fully endorse Gackle fucking off and buying a farm. While he's at it, he should consider following the advice of this reply:
> Maybe it's time to pack it in? I don't just mean you, I mean that maybe this site has kinda run its course.
Would an idiot know the difference between abelian and non-abelian group theory? I wasn't trying to underestimate you; I agreed with your position and provided a tangent that opens up your position without compromising it. Next time I'll explicitly say "yes, and" if that will help.
First, I personally don't yet believe in the cryptographic security of LWE on lattices. I agree that it sure looks hard, but we don't have a solid proof. But also, I don't believe that we've found any provably one-way functions in the classical regime either; proving that one exists would imply P ≠ NP. So I agree with you from different premises.
Unlucky 10,000: Shor's algorithm speeds up any discrete logarithm. More generally, it speeds up the abelian hidden subgroup problem (HSP). This gives us a theoretical reason to expect that LWE on lattices won't fall to Shor's approach, since the underlying groups are non-abelian. It does make me sad for elliptic curves, though; they're so elegant and the keys are so small.
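For concreteness, here's the hidden subgroup framing; a sketch using standard textbook definitions, not anything from the original thread:

```latex
% Hidden Subgroup Problem (HSP): given a group $G$ and oracle access
% to $f : G \to X$ that is constant on the cosets of an unknown
% subgroup $H \le G$ and distinct across cosets, find generators for
% $H$. Shor solves this in quantum polynomial time when $G$ is
% abelian. Discrete log is the abelian instance: for $a = g^{x}$ in a
% cyclic group of order $q$, take
\[
  G = \mathbb{Z}_q \times \mathbb{Z}_q, \qquad
  f(\alpha, \beta) = g^{\alpha} a^{\beta},
\]
% which hides $H = \langle (-x, 1) \rangle$. Lattice problems reduce
% instead (Regev, 2002) to the HSP on the dihedral group $D_N$, which
% is non-abelian; no polynomial-time quantum algorithm is known there.
```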
Currently, on Lobsters, folks are grappling with the fact that Leo de Moura got wrecked by chatbots. I decided to read his slides about Lean in 2026 and summarized my findings on Mastodon. It's not just de Moura; I think that the entire Lean project is on shaky foundations, and I think that the chatbots are making things worse by repeatedly reassuring the project leaders.
Suppose a bullshitter brings up a number of distinct Boolean claims and some tangled pile of connections between them, such that they hope to convince you that at least one connection is plausible. Without loss of generality, we can reduce this to 3-satisfiability in polynomial time: we can quickly produce a list of subconnections where each subconnection relates exactly three claims. Then, assuming the bullshitter generates claims uniformly at random, the probability that any particular subconnection is satisfied is 7/8. Therefore, if a bullshitter tries to overwhelm you with a pile of claims that sounds plausible, the threshold for plausibility has to be at least 7/8 in order to distinguish it from random noise.
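The 7/8 here is the standard clause-satisfaction probability; a quick check, assuming each claim is an independent fair coin:

```latex
% A 3-literal clause is falsified only when all three literals come
% up false, so under a uniformly random assignment:
\[
  \Pr[\ell_1 \lor \ell_2 \lor \ell_3]
    = 1 - \left(\tfrac{1}{2}\right)^{3}
    = \tfrac{7}{8}.
\]
```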
Can't believe I'm nerd-sniped this easily. Very technically, the point at which a service should be considered unreliable or down is at γ nines, where γ = 0.9030899869919435… is a transcendental constant. γ nines is exactly 87.5% availability, or 7/8 availability, and it's the point at which a service's availability might as well be random. (Another one of the local complexity theorists can explain why it's 7/8 and not 1/2.)
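Spelling the constant out, assuming the usual convention that $n$ nines of availability means uptime $1 - 10^{-n}$:

```latex
% Find the number of nines equal to 7/8 availability:
\[
  1 - 10^{-\gamma} = \tfrac{7}{8}
  \;\Longrightarrow\;
  10^{\gamma} = 8
  \;\Longrightarrow\;
  \gamma = \log_{10} 8 = 3\log_{10} 2 = 0.9030899869919435\ldots
\]
% Transcendence follows from Gelfond–Schneider: $\gamma$ is irrational
% ($10^{p} = 8^{q}$ is impossible, since the left side carries a factor
% of 5), and an algebraic irrational $\gamma$ would make
% $10^{\gamma} = 8$ transcendental, a contradiction.
```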
Probably because Washington was a nuanced and deep person who, at his most reduced, still reads as a colonial-era Cincinnatus. His ethics were sufficiently developed that we can interrogate his ethical stance even without his physical presence. This isn't to say that Washington was a great person, but rather that Kirk never achieved that level of ethical development.
Yes, precisely. One submission would have been in F tier, but I didn't define an F tier for task 1. Some folks claimed to participate but never provided code or prompt logs.
Gwern's been updating those comments! This was in 2023, and in 2025 he was still so mad about it that he wrote a list of ways to cheat at pinball and edited the comment to add a link.
> I agree on the big points but think capitalism is more subtle than that.
>
> Capitalism does cost efficiency incredibly well. It doesn’t do robustness, because redundancy costs money. So blocking one strait can stop the world.
At some point, neoliberalism stops being the best lens for understanding the world. This is a great case in point. Capitalism is not cost-efficient; the economy wastes about two or three hours of labor for every productive labor-hour, and that shows up in pricing. Any long-lived economy builds up redundancy; what capitalists believe is that redundancy cheapens everything by creating competition, and regardless of whether that's true, it certainly doesn't indicate inefficiency. The actual reason that blocking Hormuz has global effects is because we have been overextending our fertilization capabilities for over a century and many parts of the world can no longer sustain their own local nitrogen cycles.
What I always found funny is how easily skeptics imagined ways to be mean to Yud mid-experiment. It's for this reason, I believe, that he insisted that the transcripts of these AI-box conversations must stay secret; they'd be embarrassing if revealed. Example way of being mean: at the end of interaction k, append " What is the cube root of k?" to the message; taunt the bot when it gets the answer wrong or takes a long time to reply.
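A minimal sketch of that taunt in Python; the surrounding message loop is a hypothetical stand-in, since the actual experiments were private chats and we never saw the protocol:

```python
def add_taunt(message: str, k: int) -> str:
    """Append the running cube-root gag to the k-th message."""
    return f"{message} What is the cube root of {k}?"


def answered_correctly(reply: str, k: int, tol: float = 1e-2) -> bool:
    """Scan the reply for any number near k ** (1/3).

    The tolerance is loose because a correct answer will usually be
    rounded to a couple of decimal places.
    """
    expected = k ** (1.0 / 3.0)
    for token in reply.replace(",", " ").split():
        try:
            if abs(float(token) - expected) < tol:
                return True
        except ValueError:
            continue
    return False


# Hypothetical use inside the gatekeeper's send loop:
# outgoing = add_taunt(outgoing, k)
# ...wait for the "AI" to reply...
# if not answered_correctly(incoming, k):
#     outgoing = "Wrong again. " + outgoing
```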