MoreWrite
Post bits of your writing, and links to stuff you’ve written elsewhere, here for constructive criticism.
If you post anything here, try to specify what kind of feedback you would like. For example, are you looking for a critique of your assertions, creative feedback, or an unbiased editorial review?
If OP specifies what kind of feedback they'd like, please respect it. If they don't specify, don't take it as an invitation to debate the semantics of what they are writing about. Honest feedback isn’t required to be nice, but don’t be an asshole.
Sometimes I get stuck on everything I am doing and try to break out by essaying a weird creative thing and just going with whatever I can make in a single sitting. This time, that ended up being a teaser for a podcast that doesn't exist:
The background beep-boops are an aleatoric tune I made in Sonic Pi, a program that I can't get to work on my new(er) computer, so I had to run it on the half-busted laptop, sneakernet it over to the new one and tweak the WAV in Audacity. (Weren't we supposed to migrate off Audacity? I forget what the suggested replacement was.)
I haven't used the BSDs in a while so if you're a regular user I'd like to know whether you find this shitpost amusing or if I'm totally off with my stereotypes here.
For reasons outside the scope of this note, I have been reading Scott Aaronson's Quantum Computing Since Democritus for the first time since it was originally published.
As can happen in books based directly on lectures, it has more "personality" overtly on display than the average technical book. That goes for good and for ill. What Alice finds engaging, Bob can find grating, and vice versa. In this case, I noticed some passages that sound, well, smarmy. I personally find that there's a kind of self-deprecating arrogance on display, as though the book is saying, "I am a nerd, I hold the one true nerd opinion, and everything I assert is evident and simple if you are a nerd, which again, I am the defining example of." It's possible that I would have skipped past all that a decade ago, but now, I can't not see it.
There are big chunks of it that I'm not the best reader to evaluate. I'm a physicist who has casually studied computer science along with many other interests; I haven't tried to teach P vs NP in a classroom setting. But where the book does overlap with more serious interests of mine, I found it wanting. There's a part (chapter 9) about exploring where the rules of quantum theory could come from, and how the mathematics of the theory could potentially be derived from more basic premises rather than taken as postulates. I found this discussion badly organized and poorly argued. In 2013, it was historically shallow, and now in 2025, it's outdated.
Everything he says about Bohr is caricatured to the point of absurdity.
His history of the halting problem is conventional but wrong.
The last chapter is called "Ask me anything" and records a Q&A he held on the last day of the course upon which the book was based. It gets onto the topic of evolution, veers into naive adaptationism and blends that with social Darwinism... yeaahhhh.
Above all, we should ask whether the book delivers on its professed theme. Here's Aaronson in the preface, laying out what he considers the book's "central message":
But if quantum mechanics isn’t physics in the usual sense – if it’s not about matter, or energy, or waves, or particles – then what is it about? From my perspective, it’s about information and probabilities and observables, and how they relate to each other.
This is a defensible claim. All the way back in the 1930s, Birkhoff and von Neumann were saying that we should understand quantum physics by modifying the rules of logic, which is about as close to "quantum information" thinking as one could get before the subjects of computer science and information theory had really been invented. Later, E. T. Jaynes was fond of saying that quantum mechanics is an omelette that mixes up nature and our information about nature, and in order to make further progress in physics, we need to separate them. When undergrads came to John Wheeler asking for summer research projects, he liked to suggest, "Derive quantum mechanics from an information-theoretic principle!" But the question at hand is whether Aaronson's book succeeds at making a case. You can talk a lot about quantum information theory or quantum computing without convincing anyone that it illuminates the fundamental subject matter of quantum mechanics. Knuth's Art of Computer Programming is not an argument that classical electromagnetism is "about information".
While we're calling back to historical figures, we should take a moment to recall that John Bell replied to remarks like Wheeler's with, "Whose information? Information about what?" The responses to that have demonstrated that two rabbis will have three opinions. Or, as Wikipedia puts it, "Answers to these questions vary among proponents of the informationally-oriented interpretations." Aaronson does not do anything to sort this out. Instead, he lumps together multiple genera of interpretations and treats them as a single species.
Here's Aaronson a bit later:
Here, the physicists assure us, no one knows how we should adjust our intuition so that the behavior of subatomic particles would no longer seem so crazy. Indeed, maybe there is no way; maybe subatomic behavior will always remain an arbitrary brute fact, with nothing to say about it beyond “such-and-such formulas give you the right answer.”
Then he argues,
as the result of decades of work in quantum computation and quantum foundations, we can do a lot better today than simply calling quantum mechanics a mysterious brute fact.
What is this new improved perspective? Here's how his italicized paragraph about it begins:
Quantum mechanics is a beautiful generalization of the laws of probability: a generalization based on the 2-norm rather than the 1-norm, and on complex numbers rather than nonnegative real numbers.
That isn't just a "brute fact". It's the same "brute fact" that an ordinary textbook will tell you! It's the "fourth postulate" in Cohen-Tannoudji et al., equation (1.3) in Griffiths and Schroeter, page 9 of Zwiebach. All that Aaronson has done is change the jargon a tiny bit.
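The postulate in question fits in a few lines. As a sketch (mine, not the book's notation): classical probability fixes the 1-norm of a vector of nonnegative reals, while quantum theory fixes the 2-norm of a vector of complex amplitudes, with the Born rule converting amplitudes to probabilities.

```python
import math

# Classical probability: a state is a list of nonnegative reals whose
# entries sum to 1 (the 1-norm is fixed).
classical = [0.25, 0.75]
assert math.isclose(sum(classical), 1.0)

# Quantum theory: a state is a list of complex amplitudes whose squared
# magnitudes sum to 1 (the 2-norm is fixed). The Born rule turns
# amplitudes into outcome probabilities.
amplitudes = [(1 + 1j) / 2, 1j / math.sqrt(2)]
probs = [abs(a) ** 2 for a in amplitudes]
assert math.isclose(sum(probs), 1.0)

# Here both outcomes are equally likely even though the amplitudes
# differ in phase; relative phases only matter once states interfere.
assert math.isclose(probs[0], probs[1])
```

That's the "fourth postulate" again, dressed in list comprehensions instead of Dirac notation.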
Aaronson declares himself indifferent to the needs of "the people designing lasers and transistors". And fair enough; we all have our tastes for topics. But he has set himself the challenge of demonstrating that studying how to program computers that have not been built, and comparing them to computers that physics says can never be built, is the way to the heart of quantum mechanics.
Aaronson quotes a passage from Carl Sagan, thusly:
Imagine you seriously want to understand what quantum mechanics is about. There is a mathematical underpinning that you must first acquire, mastery of each mathematical subdiscipline leading you to the threshold of the next. In turn you must learn arithmetic, Euclidean geometry, high school algebra, differential and integral calculus, ordinary and partial differential equations, vector calculus, certain special functions of mathematical physics, matrix algebra, and group theory . . . The job of the popularizer of science, trying to get across some idea of quantum mechanics to a general audience that has not gone through these initiation rites, is daunting. Indeed, there are no successful popularizations of quantum mechanics in my opinion – partly for this reason. These mathematical complexities are compounded by the fact that quantum theory is so resolutely counterintuitive. Common sense is almost useless in approaching it. It’s no good, Richard Feynman once said, asking why it is that way. No one knows why it is that way. That’s just the way it is.
Aaronson follows this by saying that he doesn't need convincing: "Personally, I simply believe the experimentalists" when they say that quantum physics works. Again, fair enough on its own. But I think there's a failure of media literacy here. Sagan's Demon-Haunted World is all about the public understanding of science, the difference between authorities and experts, the challenge of becoming scientifically literate, and that kind of thing. What Sagan means by "what quantum mechanics is about" in this context is what physicists use the theory to do, day by day, and why we have confidence in it. Even if you come along with a better explanation of where the mathematics comes from, all that won't go away!
Aaronson tries to back up his perspective in chapter 9, where he makes the following contention:
Quantum mechanics is what you would inevitably come up with if you started from probability theory, and then said, let’s try to generalize it so that the numbers we used to call “probabilities” can be negative numbers.
This is a bait-and-switch, or, more charitably, poor organization. Later he will admit that he needs to introduce not just negative numbers but complex numbers too. What arguments does he give to justify bringing complex numbers into the picture? Why prefer ordinary quantum theory over what we might call "real-amplitude" quantum theory? He provides three suggestions. The first is based on a continuity argument ("if it makes sense to apply an operation for one second, then it ought to make sense to apply that same operation for only half a second"). He argues that this can only be made to work if the amplitudes are complex rather than only real. But this does not hold: we can bypass the issue simply by making sure that our real-amplitude quantum theory is analogous to the original.

To get technical about it: we can simply say that in real-amplitude quantum theory, the time-evolution operators belong to the subgroup of the orthogonal group that is continuously connected to the identity. This is exactly what would be analogous to regular quantum theory, where we make unitary operators by taking the exponential of -iHt, with H a Hamiltonian and t an amount of time. In the real-amplitude theory, we just use an antisymmetric matrix as a generator instead of an anti-Hermitian one.
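That claim is easy to check numerically. Below is a sketch of mine (not from the book): exponentiating a real antisymmetric generator always produces an orthogonal matrix, and evolving for half the time and composing twice reproduces the full evolution, exactly as exp(-iHt) does in the complex theory. The tiny Taylor-series expm is just to keep the example dependency-free.

```python
import math

def expm(A, terms=60):
    """Matrix exponential of a small square matrix via its Taylor series."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term @ A / k, accumulating A**k / k!
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][m] * Y[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

# An antisymmetric generator: A^T = -A.
theta = 1.3
A = [[0.0, -theta], [theta, 0.0]]

U = expm(A)                                          # evolution for time t
U_half = expm([[a / 2 for a in row] for row in A])   # evolution for time t/2

# exp of an antisymmetric matrix is orthogonal (here, rotation by theta)...
assert math.isclose(U[0][0], math.cos(theta), abs_tol=1e-9)
assert math.isclose(U[1][0], math.sin(theta), abs_tol=1e-9)

# ...and "half the time" always makes sense: U(t) = U(t/2) U(t/2).
U2 = matmul(U_half, U_half)
assert all(math.isclose(U[i][j], U2[i][j], abs_tol=1e-9)
           for i in range(2) for j in range(2))
```

Nothing about continuity forces the amplitudes to be complex; it only forces us to pick the connected component of the group, which the real theory has too.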
The second argument is that the number of parameters needed to specify a mixed state scales better for complex amplitudes than for real. This is a style of argument that has considerable cachet among aspiring reconstructors of the quantum formalism, but it too has shortcomings. Aaronson invokes the principle that states for independent quantum systems combine via the tensor product. He asserts that this is true, and then argues that this makes the parameter counting work out nicely for complex but not real amplitudes. Plainly, then, this case for complex amplitudes can't be better than the case for the tensor product. It replaces one mathematical "brute fact" with another. People who go into more depth about this invoke a premise they call "tomographic locality". The conceptual challenge is then: if tomographic locality failed to hold, would that actually be so bad? Would we find it stranger than, for example, quantum entanglement? See Hardy and Wootters (2010) and Centeno et al. (2024).
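The counting itself is easy to reproduce. Here's a sketch of mine using the standard parameter counts (d² - 1 real numbers for a d-dimensional complex density matrix, d(d+1)/2 - 1 for its real symmetric analogue): with complex amplitudes, local measurements plus correlations account for exactly the composite system's parameters; with real amplitudes, the composite has parameters that local tomography can't reach.

```python
def complex_params(d):
    # A d-dimensional complex density matrix (Hermitian, unit trace)
    # has d**2 - 1 real parameters.
    return d ** 2 - 1

def real_params(d):
    # Its real-amplitude analogue (symmetric, unit trace) has
    # d * (d + 1) / 2 - 1 parameters.
    return d * (d + 1) // 2 - 1

def local_tomography_count(pa, pb):
    # What local measurements plus correlations can determine:
    # pa + pb local parameters and pa * pb correlation parameters.
    return pa + pb + pa * pb

dA = dB = 2

# Complex amplitudes: the counts match exactly (15 == 15 for two qubits).
assert local_tomography_count(complex_params(dA), complex_params(dB)) == \
    complex_params(dA * dB)

# Real amplitudes: the composite of two "rebits" has 9 parameters, but
# local tomography only accounts for 8.
assert local_tomography_count(real_params(dA), real_params(dB)) == 8
assert real_params(dA * dB) == 9
```

This is the sense in which "tomographic locality" holds for complex amplitudes and fails for real ones.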
The third argument is given almost in passing. It's a "well, I guess that's nice" property which holds for the complex-amplitude theory and fails for the real-amplitude version. Bill Wootters noticed it. Of course, he also found something that works out nicely only when the amplitudes are real instead; see Wootters (2013) for a more recent explanation of that result, which he first published in 1980.
What Aaronson calls starting "directly from the conceptual core" strikes me instead as merely discarding some old prefatory material, like the Bohr model of hydrogen, and replacing it with new, like some chatter about classical computation. His "conceptual core" is the same old postulate. He just applies it in somewhat different settings, so he ends up doing matrix algebra instead of differential equations. I once thought that would be easier on students, but then I actually had to teach a QM class, and then I ended up "reviewing" a lot of matrix algebra.
A physicist who learned quantum mechanics the old-fashioned way, and who now sees "quantum" being hyped as the next Bitcoin, might well have some questions at this point. "So, you're telling me that these highly idealized models of hypothetical, engineered systems bring us closer to the secrets of the Old One than studying natural phenomena will? I'm sure you have your own good reasons for wanting to know if QURP is contained in PFUNK, but I want to understand why ice floats on water, why both iron and charcoal glow the same kind of red when they get hot, why a magnet will pick up a steel paperclip but not a shiny soda can. I want to map the galaxies with the radio waves from interstellar hydrogen, and I want to know what holds up a dead sun against the pull of gravity." And: "I get the desire for a 'conceptual core' to quantum physics. But have you actually isolated such a thing? From where I stand, it looks like you've just picked one of the important equations and called it the important equation. Shouldn't your 'conceptual core' be a statement with some punch to it, like the big drama premise of special relativity? What's your counterpart to 'each observer who feels herself motionless will measure the same speed of light'?"
Here's how Aaronson begins chapter 9:
There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.
This is wrong in a few ways. First, that "years of study"? Yeah, I saw complex probability amplitudes in my first term of college. Before they showed us all the blobby/cloudy pictures of electron orbitals, they took two minutes to explain what was being plotted. Our first full-blown quantum mechanics course was at the advanced age of ... sophomore year. And we're not talking about something squeezed in on the last day before summer vacation. See above regarding how it's the third equation in the first chapter of the ubiquitous standard undergrad QM textbook. This is not an idea sequestered in the inner sanctum of knowledge; it's babby's first wavefunction.
Second, the orthodox method is not really "historical". It can't be. The physicists who did all that work from 1900 through 1925–27 knew much more physics than college kids do today. They were professionals! Pick up the Dover reprint of the Sources of Quantum Mechanics collection, and see how many of the papers in it make sense using only first-year physics. Dirac was thinking about Poisson brackets, not a block on an inclined plane. The capsule "histories" in QM textbooks are caricatures, and sometimes quite poor ones at that.
Aaronson goes on:
Look, obviously the physicists had their reasons for teaching quantum mechanics that way, and it works great for a certain kind of student. But the “historical” approach also has disadvantages, which in the quantum information age are becoming increasingly apparent. For example, I’ve had experts in quantum field theory – people who’ve spent years calculating path integrals of mind-boggling complexity – ask me to explain the Bell inequality to them, or other simple conceptual things like Grover’s algorithm. I felt as if Andrew Wiles had asked me to explain the Pythagorean Theorem.
And then, did anyone clap?
This is a false analogy. I don't think the situation he describes is surprising, I am not convinced that it's an actual problem, and even if it is, I don't think Aaronson makes any progress toward a solution.
The Pythagorean theorem is part of the common heritage of all mathematics education. Moreover, it's the direct ancestor of the problem that Wiles famously solved; it's going to be within his wheelhouse. But a quantum field theorist who's been deep in that corner of physics might well not have had to think about Bell inequalities since they were in school. It's like asking an expert on the voyages of Zheng He how Charlemagne became Holy Roman Emperor.

There are multiple aspects of Bell inequalities that someone from a different specialization could want "explained", even if they remember the gist. First, there are plenty of questions about how to get a clean Bell test in the laboratory: how does one handle noise, how do we avoid subtly flawed statistics, what are these "loopholes" that experimentalists keep trying to close by doing better and better tests, and so on. Aaronson has nothing to say about this, because he's not an experiment guy. And again, that's entirely fair; some of us are best as theorists.

Second, there are more conceptual (dare I say "philosophical"?) questions about exactly which assumptions go into deriving Bell-type inequalities, how to divide those assumptions up, and what the violation of those inequalities in nature says about the physical world. Relatedly, there are questions about who proved what and when, what specifically Bell said in each of his papers, who built on his work and why, etc. Aaronson says very little about all of this. Nothing leaps out at me as wrong, but it's rather "101".

The third broad category of questions is about mathematical specifics: what particular combination of variables appears in which inequality, what bounds that combination is supposed to satisfy, etc. The expressions that appear in these formulae tend to look like rabbits pulled out of a hat. Sometimes there are minus signs and factors of root-2 and such floating around, and it's hard to remember where exactly they go.
Even people who know the import of Bell's theorem could well ask to have it "explained", i.e., to have some account given of where all those arbitrary-looking bits came from. I don't think Aaronson does particularly well on this front. He pulls a rabbit out of his hat (a two-player game with Alice and Bob trying to take the XOR of two bits), he quotes a number with a root-2 in it, and he refers to some other lecture notes for the details, which include lots of fractional multiples of pi and which themselves leave some of the details to the interested reader.
Aaronson leads into this rather unsatisfying discussion thusly:
So what is Bell’s Inequality? Well, if you look for an answer in almost any popular book or website, you’ll find page after page about entangled photon sources, Stern–Gerlach apparatuses, etc., all of it helpfully illustrated with detailed experimental diagrams. This is necessary, of course, since if you took all the complications away, people might actually grasp the conceptual point!
However, since I’m not a member of the Physics Popularizers’ Guild, I’m now going to break that profession’s time-honored bylaws, and just tell you the conceptual point directly.
The tone strikes me, personally, as smarmy. But there's also an organizational issue. After saying he'll "just tell you the conceptual point directly", he then goes through the XOR rigmarole, which takes more than a page, before he gets to "the conceptual point" (that quantum mechanics is inconsistent with local hidden variables). It's less direct than advertised, for sure. I have not systematically surveyed pop-science explanations of Bell's theorem prior to 2013, but the "page after page of entangled photon sources..." rings false to me.
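Since the complaint above is partly that the minus signs and root-2s look like rabbits pulled from a hat, here's one way to pin a couple of them down. This is a sketch of mine, not Aaronson's presentation: brute force over all deterministic classical strategies for the XOR game recovers the 3/4 bound, and the optimal quantum winning probability cos²(π/8) is where the famous root-2 lives.

```python
import itertools
import math

# The XOR game: Alice receives bit x and Bob receives bit y (uniformly
# random), they answer bits a and b without communicating, and they win
# iff a XOR b equals x AND y.
def win_probability(alice, bob):
    # alice[x] is Alice's answer on input x; likewise bob[y].
    wins = sum((alice[x] ^ bob[y]) == (x & y)
               for x in (0, 1) for y in (0, 1))
    return wins / 4

# Brute-force every deterministic classical strategy: 4 for Alice times
# 4 for Bob. (Shared randomness can't beat the best deterministic pair.)
best_classical = max(
    win_probability(a, b)
    for a in itertools.product((0, 1), repeat=2)
    for b in itertools.product((0, 1), repeat=2))
assert best_classical == 0.75

# With a shared entangled pair, the best winning probability is
# cos^2(pi/8) = 1/2 + 1/(2*sqrt(2)), about 0.854.
quantum = math.cos(math.pi / 8) ** 2
assert math.isclose(quantum, 0.5 + 1 / (2 * math.sqrt(2)))
assert quantum > best_classical
```

(The quantum value here is quoted, not derived; deriving it is exactly the part that takes the trigonometry with the fractional multiples of pi.)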
This is outside my own department, but I think there's a problem with Aaronson's treatment of Gödel's incompleteness theorems. He says that Gödel's first incompleteness theorem follows directly from Turing's proof that the halting problem is undecidable. This doesn't quite work, for reasons that are subtle but not too subtle for a technical text. The result conventionally known as Gödel's theorem is stronger than what you can get from the undecidability of the halting problem. In other words, the result the Turing-machine route gets you depends upon a more demanding precondition than mere consistency (something closer to soundness), and so it is somewhat less impressive than what was desired. My best stab at a semi-intuitive explanation would be in the vein of, "When you're discussing the consistency of mathematics itself, you have to be double-special-careful that ideas like the number of steps a Turing machine takes really do make sense."
The historical problem is that Turing himself did not prove the undecidability of the halting problem. He wasn't even focused on halting. His main concern was computing real numbers, where naturally a successful description of a number could be a machine that doesn't stop. The "halting state" as we know and love it today was due to Emil Post.
Moreover, this is one of the passages where Aaronson seems to be offering the one and only true Nerd Opinion. He is dismissive of any way to understand Gödel's theorems apart from the story he offers, to the extent that a person who had only read Aaronson would be befuddled by anyone who used Gödel numbering after 1936.
In summary, then: Quantum Computing Since Democritus makes halfhearted efforts at supporting its professed central thesis. It repeats oversimplifications, rather than correcting them. It addresses interesting topics, but not in a satisfying way.
(This is an expanded version of a comment I made, which I've linked above.)
Well, seems the tech industry’s prepared to pivot to quantum if and when AI finally dies and goes away forever. If and when the hucksters get around to inflating the quantum bubble, I expect they’re gonna find themselves facing some degree of public resistance - probably not to the extent of what AI received, but still enough to give the hucksters some trouble.
The Encryption Issue
One of quantum’s big selling points is its purported ability to break the encryption algorithms in use today - for a couple of examples, Shor’s algorithm can reportedly double-tap public-key cryptography schemes such as RSA, and Grover’s algorithm promises to supercharge brute-force attacks on symmetric-key cryptography.
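For the symmetric-key side, the scale of the threat is simple arithmetic. Here's a sketch (my own back-of-the-envelope, using the textbook query counts): Grover's quadratic speedup halves the effective key length, which is why the usual mitigation is just doubling symmetric key sizes rather than replacing the algorithms.

```python
import math

# Back-of-the-envelope arithmetic for the Grover threat (a sketch of
# the standard reasoning, not a claim about any particular machine):
# classical brute force on an n-bit key costs about 2**n trials, while
# Grover's algorithm needs roughly (pi/4) * 2**(n/2) quantum queries.
def classical_trials(n):
    return 2 ** n

def grover_queries(n):
    return (math.pi / 4) * 2 ** (n / 2)

# The quadratic speedup effectively halves the key length: AES-128's
# 2**128 work factor becomes roughly 2**64 quantum queries...
assert grover_queries(128) < classical_trials(65)

# ...which is why the standard mitigation is simply doubling key sizes:
# AES-256 still costs on the order of 2**128 quantum queries.
assert grover_queries(256) > classical_trials(127)
```

Shor's algorithm is the scarier one, since it breaks RSA outright rather than merely halving a work factor.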
Given this, I fully expect its supposed encryption-breaking abilities to stoke outcry and resistance from privacy rights groups. Even as a hypothetical, the prospect of such power falling into government hands is one that all but guarantees Nineteen Eighty-Four levels of mass surveillance and invasion of privacy if it comes to pass.
Additionally, I expect post-quantum encryption will earn a lot of attention during the bubble as well, to pre-emptively undermine such attempts at mass surveillance.
Environmental Concerns
Much like with AI, info on how much power quantum computing requires is pretty scarce (though in this case it’s because the machines more or less don’t exist yet, not because anyone is actively hiding/juicing the numbers the way AI corps do).
The only concrete number I could find came from IEEE Spectrum, which puts the power consumption of the D-Wave 2X (from 2015) at “slightly less than 25 kilowatts”, with practically all the power going to the refrigeration unit keeping it within a hair’s breadth of absolute zero, and the processor itself using “a tiny fraction of a microwatt”.
Given the minimal amount of info, and the AI bubble still being fresh in the public’s mind, I expect quantum systems will face resistance from environmental groups. Between the obscene power/water consumption of AI datacentres, the shitload of pollution said datacentres cause in places like Memphis, and the industry’s attempts to increase said consumption whenever possible, any notion that tech cares about the environment is dead in the (polluted) water, and attempts to sell the tech as energy efficient/environmentally friendly will likely fall on deaf ears.
It’s been a couple of weeks since my last set of predictions on the AI winter. I’ve found myself making a couple more.
Mental Health Crises
With four known suicides (Adam Raine, Sewell Setzer, Sophie Rottenberg and an unnamed Belgian man), a recent murder-suicide, and involuntary commitments caused by AI psychosis, there’s solid evidence to show that using AI is a fast track to psychological ruin.
On top of that, AI usage is deeply addictive, combining a psychic’s con with a gambling addiction to produce what amounts to digital cocaine, leaving its users hopelessly addicted to it, if not utterly dependent on it to function (such cases often being referred to as “sloppers”).
If and when the chatbots they rely on are shut down, I expect a major outbreak of mental health crises among sloppers and true believers, as they find themselves unable to handle day-to-day life without a personal sycophant/”assistant”/”””therapist””” on hand at all times. For psychiatrists/therapists, I expect they will find a steady supply of new clients during the winter, as the death of the chatbot sends addicted promptfondlers spiralling.
Skills Gaps Galore
One of the more common claims from promptfondlers and boosters when confronted is “you won’t be replaced by AI, but by a human using AI”.
With how AI prevents juniors from developing their skills, makes seniors worse at their jobs, damages productivity whilst creating a mirage of it, and degrades its users’ critical thinking and mental acuity, all signs point to the exact opposite being the case - those who embrace and use AI will be left behind, their skills rotting away as their AI-rejecting peers remain as skilled as before the bubble, if not more so thanks to spending time and energy on actually useful skills, rather than shit like “prompt engineering” or “vibe coding”.
Once the winter sets in and the chatbots disappear, the gulf between these two groups is going to become much wider, as promptfondlers’ crutches are forcibly taken away from them and their “skills” in using the de-skilling machine are rendered useless. As a consequence, I expect promptfondlers will be fired en masse and struggle to find work during the winter, as their inability to work without a money-burning chatbot turns them into a drag on a company’s bottom line.
Recently, I read a short article from Iris Meredith about rethinking how we teach programming. It's a pretty solid piece of work all around, and it has got me thinking how to further build on her ideas.
This post contains a quick overview of her piece to get you up to speed, but I recommend reading the original for yourself.
The Problem
As is rather obvious to most of us, the software industry is in a dire spot - Meredith summed it up better than I can:
Software engineers tend to be detached, demotivated and unwilling to care much about the work they're doing beyond their paycheck. Code quality is poor on the whole, made worse by the current spate of vibe coding and whatever other febrile ideas come out of Sam Altman's brain. Much of the software that we write is either useless or actively hurts people. And the talented, creative people that we most need in the industry are pushed to the margins of it.
As for the cause, Iris points to the "teach the mystic incantations" style used in many programming courses, which skips over teaching students how to see through an engineer’s eyes (so to speak) and instilling the ethics of care necessary to write good code (roughly 90% of what goes into software engineering). As Iris notes:
This tends to lead, as you might expect, to a lot of new engineers being confused, demotivated and struggling to write good code or work effectively in a software environment. [...] It also means, in the end, that a lot of people who'd be brilliant software engineers just bounce off the field completely, and that a lot of people who find no joy in anything and just want a big salary wind up in the field, never realising that they have no liking or aptitude for it.
Meredith’s Idea
Meredith’s solution, in brief, is threefold.
First, she recommends starting people off with HTML as their first language, giving students the tools they need to make something they want and care about (a personal website in this case), and providing a solid bedrock for learning fundamental programming skills.
Second, she recommends using “static site generators with templating engines” as an intermediate step between HTML/CSS and full-blown programming, to provide students an intuitive method of understanding basic concepts such as loops, conditionals, data structures and variables.
(As another awful member points out, they provide an easy introduction to performance considerations/profiling by being blazing fast compared to all-too common JS monoliths online, and provide a good starting point for introducing modularity as well.)
Third, and finally, she recommends having students publish their work online right from the start, to give them reason to care about their work as early as possible and give them the earliest possible opportunity to learn about the software development life cycle.
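To make the templating step concrete, here's roughly the kind of thing a static-site generator does under the hood (an illustrative sketch of mine, not anything from Meredith's piece; real engines like Jinja add conditionals, inheritance, filters, and much more): the student writes HTML with named holes in it, and a loop fills in one list item per blog post.

```python
# A toy version of what a static-site templating engine does: the
# student writes HTML with named holes, and a loop fills them in.
page_template = "<h1>{title}</h1>\n<ul>\n{items}\n</ul>"
item_template = "  <li><a href='{url}'>{name}</a></li>"

# The "data structure" concept: each post is a record.
posts = [
    {"name": "My first post", "url": "/first.html"},
    {"name": "Why I like HTML", "url": "/html.html"},
]

# The "loop" concept: one rendered list item per record.
items = "\n".join(item_template.format(**post) for post in posts)
page = page_template.format(title="My blog", items=items)
print(page)
```

Every programming concept in that snippet has a visible consequence on the rendered page, which is exactly the pedagogical point.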
A Complementary Idea
[basic idea: teach art alongside coding, to flex students’ creative muscles]
Meredith’s suggested approach to software education is pretty solid on all fronts - it gets students invested in their programming work, and gives them the tools needed to make and maintain high-quality code.
If I were to expand on this a bit, I think the obvious addition would be to provide an arts education to complement Iris’ proposed webdev-based approach.
As an explicit means of self-expression, the arts would wonderfully complement the expressive elements of software Meredith wishes to highlight - focusing on webdev, developing students’ art skills would expand their ability to customise their websites to their liking, letting them make something truly unique to themselves.
The skills that students learn through the arts would also complement what they directly learn in programming, too. The critical eye that art critique grants them will come in handy for code review. The creative muscles they build through art will enhance their problem-solving abilities, and so on.
Beyond that, I expect the complementary arts will do a good job attracting creatives to the field, whilst pushing away “people who find no joy in anything and just want a big salary”, which Meredith notes are common in the field. Historically, “learn to code” types have viewed the arts as a “useless” degree, so they’ll near-certainly turn their noses up at having to learn it alongside something more “useful”, leaving the door open for more creatives to join up.
A More Outlandish Idea
For a more outlandish idea, the long-defunct, yet well-beloved multimedia platform Adobe Flash could prove surprisingly useful for a programming education, especially with the complementary arts education I suggested before.
Being effectively an IDE and an animation program combined into one, Flash offers a means of developing and testing a student’s skills in art and programming simultaneously, and provides an easy showcase of how the two can complement each other.
Deploying Flash to a personal website wouldn’t be hard for students either, as the Ruffle emulator allows Flash content to play without having to install Flash player. (Rather helpful, given most platforms don’t accept Flash content these days :P)
Well, it seems the AI bubble’s nearing its end - the Financial Times has reported a recent dive in tech stocks, the mass media has fully soured on AI, and there’s murmurs that the hucksters are pivoting to quantum.
By my guess, this quantum bubble is going to fail to get off the ground - as I see it, the AI bubble has heavily crippled the tech industry’s ability to create or sustain new bubbles, for two main reasons.
No Social License
For the 2000s and much of the 2010s, tech enjoyed a robust social license to operate - even if they weren’t loved per se (e.g. Apple), they were still pretty widely accepted throughout society, and resistance to them was pretty much nonexistent.
Whilst it was already starting to fall apart with the “techlash” of the 2020s, the AI bubble has taken what social license tech had left and put it through the shredder.
Environmental catastrophe, art theft and plagiarism, destruction of livelihoods and corporate abuse, misinformation and enabling fascism, all of this (and so much more) has eviscerated acceptance of the tech industry as it currently stands, inspiring widespread resistance and revulsion against AI, and the tech industry at large.
For the quantum bubble, I expect it will face similar resistance/mockery right out of the gate, with the wider public refusing to entertain whatever spurious claims the hucksters make, and fighting any attempts by the hucksters to force quantum into their lives.
(For a more specific prediction, quantum’s alleged encryption-breaking abilities will likely inspire backlash, being taken as evidence the hucksters are fighting against Internet privacy.)
No Hypergrowth Markets
As Baldur Bjarnason has noted about tech industry valuations:
“Over the past few decades, tech companies have been priced based on their unprecedented massive year-on-year growth that has kept relatively steady through crises and bubble pops. As the thinking goes, if you have two companies—one tech, one not—with the same earnings, the tech company should have a higher value because its earnings are likely to grow faster than the not-tech company. In a regular year, the growth has been much faster.”
For a while, this held - even as the hypergrowth markets dried up and tech rapidly enshittified near the end of the ‘10s, the gravy train managed to keep rolling.
That gravy train is set to slam right into a brick wall, however - between the obscenely high costs of building and running LLMs (both upfront and ongoing) and the virtually nonexistent revenues those LLMs have provided (except for NVidia, which has made a killing in the shovel-selling business), the AI bubble has burned billions upon billions of dollars on a product which is practically incapable of making a profit, and heavily embrittled the entire economy in the process.
Once the bubble finally bursts, it’ll gut the wider economy and much of the tech industry, savaging valuations across the board and killing off tech’s hypergrowth story in the process.
For the quantum bubble, this will significantly complicate attempts to raise investor/venture capital, as the finance industry comes to view tech not as an easy and endless source of growth, but as either a mature, stable industry which won’t provide the runaway returns they’re looking for, or as an absolute money pit of an industry, one trapped deep in a malaise era and capable only of wiping out whatever money you put into it.
(As a quick addendum, it's my 25th birthday tomorrow - I finished this over the course of four hours and planned to release it tomorrow, but decided to post it tonight.)
(This is a mega-expanded version of a stubsack comment: https://awful.systems/comment/8327535)
Multiple times before on awful.systems, I’ve claimed the AI bubble would provide the humanities some degree of begrudging respect, at the expense of STEM’s public image taking a nosedive.
In the process of writing this mini-essay, it’s become clear that I was predicting humanities would cannibalise tech’s public image, rather than STEM’s - I had just failed to recognise that tech had made itself utterly synonymous with STEM up until now.
Still, since I’ve made this claim, I might as well try to back it up.
High Paying, No More?
One of the major things propping up tech/STEM's public image is the notion that it’s higher-paying than a humanities degree - that “learning to code” will earn you a high-paying job and financial stability, whilst taking any kind of “useless” arts degree will end with you working some form of low-wage employment (e.g. as a barista).
Between the complete clusterfuck that is the job market, the Trump administration’s war on American science, the use of AI to kill jobs left and right (whilst enshittifying what remains) and the ongoing layoffs ravaging the entire tech industry, the idea that any degree will earn you a stable job has been pretty thoroughly undermined.
And with coding getting the brunt of all of this, thanks to an oversaturated market and the AI bubble hitting tech particularly hard, any notion of tech being an easy road to riches is pretty much dead and buried.
Not Lookin’ So Smart
Another thing propping up tech/STEM’s image was the view of it being more “logical/rational” than the humanities - that it dealt with “objective” matters, compared to the highly-subjective humanities, that it was “apolitical” compared to the deeply-political humanities, that kinda stuff.
On that front, the AI bubble has become tech’s equivalent to the Sokal hoax, deeply undermining any and all notions of rationality tech had built up over the past few decades.
Artistically speaking, the large-scale art theft committed to create gen-AI, the vapidity and soullessness of the AI slop it produces, the AI bros’ failure to recognise this soullessness (Fig. 1, Fig. 2) and their actions regarding the effects of gen-AI (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) have deeply undermined tech’s ability to speak on matters of art, with the industry at large viewed as incapable of understanding art at best, and as hostile to art and artists at worst.
On a more general front, AI’s failures of reasoning (formal and informal, comedic and horrific), plus the tech industry’s refusal to recognise or acknowledge these failures (instead relentlessly hyping up AI’s supposed capabilities, making spurious claims about Incoming Superintelligence™ and doomsaying about how spicy autocomplete might kill us all), have put tech’s “rationality” into serious question, painting the industry at large as out-of-touch with reality and unconcerned with solving actual problems.
For the humanities generally, this bubble is going to make them look relatively grounded and reasonable by comparison, whilst for the arts specifically, they’ll likely be able to point to the slop-nami when their usefulness is questioned.
(Reports of AI usage causing metaphorical and literal brainrot likely aren’t helping, either, as they provide the public an obvious explanation for tech’s disconnection from reality.)
Eau de Fash
As Soyweiser has noted, tech has long had a “debate bro both sides free speech libertarianism” stench on it, but between Silicon Valley’s willing collaboration with the Trump administration and fascists’ adoration of AI and AI slop, that stench has evolved into an unignorable smell of Eau de Fash covering the entire industry.
As a consequence, I expect tech at large will be viewed as a Nazi bar writ large, with tech workers as a group seen as willing accomplices to fascism, if not outright fascists themselves. As for tech degrees, I expect they’ll be viewed as leaving their holders unequipped to resist fascism, if not outright vulnerable to fascist rhetoric.
Predicting the Job Market
(Disclaimer: This is not financial advice, this is just a shot in the dark from some dipshit with a laptop. I take no credit for whatever financial success my readers earn.)
With tech’s public cachet and “high-paying” reputation going out the window, plus the job market for tech collapsing, I expect a major drop-off in students taking up tech-related degrees, with a smaller drop-off for STEM degrees in general. By my guess, we aren’t gonna see another “learn to code” push for at least a decade. If and when another push starts, it’ll probably take on a completely different form than what we’ve seen before.
Exactly which professions will benefit from the tech crash, I don’t know - I’m not a Superpredictor™, I’m just some dipshit with a laptop. By my guess, professions which can exploit the fallout of AI to their benefit will have the best shot of becoming the next “lucrative cash cows”, so to speak.
For therapists/psychiatrists, the rise of AI psychosis and related mental health crises will likely give them a steady source of clients for the foreseeable future - whether that be because new clients have realised chatbot usage is ruining them, or because people are being involuntarily committed after losing touch with reality.
For those in writing related jobs, they may find lucrative work cleaning up attempts to sidestep them with AI slop, squeezing hefty premiums from desperate clients who find themselves lacking leverage over them.
For programmers, the rise of “vibe coding” has created mountains of technical debt and unmaintainable code that will need to be torn down - for those who manage to find themselves a job, they’ll probably make good money tearing those mountains down. For cybercriminals, the aforementioned “vibe coding”, plus the inherently insecure nature of chatbots/agents, will likely give them a lot of low-hanging fruit to go after.
As for degrees, those which can fill skills gaps the bubble has created/widened should benefit the most.
English/Creative Writing looks like an obvious winner - ChatGPT has fried a lot of people’s writing skills, so holding one of those degrees (ideally with a writing portfolio) can help convince an employer you don’t need spicy autocomplete to write for you.
Psychology/psychiatry will likely benefit quite a bit as well - both of those can directly assist in landing you a job as a therapist, which I’ve predicted will become much more lucrative in the coming years.
Recently, I ended up re-reading James Allen-Robertson’s “Devs and the Culture of Tech”, a five-part deep dive into the sci-fi miniseries Devs, and its critiques of the tech industry on a structural level.
In lieu of anything better to do, I’ve decided to pull out a single concept James has touched on, and give my extended thoughts on it.
So, What is Technological Determinism?
In a basic sense, technological determinism (which I’m calling techno-determinism to be more concise) is a worldview that posits technological development as the primary driving force of humanity, and which treats said development as, heavily paraphrasing James, a product of “rational people pursuing the objectively best outcomes”, if not “a process of uncovering [tech] as prior technological discovery begets the next like some inevitable Civ tech-tree”.
For Silicon Valley, the techno-determinist worldview provided two main advantages.
First, it provided an easy accountability sink for when new technological developments screw over some portion of the public - it wasn’t Silicon Valley’s fault that they fucked taxi drivers over with their ride-sharing apps, it was the taxi companies’ fault for getting in the way of Progress™.
Second, it obscured SV’s role in pushing those developments, and whatever reasons SV may have had for them - those ride-sharing apps didn’t pop up because Silicon Valley wanted to make more money, they popped up because they were The Future™.
These days, techno-determinism has lost a fair bit of its grip on the general public - and I personally believe the NFT bubble is the major cause.
NFTs Killed Techno-Determinism
If you’ve been on the Internet for any length of time in the past few years, you’ve definitely heard of NFTs. They blew up in 2021, became completely fucking inescapable for roughly a year, then died an embarrassing death in 2022, prompting an outpouring of schadenfreude from the general public.
During their bubble, they were hyped to the stars by Silicon Valley, with claims that they were The Future™, that they were Inevitable™, and that you needed to Get On Board Now™ or be Left Behind™. (Sound familiar?)
As you already know, NFTs did not become The Future™. They failed, in spectacular fashion, receiving widespread mockery and rejection from the public, before getting consigned to the dustbin of history after the market imploded.
In that loud, spectacular failure, NFTs showed the public that resistance against Silicon Valley was anything but futile - that they didn’t need to take Silicon Valley’s attacks on them lying down, and that whatever dystopian dreams the Valley had could be strangled in the crib.
On top of that, the failure of NFTs helped to inoculate the public against Silicon Valley’s techno-determinist rhetoric - after witnessing it spectacularly collapse in the face of reality, the public was well-prepared to see through SV’s attempt to recycle their rhetoric when the AI bubble reared its head.
It’s pretty much a given that we’re in for an AI winter once this bubble bursts - the only thing we can argue on at this point is exactly how everything will shake out. So, let’s beat this dead horse and make some random predictions before it inevitably gets sent to the glue factory. I’ve hardly got anything better to do.
The Death of “Value-Neutral” AI
Before this bubble, artificial intelligence was generally viewed as value-neutral - a tool, capable of good or evil, of bringing about a futuristic utopia or a Terminator-style apocalypse.
Between the large-scale art theft/plagiarism committed to build the datasets (through coercion, deception, ignoring the victim’s refusal, spamming new scrapers, et cetera), the abused and underpaid workers who classified the datasets, the myriad harms brought by the LLMs themselves (don’t get me fucking started), and the utterly ghoulish acts of the CEOs and AI bros involved (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera), that “value neutral” notion is dead and fucking buried.
Going forward, I expect artificial intelligence to be viewed not as a tool or a technology, but as an enemy (of sorts), built to perpetrate evil, and capable only of evil. As for its users (assuming it still has users), I expect them to be viewed as tech assholes, class traitors, incompetent dipshits, “prompt goblins” craving approval, and generally worthy only of mockery or condemnation.
Confidence: Near-certain. Ali Alkhatib’s “Defining AI” (which called for redefining AI as an ideological project to more effectively resist it) and Matthew Hughes’ “People Are The Point” (a manifesto which opposes AI on principle, calling it “an expression of contempt towards people”) have already provided crystal-clear examples of AI being treated as an evil unto itself, and the links in the previous paragraph already show use of AI being treated as a moral failing as well.
Side-Order of Tech Crash
It’s no secret that the tech industry has put a horrific amount of cash into this AI bubble - every major AI corp burns billions in VC cash with no end in sight, Microsoft performed mass layoffs to throw money at AI (mass layoffs of people making the company money, mind you), NVidia is blowing billions on AI money-burners (to keep making a killing off of selling shovels in this AI gold rush), the fucking works. And all in pursuit of a Hail Mary pass intended to keep the tech industry’s Endless Growth™ going for just a few years more.
(Going by David Gerard, previous AI springs were primarily funded by the Department of Defense, with winter setting in whenever their patience for burning cash ran out.)
With all the billions upon billions thrown into AI, and revenue from said AI being somewhere between Jack and Shit (barring the profits of shovel-sellers like NVidia, as mentioned before), this AI winter will likely kick off with a very wide-ranging tech crash that takes a chunk out of the entire industry, and causes some serious economic woes for good measure.
Confidence: Very high. Ed Zitron’s gone into punishing detail about the utterly fucked economics of basically everyone involved in this bubble, and I’d be here all day if I went over everything he’s written about. Picking just a single article, here’s him talking about OpenAI being a systemic risk to tech.
Scrapers Need Not Apply
Before the AI bubble, scrapers/crawlers were a normal, accepted part of the Internet ecosystem - there was no real incentive to block crawlers by default, since the vast majority were well-behaved and followed robots.txt, and search engine crawlers specifically were something you wanted to welcome, since those earned you traffic from search results.
Come the AI bubble, this status quo would be completely undermined, for three main reasons.
First, and most obviously, there’s the theft - far from having any benevolent purposes, the crawlers employed by AI corps are created to outright steal data off your blog/website, then use it to create a slop generator that claims your work as its own and/or tries to put you out of business, making AI crawlers a long-term existential threat to whatever endeavours you go into.
Second, AI Summary™ services (like Google’s) created through the aforementioned theft have utterly cratered search engine traffic, taking the main upside to allowing crawlers to scrape your site and turning it into a severe downside.
Last, but not least, are the AI crawlers themselves - thanks to how they DDoS whatever sites or FOSS infrastructure they decide to scrape, and the dirty tricks employed in said scraping (ignoring robots.txt, lying about their user agent, spamming new scrapers, using botnets, etcetera), doing anything short of blocking scrapers on sight is not just a long-term liability to you, but an immediate liability to your website as well.
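For context, the robots.txt these crawlers ignore is nothing more than a plain-text file of voluntary directives served from a site’s root - here’s a minimal sketch (GPTBot and CCBot are the documented user-agent tokens for OpenAI’s and Common Crawl’s bots; compliance is entirely on the honour system, which is exactly why it fails against bad actors):

```text
# robots.txt - polite requests, not enforcement
User-agent: GPTBot     # OpenAI's training-data crawler
Disallow: /

User-agent: CCBot      # Common Crawl's crawler
Disallow: /

User-agent: *          # everyone else
Disallow:              # (empty = no restrictions)
```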
As a response to these crawlers, a cottage industry of anti-scraping solutions cropped up providing a variety of ways to fight back. Between dedicated bot-blockers like Anubis, tarpits like Iocaine and Nepenthes, and media-poisoning tools like Glaze and Nightshade, scrapers of all stripes now face an ever-present risk of being blocked from data (especially high quality data), or force-fed misleading data intended to waste their time and poison their datasets.
As the cherry on top of this anti-scraper shit sundae, the rise of generative AI has flooded the ‘Net with AI slop, which is difficult to identify, near-impossible to avoid, and outright useless (if not dangerous) to scrape. Unless you’re limiting yourself to sources made before 2022 (commonly known as low-background media), chances are you’re gonna have to deal with your dataset getting contaminated.
Given all this, I expect scraper activity in general (malicious or otherwise) to steeply drop during the AI winter, as all scrapers get treated as guilty (of AI fuckery) until proven innocent, and non-malicious scraper activity drops off as developers deem running them to be not worth the hassle.
Confidence: Moderate. I already know of one scraper-based project (wordfreq, to be specific) which shut down as a consequence of the AI bubble - I wouldn’t be shocked to see more cases crop up down the line.
Condemnation and Mockery
For the past two years, the AI bubble has been inescapable for the public at large.
On one front, they’ve spent the past two years being utterly inundated with AI hype of every stripe - AI bros hyping up AI as The Future™, wild and spurious claims of Incoming Superintelligence™, rigged tests and cheated benchmarks made directly by the AI corps, and relentless anthropomorphisation of spicy autocompletes and signal-shaped noise generators.
Especially anthropomorphisation - whether it be painting hallucinations as lies, presenting AI as deceptive or coercive, or pretending they can feel pain, there has been a horrendous amount of time and money spent on trying to deceive the public into believing LLMs are sentient, if not humanlike in their actions.
On another front, the public has borne witness to a wide variety of harms as a direct consequence of AI’s creation.
Local environmental catastrophe, global water loss and sky high emissions, widespread job loss, academic misconduct, nonstop hallucinations and misinformation, voice-cloning scams, programming disasters, damaged productivity, psychosis, outright suicide (on multiple occasions), the list goes on and on and on and on and on.
All of this has been thoroughly burned into the public consciousness over these past two or three years, ensuring AI will retain a major (and deeply negative) presence there, and ensuring AI as a concept will face widespread mockery and condemnation from the public, until long after the bubble bursts.
Giving some more specifics:
- I expect art to be a no-go zone for AI until long after I die. To quote Baldur Bjarnason, the “E-number additive, but for creative work” connotation of “AI” is more-or-less a permanent fixture in the public’s mind, and artists in general have come to view AI as fundamentally hostile to artists. To provide some evidence, publicly rejecting AI earned Procreate and Schoolism standing ovations from their userbases, and a “Created with Human Intelligence” badge went viral online.
- I expect a wave of “human-focused” marketing to crop up/continue, openly putting human labour front-and-centre to attract customers - given opposition to AI is providing some rich marketing opportunities and mentioning AI turns people off of buying, you’d be a fool not to jump on this particular bandwagon. (Arguably, this one’s already happened - recently, United advertised using humans in their customer service app, and the Naked Gun made fun of AI art in a marketing poster.)
Confidence: Completely certain. I’m basically “predicting” something that’s already happening right now, and has a very good chance of continuing months, if not years, down the road.
Arguably, I’m being a bit conservative with this prediction - given the cultural rehabilitation of the Luddites, and the rise of a new Luddite movement in 2024, I could easily argue that the bubble’s started a full-blown resistance movement against the tech industry as a whole.
This is a rough excerpt from a quintet of essays I've intended to write for a few years and am just now getting around to drafting. Let me know if more from this series would be okay to share; the full topic is:
Power Relations
- Category of Responsibilities
- The Reputation Problem
- Greater Internet Fuckwad Theory (GIFT), Special Internet Fuckwad Theory (SIFT), & Special Fuckwittery
- System 3 & Unified Fuckwittery
- Algorithmic Courtesy
This would clarify and expand upon ideas that I've stated here and also on Lobsters (Reputation Problem, System 3 (this post!)). The main idea is to understand how folks exchange power and responsibilities.
As always, I did not use any generative language-modeling tools. I did use vim's spell-checker.
Humans are not rational actors according to any economic theory of the past few centuries. Rather than admit that economics might be flawed, psychologists have explored a series of models wherein humans have at least two modes of thinking: a natural mode and an economically-rational mode. The latest of these is the amorphous concept of System 1 and System 2; System 1 is an older system that humans share with a wide clade of distant relatives and System 2 is a more recently-developed system that evolved for humans specifically. This position does not agree with evolutionary theories of the human brain and should be viewed with extreme skepticism.
When pressed, adherents will quickly retreat to a simpler position. They will argue that there are two modes of physical signaling. First, there are external stimuli, including light, food, hormones, and the traditional senses. For example, a lack of nutrition in blood and a preparedness of the intestines for food will trigger a release of the hormone ghrelin from the stomach, triggering the vagus nerve to incorporate a signal of hunger into the brain's conceptual sensorium. Thus, when somebody says that they are hungry, they are engaged by a System 1 process. Some elements of System 1 are validated by this setup, particularly the claims that System 1 is autonomous, automatic, uninterruptible, and tied to organs which evolved before the neocortex. System 2 is everything else, particularly rumination and introspection; by excluded middle, System 2 also is how most ordinary cognitive processes would be classified.
We can do better than that. After all, if System 2 is supposed to host all of the economic rationality, then why do people spend so much time thinking and still come to irrational conclusions? Also, in popular-science accounts of System 1, why aren't emotions and actions completely aligned with hormones and sensory input? Perhaps there is a third system whose processes are confused with System 1 and System 2 somehow.
So, let's consider System 3. Reasoning in System 3 is driven by memes: units of cultural expression which derive semantics via chunking and associative composition. This is not how System 1 works, given that operant conditioning works in non-humans but priming doesn't reliably replicate. The contrast with System 2 is more nebulous since System 2 does not have a clear boundary, but a central idea is that System 2 is not about the associations between chunks as much as the computation encoded by the processing of the chunks. A System 2 process applies axioms, rules, and reasoning; a System 3 process is strictly associative.
I'm giving away my best example here because I want you to be convinced. First, consider this scenario: a car crash has just happened outside! Bodies are piled up! We're still pulling bodies from the wreckage. Fifty-seven people are confirmed dead and over two hundred are injured. Stop and think: how does System 1 react to this? What emotions are activated? How does System 2 react to this? What conclusions might be drawn? What questions might be asked to clarify understanding?
Now, let's learn about System 3. Click, please!
Update to the scenario: we have a complete tally of casualties. We have two hundred eleven injuries and sixty-nine dead.
When reading that sentence, many Anglophones and Francophones carry an ancient meme, first attested in the 1700s, which causes them to react in a way that isn't congruent with their previous expressions of System 1 and System 2, despite the scenario not really changing much at all. A particular syntactic detail was memetically associated to another hunk of syntax. They will also shrug off the experience rather than considering the possibility that they might be memetically influenced. This is the experience of System 3: automatic, associative, and fast like System 1; but quickly rationalizing, smoothed by left-brain interpretation, and conjugated for the context at hand like System 2.
An important class of System 3 memes are the thought-terminating clichés (TTCs), which interrupt social contexts with a rhetorical escape that provides easy victory. Another important class are various moral rules, from those governing interpersonal relations to those computing arithmetic. A sufficiently rich memeplex can permanently ensnare a person's mind by replacing their reasoning tools; since people have trouble distinguishing between System 2 and System 3, they have trouble distinguishing between genuine syllogism and TTCs which support pseudo-logical reasoning.
We can also refine System 1 further. When we talk of training a human, we ought to distinguish between repetitive muscle movements and operant conditioning, even though both concepts are founded upon "fire together, wire together." In the former, we are creating so-called "muscle memory" by entraining neurons to rapidly simulate System 2 movements; by following the principle "slow is smooth, smooth is fast", System 2 can chunk its outputs to muscles in a way analogous to the chunking of inputs in the visual cortex, and wire those inputs and outputs together too, coordinating the eye and hand. A particularly crisp example is given by the arcuate fasciculus connecting Broca's area and Wernicke's area, coordinating the decoding and encoding of speech. In contrast, in the latter, we are creating a "conditioned response" or "post-hypnotic suggestion" by attaching System 2 memory recall to System 1 signals, such that when the signal activates, the attached memory will also activate. Over long periods of time, such responses can wire System 1 to System 1, creating many cross-organ behaviors which are mediated by the nervous system.
This is enough to explain what I think is justifiably called "unified fuckwittery," but first I need to make one aside. Folks get creeped out by neuroscience. That's okay! You don't need to think about brains much here. The main point that I want to rigorously make and defend is that there are roughly three reasons that somebody can lose their temper, break their focus, or generally take themselves out of a situation, losing the colloquial "flow state." I'm going to call this situation "tilt" and the human suffering it is "tilted." The three ways of being tilted are to have an emotional response to a change in body chemistry (System 1), to act emotional as a conclusion of some inner reasoning (System 2), or to act out a recently-activated meme which happens to appear like an emotional response (System 3). No more brain talk.
I'm making a second aside for a persistent cultural issue that probably is not going away. About a century ago, philosophers and computer scientists asked about the "Turing test": can a computer program imitate a human so well that another human cannot distinguish between humans and imitations? About a half-century ago, the answer was the surprising "ELIZA effect": relatively simple computer programs can not only imitate humans well enough to pass a Turing test, but humans prefer the imitations to each other. Put in more biological terms, such programs are "supernormal stimuli"; they appear "more human than human." Also, because such programs only have a finite history, they can only generate long interactions in real time by being "memoryless" or "Markov", which means that the upcoming parts of an interaction are wholly determined by a probability distribution of the prior parts, each of which are associated to a possible future. Since programs don't have System 1 or System 2, and these programs only emit learned associations, I think it's fair to characterize them as simulating System 3 at best. On one hand, this is somewhat worrying; humans not only cannot tell the difference between a human and System 3 alone, but prefer System 3 alone. On the other hand, I could see a silver lining once humans start to understand how much of their surrounding civilization is an associative fiction. We'll return to this later.
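To make the "memoryless"/"Markov" point concrete, here's a toy sketch - entirely my own illustration, not any real chatbot's internals. It's a bigram chain where the next word depends only on the current word, with learned associations and nothing resembling System 1 or System 2 anywhere in the loop:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the chain: each step depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat slept")
print(generate(model, "the"))
```

However long the output runs, each word is drawn purely from the distribution attached to the word before it - an associative process with no axioms or rules in sight, which is why I'd class it as System 3 at best.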
Recently, I found myself ruminating on the general lack of AI slop over on Newgrounds (a site I use rather heavily, and have been since I joined in 2023). The only major case I've seen in recent memory was an influx of vibe-coded shovelware I saw last month.
If the title didn't tip you off, I personally believe it to be due to Newgrounds being naturally resistant to contamination with AI slop. Here's a few reasons why I think that is:
An Explicit Stance
First off, I'll get the obvious reason out the way.
Newgrounds has explicitly banned AI slop from being uploaded since September 2022, very early into the bubble. Whilst the guidelines carve out some minor exceptions for using AI to assist human work, simply generating a piece of slop and hitting “Upload” is off the table.
The only real development since then was a site update in early March 2024, which added the option to flag a submission as AI slop.
Both of these moves made the site’s stance loud and clear: AI slop of all stripes is not welcome on NG.
Beyond giving the mods explicit permission to delete slop on sight, the move likely did plenty to deter AI bros from setting up shop - if they weren't gonna get some easy clout from spewing their slop on there, why bother?
DeviantArt provides an obvious point of contrast here - far from taking any concrete stance against AI slop, the site actively welcomed it, launching a slop generator of its own in November 2022 and doing nothing to rein in slop as it flooded the site.
Slop-proof Monetization
A second, and arguably less important, factor is Newgrounds’ general approach to monetisation - both in making money and in paying it out.
In terms of making money, Newgrounds has pushed heavily towards running ad-free since the start of the decade - as of this writing, Newgrounds relies near-exclusively on Supporter (a subscription service which started in 2012, for context) for revenue. (Right now, adverts run exclusively on A-rated submissions (i.e. porn), which require an account to view.)
At the same time, the site wound down its previous rev-share system (which directly ran on ad revenue), leaving just the monthly cash prizes for payouts.
The overall effect of this change has been to render NG outright inhospitable to content farms (AI-based or otherwise) - content farms rely on ad revenue to turn a profit off their low-quality Content™, and the near-total lack of ads on NG makes running one there impractical.
(Arguably, this reason isn't a particularly important one - being a niche animation site dwarfed by the likes of YouTube and Instagram, NG likely fell well under the radars of content farms even before its ad-free push.)
DeviantArt, once again, provides an easy point of contrast - as the site itself has proudly paraded, its monetisation features have enabled AI bros to make a quick buck off of flooding it with slop.
Judgment/Scouting
Wrapping this up with something that doesn't have a parallel in dA, I'm gonna look at the judgment and scouting systems used on the site. Though originally intended to maintain a minimum level of quality, these systems have helped prevent AI slop from gaining a foothold as well.
Judgment
For the main Portal (which covers animations and games), a simple voting process called judgment is used - users vote from 0 to 5 on uploaded works, with low-scoring submissions being automatically deleted (referred to as being ‘blammed’).
Whilst rather simple, the process has proven effective in keeping low-effort garbage off of Newgrounds - and with “low-effort garbage” being a perfect description of AI slop, the judgment process has enabled users to get rid of AI slop without moderator intervention, reducing the mods’ workload in the process.
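The judgment process described above boils down to a simple threshold rule, which can be sketched in a few lines. (The vote minimum and passing score below are made-up illustrative numbers, not Newgrounds' actual parameters.)

```python
def judge(votes, min_votes=50, pass_score=2.0):
    """Toy model of NG-style judgment: users vote 0-5 on a submission;
    once enough votes are in, low scorers get 'blammed' (deleted).
    min_votes and pass_score are illustrative, not NG's real numbers."""
    if len(votes) < min_votes:
        return "under judgment"
    avg = sum(votes) / len(votes)
    return "saved" if avg >= pass_score else "blammed"
```

For example, `judge([0] * 60)` comes back `"blammed"`, while `judge([4, 5] * 30)` comes back `"saved"` - no moderator needed in either case, which is the workload-reduction point made above.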
Scouting
For the Audio Portal and the Art Portal, a vetting system (referred to as “scouting”) is used instead.
By default, work by unscouted artists appears in the "Undiscovered Artists" section of the Art/Audio Portals, hidden from public view unless someone actively opts in to view it or finds it by checking the artist’s account.
If an already-scouted user or a moderator sees an unscouted user, they have the option to scout them, essentially vouching that their work follows site guidelines and is of sufficiently high quality. The effects of this are twofold:
- First, the user’s work is placed into the “Approved Artists” section of the appropriate Portal, granting a large boost to its visibility.
- Second, the user is granted the ability to scout other users, vouching for their work in turn.
Users are expected to exercise this ability with caution - if a scouted user is later caught breaking site guidelines (or if their work is deemed too poor-quality), an Art/Audio Moderator can de-scout them, and those who scouted them can be stripped of their ability to scout other users.
This system creates an easy method of establishing trust among the userbase (arguably equivalent to a PGP-style web of trust) - simply by knowing someone's been scouted, you can be confident they're posting human-made work, and scouted users can in turn extend that trust by scouting other users.
Additionally, Art/Audio Moderators are equipped to handle any breaches in said trust, whether by de-scouting users for posting slop, or removing scouting abilities from users who can't be trusted with them, enabling trust to be quickly restored.
As a secondary benefit, any slop which does get submitted is effectively hidden from view by default, making it easy for human-made work to drown out the slop, rather than the other way around.
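The web-of-trust structure of scouting can be sketched as a small state machine. (This is a simplified model under my own assumptions - the class and method names are mine, and real NG moderation policy is more nuanced than a single boolean.)

```python
class ScoutingWeb:
    """Toy model of NG-style scouting as a web of trust.
    Mods are implicitly trusted roots; trust propagates by scouting."""

    def __init__(self, mods):
        self.scouted = set(mods)   # everyone currently vouched-for
        self.scouted_by = {}       # artist -> who vouched for them
        self.revoked = set()       # users stripped of scouting rights

    def scout(self, scouter, artist):
        """A scouted, non-revoked user vouches for an artist."""
        if scouter in self.scouted and scouter not in self.revoked:
            self.scouted.add(artist)
            self.scouted_by[artist] = scouter
            return True
        return False

    def descout(self, artist, strip_scouter=False):
        """Mods remove a bad artist; optionally punish their scouter."""
        self.scouted.discard(artist)
        scouter = self.scouted_by.pop(artist, None)
        if strip_scouter and scouter is not None:
            self.revoked.add(scouter)

    def is_visible(self, artist):
        """Only scouted artists appear in the main Portal feed."""
        return artist in self.scouted
```

The design mirrors the two properties described above: trust extends transitively (a scouted user can scout others), and a breach is repairable locally (de-scout the offender, and optionally revoke the vouching user's scouting rights) without tearing down the whole web.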
Conclusion
The large-scale proliferation of generative AI has been a disaster for the Internet at large, flooding practically every corner of it with AI slop of all stripes.
Given all that, it's kinda miraculous to know that there's any corner of the ‘Net which has braved the slop-nami and come out unscathed, let alone one as large (if rather niche) as Newgrounds.
So, if you're looking for human-made work produced after 2022…well, I don't know where to search for most things, but for art, music, games or animation, you already know where to start :P
I have written a thousand iterations of this story, including poems and short stories in different genres, all exploring the same theme. I have finally settled on this simplistic version; although I'm happy with the idea behind it, I'm still not happy with the execution.
Very difficult emotionally for me to post this, as I just want to keep this close to my heart and not publish it but here goes ig.
Essentially a Lacanian analysis of desire. Nada, ofc, means nothing, but it's also the name of my own first crush (the story is entirely fictional). I actually secretly wish she sees it someday.
The response to this version by friends has been overwhelmingly negative, and I know it reads more like a reddit post than an actually good story but I can't put more effort into this lol. This version was completed about 2 years ago, only just posting it.
if you ever wonder how I write Pivot, it's a bit like this. The thing below is not a written text, it's a script for me to simulate spontaneity, so don't worry about the grammar or wording. But how are the ideas? And what have I missed?
(Imagine the text below with links to previous Pivots where I said a lotta this stuff.)
‘AI is here to stay’ - what does that mean? What are you actually claiming?
When some huge and stupid public AI disaster hits the news, AI pumpers will dive in to say stuff like “you have to admit, AI is here to stay.”
Well, no I don’t. Not unless you specify what you actually mean when you say that. Like, what is the claim they’re making? Herpes is here to stay too, but you probably wouldn’t brag about it.
We’re talking about the generative AI stuff here. Chatbots. Image slop generators. That sorta thing. Sometimes they’ll claim chatbots are forever because machine learning works for X-ray scans. These people are wasting your time.
Are they saying that OpenAI and its friends, all setting money on fire, will be around forever? Ha, no. That is not economically possible. They’re machines for taking money from venture capitalists and setting it on fire. The chatbots are just the excuse for that. They’re not sustainable businesses. Maybe after the collapse there will be a company that buys the name “OpenAI” and dances around wearing it like a skin.
Are they saying there’s a market for generative AI and so it’ll keep going when the bubble pops? Sure maybe there’ll be a market - but as I’ve been saying for a while now, the prices will be 5x or 10x what they are now if it has to pay its way as a business.
Are they saying you can always run a local model at home? Sure, and about 0.0% of chatbot users do that. In 2025, the home models are painfully slow even on a high end box. No normal people are going to do this. It’s like if they said “see, radio is here forever!” and it was actually five guys talking with Morse Code.
I’ve seen claims that the tools will still exist. I mean sure, the transformer architecture is actually useful for stuff. But mere existence isn’t much of a claim either.
So. If someone says "AI is here to stay," nail them down on what the heck the precise claim is they're making. Details. Numbers. What do you mean by being here? What would failure mean? Get them to make their claim properly.
I’ll make a prediction for you, give you an example. When, not if, the venture capitalists and their money pipeline go home and the chatbot prices multiply by ten, the market will collapse. There will be some small providers left. it will be technically not dead yet!! but the bubble will be extremely over. The number of people running an LLM at home will still be negligible.
It’s possible there will be something left after the bubble pops. AI boosters like saying it’s JUST LIKE the dot-com bubble!!! But i haven't really been convinced by the argument "Amazon lost money for years, so if OpenAI just sets money on fire then it must be Amazon."
Will inference costs — 80%-90% of compute load — come down? Sure, eventually. Will it be soon enough? Well, Nvidia just had a bad chip generation and is going back to its old chips but putting more of them in modules with heating problems.
So there you go. If you wanna say “but AI is here to stay!” tell us what you mean in detail. Stick your neck out. Give your reasons.
MoreWrite preface: I wanted to get this out of my system and now that I've written it, I'm not sure if I want to post it anywhere else. Due to the sneer-like attitude of it I seem to have ended up writing it with the imagined audience of awful.systems readers, but since it's not really about anything rationalist or techbro adjacent, it's kind of off topic for anywhere on this site except MW. If any of you know a place where this essay might be appreciated, I can consider editing it for publication elsewhere. Most likely I'll just leave it here as a practice piece, though.
I know I could rephrase much of this with shorter sentences and fewer ten-dollar words, but I like to indulge a little: reach for some juicy words to spice up the task of putting thoughts to words. It feels good when writing for the sake of myself, but makes the final product sound pretentious, which is embarrassing, and possibly harder to understand, which should be the main concern but isn't. I don't feel like editing down the indulgence right now, so please be gentle about it.
A seemingly genuine invocation of a particularly annoying thought-terminating meme made me think about a great work of contemporary literature occasionally mentioned on this very website: the ostensible Harry Potter fanfic My Immortal by XXXbloodyrists666XXX, alias Tara Gilesbie.
The story begins with the protagonist's self-introduction:
Hi my name is Ebony Dark'ness Dementia Raven Way and I have long ebony black hair (that's how I got my name) with purple streaks and red tips that reaches my mid-back and icy blue eyes like limpid tears and a lot of people tell me I look like Amy Lee (AN: if u don't know who she is get da hell out of here!).
Let's dive deeper into the hair color specifically. What does the description tell us? The color black carries some very strong connotations in western cultures. It's associated with death, darkness, fear, sorrow and cold. Purple, on the other hand, is a somewhat uncommon color in nature, mostly found among things that have evolved to intentionally stand out, such as flowers signaling to pollinators. Historically its rarity has made it desirable and expensive, even a royal color sometimes only worn by kings and emperors. It's a color of mystique, magic and arcane arts. Red is associated with blood and fire, danger and warnings. It's a color of passion and emotions of love, desire, anger and elation. It's considered an especially strong complement and contrast with black both visually and symbolically.
Ebony Way is a moody teenager obsessed with the dark side of human emotions, romanticizing death and depression. She's also devoted to emo pop (or in her terms, "goffik") bands like My Chemical Romance and Good Charlotte and the fashion associated with the genre. Even among her cohort of witches, wizards, vampires and "preppies" she stands out, simultaneously seeking attention and lamenting the burden of being at the center of it. She often becomes smitten with love and sexual lust, and can be driven to anger and violence. Her hair color is literary shorthand for her entire personality.
This characterization through the colors of Ebony's hair — and soon afterwards, other fashion choices — works on multiple levels. On a surface reading My Immortal is just an exceptionally badly written piece of fiction. The story suffers from an unstructured plot, inconsistent characterization, ignorance of source material, lack of continuity, shaky grammar and chaotic orthography that both overuses deliberate non-standard spelling for an effect and also contains a plethora of non-stylistic misspellings. Despite the numerous faults, the author's use of color symbolism is perfectly fluent and idiomatic. Ebony's hair colors communicate exactly what the author seemingly wants to communicate through them.
Another way of reading the story is as a self-aware parody of bad writing. In this reading, Tara Gilesbie is a caricature of the worst possible fan fiction author. The ur-author maintains the persona of a dreadful writer and deliberately indulges in all the hallmarks of poorly written fanfics. The passage parodies the tendency of self-insert characters to describe their appearance in excessively florid detail to match an idealized version of the writer. The description is misplaced and disproportionately detailed and the author persona's fashion sense is gaudy and juvenile. Ebony's hair color is exactly what Tara-in-kayfabe would consider cool precisely because of the connotations associated with those colors and how they reflect her (misidentified) subculture.
Whether you read My Immortal as failed sincere writing or as ironic metafiction about bad writing, Tara Gilesbie is obviously a terrible writer. The work is disastrously bad on nearly all conventional metrics, but even as a simulacrum of the worst conceivable writer she understands the semiotics of her character's hair color. The choice to specify Ebony's hair color is indisputably a deliberate one, and it successfully communicates both how Ebony Dark'ness Dementia Raven Way was meant to be seen and how the character herself wants to be seen.
So the next time someone spins a tale of a book saying "the curtains were blue", an English teacher insisting on reading symbolism into the color of the curtains, and the author of the book confirming the passage only means the curtains were blue, remember that the author of My Immortal has a better grasp of color symbolism and conservation of detail than your anti-intellectual interlocutor and the imagined author in the thought experiment.
(No clue why I didn't get around to this earlier, I've had this in drafts for too long.)
Eight months ago, as you probably know, I predicted the current AI bubble would destroy artificial intelligence as a concept, focusing on the unrelenting slop and failures of AI, and on the near-universal backlash it receives whenever it rears its ugly, slop-ridden head.
As it turns out, I had completely failed to recognise the political elements of this entire bubble. In retrospect, I should've recognised it a lot fucking earlier.
Between Baldur Bjarnason outlining the esoteric fascist elements at the heart of the AI bubble, AI slop's enthusiastic adoption by fascists of all stripes, Damien Williams' notes on authoritarians' love for gen-AI, and Ashley Lynch calling AI slop inherently fascist, it's become clear that the outright fascist nature of AI has been staring me in the face the entire time.
With all that in mind, I'd like to expand on my previous piece with three additional predictions.
- AI-as-Fascism
Right off the bat, I expect AI as a concept will pick up a public perception of being inherently fascist, or at least a tool of fascism. Beyond all the ink spilled about AI's fascist nature, Donald Trump going all-in on AI has done plenty to link his administration with the tech, whether through making AI slop of deportees, or letting Elon Musk's AI Powered^tm^ Department Of Government Efficiency go to town on the federal gov.
Long-term, I expect this will hamper future attempts to start new AI bubbles/AI springs, as attempts to revive the tech get treated as morally equivalent to creating the Fourth Reich.
- The Wider Tech Industry
On a wider front, I expect the tech industry at large will pick up a similar stench of Eau de Fash as well. Whilst the tech industry has long enjoyed a perception of apoliticality (which James Allen-Robertson has talked about - spoilers for Devs, BTW), its own heavy involvement with the Trump administration has done plenty to undermine that.
Between their sucking up to Trump and AI's own stench of Eau de Fash, I can see the public starting to view the tech industry at large as a Nazi bar once the bubble bursts. Silicon Valley's given 'em plenty of reason to do so.
- A Bone for The Humanities
Ending this off on a vaguely positive note, I suspect the bubble's burst will earn the humanities some begrudging respect once the dust settles - primarily through cannibalising a fair bit of the cultural cachet that STEM has built up over the decades.
On one front, the slop-nami has given us an absolute torrent of slop flooding the Internet, notable both in its uniquely inhuman shittiness and in AI bros' breathless adoration of it. Given that, I expect programmers/software engineers will come to be viewed as inherently incapable of making anything on par with the work of anyone who has taken up art as a hobby/profession, and incapable of understanding art with any sort of depth to it.
Additionally, I suspect a stereotype of programmers/software engineers being hostile to art/artists may form, thanks to the rather drastic toll this bubble has had on artists, and the ongoing rhetoric of "democratising art" that the bubble's given us. (The age-old "learn to code" adage may also come back to haunt them as well, if this comes to pass.)
On a wider front, the breathless "AI doomsday" criti-hype, more general over-the-top AI hype, and nonstop hallucination-induced mishaps will likely also contribute to making STEM as a discipline look out-of-touch with reality, making the humanities look grounded and reasonable by comparison as the public looks in confusion at AI bros' inability to recognise LLMs' shittiness for what it is.
(This is basically an expanded version of a comment on the weekly Stubsack - I've linked it above for convenience's sake.)
This is pure gut instinct, but I’m starting to get the feeling this AI bubble’s gonna destroy the concept of artificial intelligence as we know it.
On the artistic front, there's the general tidal wave of AI-generated slop (which I've come to term "the slop-nami") which has come to drown the Internet in zero-effort garbage, interesting only when the art's utterly insane or its prompter gets publicly humiliated, and, to quote Line Goes Up, "derivative, lazy, ugly, hollow, and boring" the other 99% of the time.
(And all while the AI industry steals artists' work, destroys their livelihoods and shamelessly mocks their victims throughout.)
On the "intelligence" front, the bubble's given us public and spectacular failures of reasoning/logic like Google gluing pizza and eating onions, ChatGPT sucking at chess and briefly losing its shit, and so much more - even in the absence of formal proof LLMs can't reason, its not hard to conclude they're far from intelligent.
All of this is, of course, happening whilst the tech industry as a whole is hyping the ever-loving FUCK out of AI, breathlessly praising its supposed creativity/intelligence/brilliance and relentlessly claiming that they're on the cusp of AGI/superintelligence/whatever-the-fuck-they're-calling-it-right-now, they just need to raise a few more billion dollars and boil a few more hundred lakes and kill a few more hundred species and enable a few more months of SEO and scams and spam and slop and soulless shameless scum-sucking shitbags senselessly shitting over everything that was good about the Internet.
The public's collective consciousness was ready for a lot of futures regarding AI - a future where it took everyone's jobs, a future where it started the apocalypse, a future where it brought about utopia, etcetera. A future where AI ruins everything by being utterly, fundamentally incompetent, like the one we're living in now?
That's a future the public was not ready for - sci-fi writers weren't playing much with the idea of "incompetent AI ruins everything" (Paranoia is the only example I know of), and the tech press wasn't gonna run stories about AI's faults until it became unignorable (like that lawyer who got in trouble for taking ChatGPT at its word).
Now, of course, the public's had plenty of time to let the reality of this current AI bubble sink in, to watch as the AI industry tries and fails to fix the unfixable hallucination issue, to watch the likes of CrAIyon and Midjourney continually fail to produce anything even remotely worth the effort of typing out a prompt, to watch AI creep into and enshittify every waking aspect of their lives as their bosses and higher-ups buy the hype hook, line and fucking sinker.
All this, I feel, has built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence^tm^), no matter how many server farms you build or oceans of water you boil.
Especially so on the creativity front - publicly rejecting AI, like what Procreate and Schoolism did, earns you an instant standing ovation, whilst openly shilling it (like PC Gamer or The Bookseller) or showcasing it (like Justine Moore, Proper Prompter or Luma Labs) gets you publicly and relentlessly lambasted. To quote Baldur Bjarnason, the “E-number additive, but for creative work” connotation of “AI” is more-or-less a permanent fixture in the public’s mind.
I don't have any pithy quote to wrap this up, but to take a shot in the dark, I expect we're gonna see a particularly long and harsh AI winter once the bubble bursts - one fueled not only by disappointment in the failures of LLMs, but widespread public outrage at the massive damage the bubble inflicted, with AI funding facing heavy scrutiny as the public comes to treat any research into the field as done with potentially malicious intent.