this post was submitted on 23 Apr 2026
59 points (96.8% liked)
Fuck AI
6975 readers
1378 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
founded 2 years ago
I dunno if I'm off base here, but staying one step ahead in the infinitely escalating digital security arms race requires using AI... which will, in turn, merely escalate, rather than prevent, further security threats. Everything else in the article seems like fluff.
Or just writing new code in Rust, which is much cheaper and prevents a large fraction of bugs
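For what it's worth, the "large fraction of bugs" Rust prevents is mostly memory errors. A minimal sketch (the buffer and index here are made up for illustration, not from the thread): an out-of-bounds read that would be silent undefined behavior in C becomes a deterministic panic in Rust:

```rust
fn main() {
    let buf = [0u8; 4];
    // Derive the index from something opaque so the compiler can't
    // reject it at compile time; it still ends up out of bounds (>= 7).
    let idx = std::env::args().count() + 6;

    // In C, buf[idx] would quietly read out-of-bounds memory.
    // Rust's bounds check turns it into a catchable panic instead.
    let result = std::panic::catch_unwind(move || buf[idx]);
    assert!(result.is_err()); // the bad read never happens
    println!("out-of-bounds access caught, not exploited");
}
```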
While that does mitigate a lot of things, it doesn't fundamentally guarantee security.
For example, the language itself can't guard against things like SQL injection, path traversal, or shell injection. Core libraries may discourage dangerous patterns, but ultimately you can still use a library the wrong way, or hand-roll something dangerous yourself.
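To illustrate the path traversal point with a hedged sketch (the `naive_serve`/`safe_serve` functions are hypothetical, purely for demonstration): everything below is fully memory-safe Rust, and the bug is still there, because it's a logic bug, not a memory bug:

```rust
use std::path::{Component, Path, PathBuf};

// Naive join: the type system is perfectly happy, but "../" segments
// in user input escape the base directory. Memory-safe, still vulnerable.
fn naive_serve(base: &Path, user_input: &str) -> PathBuf {
    base.join(user_input)
}

// Simplified mitigation: reject absolute paths and parent-dir components.
// Real code should canonicalize and verify the base is still a prefix.
fn safe_serve(base: &Path, user_input: &str) -> Option<PathBuf> {
    let candidate = Path::new(user_input);
    if candidate.is_absolute()
        || candidate.components().any(|c| matches!(c, Component::ParentDir))
    {
        return None;
    }
    Some(base.join(candidate))
}

fn main() {
    let base = Path::new("/srv/files");
    // Compiles and runs fine; the result points outside /srv/files:
    let escaped = naive_serve(base, "../../etc/passwd");
    println!("{}", escaped.display());

    assert!(safe_serve(base, "../../etc/passwd").is_none());
    assert!(safe_serve(base, "/etc/passwd").is_none());
    assert!(safe_serve(base, "report.txt").is_some());
}
```

Note the second trap: `Path::join` with an absolute argument replaces the base entirely, which is why `safe_serve` also rejects absolute input.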
I would even venture that, in this day and age, most vulnerabilities no longer come from C misadventures, between the popularity of languages with more safety rails and the better analysis tools available...
I do find it funny how Mozilla created both Rust and Servo, yet Firefox's Gecko is still written in C/C++.
Supply chain attacks: exist
Usually with new security tools, e.g. fuzzers, you catch a whole bunch of bugs, and then that class of bugs is essentially eliminated, but the security arms race switches to different classes of bugs not solved by the tools. So you have a big initial peak of bugs found/fixed that slows to a trickle. Remains to be seen if LLMs follow the same pattern.
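A toy sketch of that "big initial peak" dynamic, assuming nothing beyond the standard library (the buggy `parse_len` target is made up for illustration; real fuzzers like cargo-fuzz or AFL are coverage-guided, not purely random like this loop):

```rust
// Deliberately buggy parser: trusts a length byte without bounds-checking,
// so most inputs make the slice index panic.
fn parse_len(data: &[u8]) -> Option<&[u8]> {
    if data.is_empty() {
        return None;
    }
    let len = data[0] as usize;
    // Bug: no check that 1 + len fits inside `data`.
    Some(&data[1..1 + len])
}

fn main() {
    // Silence the per-panic backtrace noise; catch_unwind stands in
    // for the fuzzer's crash detector.
    std::panic::set_hook(Box::new(|_| {}));

    // Tiny linear-congruential generator so no external crates are needed.
    let mut seed: u64 = 0x2545F491_4F6CDD1D;
    let mut rand_byte = move || {
        seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
        (seed >> 33) as u8
    };

    let mut crashes = 0;
    for _ in 0..10_000 {
        let input: Vec<u8> = (0..4).map(|_| rand_byte()).collect();
        if std::panic::catch_unwind(|| parse_len(&input)).is_err() {
            crashes += 1;
        }
    }
    println!("{crashes} crashing inputs found");
    // Random inputs hit this whole bug class almost immediately...
    assert!(crashes > 0);
    // ...after which fixing the bounds check kills the entire peak at once.
}
```

Once the bounds check is added, this fuzzer finds nothing more, which is the "slows to a trickle" shape: the tool wipes out a class, not individual bugs.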