Not to be too much of a "knew it" person, but I remember that when they first started spreading rumors about how dangerous it was, my bullshit detector started going off, because it felt like they were intentionally trying to make it sound like a threat large enough to get attention, and more importantly funding. I kinda expect it to be adopted, then opsec to complain about it sucking within a couple of weeks to months at a few bigger companies.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
The dangerous part is how many resources were wasted training a model that performs below the current open source offerings.
i dunno if im off base here, but staying 1 step ahead in the infinitely escalating digital security war requires using AI... which will, in turn, merely escalate, rather than prevent, further security threats. everything else in the article seems like fluff
i dunno if im off base here, but staying 1 step ahead in the infinitely escalating digital security war requires using AI
Or just writing new code in Rust, which is much cheaper and prevents a large fraction of bugs
While that does mitigate a lot of things, it doesn't fundamentally guarantee security.
For example, the language will not guard against things like SQL injection, path traversal, or shell injection; the language itself can't guard against those (core libraries may discourage dangerous patterns, but ultimately you can still use a library, or manually do something yourself, the wrong way).
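To illustrate the path traversal case: the sketch below is a hypothetical example, not from any real codebase, showing that safe Rust happily joins attacker-controlled `..` components onto a base directory; preventing the escape takes an explicit application-level check (here, rejecting any non-plain path component).

```rust
use std::path::{Component, Path, PathBuf};

// Naive: memory safety is irrelevant here. `join` keeps the ".."
// components, and the OS resolves them at open time, escaping `base`.
fn naive_join(base: &Path, user: &str) -> PathBuf {
    base.join(user)
}

// Safer sketch: only accept paths made of plain file-name components,
// rejecting "..", "/", drive prefixes, etc.
fn safe_join(base: &Path, user: &str) -> Option<PathBuf> {
    let p = Path::new(user);
    if p.components().all(|c| matches!(c, Component::Normal(_))) {
        Some(base.join(p))
    } else {
        None
    }
}

fn main() {
    let base = Path::new("/srv/uploads");
    // Compiles and runs fine, but the result points outside `base`.
    println!("{:?}", naive_join(base, "../../etc/passwd"));
    println!("{:?}", safe_join(base, "../../etc/passwd")); // None
    println!("{:?}", safe_join(base, "report.txt"));       // Some path under base
}
```

The compiler has no opinion on either version; both are equally "safe" Rust, which is the commenter's point.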
I would even venture that in this day and age most vulnerabilities are no longer from C misadventures, between the popularity of languages that have more safety rails and the spread of analysis tools.
I do find it funny how Mozilla has created both Rust and Servo, yet Firefox's Gecko is still written in C/C++.
Supply chain attacks: exist
Usually with new security tools, e.g. fuzzers, you catch a whole bunch of bugs, and then that class of bugs is essentially eliminated, but the security arms race switches to different classes of bugs not solved by the tools. So you have a big initial peak of bugs found/fixed that slows to a trickle. Remains to be seen if LLMs follow the same pattern.
I enjoy how much this headline works as a critique of their new model, as well as of the company in general