But it's not really a NEW problem. We knew LLMs are trained on aggregate human data. We know aggregate human data is fundamentally flawed, inconsistent, unreliable, etc.
Like, was there a point at which people just decided, nah, AI is just plain accurate? Or is that just what morons always thought, despite the permanent warnings plastered everywhere saying THIS AI CAN MAKE MISTAKES, CHECK EVERYTHING!

This is exactly what Grand Juries are for and why the bar for indictment is so low.
It's a safeguard to prevent frivolous felony prosecution of individuals by the State.
It's up to the jurors to take one look at the evidence the prosecution presents and say, nope, this is a BS case.
That's why it's so easy to get indictments. Generally, prosecutors only show up with actual evidence of potential wrongdoing.
If they show up with nothing, in typical Trump legal team fashion, they will get no indictment.
They seem to think that because THEY suck at law, everyone else does too. All the competent lawyers left that administration. Believe it or not, plenty of lawyers actually have respect for the institutions they spent their lives training and working in.