projectmoon

joined 2 years ago
[–] projectmoon@lemm.ee 2 points 10 months ago (1 children)

North Carolina?

[–] projectmoon@lemm.ee 6 points 10 months ago

I know. I have NodeBB as a backup.

[–] projectmoon@lemm.ee 34 points 10 months ago (2 children)

I imagine that was part of it, but I doubt it's the actual main reason. More of a post justification.

[–] projectmoon@lemm.ee 7 points 10 months ago

Or if you just ignore federal courts, which seems to be the current fashion.

[–] projectmoon@lemm.ee 9 points 10 months ago (1 children)

Rclone can do file mounts as well as sync.
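To illustrate the difference, here are the two modes as typical invocations. The remote name `gdrive:` and the paths are hypothetical; substitute whatever remote you set up with `rclone config`.

```shell
# One-way sync: make the remote match the local directory.
rclone sync ~/Documents gdrive:Documents

# Mount the remote as a local filesystem (needs FUSE; unmount with
# Ctrl-C or `fusermount -u ~/mnt/gdrive`).
rclone mount gdrive:Documents ~/mnt/gdrive --vfs-cache-mode writes
```

`sync` is a batch copy that deletes extraneous files on the destination, while `mount` exposes the remote for on-demand access without copying everything down first.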

[–] projectmoon@lemm.ee 13 points 10 months ago

A lot of the answers here are short or quippy, so here's a more detailed take. LLMs don't "know" how good a source is. They are word association machines, and they are very good at that. When you use something like Perplexity, an external API feeds information from the search queries into the LLM, which then summarizes that text in (hopefully) a coherent way. There are ways to reduce the hallucination rate and check the factual accuracy of sources, e.g. by comparing the generated text against authoritative information. But how much of that Perplexity et al. actually employ, I have no idea.
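The retrieval-then-summarize pattern described above can be sketched as follows. This is a minimal illustration, not Perplexity's actual pipeline: the snippet data and prompt template are made up, and a real system would call a search API and an LLM endpoint instead of just building a string.

```python
# Sketch of retrieval-augmented generation: search results are stuffed
# into a prompt, and the LLM summarizes them. The model itself never
# judges source quality; it only associates words over whatever the
# retrieval layer hands it.

def build_rag_prompt(question: str, snippets: list) -> str:
    """Assemble retrieved snippets into a prompt for the LLM to summarize."""
    sources = "\n".join(
        f"[{i + 1}] ({s['url']}) {s['text']}" for i, s in enumerate(snippets)
    )
    return (
        "Answer using ONLY the numbered sources below, citing them inline.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical search results -- in practice these come from a search API.
snippets = [
    {"url": "https://example.org/a", "text": "Water boils at 100 C at sea level."},
    {"url": "https://example.org/b", "text": "Boiling point drops with altitude."},
]
prompt = build_rag_prompt("At what temperature does water boil?", snippets)
```

Grounding the model in retrieved text like this reduces (but doesn't eliminate) hallucination, which is why the quality of the answer depends mostly on the quality of the retrieval step.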

[–] projectmoon@lemm.ee 0 points 11 months ago (1 children)

I feel like this article is exactly the type of thing it's criticizing.

[–] projectmoon@lemm.ee 8 points 11 months ago (4 children)

What is actually happening to the computer in the image?

[–] projectmoon@lemm.ee 0 points 11 months ago

I think you have the wrong full generation parameters here.

[–] projectmoon@lemm.ee 9 points 11 months ago (5 children)

Can you link the feeds?

[–] projectmoon@lemm.ee 0 points 11 months ago

The problem is that while LLMs can translate, it's still machine translation and isn't always accurate. And it won't stop at translation: they'll apply "AI" to everything that looks like it might vaguely fit, and that will stifle productivity.

[–] projectmoon@lemm.ee 0 points 1 year ago (2 children)

Is the code available somewhere?


Over the weekend (this past Saturday specifically), GPT-4o seems to have gone from being capable and fairly permissive at generating creative writing to refusing to generate basically anything, citing alleged content policy violations. It'll just say "can't assist with that" or "can't continue." But about 80% of the time, if you regenerate the response, it'll happily continue on its way.

It's like someone updated some policy configuration over the weekend and accidentally put an extra 0 in a field for censorship.

GPT-4 and GPT-3.5 seem unaffected by this, which makes it even weirder. Switching to GPT-4 will have none of the issues that 4o is having.

I noticed this happening literally in the middle of generating text.

See also: https://old.reddit.com/r/ChatGPT/comments/1droujl/ladies_gentlemen_this_is_how_annoying_kiddie/

https://old.reddit.com/r/ChatGPT/comments/1dr3axv/anyone_elses_ai_refusing_to_do_literally_anything/
