XLE

joined 9 months ago
[–] XLE@piefed.social 7 points 1 day ago (3 children)

Other people in this thread say physics simulations are inherently chaotic. If an AI model is trained on inherently chaotic data, how will the results be anything but chaotic, or worse?

[–] XLE@piefed.social 3 points 1 day ago (1 children)

Twitter levels are unattainable

[–] XLE@piefed.social 21 points 2 days ago (1 children)

I thought Session was decentralized and couldn't be centrally shut down 🤔

[–] XLE@piefed.social 3 points 3 days ago

Anthropic CEO Dario Amodei, in a recent essay, speculated on ways that we might “buy time” before the possibility that AI enslaves or destroys humanity. But meanwhile, AI companies have products to sell...

Anybody want to tell them?

[–] XLE@piefed.social 13 points 3 days ago

Guy got in trouble because of the dumbest possible telephone game (among other things)

For example, although court documents claim Sanchez searched, “is the 900 mAh battery from a (Game Boy) capable of being used in a trigger device," Sellers said that was actually a search from [his bail] supervisor, who was cross-referencing real searches from Sanchez to see if they could be used to make explosives.

[Bail supervisor] Coyle then took a screenshot of his own search history and sent it to the district attorney, leading to a violation of Sanchez’s probation and his rearrest, Sellers said.

[–] XLE@piefed.social 31 points 3 days ago

It's nice that OpenAI is being pulled in the for-profit direction and the non-profit direction at the same time, and is threatened with losing (more) money if it fails to do either.

[–] XLE@piefed.social 9 points 3 days ago

QA is when you vibe code tests, right

[–] XLE@piefed.social 29 points 3 days ago* (last edited 3 days ago) (2 children)

The entire article can be summed up in 5 words:

an Anthropic official told CNN

Notable other passages include

Logan Graham, who heads the team at Anthropic [responsible for] its AI models’ defenses, told CNN

and

according to Anthropic

And

Anthropic said

And my personal favorite

Anthropic claims... CNN could not immediately verify this figure.

[–] XLE@piefed.social 3 points 3 days ago

Wired’s Maxwell Zeff wrote about a number of journalists using A.I. to assist their writing, including the Times columnist Kevin Roose... who [created instructions] to help Claude write in his style, including the “10 commandments” of writing like Alex Heath.

Can't believe anybody takes Kevin seriously. Not here, sure, but there are some in the tech sphere who love that he says what they already believe.

[–] XLE@piefed.social 14 points 3 days ago* (last edited 3 days ago) (1 children)

My solution to this is never using OneDrive, which (terrible as it is) can be uninstalled on Windows 10/11.

[–] XLE@piefed.social 6 points 3 days ago

Mozilla allows the installation of ad blocking extensions on Firefox, and it's already exhibited hostility towards the most talented developer of those extensions.

[–] XLE@piefed.social 2 points 4 days ago

Aren't you the guy who likes Eliezer Yudkowsky?

If you're worried about brain damage, you're self-inflicting it.

 

cross-posted from: https://piefed.social/c/technology/p/1913698/disney-exits-openai-deal-after-ai-giant-shutters-sora

Original WSJ exclusive: OpenAI Scraps Sora App in Continued Push to Focus on Coding and ‘Agent’ Tools

Paywall removal: https://archive.is/cKWkf

47
submitted 2 weeks ago* (last edited 2 weeks ago) by XLE@piefed.social to c/technology@beehaw.org
 


23
submitted 4 weeks ago* (last edited 4 weeks ago) by XLE@piefed.social to c/firefox@lemmy.ml
 

cross-posted from: https://piefed.social/c/firefox/p/1869911/i-tried-firefoxs-new-ai-smart-window-in-a-beta-build

Buried lede: the feature currently sends pre-Smart Window browsing activity to third parties (Google, Alibaba, OpenAI) without warning you that this data was even aggregated and sent.

 

I was reading a review of Firefox's experimental Smart Window feature, and this stood out as a potential huge issue:

Smart Window uses ‘memories’, things Mozilla says “…it learns from your activity” to inform its responses.

You can delete memories individually, and you can set any given chat session to not use/store them.

Fine so far.

The problem? My memory list isn’t populated with things Smart Window learned since I enabled it. Oh no.

It has activity going back months. We’re talking searches and website interactions from long before I enabled these features.

Firefox just handed that history to the AI models to plough through, without telling me upfront.

I found this the creepiest aspect of Smart Window.

Mozilla says this was a flub; it will refine the onboarding around Smart Window to limit memory formation to post-opt-in activity only. That’s obviously the right fix.

Because sharing a user’s prior browsing history with third-party AI models, silently, on feature activation, without any heads-up? Yeah, a bit icky – but that’s the price of testing features that aren’t finished, I guess.

This news leaves me with more questions than answers:

  • Was this summarized on enabling this window, or earlier?
  • Did it use an existing model, or re-use one that someone may have already downloaded for a different feature?
  • Is this activity going anywhere else, like Mozilla's recent "privacy-preserving" advertising?
  • When this releases, what will the default be?
 

AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.

The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.

“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of AI for errors is a method that is historically prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.

 

cross-posted from: https://lemmus.org/post/20604499

On an evening in late January, Emily was driving through her Minneapolis neighborhood doing something that had become part of her routine in recent weeks: patrolling for ICE.

Emily, who NPR is only identifying by her first name because she fears retribution from the federal government, says she followed an ICE vehicle at a safe distance into a parking lot. "And then someone leaned out of the passenger side of that SUV and took a picture of me and my car," she says.

Emily says she decided to leave at that point, but the SUV made a sudden U-turn and barreled towards her, braking next to her driver's side window. A female agent wearing a gaiter-style mask rolled down the window, leaned out — and addressed Emily by name.

 

Original Reddit post, which the article almost exclusively pulls from: https://old.reddit.com/r/googlecloud/comments/1reqtvi/82000_in_48_hours_from_stolen_gemini_api_key_my/
