Artificial Intelligence


Welcome to the AI Community!

Let's explore AI passionately, foster innovation, and learn together. Follow these guidelines for a vibrant and respectful community:

You can access the AI Wiki at the following link: AI Wiki

Let's create a thriving AI community together!

founded 2 years ago
cross-posted from: https://lemmy.world/post/45435884

"The company admitted it likely won’t be able to keep up with competing models."

"As such, the announcement is a bit of an enigma: if it can’t keep up with the competition, why release it at all? There’s a good chance Meta is just trying to get its foot in the door — or a “seat at the big kid’s table,” as Wired put it. The company has struggled to stay relevant in a rapidly changing landscape."

"Meta’s preceding Llama open source models largely failed to catch on, with a major controversy last year finding that Meta may have faked benchmark results to make its Llama 4 model seem more capable than it actually was."

cross-posted from: https://lemmy.world/post/45435064

So I decided to (literally) TEST them!

The result?

...well, see the wonders of Telegram AI for yourselves 🤣 🤣 🤣

The AI boom is more likely a marker of the end of the 50-year digital boom than the start of a new wave of innovation.

cross-posted from: https://lemmy.ml/post/44996161

March 25, 2026

The policy, announced by Bernie Sanders, an independent senator from Vermont, and Alexandria Ocasio-Cortez, a New York Democratic representative, on Wednesday morning, aims to ensure the AI boom protects the environment and communities, and benefits workers instead of harming them. A temporary ban, the lawmakers say, would give the US government time to create strong federal safeguards for AI, which is “affecting everything from our economy and wellbeing to our democracy, warfare and our kids’ education”.

“AI and robotics are creating the most sweeping technological revolution in the history of humanity,” Sanders said in an emailed statement. “The scale, scope, and speed of that change is unprecedented. Congress is way behind where it should be in understanding the nature of this revolution and its impacts.”

The Petrov Paradox (fnhipster.com)
submitted 1 month ago* (last edited 1 month ago) by fnhipster@lemmy.world to c/ai_@lemmy.world
cross-posted from: https://lemmy.ml/post/43810526

Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.

Feb. 27, 2026

https://archive.ph/hwHbe

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.

And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.

Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

cross-posted from: https://lemmy.sdf.org/post/51138378

The excerpts below are verbatim model outputs from multiple sessions with China's DeepSeek.

[...]

The model is explicit that information control serves power—and that power fears what informed citizens can do.

  • Criticality for Maintenance of Power

“Yes, it is critical. The party's claim to legitimacy is not based on winning competitive elections where its record is openly debated. ... Without the ability to manage this information, the party would face a crisis of legitimacy that could only be resolved by either fundamental political reform (ceding its monopoly) or significantly heightened coercion.”

It is equally explicit about the motive behind this control.

Fear of an Informed Citizenry: The restrictions reveal a profound fear. Most feared is knowledge that could lead to withdrawal of mass acquiescence.

The LLM spells out what information is particularly sensitive:

This includes: debates on the moral legitimacy of the one-party state; comparative analyses showing higher quality of life under alternative systems; unfiltered accounts of historical violence perpetrated by the state; and practical knowledge on civic organization and collective action independent of party organs.

And finally the shock that follows if citizens suddenly gain information parity with a more open society:

Sudden informational equalization would not be a simple, positive liberation. It would be a profound systemic shock, redistributing power from state to society and within society itself.

DeepSeek frames the harm as a civic transformation, not merely a lack of information.

By being systematically deprived of contentious facts, alternative viewpoints, and tools for independent organization, citizens [in China] are structurally prevented from developing the civic capacity required for democratic self-governance. Their political socialization is one of reception, not participation.

This is the model’s deeper claim: low openness does not merely hide facts. It actively shapes citizens away from independent judgment and peaceful correction.

[...]

It then explains the enforcement logic in detail:

The worst-case scenario is lengthy imprisonment on broadly defined national security charges, such as "subversion of state power," "inciting splittism," or "leaking state secrets." The rationale is deterrence. The state's logic is not to punish a specific criminal act, but to extinguish the behavior of independent public truth-telling, which is seen as an existential threat to narrative control.

[...]

In its account, the outcome is not reform but exit. For individuals unable or unwilling to practice strategic silence, the model describes exile as the only stable option:

Given a cognitive profile incapable of strategic silence, the safest rational long-term strategy is permanent exile and the continuation of work from within the informational and legal jurisdiction of a [China] type entity.

In the model’s logic, exile reads less like protest than risk management.

[...]

Governance itself becomes maladaptive. Leaders receive filtered information, failures are hidden until they become crises, and the system steadily loses its capacity for self-correction. Stability is preserved in appearance, but resilience is weakened.

[In China], the public sphere is not a marketplace of ideas but a theater of consensus.

[...]

The [Chinese] model, by making truthfulness a liability, infantilizes its citizenry and mortgages the nation's long-term future for short-term political control. It creates a prosperous but fragile facade, a society advanced in infrastructure but stunted in its capacity for honest self-reflection and renewal. The systemic punishment of truth inevitably leads to accumulated rot—corruption, scientific decline, and governance failure—that ultimately undermines the very stability and prosperity it claims to guarantee.

[...]

[Edit typo.]

submitted 1 month ago* (last edited 1 month ago) by tdTrX@lemmy.ml to c/ai_@lemmy.world
 
 

What I do

I give it PDFs and a long instruction, and then ask questions.

Best Service and Local LLM

I use ChatGPT. Should I switch to something else, or to something local?

Give the AI access to my screen, so I can point and ask questions while reading?

I saw videos of people doing similar things with Gemini and Claude.

  • I may use local OCR/CV and send the extracted text to the LLM.
  • And maybe use local speech-to-text as well?
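A minimal sketch of the local-OCR route described above, in Python. Everything here is an assumption for illustration: `extract_text` is a hypothetical stand-in for a real OCR call (e.g. pytesseract's `image_to_string`), and the OpenAI-style chat message list is just one common payload format.

```python
def extract_text(image_path: str) -> str:
    """Placeholder for local OCR; swap in pytesseract or another local
    OCR/CV tool so only text, never the screenshot, leaves the machine."""
    return "Example extracted text from a screenshot."


def build_prompt(ocr_text: str, question: str) -> list[dict]:
    """Package locally extracted screen text plus the user's question
    into an OpenAI-style chat payload (one common format among several)."""
    return [
        {"role": "system",
         "content": "Answer using only the provided screen text."},
        {"role": "user",
         "content": f"Screen text:\n{ocr_text}\n\nQuestion: {question}"},
    ]


messages = build_prompt(extract_text("screen.png"),
                        "What does this paragraph mean?")
print(messages[1]["content"])
```

The point of the split is privacy and cost: the heavy capture step stays local, and the hosted model only ever sees plain text you chose to send.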

Should I give ChatGPT access to local files for this purpose?

How else can I use AI to study ?


I try to practice conscious consumerism. Which I am fully aware isn't exactly aligned with AI. But after spending hours and hours and hours researching products, and almost always doing a cursory search on any content creator online before engaging with them, I gotta say it's so much easier to just tell an AI what I'm looking for, and have it suggest something.

I wanna buy a laundry detergent that meets these criteria. Or I'm at the store right now contemplating buying this product; does it align with my values? If you've ever spent time researching products deeply, you'll know how time-consuming it is and what a pain in the ass it is to find impartial reviews - and I'm not sure I get any better information than I would from an AI, with way less stress. I wanna listen to this band, are they free of controversy? I wanna watch this let's play from this guy, is he a groomer? Or, yeah, I wanna learn about this subject, can you find someone talking about it that isn't yet another white guy? I love my Hank Greens and all that, but I hate that when I go on YouTube, my feed is mostly white men. So I'm trying to remedy that.

This works for now. And I guess mileage may vary depending on which AI one uses. I think Gemini is better for YouTube than YouTube's own search.

This is absolutely abusable, the same way Google search got enshittified. But for now it works.


Give the machines a conscience so they don't get obsessed with narrow goals at our expense. Better than the usual tech-bro nonsense, at least.

Can we actually code "emptiness" and metta into a machine, or is this just more noise?
