this post was submitted on 09 May 2026

Opensource

Some additional context regarding Fedora Project Leader Jef Spaleta's position and thinking:

I’m going to put my FPL hat on real tight for this next comment.
I’m not here to side-step problems. I’m here to lean into them and get to a better future. I think some people are generally concerned about the ethical use of AI and are loudly pushing back on a future they feel is coming for them that is outside of their control. Saying no to everything AI related in the places where they have agency to do so feels like a way to take a stand and take back control. I think that’s exactly the wrong thing to do.

I am genuinely concerned about the ethical use of AI. I am genuinely concerned about the hype bubble. But I don't think this technology goes away, even after the bubble bursts. The best possible [future] I can see involves the Fedora community being part of the conversation around ethical use of this technology, and that doesn't mean we sit on the sidelines and ask downstreams to do the work we refuse to do to provide a developer-oriented desktop experience because we can't bring [ourselves] to do it. Punting on developers is how this project becomes irrelevant. And it's definitely not in line with the supporting text of the First foundation, nor in line with what I understand Fedora's core mission to be.

[…]

As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of AI tools. If this were proposing re-orienting Workstation, which is a mature project output, I would be pushing back as well, as a skeptic on the value of the AI stuff. That approach would be materially changing the character of an existing output that users rely on, and that's probably not appropriate and would result in watching users drop out. This approach deliberately sets up a separate output to avoid that. This will either be attractive and grow, or it will not, based entirely on individual interest. This approach is not disruptive to existing outputs, nor does it ask users or volunteer contributors relying on those outputs to be co-opted into putting work into an output they do not want to contribute to.

top 4 comments
[–] Rekall_Incorporated@piefed.social 14 points 1 week ago

I tend to agree with Jef Spaleta: LLMs and ML tech/services are not going anywhere, and they will continue to have a bigger impact on our lives.

You can't just reject the notion of ML products/services; they clearly have significant utility (not just LLMs — I do a lot of video ML upscaling as a hobby, and you'd be surprised by the results you can get for DVD or even VHS quality source material).

The ethical dilemmas inherent to AI tech are not technical in nature. The root cause is tied to the current oligarchic regime and the inability of the largest and most powerful democratic-leaning society to deal with corruption.

[–] Thteven@lemmy.world 6 points 1 week ago

Don't tell me what I can't reject lol. I will never personally use AI services, and now Fedora is off my list of Linux distro recommendations because of this.

Apologies, I meant this in a more general sense.

Perhaps a clearer phrasing would have been that I believe (and in this I agree with Spaleta) that it will not be possible for open source supporters, and society more broadly, to ignore ML products/services at scale.

[–] terabyterex@lemmy.world 6 points 1 week ago

Exactly. It's not all or nothing. Take software development: I never thought vibe coding would be anything but some non-coder putting something together for use on their personal machine. It blows my mind when developers want to use that for work. On the flip side, let AI generate unit tests. They are tedious and boring, and me reviewing unit tests is a lot easier than writing all of them out.

When the dust settles, it will have its place.