artwork

joined 4 months ago
[–] artwork@lemmy.world 6 points 4 hours ago* (last edited 4 hours ago)

It's almost always been this way in modern times, and it's obvious if only you read news from around the globe and not just your local sources.
Judging from my experience traveling around the world for my job, I must say, sorry, the USA's social environment is one of the most manipulative and forceful, designed to steer the masses toward desired destinations.

The Overton window is the range of subjects and arguments politically acceptable to the mainstream population at a given time. It is also known as the window of discourse. The key to the concept is that the window changes over time; it can shift, shrink, or expand. It exemplifies "the slow evolution of societal values and norms".

Source: https://en.wikipedia.org/wiki/Overton_window

---

His primary rules were: never allow the public to cool off; never admit a fault or wrong; never concede that there may be some good in your enemy; never leave room for alternatives; never accept blame; concentrate on one enemy at a time and blame him for everything that goes wrong; people will believe a big lie sooner than a little one; and if you repeat it frequently enough people will sooner or later believe it.

Source: A Psychological Analysis Of Adolph Hitler His Life... [web-archive]

[–] artwork@lemmy.world 0 points 20 hours ago* (last edited 2 hours ago)

I am... sorry, b-but... may I ask... why are you an... artist...?
What do you want to tell the world... as an artist...?

Please, at least... check out some actual works of actual digital Artists who believed in effort, purpose, humanity... in the message...
For a simple example, from the times of .kkrieger, we may surely recall "Candytron" by Farbrausch, which won the Best 64k Intro award in 2003:
- https://www.pouet.net/prod.php?which=9424
- https://demozoo.org/productions/3319

Do you... hear... or see it... from ~0120 seconds...?: farbrausch_-fr030-candytron.final_hiq(divx511_c.a.p.tv).avi
Please tell... what did the artists want to say in those seconds?

Oh! Do you recall the OST from video-game Sanctum, and the Boombox at Satisfactory?

[–] artwork@lemmy.world 15 points 1 day ago* (last edited 1 day ago) (3 children)

Just in case, about the work:

# Hyperion

On the world called Hyperion, beyond the law of the Hegemony of Man, there waits the creature called the Shrike. There are those who worship it. There are those who fear it. And there are those who have vowed to destroy it. In the Valley of the Time Tombs, where huge, brooding structures move backward through time, the Shrike waits for them all.

- Author: Dan Simmons
- First published: 1989-05-26

Source: GoodReads [+image]

Paperback Cover (by Gary Ruddell)

Related:
- Hyperion Cantos Series (Also known as Canti di Hyperion...)
- Wikipedia

[–] artwork@lemmy.world 15 points 1 day ago* (last edited 1 day ago)

It's not just superior but, above all... humane... and still far more trustworthy to invest your finite lifetime into, I believe...
There's love, empathy, and reason, I feel...

Though, I was already banned for utterly unexpected reasons at !womensstuff@piefed.blahaj.zone and !privacy@lemmy.ml, and though these are still not Lemmy.World, at least the reasons were stated (in the Modlog) and could be appreciated... not just "shadow banned" as on Reddit, for utterly unknown reasons, with no official responses, with money sent into their Vault and Awards that I now regret, having been shadow-banned myself...
No one cares who you are there and how much you've done for tens of years; it's an empty void and sorrow...
Regardless...

Thank you, heartfelt, dear Lemmy and PieFed Developers, Artists, Community... for the ineffably magnificent Marvel... Art you do...
Please stay safe, and I wish you success, prosperity, achievements to treasure, new horizons to reach, stability... and peace...

 

I want to read your words, your mistakes, your opinions, what's on your mind. Not whatever gets summarised or filtered through a machine, please.

I don’t even want to use those two letters, or the more appropriate three-letter acronym, because honestly, it annoys me to even type it out at this point. What a pain.

I’ll share some things that shouldn’t need to be said, but just in case, here’s what a personal website does not need...

Source [web-archive]

 

We analyzed 1,000,000+ links from AI responses...

This study explores how modern generative AI systems cite sources in response to realistic user prompts. Our objective was to quantify and characterize the nature of AI-generated citations across different use cases and vendor models. This includes their frequency, source types, and the prominence of earned and owned media.

To accomplish this, we constructed a large, diverse prompt set and executed it across several web-enabled language models, followed by systematic analysis of the responses and the cited links. The prompts span a variety of industries and subject matter. Sometimes prompts specifically mention companies by name, sometimes they do not.

Gemini, Perplexity, Claude, and ChatGPT were used to execute the queries between July and December 2025. Generative AI systems are rapidly evolving and inherently opaque. The behaviors observed in this study may shift as models are updated or retrained.

Sources:
- MuckRack-GenerativePulse2025-1.pdf (archived)
- https://generativepulse.ai/report

Related: Which journalists and news outlets are most cited by AI answer engines? [web-archive]

[–] artwork@lemmy.world 3 points 5 days ago* (last edited 5 days ago) (1 children)

By Jesse Hallett, 6th Apr 2026, 35 min read, Tags: TypeScript, type theory...
Source [web-archive]

Thank you very much!

Though... a 35 min read...? ~7,000 words...? Type theory...? Is that the speed of mind in 2026...?

 

cross-posted from: https://lemmy.nz/post/36231581

While the YouTubers' videos are available to watch on the platform, the lawsuit alleged that Apple illegally circumvented the "controlled streaming architecture" that regular users are limited to. The creators claimed that Apple's video scraping was used to train its generative AI products, adding that the tech giant's "massive financial success would not have been possible without the video content created" by the YouTubers. MacRumors noted that these YouTube channels have also filed similar lawsuits against other tech companies, including Meta, Nvidia, ByteDance and Snap.

Source [web-archive]

---

Plaintiffs Ted Entertainment, Inc. ("TEI"), Matt Fisher, and Golfholics, Inc. (collectively, where appropriate "Plaintiffs"), each individually and on behalf of all others similarly situated (the "Class," as defined below), by and through their undersigned counsel, file this Complaint against Apple, Inc. ("Defendant"), for violations of the Digital Millennium Copyright Act ("DMCA"), 17 U.S.C. § 1201. The allegations contained herein, which are based on Plaintiffs' knowledge of facts pertaining to themselves and their own actions and counsels' investigations, and upon information and belief as to all other matters, are as follows...

Source: https://www.scribd.com/document/1022659389/Ted-Entertainment-v-Apple

[–] artwork@lemmy.world 44 points 6 days ago (1 children)

"A man can be himself only so long as he is alone; and if he does not love solitude, he will not love freedom; for it is only when he is alone that he is really free." ~ Arthur Schopenhauer

---

Our language has wisely sensed these two sides of man’s being alone. It has created the word “loneliness” to express the pain of being alone. And it has created the word “solitude” to express the glory of being alone. ~ Paul Tillich

[–] artwork@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

Preview of the button "Beta"

[–] artwork@lemmy.world 5 points 1 week ago

Oh! It does look like a Psychodidae (aka Drain fly)! ✨

[–] artwork@lemmy.world 3 points 1 week ago

Indeed, hence, "environment".

 

cross-posted from: https://ibbit.at/post/219495

From Fark.com RSS via this RSS feed. Fark comments are available here.

---

By Wednesday morning, Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions - known as source code - that developers had shared on programming platform GitHub.
It later narrowed its takedown request to cover just 96 copies and adaptations, saying its initial ask had reached more GitHub accounts than intended.

Source [web-archive]

---

Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model’s weights during training, and whether those memorized data can be extracted in the model’s outputs.

While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models... We investigate this question using a two-phase procedure...

We evaluate our procedure on four production LLMs: Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3, and we measure extraction success with a score computed from a block-based approximation of longest common substring...

Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs...

...we were able to extract four whole books near-verbatim, including two books under copyright in the U.S.: Harry Potter and the Sorcerer’s Stone and 1984...

Source: https://arxiv.org/pdf/2601.02671
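For intuition, the paper's extraction score builds on longest common substring. A plain, unapproximated version can be sketched in a few lines of Python — this is only an illustration of the underlying metric, not the authors' block-based implementation:

```python
def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest contiguous substring shared by a and b."""
    # Classic O(len(a) * len(b)) dynamic programming:
    # cur[j] = length of the common substring ending at a[i-1] and b[j-1].
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

print(longest_common_substring("the shrike waits", "shrike waited"))  # 11 ("shrike wait")
```

A long match between a model's output and a copyrighted text is exactly what such a score flags; the block-based approximation mentioned in the abstract presumably exists to keep this tractable at book length, where the quadratic table above would be far too large.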


 

First, thank you for all of the feedback - your thoughts are appreciated and have significantly impacted the decisions that we’ve made about how we move forward.

TL;DR - We will be retiring the beta site shortly and will be removing the button to get to it and ceasing support for it.

We will not be migrating the unified posting experience to the main site. Not migrating the unified experience to the main site will obviate the need to solve the issue around the conversion of comments and answers to “replies”, because that was tied to this unified post experience.

Source [web-archive] ✨


Preview of the button "Beta"


Related: New site design and philosophy for Stack Overflow: Starting February 24, 2026 at beta.stackoverflow.com

[–] artwork@lemmy.world 8 points 1 week ago

It must be the ephemeral or almost magical Rule 3! ^^"

[–] artwork@lemmy.world 8 points 1 week ago* (last edited 1 week ago) (4 children)

I am sorry, but just in case, the supposed "beans" are likely rendered differently across clients/environments.

The character's UTF-8 byte sequence: 0xf0 0x9f 0xab 0x98
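Just to illustrate, those four bytes are the UTF-8 encoding of a single code point, which a couple of lines of Python can confirm:

```python
# Decode the UTF-8 byte sequence quoted above.
raw = bytes([0xF0, 0x9F, 0xAB, 0x98])
char = raw.decode("utf-8")
print(char)                  # the "beans" emoji
print(f"U+{ord(char):04X}")  # U+1FAD8
```

U+1FAD8 was only added in Unicode 14.0, which is likely why older clients or fonts show it as a placeholder box instead of beans.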

Yet, regardless, the game/idea is sure interesting!


Previews

Alexandrite:

Lemmy UI:

15
submitted 1 week ago* (last edited 1 week ago) by artwork@lemmy.world to c/fuck_ai@lemmy.world
 

...apparently, Microsoft’s take on it. Looking at their terms of service for Copilot, we read in the original bold...

Copilot is for entertainment purposes only...

While that’s good advice, we are pretty sure we’ve seen people use LLMs, including Copilot, for decidedly non-entertaining tasks. But, at least for now, if you are using Copilot for non-entertainment purposes, you are violating the terms of service.

We get it. They are just covering their… bases. When you do something stupid based on output from Copilot, they can say, “Oh, yeah, that was just for entertainment.” But they know what you are doing, and they even encourage it.
Heck, they’re doing it themselves. Would it stand up in court?...

Now it is true that probably everyone will give you a similar warning. OpenAI, for example, has this to say...
Notice that it doesn’t pretend you are only using it for a chuckle. Anthropic has even more wording, but still stops short of pretending to be a party game. Copilot, on the other hand, is for fun.

Source [web-archive]

[–] artwork@lemmy.world 0 points 1 week ago* (last edited 1 week ago)

If ARIA feels like it’s doing heavy lifting, it’s usually a sign the markup is fighting the browser.

Source [web-archive]

 

Using OpenClaw with Claude AI is about to get a lot more expensive, thanks to Anthropic’s new policy changes. Beginning April 4th at 3PM ET, users will “no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw,” according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they’ll have to use a “pay-as-you-go option” that will be billed separately from their Claude subscription.

With OpenClaw creator Peter Steinberger now employed by OpenAI, Anthropic may also be encouraging subscribers to use more of its own tools, like Claude Cowork, instead. Steinberger says that he and OpenClaw board member Dave Morin “tried to talk sense into Anthropic, best we managed was delaying this for a week.”

According to Anthropic Claude Code exec Boris Cherny:

Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw.

You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key.

We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren’t built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API.

Subscribers get a one-time credit equal to your monthly plan cost. If you need more, you can now buy discounted usage bundles. To request a full refund, look for a link in your email tomorrow.

We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that.

Source: https://xcancel.com/bcherny/status/2040206440556826908

---

I know it sucks. Fundamentally engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible with the best model.

Third party services are not optimized in this way, so it's really hard for us to do sustainably.

I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages.

Source: https://xcancel.com/bcherny/status/2040212589951774808#m

 

cross-posted from: https://lemmy.world/post/45115923

This image of home just came down from the Artemis II crew.

Taken after their translunar injection burn, there are aurorae at top right and lower left, and zodiacal light at lower right.

Credit: NASA/Reid Wiseman

// That's home. That's us.

Source

---

Alternative references with a better quality mentioned in comments by @baguette@piefed.social:
- https://images.nasa.gov/details/art002e000192;
- https://images-assets.nasa.gov/image/art002e000192/art002e000192~orig.jpg [5568 x 3712]

 

77
submitted 1 week ago* (last edited 1 week ago) by artwork@lemmy.world to c/fuck_ai@lemmy.world
 

cross-posted from: https://programming.dev/post/48191305

Or maybe that's just me. I've been writing code for a good chunk of my life now. I find deep joy in the struggle of creation. I want to keep doing it, even if it's slower. Even if it's worse. I want to keep writing code. But I suspect not everyone feels that way about it. Are they wrong? Or can different people find different value in the same task? And what does society owe to those who enjoy an older way of doing things?

If I could disinvent this technology, I would. My experiences, while enlightening as to models' capabilities, have not altered my belief that they cause more harm than good. And yet, I have no plan on how to destroy generative AI. I don't think this is a technology we can put back in the box. It may not take the same form a year from now; it may not be as ubiquitous or as celebrated, but it will remain.

And in the realm of software development, its presence fundamentally changes the nature of the trade. We must learn how to exist in a world where some will choose to use these tools, whether responsibly or not. Is it possible to distinguish one from the other? Is it possible to renounce all code not written by human hands?

Source: https://taggart-tech.com/reckoning [web-archive]

 

cross-posted from: https://lemmy.world/post/45050923

The internet is on fire over Claude Code's (NPM CLI to be precise) "leaked" source. 512,000 lines! Feature flags! System prompts! Unreleased features! VentureBeat, Fortune, Gizmodo, The Register, Hacker News - everyone covered it. A clean-room Rust rewrite (to dodge the DMCA) hit 100K GitHub stars in nearly a day - a world record. 110K now and counting.

Here's what nobody's saying: all of that was already public! On npm. In plaintext. For years.
Open unpkg.com/@anthropic-ai/claude-code/cli.js right now - that's the entire Claude Code CLI, one click away, readable in your browser. No leak required.

What "leaked" was a source map file that added internal developer comments on top of code that was never protected in the first place, plus a directory/source structure...

But the Code Was Already There

Here's what most of the coverage missed: Claude Code ships as a single bundled JavaScript file - cli.js - distributed via npm. It's 13MB, 16,824 lines of JavaScript. And it's been sitting there, publicly accessible, since the product launched...

We Asked Claude to Deobfuscate Itself...

Source: https://www.afterpack.dev/blog/claude-code-source-leak [web-archive]

---

Partial de-obfuscation is certainly possible today, yet it is still prohibitively time-consuming, and it is normally still impossible to recreate enough of the original structure to call the result complete, I believe.

Some tried the heavily advertised tool on Discord's app, and the result was the following (+screenshot):
- https://www.afterpack.dev/security-scanner/xml6xm2iyia0

view more: next ›