GnuLinuxDude

joined 2 years ago
[–] GnuLinuxDude@lemmy.ml 12 points 2 days ago

And permissively licensed utils have been around thanks to BSD and it’s never been an issue.

The distinction is that BSD coreutils are not attempting to be a drop-in, 1:1-compatible replacement for GNU coreutils. The Rust coreutils have already accomplished this with their inclusion in Ubuntu 26.04.

If I wanted a permissively licensed system, I'd use BSD. I don't, so I primarily use Linux. I think citing a proprietary OS like macOS as a reason why permissively licensed coreutils are OK is kind of funny. It's easy to forget that before the GPL there were many incompatible UNIX systems developed by different companies, and IMO the GPL has kept MIT- and BSD-licensed projects honest, so to speak. Without the GPL to keep things in check, we'd be back to how things were in the 80s.

So what's next on the docket for Ubuntu? A permissively licensed libc?

[–] GnuLinuxDude@lemmy.ml 20 points 2 days ago (3 children)

Not interested in an MIT-licensed coreutils. Thanks, but no thanks!

[–] GnuLinuxDude@lemmy.ml 50 points 5 days ago (1 children)

Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT5 is? lmao these people are all such cretins and I hate them to the last.

[–] GnuLinuxDude@lemmy.ml 32 points 5 days ago

If you want to simulate running Claude while it's offline, just go run the faucet in your kitchen.

[–] GnuLinuxDude@lemmy.ml 0 points 1 week ago

Take a tool like Photoshop. It can do all sorts of really cool things, but nobody wants to talk to the program

...What?

 

[4.1] - 2026-03-23

Encoder

  • Refactor MD, EncDec, and Entropy Coding kernels (!2604)
  • Improve Still Image coding efficiency (!2612, !2614)
  • Change Wiener Filter level for chroma for presets M3 and below (!2620)
  • Optimize Screen Content coding for Still Image (!2630)

Arm

  • Refactor Subpixel Variance kernels (!2608)
  • Optimize 16b SAD kernel (!2610)
  • Fixed Neoverse V2 unit test detection (!2622)
  • Update Arm build guide (!2625)

Bug fixes and documentation

  • Fixed a hang caused by improper variable looping (#2338, !2600)
  • Add missing option 2 for --enable-dlf's help output (!2601)
  • Depth Refinement algorithmic bug fix (!2602)
  • Add mutexes to fix hangs when running multiple instances of the encoder in one process (!2603, !2605, !2619)
  • Fix motion calculation for cyclic QP refresh (!2613)
  • Fixed a Debug vs Release mismatch (!2618)
  • Fixed some new warnings with newer GCC versions (!2621, !2636)
  • Changed Temporal Filtering distortion calculation to not include padding (!2623)
  • Cleanup some dead unit tests (!2626)
  • Benchmark framework improvements (!2627)
  • CI/CD improvements (!2628)
  • Fixed some niche crashes (!2629)
  • Readd missing PredStructure enum without SVT_AV1 prefix (!2635)
  • Rename svt_log to prevent conflict with SVT-JPEG-XS (!2634)
  • General code and doc cleanup (!2606, !2607, !2609, !2611, !2616, !2617, !2624, !2631, !2633, !2637)
[–] GnuLinuxDude@lemmy.ml 5 points 3 weeks ago

My media server, which is really just my general-purpose server, is an old ThinkPad from 2014. For media I use Jellyfin, and I make sure the content is already in a format that won't require transcoding on any device I care to serve to (typically 1080p MP4 with HEVC video + AAC audio).

If you look at the used computer market, there are endless options to attain what you are asking for. My only real advice is make sure the computer doesn’t draw much power and, if possible, doesn’t emit much or any fan noise. A laptop is a decent choice because the battery kind of serves as an uninterruptible power supply. I just cap my charge limit at 80% since I never unplug it.
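If you want to check whether a file already matches the direct-play format mentioned above (HEVC video + AAC audio), something like the following works. This is just a sketch: the filename is an example, and the codec names are what ffprobe reports for HEVC/AAC.

```shell
f=movie.mp4   # example path; point this at your media file

# Video codec of the first video stream; "hevc" means no video transcode
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name \
  -of csv=p=0 "$f"
# Audio codec of the first audio stream; "aac" means no audio transcode
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name \
  -of csv=p=0 "$f"
```

If either line prints something else (e.g. `mpeg4` or `dts`), Jellyfin will likely transcode for at least some clients.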

[–] GnuLinuxDude@lemmy.ml 3 points 1 month ago (1 children)

Vista called it SuperFetch, and preloading pages into memory is not a bad technique. macOS and Linux do it, too, because it's a simple way to speed up access to data that would otherwise have to be fetched from disk. You can see Linux doing it by checking the output of free and reading the buff/cache column. Reclaiming that memory is very fast, because clean cached pages can simply be dropped and overwritten; only dirty pages need to be written back first.
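For the curious, the numbers free reports come straight from /proc/meminfo, so a quick sketch of inspecting the page cache on Linux looks like:

```shell
# Human-readable summary; the "buff/cache" column is memory the kernel
# is using for cached disk data
free -h

# The underlying counter free reads (value is in kB)
awk '/^Cached:/ {print "page cache:", $2, $3}' /proc/meminfo
```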

[–] GnuLinuxDude@lemmy.ml 20 points 1 month ago (7 children)

If 8 GB of RAM is not enough for Windows, that's an own-goal. Because it is enough. Or it should be. Windows 11 is not so dramatically better than Windows Vista that it requires a 10x better computer to use comfortably. Actually, in many ways Windows 11 is a massive downgrade from what came before it.

I’m glad the new MacBook is only 8 GB. That means they have to support it as a usable low-end target. That means we aren’t jumping the gun on saying “actually you need 12 gigs of RAM” as if that should be the baseline for a usable computer.

[–] GnuLinuxDude@lemmy.ml 9 points 1 month ago

It’s not like they have a vested interest in the continuity of these western institutions. When you apply a maximum-pressure sanctions campaign against Iran, guess what!? Their economy is not tied up in your economy. They couldn’t give a fuck. Especially after you bombed them first (twice!). Especially after you unilaterally withdrew from a treaty with them.

[–] GnuLinuxDude@lemmy.ml 10 points 1 month ago (7 children)

The same White House that tells you a new casus belli for the ~~war~~ special operation each time you ask? With a projected timeline that shifts day after day? Led by a pedophile rapist who has instructed his DOJ to cover up any of his involvement with Epstein? Who we already know, from his FOUR YEARS as president, is a serial liar?

There is not a single thing that this White House can say that will make me think they are telling the truth.

[–] GnuLinuxDude@lemmy.ml 8 points 1 month ago (3 children)

Interesting writing. But my concern is that, as he said, social responsibility will be dumped for cost reasons. Anything that is GPL is under threat from an AI-based reimplementation. The cost of doing that only seems artificially low right now (these businesses are in their investment-hype phase, not their ROI phase), so it’s not really the idea that anyone could do it that concerns me. The concerning part is that, whatever the price, bigger companies can take the hit, and can now direct their resources toward undoing the GPL everywhere while simultaneously replacing labor in the process.

 

I'm installing 3x2 TB HDDs into my desktop PC. The drives are like-new.

Basically they will replace an ancient 2 TB drive that is failing. The primary purpose will be data storage: media, torrents, and some installed games. Losing the drives to failure would not be catastrophic, just annoying.

So now I'm faced with how to set up these drives. I think I'd like to do a RAID to present the drives as one big volume. Here are my thoughts, and hopefully someone can help me make the right choice:

  • RAID0: Would have been fine with the risk with 2 drives, but 3 drives seems like it's tempting fate. But it might be fine, anyhow.
  • RAID1: Lose half the capacity, but pretty braindead setup. Left wondering why pick this over RAID10?
  • RAID10: Lose half the capacity... left wondering why pick this over RAID1?
  • RAID5: Write hole problem in event of sudden shutoff, but I'm not running a data center that needs high reliability. I should probably buy a UPS to mitigate power outages, anyway. Would the parity calculation and all that stuff make this option slow?

I've also rejected considering things like ZFS or mdadm, because I don't want to complicate my setup. Straight btrfs is straightforward.

I found this page where the person basically analyzed the performance of different RAID levels, but not with BTRFS. https://larryjordan.com/articles/real-world-speed-tests-for-different-hdd-raid-levels/ (PDF link with harder numbers in the post). So I'm not even sure if his analysis is at all helpful to me.

If anyone has thoughts on what RAID level is appropriate given my use-case, I'd love to hear it! Particularly if anyone knows about RAID1 vs RAID10 on btrfs.
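One wrinkle worth knowing while weighing RAID1 vs RAID10: btrfs "raid1" means two copies of every extent spread across however many drives are present, not mdadm-style whole-disk mirroring, so an odd drive count is fine and usable space is simply raw/2. A sketch (the device names are hypothetical; check yours with lsblk first):

```shell
# Create the filesystem with both data and metadata as raid1 across
# all three drives (example device names; do NOT run blindly):
# mkfs.btrfs -d raid1 -m raid1 -L media /dev/sdb /dev/sdc /dev/sdd

# Two copies of every extent means usable capacity is raw/2,
# regardless of how many drives contribute:
raw=$((3 * 2000))                 # GB of raw capacity across 3 drives
echo "usable: $((raw / 2)) GB"    # → usable: 3000 GB
```

So with three 2 TB drives, btrfs raid1 gives about 3 TB usable and survives any single-drive failure.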

 

[4.0.0] - 2026-01-23

API updates

  • Major release with new API updates that are not backwards compatible.
  • Extended the CRF range to 70, reducing the impact of QP scaling and allowing the encoder to reach lower bitrates
  • Added quarter steps between CRF increments to allow finer granularity in QP selection
  • Added support for setting a custom global logger for library consumers (!2570 (merged), !2579 (merged))
  • Cleaned up public API headers including removal of deprecated macros, structs, and fields (!2565 (merged), !2568 (merged))
    • Additionally cleaned up anything marked using SVT_AV1_CHECK_VERSION().
  • Added ability to calculate per-frame PSNR and SSIM metrics (!2521 (merged))
  • Allow sending more than 1 but fewer than 4 frames in avif mode (this is not for AVIF image sequences, but for encoding an alpha layer) (!2551 (merged), !2560 (merged))
  • Added tune IQ and MS-SSIM for Still Image coding mode

Encoder

  • Significant improvements in AVIF and still image modes (!2552 (merged), !2567 (merged)):
    • ~5-8x speedup for M11-M0 at the same quality levels with tune MS-SSIM
    • ~5-8% BD-Rate improvements at the same complexity with tune MS-SSIM
  • Tradeoff improvements for the RTC modes (!2558 (merged)):
    • ~5-15% speedup at similar quality levels in --rtc mode across presets 7 - 11
  • Tradeoff improvements for the Random Access mode (VOD use case), showing a 10-25% speedup across presets M7 down to M0 for --fast-decode 1 and 2 (!2558 (merged))
  • Major feature updates for the visual quality mode, completing the porting of all applicable SVT-AV1-PSY features to --tune vq for video and --tune iq for avif (!2484 (merged), !2489 (merged), !2491 (merged), !2494 (merged), !2496 (merged), !2503 (merged), !2504 (merged), !2507 (merged), !2514 (merged), !2522 (merged), !2561 (merged), !2562 (merged), !2576 (merged)):
    • Added AC Bias, a psychovisual feature that improves detail preservation and film grain retention
  • Update S-Frame support to allow setting it via a specific decode-order option and with more QP options (!2477 (merged), !2523 (merged), !2534 (merged))
  • Further Arm Neon and SVE2 optimizations that improve high bitdepth encoding by an average of ~5% in low resolutions
 

In some e-waste bin I found an entry-level M1 MacBook Pro. The display didn’t work (visible crack lines on the panel, and the screen lights up but only shows black), but after testing I determined that everything else about the computer is totally fine.

I managed to factory reset the thing and now I have an extra computer on my hands. And a good one, at that, because I think even an entry level M1 is still a good computer.

I already have a laptop, desktop, and an old server. So I feel like all my needs are met. But are there creative uses with an extra Mac?

 

My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people about integrating it into our workflows. There are some impressive victories that have been achieved with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by # of tickets being done, I guess?).

The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it's obvious how google search results are spam, how spam songs and videos are being produced, etc. But even bad results from AI that have to be discarded, IMO, are spam.

And that isn't even getting into the massive amounts of theft behind the training data, or the immense amounts of electricity it takes to train, run, and serve all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists and people like myself, a programmer.

I'm literally being told at my job that I should view myself basically as an AI babysitter, and that AI has been unambiguously proven in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only fault and flaw is my (i.e. any given SE's) unwillingness to adapt and onboard.

Looking for advice from people who have had to navigate similar crap. Because I feel like I'm at a point where I must adapt or eventually get fired.

 

I have, within the context of my job, things to do that will take various lengths of time and are of various priorities. If I get blocked on one it'd be useful to know what to switch to, and on.

I have, within the context of my personal life, things that I want to do that will take undetermined amounts of time and are of various priorities.

It'd also be nice to have a record to go back and reflect on when I did what. And it'd be nice to plan a little ahead so that I can decide what I hope to do next.

So... how do you do it? I am so bad at time management. Is there useful software I can use (and if so, is it FOSS)? Is there a way to stay consistent with my planner so that I don't fall behind on my time management, without falling into the trap of building a time-management system so elaborate that all my time is spent managing my time?

Send help :(

 

I was walking home yesterday and I just happened to come across an HP LaserJet p2035n sitting by the dumpster, waiting to be taken away. I've never owned a printer, but this thing looked like it came from an era when such devices were made to be reliable instead of forcing DRM-locked cartridges, so I picked it up and took it with me. After getting situated I did some online research, and I figure this model of printer was manufactured from about 2008-2012; mine has a 2012 date.

As it turns out, this tossed printer works perfectly fine. I plugged it into power and ran a test sheet, and it prints almost perfectly. I connected it via USB-B to my PC running Fedora 41, and it immediately got picked up and added as a usable printer. I then connected the printer over Ethernet, and fortunately this thing is new enough to have Bonjour (i.e. mDNS) services, so once again my PC just immediately finds it and can print. Awesome!

My laptop is a MacBook. While it did detect the printer over the network, it couldn't add the printer because it couldn't find a driver to operate it. I honestly don't understand why that's a problem since I assume macOS also uses CUPS just like Linux. But at any rate, I found the solution:

With CUPS on Linux I can share the printer. After configuring firewall-cmd to allow the ipp service, my iPhone and my MacBook can now also print to the shared printer using the generic PostScript driver. So, in conclusion, Linux helped me 1) use this printer with no additional effort of installing drivers, 2) share this printer to devices which were not plug-and-play ready, and 3) print pics of Goku and Vegeta. As always, I love Linux.
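For anyone wanting to replicate this, the sharing setup boils down to a few commands. This is a sketch assuming firewalld and a CUPS queue named LaserJet_p2035n; your queue name will differ (check with lpstat -p):

```shell
# Tell CUPS to advertise shared printers on the local network
cupsctl --share-printers

# Mark this particular queue as shared (queue name is an example)
lpadmin -p LaserJet_p2035n -o printer-is-shared=true

# Open the IPP port (631/tcp) in firewalld, persistently
firewall-cmd --permanent --add-service=ipp
firewall-cmd --reload
```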

 

I disable animations either through GNOME's accessibility setting or by moving KDE's slider to "instant". I find that GNOME's animations are just too slow by default and KDE's tend to be janky. So while I want my window manager to have instant animations, I don't need my applications to follow suit.

Is it possible to disable the animations from the DE's settings but to keep them like normal in Firefox? Example: when I press ctrl+t it's OK if the new tab has an animation when it's created in the browser's UI.
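One lead worth checking (an educated guess on my part, not something I've verified on every desktop): the DE's "disable animations" setting is typically exposed to applications as the prefers-reduced-motion hint, which Firefox honors. You can override that hint just for Firefox in about:config:

```
ui.prefersReducedMotion = 0   (forces "no preference", i.e. Firefox keeps
                               its own animations; 1 forces reduced motion;
                               removing the pref follows the desktop again)
```

The pref is an integer; it doesn't exist by default, so you'd create it as a new Number value.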

 

When I first set up my web server, I don't think Caddy was really a sensible choice. It was still immature (the big "version 2" rewrite was in beta). But it's been about five years since then, so I decided to give Caddy a try.

Wow! My config shrank to about 25% of what it was with Nginx. It's also a lot less stuff to deal with, especially from a personal-hosting perspective. As much as I like self-hosting, I'm not exactly "into" configuring web servers. Caddy made this very easy.

I thought the automatic HTTPS feature was overrated until I used it. The fact is, it works effortlessly. I no longer need to add paths to certificate files in my config. That's great. But what's even better is that I no longer need to dig through my server notes to figure out, yet again, how to correctly use Certbot when I want new certs for subdomains, since Caddy handles it automatically.

I've been annoyed with my Nginx config for a while, and kept wishing to find the motivation to streamline it. It started simple, but as I added things to it over the years the complexity in the config file blossomed. But the thing that tipped me over to trying Caddy was seeing the difference between the Nginx and Caddy configurations necessary for Jellyfin. Seriously. Look at what's necessary for Nginx.

https://jellyfin.org/docs/general/networking/nginx/#https-config-example

In Caddy that became

jellyfin.example.com {
  reverse_proxy internal.jellyfin.host:8096
}

I thought no way this would work. But it did. First try. So, consider this a field report from a happy Caddy convert, and if you're not using it yet for self-hosting maybe it can simplify things for you, too. It made me happy enough to write about it.

 

Changes for 1.5.0 'Sonic':

1.5.0 is a major release of dav1d, that:

  • WARNING: we removed some of the SSE2 optimizations, so if you care about systems without SSSE3, you should be careful when updating!
  • Add Arm OpenBSD run-time CPU feature detection
  • Optimize index offset calculations for decode_coefs
  • picture: copy HDR10+ and T35 metadata only to visible frames
  • SSSE3 new optimizations for 6-tap (8bit and hbd)
  • AArch64/SVE: Add HBD subpel filters using 128-bit SVE2
  • AArch64: Add USMMLA implementation for 6-tap H/HV
  • AArch64: Optimize Armv8.0 NEON for HBD horizontal filters and 6-tap filters
  • Power9: Optimized ITX up to 16x4
  • Loongarch: numerous optimizations
  • RISC-V optimizations for pal, cdef_filter, ipred, mc_blend, mc_bdir, itx
  • Allow playing videos in full-screen mode in dav1dplay
 

[2.2.0] - 2024-08-19

API updates

  • No API changes on this release

Encoder

  • Improve the tradeoffs for the random access mode across presets:
    • Speedup of ~15% across presets M0 - M8 while maintaining similar quality levels (!2253)
  • Improve the tradeoffs for the low-delay mode across presets (!2260)
  • Increased temporal resolution setting to 6L for 4k resolutions by default
  • Added ARM optimizations for functions with c_only equivalent yielding an average speedup of ~13% for 4k10bit

Build cleanup, bug fixes, and documentation

  • Profile-guided-optimized helper build overhaul
  • Major cleanup and fixing of Neon unit test suite
  • Address stylecheck dependence on public repositories
 

[2.1.0] - 2024-05-17

API updates

  • One config parameter added within the padding size. Config param structure size remains unchanged
  • Presets 6 and 12 are now pointing to presets 7 and 13 respectively due to the lack of spacing between the presets
  • Further preset shuffling is being discussed in #2152

Encoder

  • Added variance boost support to improve visual quality for the tune vq mode
  • Improve the tradeoffs for the random access mode across presets:
    • Speedup of 12-40% for presets M0, M3, M5 and M6 while maintaining similar quality levels
  • Improved the compression efficiency of presets M11-M13 by 1-2% (!2213)
  • Added ARM optimizations for functions with c_only equivalent

Build cleanup, bug fixes, and documentation

  • Use nasm as a default assembler and yasm as a fallback
  • Fix performance regression for systems with multiple processor groups
  • Enable building SvtAv1ApiTests and SvtAv1E2ETests for arm
  • Added variance boost documentation
  • Added a mailmap file to map duplicate git generated emails to the appropriate author