N0x0n

joined 2 years ago
[–] N0x0n@lemmy.ml -3 points 1 week ago* (last edited 1 week ago) (1 children)

Sure, but piracy is probably how most people use the P2P network... When was the last time you used torrenting for something legal? :/

But whatever... This is the privacy sub...

[–] N0x0n@lemmy.ml 3 points 1 week ago (6 children)

https://www.ivpn.net/knowledgebase/general/do-you-support-port-forwarding/

Why have an article about port forwarding and P2P networks while linking the ToS that says it's against IVPN's rules?

You will not use our service for receiving and distributing pirated copyright materials. This includes, but is not limited to the following activities: trading, selling, bartering, sharing, transmitting or receiving, of such materials.

That's dumb :/

[–] N0x0n@lemmy.ml 2 points 2 weeks ago (1 children)

This seems rather dangerous ! Maybe have a beer instead !

[–] N0x0n@lemmy.ml 0 points 1 month ago

There's also https://github.com/Naunter/BT_BlockLists

It's a bit controversial... Some people say it works, others say it's useless. I've been using this filter for over a year now and haven't gotten any DMCA notice from my ISP. (I don't download any recent media though.)

Keep in mind, this could potentially block healthy peers in the swarm ^^ it's not a bulletproof solution...

It does work in qBittorrent; you just need to change the extension to .p2p, I think?

[–] N0x0n@lemmy.ml 3 points 1 month ago* (last edited 1 month ago) (1 children)

The arr stack is kinda tricky to get started with and understand how it all works together, but as soon as it clicks, it's awesome !!

Can't exactly say why, but I kinda got lost, and what helped me out was to slowly work through one arr service at a time and understand what each actually does. (First only Sonarr; after a while I added Prowlarr, then Radarr, and now I'm slowly testing Seer !)

The TRaSH guide was also helpful, especially for custom formats. Just take your time and don't go overboard making your own custom formats... I've seen a lot of people on private trackers blow up their ratio without noticing it.

Best advice I can give you is to just play around with Sonarr or Radarr alone and try things out to see what they do ^^ Or try to read and understand the official documentation, but you'll get a better grasp by doing things :)

Edit: Ohh, and forget about asking ChatGPT... It will mostly output outdated information and cause you more trouble, leaving you even more confused !

[–] N0x0n@lemmy.ml 3 points 1 month ago

Huh? That's just a myth... They're just like everyone else: blind sheep who sometimes go into the streets and accomplish nothing 🤫

Did any of the street protests of the last decades have any real impact? The farmers? Nothing... Gilets jaunes? Nothing... Police? Nothing... Getting this bastard out of the presidency? Naaah...

So yeah, historically French people are known for their revolutionary bloodlust, but that's long gone.

[–] N0x0n@lemmy.ml 6 points 1 month ago (1 children)

Nobody mentioned this, but you can route only the necessary traffic to your router (all your self-hosted services) with WireGuard's split tunneling (just set the appropriate AllowedIPs networks in your wg config).

You could set it to 0.0.0.0/0 and send all her traffic through your router, but this could potentially choke your own network and slow her connection down.
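For example, a split-tunnel client config might look like this (all addresses, keys and the endpoint below are made-up placeholders; substitute your own LAN subnet and wg addressing):

```ini
[Interface]
PrivateKey = <her-private-key>
Address = 10.0.0.2/32
# Optional: resolve names through the Pi-hole at home
DNS = 192.168.1.53

[Peer]
PublicKey = <server-public-key>
Endpoint = your.domain.tld:51820
# Split tunnel: only the home LAN and the wg subnet go through the tunnel;
# everything else uses her normal internet connection.
AllowedIPs = 192.168.1.0/24, 10.0.0.0/24
PersistentKeepalive = 25
```

Swapping AllowedIPs for 0.0.0.0/0 is what turns this into a full tunnel, with every packet transiting your home uplink.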

[–] N0x0n@lemmy.ml 1 points 1 month ago (1 children)

It's kinda open I guess... But as soon as you try to do anything outside the box on macOS, it just doesn't work without a janky workaround ^^

And don't even get me started on their .plist implementation 🤦‍♂️ I haven't updated in about 2 years, out of fear it will totally break my current workflow and all the custom things I had to do to make it work HOW I like it, not how Apple dictates it.

It's a gift, but God how I hate that dumb stupid Macraptop !

[–] N0x0n@lemmy.ml 1 points 1 month ago

Wow... Never thought about this ! Too bad my battery died years ago ! Craptop still going strong though 👍💪

[–] N0x0n@lemmy.ml 1 points 1 month ago

There's no bookmarking functionality :( however, they are grouped by directory structure. The most important part !

However, I'm not sure it works with emags ://

[–] N0x0n@lemmy.ml 3 points 1 month ago (1 children)

Eeeewww ! YouTube's audio quality is TrasH ! If you want to respect your ears and those around you... just don't download music from there... It's not worth it !

I'm not talking about FLAC-grade quality... but a minimum of 192k MP3...

[–] N0x0n@lemmy.ml 1 points 1 month ago (1 children)

I thought the acronym was GAFAM ? Google, Amazon, Facebook, Apple, Microsoft ? FAANG has some deeper meaning but misses Microsoft ^^

 

Is there some hidden pronunciation rule I'm not aware of? And why do we say F.B.I. and not FBI, or U.S.B. and not USB ?

I know it seems like a really silly question given the current situation with ICE everywhere on the news... But it really bothers me why people yell ICE when it's actually I.C.E. :/

 

cross-posted from: https://lemmy.ml/post/36794370

Hello everyone :)

My Linux learning and homelab setup is going smoothly, and after a long period of stagnation I'm on a new learning curve :D ! I just learned the magic of hard links and combined them with bind mounts (yeah, hard links only work on the same file system :P) in my qBittorrent scripting, to automagically hard-link finished downloads into a bind mount accessible to Sonarr, Radarr, Jellyfin... which can then move, rename and do everything else without ever touching the original files: PURE MAGIC !

Everything is a file? Naah, everything is a hard link ! (Or inode? xD)
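The trick, in a nutshell (the paths below are throwaway placeholders, not my real setup):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/downloads" "$tmp/library"

# qBittorrent finishes a download...
echo "episode data" > "$tmp/downloads/episode.mkv"

# ...and the script hard-links it into the library
# (both paths MUST be on the same filesystem)
ln "$tmp/downloads/episode.mkv" "$tmp/library/episode.mkv"

# Both names point to the same inode: same data, zero extra disk space
stat -c '%i' "$tmp/downloads/episode.mkv" "$tmp/library/episode.mkv"

# Removing the torrent later doesn't touch the library copy;
# the data lives as long as at least one link remains
rm "$tmp/downloads/episode.mkv"
cat "$tmp/library/episode.mkv"   # still prints: episode data

rm -r "$tmp"
```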


While I'm overjoyed that I learned and have a better understanding of files, hard links, soft links, file systems, Docker, the web, all kinds of things related to IT... I'm getting kinda overwhelmed by what's happening on my system !!

  • I have a dozen Docker Compose stacks on my server, all behind Traefik and resolved by my Pi-hole DNS on a Raspberry Pi; some use a custom image I built myself for certificate purposes or other manual changes.

  • I have some .config files in my ~/ to improve my micro experience and make it more integrated over SSH with my Mac and desktop. I also have some config files in my /root directory for my mini CA for all my services with my personal local domain name, plus other config files like /etc/bash_aliases and some changes in /etc/bash.bashrc.

  • I have some Python .venvs for learning and scripting with Python.

  • I have a long, complex bash backup script for all my Docker containers that backs up my volumes, config files and media files separately.

  • Installed some useful and needed packages like resolvconf, wireguard, samba...

  • I have a few Samba shares, all accessible on my LAN, sometimes with some exotic configuration for Mac integration.

  • My Docker containers share some volumes via bind mounts.

[....]

And now I also have some hard links lying around in the mix ! So I have to say it out loud: I'm OVERWHELMED !


Yes, I do keep some notes in Obsidian and also have a self-hosted Forgejo to keep my notes updated and have some kind of version control over changes to my scripts, but I'm not sure anymore what I actually have... Not to mention all the other stuff related to my phone (Baikal, ntfy...) or keeping everything updated (WUD does the trick for containers :)) and tidy...

I guess I'm looking for something magic, something that could show me in ONE blink what's doing what and where, and change my life in an ever-growing IT space? Preferably something visual...?


I hope to hear from you guys about what I can do to take away this feeling of being lost and not being able to fully track my systems and my LAN !

Thank you !

 


I guess using LVM on an old Raspberry Pi 3B+ was not the smartest idea :/ ! As it turns out, the Pi 3B+ can handle qBittorrent and Pi-hole without too much load average...

After switching from LVM+ext4 to a single ext4 partition, I can download, check file integrity and resolve DNS at the same time without my DNS resolution hanging !

Solution:

  • Install piHoleOS on a simpler filesystem without LVM (ext4)
  • Take qBittorrent out of Docker and install it bare-metal on the device (qbittorrent-nox)

Hello :))

I'm relatively new to the Pi and its whole ecosystem (ARM, SBCs...) and I'm kinda intrigued by what's happening here, if someone has some info to share.

I generally work on Debian for my server stuff, but found an old, never-used RPi 3B+ in one of my boxes. I installed RPi OS Lite (based on Debian Bookworm) and Docker with the Pi-hole container as the DNS server for my home network.

Works great and handles all my DNS requests without issues. However, yesterday I migrated my qBittorrent stuff (mostly Linux ISOs 😅) to the RPi, on an external HDD over USB.

While uploading works fine without issues for +/- 100 torrent files, when downloading OR checking file integrity the RPi chokes really hard, making DNS requests slow down and even become unresponsive...

I did some searching, and from my findings the USB bus is a hard bottleneck for file transfers on the Pi 3B+, and qBittorrent in a container adds a lot of overhead too, so I moved everything out of the container and installed qbittorrent-nox, which does improve the situation, but DNS requests on Pi-hole are still impossible while downloading or checking file integrity.


Is this some kind of bug, a known issue? The RPi is cool stuff, but if it can't handle a medium-hungry service, that's kind of a bummer... It's not doing that much work; it's just resolving DNS and downloading a file over the torrent protocol.

Anyone with a similar issue, observation, insight or solution? Or is the Pi just not meant to host a torrenting service?

 

Hello everyone :)

Firstly, I'm not related to programming in ANY way ! I can put together some easy-to-use bash scripts to automate some stuff, copy/pasting from the web and doing A LOT of trial and error. It sometimes took me a whole week to get a functional script. I also sometimes asked for help here on Lemmy and still use some of the scripts people helped me build from the ground up !

Secondly, I'm not really into the AI slop and have a lot of arguments for why I hate it (unauthorized web scraping, high energy consumption, privacy nightmare...).

However, I have to say I'm quite impressed by how good my first experience with AI was, considering my very limited knowledge of programming. The script works perfectly for my use case. I had to switch between Claude and o4-mini to get the best results, and it took me a whole day of prompting around and testing before it behaved like I wanted it to !

Without going too much into detail, I was looking for a way to interface with qBittorrent's API to manage my torrents and move them into new categories in an automated way. What this Python script does is export the .torrent file into a specific directory (not the data files), stop the torrent, and move it to a new category if desired, based on specific criteria (ratio, category, tags, seeding time...). If correctly configured, directories and sub-directories are also created on the fly.


My own opinion after this experience is that it probably won't write a fully functional software (not yet?), but for something like scripting or learning basic programming skills it's a very capable assistant!

  1. What do you think of the code overall? (see below)

  2. Also, do you think it's still relevant to get proficient and learn all the details, or should I just stick to the basics and let AI do the heavy lifting?


DISCLAIMER

Keep in mind this works perfectly for my use case and maybe won't work like you expect. It has its flaws and will probably break in more niche or specific use cases. Don't use it if you don't know what you're doing, and test it properly ! I'm not responsible if all your torrents are gone !!!


## Made by duckduckgo AI ##
## Required to install requests with pip install requests ##
## see duck.ai_2025-07-13_16-44-24.txt ##

import requests
import os

# Configuration
QB_URL = "http://localhost:8080"  # qBittorrent WebUI URL (no trailing slash; the API paths below add their own)
USERNAME = ""  # Replace with your qBittorrent username
PASSWORD = ""  # Replace with your qBittorrent password
MIN_RATIO = 0.0  # Minimum ratio to filter torrents
MIN_SEEDING_TIME = 3600  # Minimum seeding time in seconds
OUTPUT_DIR = "./directory"  # Replace with your desired output directory
NEW_CATEGORY = ""  # Specify the new category name
NEW_PATH = "~/Downloads"

# Optional filtering criteria
FILTER_CATEGORIES = ["cats"]  # Leave empty to include all categories
FILTER_TAGS = []  # Leave empty to include all tags
FILTER_UNTAGGED = False  # Set to True to include untagged torrents
FILTER_UNCATEGORIZED = False  # Set to True to include uncategorized torrents

# Function to log in to qBittorrent
def login():
    session = requests.Session()
    response = session.post(f"{QB_URL}/api/v2/auth/login", data={'username': USERNAME, 'password': PASSWORD})
    if response.status_code == 200:
        print("Login successful.")
        return session
    else:
        print("Login failed.")
        return None

# Function to get torrents
def get_torrents(session):
    response = session.get(f"{QB_URL}/api/v2/torrents/info")
    if response.status_code == 200:
        print("Retrieved torrents successfully.")
        return response.json()
    else:
        print("Failed to retrieve torrents.")
        return []

# Function to stop a torrent
def stop_torrent(session, torrent_hash):
    response = session.post(f"{QB_URL}/api/v2/torrents/stop", data={'hashes': torrent_hash})
    if response.status_code == 200:
        print(f"Stopped torrent: {torrent_hash}")
    else:
        print(f"Failed to stop torrent: {torrent_hash}")

# Function to start a torrent
def start_torrent(session, torrent_hash):
    response = session.post(f"{QB_URL}/api/v2/torrents/start", data={'hashes': torrent_hash})
    if response.status_code == 200:
        print(f"Started torrent: {torrent_hash}")
    else:
        print(f"Failed to start torrent: {torrent_hash}")


# Function to create a category if it doesn't exist
def create_category(session, category_name, save_path):
    # Skip category creation if category or save path is empty
    if not category_name or not save_path:
        print("Skipping category creation: category or save path is empty.")
        return

    # Check existing categories
    response = session.get(f"{QB_URL}/api/v2/torrents/categories")
    if response.status_code == 200:
        categories = response.json()
        if category_name not in categories:
            # Create the new category with savePath
            payload = {
                'category': category_name,
                'savePath': save_path
            }
            response = session.post(f"{QB_URL}/api/v2/torrents/createCategory", data=payload)
            if response.status_code == 200:
                print(f"Category '{category_name}' created with save path '{save_path}'.")
            else:
                print(f"Failed to create category '{category_name}'. Status code: {response.status_code}")
        else:
            print(f"Category '{category_name}' already exists.")
    else:
        print("Failed to retrieve categories. Status code:", response.status_code)


# Function to set the category for a torrent
def set_torrent_category(session, torrent_hash, category_name, save_path):

    # If either category or path is missing, remove the category
    if not category_name or not save_path:
        response = session.post(f"{QB_URL}/api/v2/torrents/setCategory", data={'hashes': torrent_hash, 'category': ''})
        if response.status_code == 200:
            print(f"Removed category for torrent: {torrent_hash}")
        else:
            print(f"Failed to remove category for torrent: {torrent_hash}")
        return

    # Otherwise assign the torrent to the (already created) category
    response = session.post(f"{QB_URL}/api/v2/torrents/setCategory", data={'hashes': torrent_hash, 'category': category_name})
    if response.status_code == 200:
        print(f"Set category '{category_name}' for torrent: {torrent_hash}")
    else:
        print(f"Failed to set category '{category_name}' for torrent: {torrent_hash}")


def is_category_match(torrent_category, filter_categories):
    """
    Check if the torrent's category matches any of the filter categories.
    Supports partial category matching.

    Args:
    torrent_category (str): The category of the torrent
    filter_categories (list): List of categories to filter by

    Returns:
    bool: True if the category matches, False otherwise
    """
    # If no filter categories are specified, return True
    if not filter_categories:
        return True

    # Check if the torrent's category starts with any of the filter categories
    return any(
        torrent_category == category or
        torrent_category.startswith(f"{category}/")
        for category in filter_categories
    )


# Modify the export_torrents function to use the new category matching
def export_torrents(session, torrents):
    # Create the output directory if it doesn't exist
    os.makedirs(OUTPUT_DIR, exist_ok=True)

    for torrent in torrents:
        ratio = torrent['ratio']
        seeding_time = torrent['seeding_time']
        category = torrent.get('category', '')
        tags = torrent.get('tags', '')

        # Use the new category matching function
        if (ratio >= MIN_RATIO and
            seeding_time >= MIN_SEEDING_TIME and
            is_category_match(category, FILTER_CATEGORIES) and
            (not FILTER_TAGS or any(tag in tags for tag in FILTER_TAGS)) and
            (not FILTER_UNTAGGED or not tags) and
            (not FILTER_UNCATEGORIZED or category == '')):

            torrent_hash = torrent['hash']
            torrent_name = torrent['name']
            export_url = f"{QB_URL}/api/v2/torrents/export?hash={torrent_hash}"


            # Export the torrent file
            response = session.get(export_url)
            if response.status_code == 200:
                # Save the torrent file with its original name in the specified output directory
                output_path = os.path.join(OUTPUT_DIR, f"{torrent_name}.torrent")
                with open(output_path, 'wb') as f:
                    f.write(response.content)
                print(f"Exported: {output_path}")

                # Stop the torrent after exporting
                stop_torrent(session, torrent_hash)

                # Create the new category if it doesn't exist
                create_category(session, NEW_CATEGORY, NEW_PATH)

                # Set the category for the stopped torrent
                set_torrent_category(session, torrent_hash, NEW_CATEGORY, NEW_PATH)
            else:
                print(f"Failed to export {torrent_name}.torrent")

# Main function
def main():
    session = login()
    if session:
        torrents = get_torrents(session)
        export_torrents(session, torrents)

if __name__ == "__main__":
    main()

 

Partially Solved

While I haven't found a native way to integrate ntfy into Glance, I did build something that sends basic text streams to Glance in an automated way. It's very rudimentary and probably error-prone, but that's the best I could do right now... Maybe someone else will chime in with better advice or a better solution.

For those interested, PostgREST lets you spin up a simple Docker container in front of a Postgres database that you can query from the custom API widget in Glance. It DOES work, but if, like me, your database/JSON/Postgres knowledge is very limited, it only gets you basic text responses like: "Update Failed".

I did try to go a little further down the rabbit hole, but it does require a good database and query/response background. Not a very good solution, and I probably won't pursue or improve on it right now... But feel free to give better advice or another lead to follow :)
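For reference, the rough shape of that setup (a sketch only: the password is a placeholder, and the web_anon role plus whatever table/view your script writes to have to be created in Postgres yourself):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder

  api:
    image: postgrest/postgrest
    environment:
      # PostgREST exposes the schema's tables/views as REST endpoints
      PGRST_DB_URI: postgres://postgres:change-me@db:5432/postgres
      PGRST_DB_ANON_ROLE: web_anon   # read-only role you create in the db
    ports:
      - "3000:3000"
    depends_on:
      - db
```

Glance's custom-api widget can then point at whatever endpoint PostgREST generates for your table (a hypothetical http://host:3000/status, say) and render the text your script last wrote there.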

Further notes:

On a final note, I do see a lot of interest in the Glance community and a lot of new and interesting updates:

  • Added .Options.JSON to the custom API widget which takes any nested option value and turns it into a JSON string v0.8.3
  • [Custom API] Synchronous API calls and options property v0.8.0

Hello everyone !

I've kinda hit a roadblock here, and I'm interested whether someone has done something similar, or has an alternative to what I'm trying to achieve.

Some background

Right now I'm playing around with ntfy and it works great. I even hooked up an automated backup script on my server with stdout/stderr output:

(Please, no bash-shaming ! :P)

#!/bin/bash

# $COMMAND holds the actual backup command; capture its output so the
# notification can include it (the original version never wrote these files)
$COMMAND > stdout.txt 2> stderr.txt

if [ $? -eq 0 ]; then
        echo "Success"
        issue=$(<stdout.txt)
        curl -H "Title: Hello world!" -H "Priority: urgent" -d "$issue" https://mydomain/glancy
else
        echo "Failure"
        issue=$(<stderr.txt)
        curl -H "Title: Hello world!" -H "Priority: urgent" -d "$issue" https://mydomain/glancy
fi

This works great, and I receive my notification on every device subscribed to the topic.

What I'm trying to achieve?

Send the ntfy notification to a visual dashboard like Glance. If there's no native way to achieve this, self-host a simple JSON API that gets populated by my server's script response?

What's the issue ?

After skimming all the GitHub repos, there's no mention in any self-hosted dashboard of integrating ntfy as a notification hook. I find it kinda strange, because ntfy is just simple HTTP PUT or POST requests, so it should be rather easy, no?

And after searching the web for a whole day, there weren't any good results or resources. So I came to the conclusion that it isn't that easy and probably needs a bit more of something I'm probably bad at (coding?).

In the Glance documentation there's a configuration to hook up a custom API, and it looks rather simple; however, I've now hit a roadblock I'm not able to solve... I have no idea where or how to spin up a self-hosted, dynamic JSON API that communicates with my server and updates/populates that JSON file... Here's an example of what I mean:

Json api: https://api.laut.fm/station/psytrancelicious/last_songs

Custom Glance API template:

- type: custom-api
  title: Random Fact
  cache: 6h
  url: https://api.laut.fm/station/psytrancelicious/last_songs
  template: |
    <p class="size-h4 color-paragraph">{{ .JSON.String "title" }}</p>

Questions

  1. Is there any native way to hook ntfy notifications into a dashboard instance (Glance, Homer, Dashy)?

  2. If not, is it possible to self-host a JSON API that gets populated by my script's response? A pointer in the right direction would be very nice, preferably a Docker solution !

  3. Or another way to have a visual dashboard (not the native ntfy dashboard) and visualize all my script-response notifications in one place ?


Thanks in advance for all your responses :) and sorry for my bad wording; web-development terminology is not really my cup of tea !

 

cross-posted from: https://lemmy.ml/post/28250905

cross-posted from: https://lemmy.ml/post/28250870

Hello everyone !

I'm cross-posting this in 3 communities because I think I'll get better answers in each respective one (hardware, coding, electronics).

As the title says, I want to learn to build, from the ground up, those cheap solar LED/optic-fiber lights; here are some images to show what I mean:

They come in bundles, but after a while they just die with no way to repair them, which kinda sucks, and because they're cheap my mum keeps buying them... So I would like to build ones I'm able to repair and customize :). However, I have absolutely NO idea where to begin or what exactly I'm searching for... I'm lacking the skills and knowledge on all 3 fronts !

  • What hardware am I looking for ?
  • What kind of electronics ?
  • What programming language to glue everything together?
  • .... ?

I'm not afraid to get my hands dirty and learn how to micro-solder, learn some coding skills to get everything neatly glued together software-wise, and learn the necessary hardware and other important stuff to achieve this goal ! I'm looking for any good, reliable advice to get me started !

One thing though: if I have to learn some hardware/low-level coding skills, I would prefer a language that will be useful for other stuff in the long run.

Thank you in advance, and I'm already sorry if I'm very slow to respond; I'm not a native speaker, and the flood of information I will probably get will surpass my ability to respond to everyone right away.

Also, other directions are welcome, like:

  • how to repair the old ones? Do I need to flash their proprietary software/hardware?

Thank you !

 


0
submitted 2 years ago* (last edited 2 years ago) by N0x0n@lemmy.ml to c/linux@lemmy.ml
 

TIL something new... My hate for macOS took over common logic. A 2.8 GB, 3-second file transfer over USB was too beautiful to be true. After some further investigation and hints from @JonnyRobbie@lemmy.world and @nanook@friendica.eskimo.com, I learned that Linux writes to cache before writing to the device. To see what's happening in the background: sync & watch -n 1 grep -e Dirty: /proc/meminfo.

Still, the transfer speed on Linux was slightly faster than on macOS. My rant was unjustified; it was just my fault for being clueless about some of the more advanced Linux stuff. But I learned something new today, so this post was actually helpful !
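For anyone who wants to see it for themselves, here's a non-interactive version of the same idea (Linux-only, since it reads /proc):

```shell
# "Dirty" is data sitting in the page cache that hasn't reached the device yet.
# Right after a big copy this can be hundreds of MB.
grep -e 'Dirty:' /proc/meminfo

# Force the kernel to flush all pending writes out to the device
sync

grep -e 'Dirty:' /proc/meminfo   # should now be much smaller
```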

However, I still hate macOS and will probably give Asahi Remix a try.

Thanks to everyone !


Hey guys ! I'm getting tired/bored of macOS' shenanigans... Yesterday was the last straw that made me think about trying an alternative.

While trying to copy a 2.8 GB file to a USB-C stick, it took like 8 minutes? Okay, that's "good" enough if you only do it from time to time... but 25 files takes literally 1h30min... Are we in 2001?

I mean, the exact same 2.8 GB file, with the exact same USB-C stick, took FU***** 3 seconds on Linux !!

Ohh, and don't think I didn't try to "fix" the issue; after a long search on the web I came across a lot of people having similar issues that haven't been fixed for 2 major updates now? With total radio silence from the shiny poisonous Apple...

Among other things I tried:

  • Disable Spotlight indexing sudo mdutil -a -i off
  • Reformat the USB stick from Mac
  • All available filesystem FAT32, exFAT...(yes even MacOS native APFS)
  • Another USB stick
  • ....

Enough is enough. I was willing to learn their way of thinking for my personal experience, and I somehow always found a way to reproduce what I learned on Linux on the Mac. But now that there's an alternative OS, I think I'm ready to come back home.

So, has anyone here already given Asahi Remix a try? If so, what was your experience with it?

I read their FAQ and most of their documentation, and it seems good enough for a daily driver (except for some quirks here and there), but I wanted to hear from people who have already made the jump about how it felt.


PS: I got that Mac for my birthday from a family member with good intentions; it wasn't a personal choice. While I'm more than happy and thankful for the gift, I totally hate it more and more... especially because MOST of my self-hosted services, applications and scripts are open source.

 

Hi everyone !

Intro

It's been a long ride since I started my first Docker container 3 years ago. I learned a lot, from building my own custom image with a Dockerfile, to loading my own configuration files into the container, to getting along with docker-compose, Traefik and YAML syntax... and and and !

However, while tinkering with Vaultwarden's config and switching to PostgreSQL, there's something that's really bugging me...

Questions


  • How do you/devs choose which database to use for your/their application? Are there any specific things to take into account before choosing one over another?

  • Does consistency in database containers make sense? I mean, changing all my containers to ONLY Postgres (or MariaDB, whatever)?

  • Does it make sense to update the database image regularly? Or is the application bound to a specific version and will break after any update?

  • Can I switch from one to another even if you/the devs chose to use e.g. MariaDB? Or is it baked/hardcoded into the application image, so switching to another database requires extra programming skills?

Maybe not directly related to databases but that one is also bugging me for some time now:

  • What's Redis' role in all of this? I can't for the life of me understand what it does and how it sits between the application and the database. I know it's supposed to give faster access to resources, but if I remember correctly, while playing around with Nextcloud, the Redis container logs were dead silent; it seemed very "useless" or inactive from my perspective. I'm always wondering: "Hmm, Redis... what are you doing here?"

Thanks :)

0
submitted 2 years ago* (last edited 2 years ago) by N0x0n@lemmy.ml to c/privacy@lemmy.ml
 

After the discussion in the following post, I dug a bit deeper down the rabbit hole.

While I mostly relied on Exodus to see if an app has trackers in it... I was baffled to see all the sketchy requests it made while dumping the DNS requests with PCAPdroid...

Over 200 shady requests in a few seconds after login... here's a preview:

While I don't use AdGuard VPN, I have AdGuard Home as my DNS server in my homelab... I think it's time to switch to Pi-hole !

Edit: VPN pcapdroid
