iltg

joined 2 years ago
[–] iltg@sh.itjust.works 0 points 4 weeks ago (1 children)

for me it adds nothing (like most userdb fields, since i don't use them) but equally doesn't remove or compromise anything. userdb is optional

i'm absolutely not acting like it's being added for no reason, did you read my reply? it's being added (and i just wrote it) to maliciously comply with upcoming CA laws. you instead just acted like an optional field is the same as MS no-offline setup. "Windows would implement it in an identical way". do you even use linux?

you claim there's plenty of evidence and this is not a slippery slope because the goal is deanonymization. that's not how you prove you're not committing a logical fallacy. "legalize gay marriage and they'll marry dogs", "oh i have plenty of evidence queer folks are against the nuclear family". the second statement is true (per this queer folk) but it doesn't make the first any less of a slippery slope.

Meta pushes for age verification? i believe that, not contested. systemd will violate privacy? this is the slippery slope. i know meta wants privacy violated. you're claiming that having an optional field is a dead giveaway systemd wants to let meta do this.

how? wouldn't systemd rely on meta services, or third party stuff like persona, to id you if they really wanted to make sure who you are? i see no api calls, i see no system lockdown when not complying, i see no data being sent away.

i see an optional field that nothing uses, that prevents nothing, that is strictly on your device.

you say it's "just" compliance, but how does it verify? if this is compliance with age verification, it sure lacks a lot of verification and seems to just be age. thus why this is malicious compliance: the bare minimum to be lawful and not compromise user privacy. seems desirable to me

[–] iltg@sh.itjust.works -1 points 4 weeks ago (5 children)

not who you replied to, but it makes linux systems maliciously compliant so that you can still use them (say, in schools) without having your privacy violated.

your slippery slope argument could apply to any field of userdb: real name will require an id, location will require geolocation!

slippery slope is a logical fallacy, complain when systemd requires an id, not when it does the bare privacy-respecting minimum to comply with a silly law

[–] iltg@sh.itjust.works 1 points 1 month ago* (last edited 1 month ago)

TLDR: an e2ee channel means "everything passing over this channel is super secure and private, but it needs some keys for this to work". e2ee means something: you can stop worrying about most delivery and protection issues, but you need to care about the keys. if you don't, you are probably ruining the security of the e2ee channel


end-to-end encryption solves one issue: transport over untrusted middleware. it doesn't mean much by itself, but it's being flung around a lot because, without proper understanding, it sounds secure and private.

it's like shipping you something valuable in a super strong, impenetrable safe. e2ee is the safe: it solves "how can i send you something confidential when i don't trust those who deliver it", and it's a great way to do it. but what do i do with the key?

but solving one problem creates a new one: what to do with the key? this is usually addressed by combining other technologies, such as asymmetric encryption (e.g. RSA), which gives you keys that can be shared publicly without compromising anything. so i send you an impenetrable code-protected safe with an encrypted code attached, and only your privkey can decrypt the code, since i used your pubkey!

(note: RSA is used for small data since encryption/decryption is cpu intensive. usually what happens is that you share an AES key encrypted with RSA, and the payload is encrypted using that AES key. AES is symmetric: one key encrypts and decrypts, but AES keys are small. another piece of technology attached to make this system work!)
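that hybrid scheme can be sketched with toy numbers — a minimal python sketch using the classic textbook RSA primes (61 and 53) and a sha256 XOR keystream standing in for AES. illustration only, never use anything like this for real:

```python
import hashlib

# toy textbook RSA keypair (real keys are 2048+ bits with padding; demo only)
n, e, d = 3233, 17, 2753  # n = 61 * 53

def rsa_encrypt(m: int) -> int:
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:
    return pow(c, d, n)

def stream_cipher(key: int, data: bytes) -> bytes:
    # toy symmetric cipher standing in for AES: XOR with a sha256-derived keystream
    keystream = hashlib.sha256(str(key).encode()).digest()
    return bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(data))

# sender: encrypt the payload with a small symmetric key, wrap the key with RSA
session_key = 42
ciphertext = stream_cipher(session_key, b"meet me at noon")
wrapped_key = rsa_encrypt(session_key)

# receiver: unwrap the session key with the private exponent, decrypt the payload
recovered_key = rsa_decrypt(wrapped_key)
plaintext = stream_cipher(recovered_key, ciphertext)
```

the expensive asymmetric operation only ever touches the small session key; the bulk payload goes through the fast symmetric cipher.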

but now comes the user-friendliness issue: very few people are big enough nerds to handle their keys. hell, most folks don't even want to handle their passwords! so services like matrix offer to hold your keys on the server, encrypted with another passphrase, so that you don't need to bother: just remember 2 passwords or do the emoji compare stuff. it's meh: compromising the server could allow getting your keys, which kinda spoils e2ee, and once i've stolen one of your passwords i can probably steal both. but it's convenient and reasonably secure for most. you can absolutely opt out, but then every time you log in from a new device you can't read anything sent before, unless you export and import your keys manually.
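the "keys on the server, encrypted with another passphrase" idea can be sketched like this (names and parameters are illustrative, not matrix's actual scheme, which is more involved):

```python
import hashlib, os

# sketch of passphrase-protected key backup: the server stores your keys
# encrypted under a key derived from a passphrase only you know
salt = os.urandom(16)  # stored next to the backup; not secret

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # pbkdf2 with many iterations makes brute-forcing the passphrase expensive
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

backup_key = derive_key("correct horse battery staple", salt)

# a new device with the same passphrase re-derives the same key;
# the server only ever sees the salt and the encrypted key blob
same_key_on_new_device = derive_key("correct horse battery staple", salt)
```

the tradeoff from the comment above is visible here: anyone who learns the passphrase (or brute-forces a weak one) gets the backup key too.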

what does whatsapp do? i don't know! but it kind of magically works. if they do e2ee, where are the keys???? how does meta handle reports if messages are e2ee???????

i'm not sure about signal but everyone praises it so i guess it's good? also it seems you can't restore messages via the network; you need an export from a previous install, so it seems your keys live inside your app data, which is good and safe i guess.

also, e2ee works if you can trust the key you're sending to! as mentioned in the 'activitypub keys' section before, if you ask a middleman the key for your recipient, can you trust that's the real key? e2ee doesn't cover that, it's not in its scope

so what does e2ee mean? it means: super strong channel, ASSUMING keys are safe and trusted. e2ee as a technology doesn't solve "all privacy" or guarantee that nobody snoops per se. it offers a super safe channel protected by keys, and lets you handle those keys however you see fit. which means deciding who you trust to send to, how you let others know how to encrypt for you (aka share your pubkey), and how you will keep your privkey safe.


thanks for coming to my TED talk btw

[–] iltg@sh.itjust.works 1 points 1 month ago* (last edited 1 month ago)

nothing per se, depends on implementation, see my other reply for some insights on key management

[–] iltg@sh.itjust.works 8 points 1 month ago* (last edited 1 month ago) (4 children)

hi! sorry for throwing this here without explaining much, explaining a bit definitely seems due diligence!

so, i need to make some things clear, skip if you know these already:

fediverse

the fediverse is not a single software, rather a collection of software projects speaking a common language (sharing a protocol: activitypub). the classic example is email: from gmail you can email folks on outlook. they just know how to send messages to other instances/servers/deployments, and how to receive them. for example, email (SMTP) expects data formatted in a certain manner (lots of headers and a body, kinda) on port 25. activitypub expects activities (json-ld documents) coming over inboxes (POST to http endpoints).
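to make "activities over inboxes" concrete, here's a rough python sketch of the kind of document that gets POSTed. the values are made up, and real activities carry more fields (id, published, etc.):

```python
import json

# rough shape of an activitypub activity: a Create wrapping a Note,
# serialized as json-ld and POSTed to the recipient's inbox
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://example.social/users/bob"],
    "object": {
        "type": "Note",
        "content": "hello from another server!",
    },
}

# this is roughly what would travel in the body of a POST to bob's inbox
payload = json.dumps(activity)
```

the receiving server parses this json and knows what to do because the vocabulary (Create, Note, actor, object) is defined in the spec — which is exactly what's missing for encrypted blobs, as below.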

compatibility

now, say emissary sends an encrypted message to a mastodon user. mastodon doesn't know what to do with that document! it's a garbled mess of encrypted data; what is mastodon supposed to do with it? there are no rules for this in the spec! the post claims "federated" (aka across multiple servers) e2ee messaging, and that already exists with multiple solutions. what they mean, to me, is either

  • they are making a new e2ee chat: great! emissary users will get a way to message other emissary users. but that's it: you need to be on emissary, like with matrix you need to be on matrix
  • they are making a fediverse e2ee chat: this isn't easy! you can't just make it for yourself, you need to clearly define how it works, and everyone must implement it too. otherwise mastodon or lemmy won't know what to do with the message you sent

spec

they link two specs: MLS (an IETF spec defining scalable e2ee messaging), and activitypub-e2ee. the first one is great: i think matrix wants to move their encryption to that? it's good, but it's a spec: you need to adapt it to your use. the second one is how MLS can be applied to activitypub communication: the thing we care about! unfortunately the latter spec is just a draft, so it needs more work and it's unlikely to see adoption in this state.

asymmetric encryption

so now i need to go a bit into asymmetric encryption, in this case RSA. there are lots of great examples if you put "asymmetric encryption" or "rsa" into google, but i'll try my best here. imagine 2 folks trying to communicate, Alice and Bob, but they need a postperson to deliver their messages, and they don't want that postperson reading them! how? A and B each get two "keys": one private and one public. these keys are related to each other: a pubkey "has" a privkey, and vice versa. these keys are also "magic" (math, good luck if u wanna dig in here; if you're not into math just trust me, the keys are magic). using a public key, you can encrypt a message so that only the related private key can decrypt it. and using a private key you can encrypt a message so that only its public key can decrypt it. the second case is for identity proofing; we care about the first one: if A and B make their public keys public (heh), they can both use those keys to create messages meant only for A or B, and nobody else, assuming they still hold their private keys. because ~~math~~ magic
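the "magic" can be made concrete with textbook RSA and the classic tiny primes 61 and 53. purely illustrative: real RSA uses huge primes and padding, never roll your own:

```python
# textbook RSA with tiny primes, purely to illustrate the key relationship
p, q = 61, 53
n = p * q                   # 3233, part of both keys
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent: (n, e) is the public key
d = pow(e, -1, phi)         # 2753, private exponent: (n, d) is the private key

def encrypt(m: int) -> int:
    # anyone can do this, only the public key is needed
    return pow(m, e, n)

def decrypt(c: int) -> int:
    # only the private key holder can undo it
    return pow(c, d, n)
```

the "magic" is that computing `d` from `(n, e)` alone requires factoring `n`, which is easy for 3233 but infeasible for real 2048-bit moduli.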

activitypub keys

in activitypub every actor holds a private and public key. this is how the protocol does "authorized fetch", i.e. making sure an activity truly comes from the actor claiming to send it. so we can use these keys for doing e2ee!

Alice <---> A's server <---> B's server <---> Bob

Alice can ask her server to get Bob's public key from Bob's server, and then encrypt a message for Bob and send it via the servers without anyone snooping in. Great?

NO!

A's server can lie about bob's key: give a random key, decrypt the message, then encrypt it with bob's real pubkey and send it on. this way bob notices nothing and A's server can read the message. the same way, A's server can give Bob's server a fake pubkey for alice, so it can read incoming messages and then re-encrypt and send them to alice with her real key. so trust is broken!
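a toy sketch of that key-swap attack, reusing textbook RSA with tiny made-up primes (illustration only; the point is the trust gap, not the crypto):

```python
# textbook RSA keygen with tiny primes, just to demo the key swap
def keygen(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))  # (public key, private key)

def enc(m, pub):
    n, e = pub
    return pow(m, e, n)

def dec(c, priv):
    n, d = priv
    return pow(c, d, n)

bob_pub, bob_priv = keygen(61, 53)    # bob's real keypair
evil_pub, evil_priv = keygen(67, 71)  # the server's own keypair

msg = 42
# alice asked her server for bob's key, but the server handed over its own
c = enc(msg, evil_pub)
# the server decrypts and reads the message...
leaked = dec(c, evil_priv)
# ...then re-encrypts with bob's real pubkey and forwards it
forwarded = enc(leaked, bob_pub)
# bob decrypts fine and notices nothing, yet the server read everything
received = dec(forwarded, bob_priv)
```

nothing in the math failed here: every encryption was "correct". the break is entirely in where alice got the key from.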

the spec offers 3 solutions to this:

  • trusting your server, which is kind of the starting point and we don't want that
  • having a third party validate keys (either a centralized solution which Alice and Bob ask, or some yet-to-be-invented federated way to handle keys. we're kinda back at point one)
  • having alice and bob exchange keys themselves (maybe send them on matrix or signal, delegating the "identify and trust" issue to those services)
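the third option usually boils down to comparing short fingerprints of the keys over a channel you already trust. a minimal sketch (the digest format is made up, not something any spec mandates):

```python
import hashlib

# minimal key fingerprint: hash the public key, compare a short prefix
# out of band (in person, over signal, whatever channel you already trust)
def fingerprint(pub_key: tuple) -> str:
    n, e = pub_key
    return hashlib.sha256(f"{n}:{e}".encode()).hexdigest()[:16]

alice_sees = fingerprint((3233, 17))  # the key alice's server handed her
bob_reads = fingerprint((3233, 17))   # bob reads his real key's print aloud
swapped = fingerprint((4757, 17))     # what alice would see if the server lied
```

signal's safety numbers and matrix's emoji comparison are essentially this idea with friendlier encodings.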

"knowing irl"

some users compared the issue with "knowing each other irl" but it's not the same. on signal, i trust you to be you, and our conversation to be private. if i search your username, i can just message you. whether your username "really" is you is meaningless here: you are your username. i'm writing this to "Abundance114"; i don't care who you are, i just want this to reach "Abundance114". so on signal i plug in your user and our keys automagically reach each other safely. this spec doesn't explain how that happens: i would need to first identify and trust you, Abundance114, and then find a way to safely communicate with you so we can exchange keys.


i hope this was in-depth enough! i'm not an encryption expert, if any are here i'm open to critique, but this seems reasonable to me with my protocol and encryption experience. basically i believe this post is hype bait: whatsapp is e2ee, but who has the keys? do you trust meta? sure, the message travels encrypted, but who can read it? only you? an e2ee system is not just its encryption tech, but also the way keys are securely shared

[–] iltg@sh.itjust.works 57 points 1 month ago* (last edited 1 month ago) (11 children)

this is misleading and sensationalistic. if emissary implements e2ee, it's not "e2ee for the fediverse", it's "e2ee for emissary users". did mastodon talk about e2ee? did lemmy?

also the MLS-in-activitypub draft proposes, for trusted key exchange, either "trust the server" (lmao), use a centralized key authority (wow), or have users manually verify their keys out of band (so basically use matrix to assure your chat is encrypted). source: https://swicg.github.io/activitypub-e2ee/architectural-variations.html#validating-end-to-end-encryption

fedi devs need to stop clickbaiting, and fedi users should learn a bit more about their protocol to avoid getting misled this way

[–] iltg@sh.itjust.works 1 points 11 months ago (1 children)

what os are you going to use on your smartphone if you remove software from google and apple?

aosp, fdroid, no gservices

what VR headset

not into vr so can't say

what telecom

sadly, not a good one. i wish i had a choice, but this isn't software

are you only shopping in local food markets?

sort of? i get fresh stuff from actual markets when i can, and when i go for groceries i avoid ultra-processed stuff from big multinationals, making sure of the provenance and the maker of what i get. supermarkets also sell stuff from local producers

lemmy creators are bigots

eh, i'm still leeching off some other person hosting, i'm not going to host lemmy, and i'm slowly making my own thing

also can you provide examples? i've heard it multiple times, i'm not contesting it, i just kinda want to see for myself, like with vaxry, and not only trust second-hand accusations

i don't want to be a cop and background check

no, that's absolutely fine, i don't check all my software either, but when i hear a callout i don't hide behind the "separate the art from the artist" mentality: i move off the bigot's stuff

[–] iltg@sh.itjust.works 0 points 11 months ago (1 children)

preference is a weak motivation honestly. i prefer google maps, yet i still don't want google and make do with OSM

I'm simply interested in having control over my PC

but you don't: you still depend on vaxry. can you maintain, update, fix and recompile hyprland yourself? if so, fork it and start boycotting vaxry. if not, what control are you talking about? it's just preference

this whole argument to me sounds like "i prefer a WM with smooth animations and an active discord, so i'm going to overlook the problematic maintainer i'm going to give clout to and start depending on"

[–] iltg@sh.itjust.works 1 points 11 months ago (3 children)

i'm not on wayland so i can't try any of these, but there are lists you can browse from (https://wiki.archlinux.org/title/Wayland#Compositors for example)

you are setting quite restrictive and arbitrary limits

well supported

what do you mean?

with smooth animations

what counts as "smooth animations"?

if your message boils down to "something which looks really good to me and that has a discord i can go into and ask for help", you may have set the requirements tight enough to only include hyprland, but in my opinion that's not a valid excuse to avoid boycotting problematic developers

[–] iltg@sh.itjust.works -1 points 11 months ago (5 children)

your argument is a bit extreme: it doesn't need to only be software from nice folks, it just needs to not be software made by not-nice folks

apart from sqlite, i think everything is replaceable with a bit of compromise

what things made by not nice folks are you locked into?

[–] iltg@sh.itjust.works -1 points 11 months ago (6 children)

lemmy is not a great comparison: it has like 3 alternatives, while hyprland has dozens, if not more.

i don't think software is just software, why would this tech be exempt? pilotless aircraft are just tech, just like software, but we do remember that drones bomb people. supporting problematic developers is not "as bad" as building killing machines, but it's the same principle: looking the other way when it's convenient. we should aim to ostracize and isolate problematic devs, and that starts with not using their software, because using it gives them clout and relevance

[–] iltg@sh.itjust.works -1 points 1 year ago (5 children)

your statement is so extreme it gets nonsensical too.

compilers will usually produce better-optimized asm than you'd write yourself, but there is usually room to improve. it's not impossible that the deepseek team obtained some performance gains hand-writing some hot sections directly in assembly. llvm must "play it safe" because it doesn't know your use case; you do, and can skip safety checks (stack canaries, overflow checks) or cleanups (e.g. using memory arenas rather than realloc). you can tell LLVM not to do those, but that applies to the whole binary, which may not be desirable

claiming c# gets faster than C because of jit is just ridiculous: you need to compile just in time! would the runtime cost of jitting plus the resulting code be faster than something plainly compiled? even if c# could obtain the same optimization levels (and it can't: oop and the .net runtime) you'd still pay the jit cost, which plainly compiled code doesn't pay. also what are you on about with PGO, as if this buzzword suddenly makes everything as fast as C?? the example they give is "devirtualization" of interfaces. it seems like C just doesn't have interfaces and can do direct calls anyway? how would optimizing up to C level make it faster than C?

you just come off as a bit entitled and captured by MS bullshit claims
