GeekyOnion

joined 2 years ago
[–] GeekyOnion@lemmy.world 8 points 2 weeks ago (2 children)

Can I post more than one image at a time? Share your wisdom!!! SHARRRREEEEE

 

Where can I find more than single-images, other than imgur?

[–] GeekyOnion@lemmy.world 2 points 2 weeks ago

Always welcome!

[–] GeekyOnion@lemmy.world 3 points 2 weeks ago

Happy to help! I've been through the process several times for myself and family members, and it's never been very clear or easy to navigate. For example, the initial issuance of a passport does start with an in-person visit, but you basically need a federal employee to say, "yes, this is the right packet of information," then it's shipped off to central processing. Subsequent issuance can be done without the in-person visit, but requires you to send in the old passport along with any supporting forms/documents.

[–] GeekyOnion@lemmy.world -3 points 2 weeks ago (1 children)

Exactly. "This sux," is not the same as "it's a TRAP," hence the accusation of trolling.

[–] GeekyOnion@lemmy.world 32 points 2 weeks ago (8 children)

It's a small step from "robotic guard dogs," to "armed robotic guard dogs."

https://fortune.com/2026/03/17/robot-dog-patrols-data-centers-ai-infrastructure-buildout/

[–] GeekyOnion@lemmy.world -5 points 2 weeks ago (3 children)

Saying "it feels like a trap" is what prompts the comments about trolling. "This is dumb," or "this is regressive and adds a tax on poor people who want to get a passport, requiring them to spend money to get copies," are legit complaints. Some kind of nefarious plot? Unlikely in the extreme.

[–] GeekyOnion@lemmy.world 10 points 2 weeks ago (2 children)

Yes. You need to send in a packet of information. The issuance of a passport isn't done locally.

[–] GeekyOnion@lemmy.world 3 points 2 weeks ago (12 children)

I mean, I like that you're attempting to troll, but it kind of falls apart with a simple internet search for "state certified birth certificate copy," if that's the "physical document to prove your citizenship" you're talking about.

[–] GeekyOnion@lemmy.world 8 points 2 weeks ago (1 children)

While I agree with the general sentiment that civil servants shouldn't be political, getting selected as the head of a department or the leader of an organization has significant political overtones. Mind you, my understanding of the role comes directly from the TV series "Yes, Minister," so there are likely to be a few gaps.

[–] GeekyOnion@lemmy.world 28 points 2 weeks ago (2 children)

I see headlines like this and think, “oh, haha. The Onion has done it again.” Then I read the actual source, and get sad.

[–] GeekyOnion@lemmy.world 1 points 2 weeks ago

I didn't think about the topic of storing them well. Like so many things, they just exist in the same space that I do (in the house), and that's probably better than a shed or the garage.

 

How are folks syncing local DNS records across multiple Piholes?
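One low-tech approach is to treat one Pihole as the source of truth for its local records file and push a merged copy to the others. A minimal merge helper, as a sketch only — the file names, hosts-style "IP hostname" format, and "primary wins on conflicts" policy are my assumptions, not anything built into Pihole:

```shell
# Merge two Pihole-style local DNS files (lines of "IP hostname").
# The first file (primary) wins when both define the same hostname.
merge_custom_lists() {
  # Print primary first; awk keeps only the first IP seen per hostname,
  # skipping blank lines and comments.
  cat "$1" "$2" | awk 'NF && $1 !~ /^#/ && !seen[$2]++ { print $1, $2 }'
}
```

After copying the merged file into place on each secondary (scp/rsync, or however you ship files around), something like `pihole restartdns reload` should pick it up; community tools such as Gravity Sync automate this same push-and-reload pattern.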

 

I've been rebuilding all my content hosted on a Synology NAS + Proxmox installed on a NUC, and moving it to a dedicated box with beefy/brutal stats. I was messing around with Proxmox and unprivileged LXC containers for a while, using a ZFS pool on the host and passing it through using mount points while mapping the users in the container to groups on the host. It was going pretty well, except I had what I thought was insanely odd and inconsistent behavior. In summary, in the same LXC, I could pass through two mount points with the same users and permissions (etc.), and one would show up mapped correctly while the other wouldn't.
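For anyone unfamiliar with the setup being described: an unprivileged LXC shifts container uids/gids up by 100000 on the host, so sharing files through a mount point usually means adding idmap entries to the container's config. A rough sketch of what that looks like — the vmid, dataset path, and uid/gid 1000 are placeholders, so check the ranges against your own /etc/subuid and /etc/subgid before copying anything:

```
# /etc/pve/lxc/101.conf  (vmid and paths are placeholders)
# bind-mount a host dataset into the container
mp0: /tank/media,mp=/mnt/media
# map container uid/gid 1000 straight through to host 1000;
# everything else keeps the default +100000 unprivileged offset
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

The host also needs matching `root:1000:1` entries in /etc/subuid and /etc/subgid for the passthrough mapping to be allowed.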

I gave up on that approach after a few unhelpful responses of "you're doing it wrong." That may well be the case, but I was more interested in why the behavior was inconsistent rather than failing outright.

I'm now running an Unraid VM with my HBA (and USB stick) passed through, lots of RAM, and an 8-pack of processors. I thought Unraid was pretty slick when I ran the trial a while ago, but was kind of unimpressed with its performance in this configuration. Once I had all the drives configured correctly (I made the mistake of mixing up "array" and "pool" after my initial foray into ZFS), weeded out three bad drives from ServerPartDeals, got a stable array, had all my LXC containers configured on Proxmox with NFS going over a dedicated local bridge (10.10 for the WIN!), and moved my data over from the old NAS, I was pretty happy.

During the whole process, I had been watching/monitoring lots of odd behavior on Proxmox, with Unraid, and with my data transfers. My Pihole instance was going crazy with load averages, even though it was reserved for the LXCs on the host rather than for the whole house, and the IO pressure stall was constantly over 90%. Given that several of the disks I'd ordered from the supplier were bad, I thought I was dealing with some crazy hardware issue. I was taking down the LXCs and VM one by one, trying to find where that stall pressure was coming from.

As I was troubleshooting, I wondered if it was maybe IO pressure on the host OS disks (NVMe drives directly attached to the motherboard, ZFS mirrored), and did a quick "zpool list." Hmm. That's funny. Why is my old, destroyed (or so I thought) pool still showing up??? When I first switched to Unraid, I exported my pool (doom-pool) and then imported it in Unraid after I passed through the HBA. After deciding that ZFS was nice but not necessary, I destroyed the pool in Unraid and reconfigured for a standard xfs array. It looked like the export, import, and destroy had somehow done something strange, and the drives were still showing up as online and in use on the host. When I tried to kill the pool again on the host, everything would just sit and spin.
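In case anyone hits the same thing: a pool destroyed elsewhere can linger in the host's cached pool list and keep getting picked up at boot. A hedged cleanup sketch — all of this needs root on the Proxmox host, the device path is a placeholder, and labelclear is destructive, so triple-check you have the right disk:

```shell
zpool export doom-pool         # detach the pool if the host thinks it's imported
zpool labelclear -f /dev/sdX   # DESTRUCTIVE: wipe the stale ZFS label from one disk
rm -f /etc/zfs/zpool.cache     # forget cached pools so nothing re-imports at boot
```

A reboot afterward (like the forced one described below) finishes the job if the zfs services are wedged.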

I ended up shutting down the host and needing to cut power (zfs services were hung for about 12 minutes before I decided it was ok), and when I rebooted, the old pool was gone from the host, and (holy moly) everything was working better. The IO pressure was gone. The CPU spikes and lags were gone. Pihole wasn't going nuts any more.

The one thing I haven't tried yet is some disk-to-disk copies on Unraid. That was another place where I saw aberrant behavior, with transfers limited to 120MB/s (I have 14TB 12Gb SAS drives in my array), but I don't have any heavy files I need to move. Right now I'm just happy that it wasn't more bogus hardware, or a problem with my HBA or motherboard or something. Anywho, just wanted to share.
