hamsda

joined 10 months ago
[–] hamsda@feddit.org 3 points 2 weeks ago

It depends on what it is. I do not have a singular documentation platform or wiki for those things; I'm more of a "keep the docs where the code is" guy. I also try to keep complexity to a minimum.

All my Linux server setups are done with Ansible. Ansible itself is pretty self-documenting, as you more or less declare the desired outcome in YAML form and Ansible does the rest. This way I do not need to remember it, and it's easier to understand when looking it up again.
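To illustrate the declarative YAML style (a generic sketch; the host group, package, and service names here are invented, not from my actual setup):

```yaml
# playbook.yml: declare the desired state; Ansible figures out the steps
- name: Ensure nginx is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Enable and start nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Reading this half a year later, the intent is still obvious, which is exactly the point.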

Most of my projects live in a Git repository, so most of what I need to know or do is documented

  • in a README.md
  • as pipeline instructions inside .gitlab-ci.yml

This way, I was able to reduce complexity and unify my homelab projects.

My current homelab-state is:

  • most projects are now docker-based
  • most projects have a GitLab CI for automated updating to newer versions
  • the CI itself is a project, and all my docker-based deploys use this unified pipeline project
  • most projects can be tested locally before rolling out new versions to my VMs
  • some projects have a production and a staging server to test
  • those which cannot be dockerized or turned into a CI are tools and don't need that (e.g. Ansible playbooks or the GitLab CI project itself)
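To sketch how the unified pipeline project ties the others together (the project path, ref, and file name below are hypothetical placeholders, not my actual setup), each docker-based project's .gitlab-ci.yml can simply include the shared definition:

```yaml
# .gitlab-ci.yml of an individual docker-based project
# (all names here are hypothetical placeholders)
include:
  - project: 'homelab/unified-pipeline'  # the shared CI project
    ref: main
    file: 'docker-deploy.yml'            # shared build/update/deploy jobs

variables:
  DEPLOY_TARGET: vm01.lan                # only per-project settings live here
```

Changing the pipeline once then updates the behavior of every project that includes it.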

On what to include, I always ask myself: will I still be able to understand this without documentation if I forget about the project for 6 months and then need to make a change? If you can't be sure, put it in writing.

If it's just a small thing concerning not the project, its functionality, or its setup itself, but rather something like "I had to use this strange code block here because of XXX", I'll just put a comment next to the code line or code block in question. These comments mostly also include a link to a bug report if I found one, so I can later check whether it's been fixed already.

[–] hamsda@feddit.org 2 points 3 weeks ago

Before I got more into selfhosting, I was running nothing but Syncthing on a Raspberry Pi.

The Pi was the "server" and all the other clients were connected only to the Pi (in Syncthing).

Worked flawlessly :)

[–] hamsda@feddit.org 2 points 1 month ago

Ha, I wish I could.

I'm not 100% satisfied, so I'm still searching for the "perfect distro for me", if it even exists.

I had been using Arch Linux on my personal PC and company laptop for 4 years, but I couldn't get some things to work: things that, after installing Fedora, worked out of the box.

My current setup is:

  • EndeavourOS (i.e. Arch Linux with a GUI installer) for my PC at home
  • Fedora Workstation 43 for my company laptop
  • Servers are all running Debian, I'll probably never change that
  • Hypervisor for VMs is Proxmox VE, which is Debian too
[–] hamsda@feddit.org 3 points 1 month ago

Currently using nginx-proxy-manager for exactly this purpose. Nice, easy-to-use UI, including automatic Let's Encrypt SSL certificates :)

[–] hamsda@feddit.org 5 points 2 months ago

I'm using CheckMK to monitor my hypervisor, physical hardware like disks and CPU, and SNMP-capable hardware like my pfSense firewall, via a CheckMK instance in docker. It runs either in docker or on a few different Linux-based OSes like Ubuntu and Debian (see the CheckMK download page).

There's a free and open source version (called the Raw Edition; see the GitHub link), which I am using. It comes with a lot of checks / plugins for monitoring stuff out of the box, and if there's something it doesn't ship, you can easily create your own check in whatever language your server can execute. Or you can look up whether there's a user-contributed plugin on the official CheckMK Exchange platform.
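As a sketch of what such a custom check can look like (the directory and the backup scenario below are invented for this example): CheckMK calls these "local checks", simple scripts on the monitored host whose output lines follow the pattern status, service name, perfdata, summary text.

```shell
#!/bin/sh
# Hypothetical CheckMK local check: script dropped into the agent's
# local/ directory on the monitored host. Output line format:
#   <status> <service_name> <perfdata or -> <summary text>
# with status 0 = OK, 1 = WARN, 2 = CRIT, 3 = UNKNOWN.

check_backups() {
    dir="$1"
    # count backup archives in the given directory
    count=$(find "$dir" -name '*.tar.gz' 2>/dev/null | wc -l | tr -d ' ')
    if [ "$count" -gt 0 ]; then
        echo "0 Backup_Files count=$count Found $count backup archive(s)"
    else
        echo "2 Backup_Files count=0 No backup archives found in $dir"
    fi
}

# Usage on a real host would be something like: check_backups /var/backups
```

CheckMK then picks the service up automatically on the next discovery, no extra configuration needed.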

The whole configuration is rule-based, with a lot of predefined rules and sane defaults already set.

To give an example for your use case: you can monitor docker logfiles and let CheckMK warn you if specific keywords are (or are not) in a logfile. You will then be able to view the offending lines in the monitoring UI.

Why do I use this?

  • We use it at work
  • FOSS
  • docker makes updating this easy
  • can send mails, teams notifications, ...
  • very customizable and expandable

My docker compose file:

# docker-compose.yml

services:
  monitoring:
    image: checkmk/check-mk-raw:2.4.0-latest
    container_name: monitoring
    restart: unless-stopped
    environment:
      - CMK_PASSWORD=changeme
    ports:
      # WEB UI port
      - "5000:5000"
      # agent communication port
      - "8000:8000"
      # used for SNMP
      - "162:162/udp"
      - "514:514/tcp"
      - "514:514/udp"
    volumes:
      - "./monitoring:/omd/sites"
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
[–] hamsda@feddit.org 5 points 2 months ago

I was just visiting a friend of mine last Sunday with my Steam Deck and we played Unrailed! the whole day :D

The Steam Deck is THE perfect portable fun-games machine to just take with you. And every sale there are another few local-splitscreen multiplayer games discounted; my library is scared already.

If you did not live through the time of "going to your friends to play games", this is your ticket to a past you sadly never got to experience.

[–] hamsda@feddit.org 4 points 2 months ago

Haha, same here.

Though for me it wasn't about the Zerg, I just liked the logo. That's how I got to Arch, and that's (partly) why my servers run Debian :D

[–] hamsda@feddit.org 0 points 3 months ago

Oh, that looks really good. I guess I'll add it to my phone wallpaper collection, thank you very much :)

[–] hamsda@feddit.org 8 points 4 months ago

If you're selfhosting, the cloud is your someone else's computer ;)

[–] hamsda@feddit.org 1 points 4 months ago

I didn't mean to get caught up in exceptions or exaggerations. I'm no developer either, so I have zero background knowledge of game development or game engines.

Though as I work in IT (again, not as a developer) and my friend circle has zero IT knowledge, I tend to try to shine a little light on things that might seem simple from the outside but maybe aren't. I guess sometimes I err on the side of caution a little too much.

I definitely think there are a few of those one-line true/false settings that could just be toggled, especially for things handled by the engine rather than the game logic itself, though I cannot speak from experience here.

[–] hamsda@feddit.org 5 points 4 months ago (3 children)

"I'm talking about ones that are like one line of code being set to true instead of false etc"

I don't know how many settings matching the "true/false + 1 line of code" constraints even exist, if any.

If you can change a setting, even if it's a binary choice, someone had to think about, implement, and test everything pertaining to both choices.

Depending on what kind of mechanic we're talking about and how deeply integrated into the rest of the game this mechanic is, that could be a big task.

[–] hamsda@feddit.org 3 points 5 months ago* (last edited 5 months ago)

I have not run OPNsense, but I do have a direct comparison of pfSense as a VM on Proxmox VE vs. pfSense on a ~400€ official physical pfSense appliance.

I do not feel any internet- or LAN-speed difference between the two setups, though I did not measure it. The change from VM to physical appliance was not planned.

Running a VM firewall just got tiring fast, as I realized that Proxmox VE needs reboot-updates a lot more often than pfSense does. And every time you reboot the hypervisor hosting your pfSense VM, your internet is gone for a short time. Yes, you're not forced to reboot, but I like to do it anyway if the people creating the software advise it.

Though I gotta say, the pfSense webinterface is actually really snappy and fast when running on an x86 VM. Now that I have a Netgate 2100 physical pfSense appliance, the webinterface takes a looooong time to respond in comparison.

I guess the most important thing is to test it for yourself and to always keep an easy migration path open, like exporting your firewall settings to a file so you can migrate easily if the need arises.

[EDIT] - Like others, I would also advise heavily against using the same hypervisor for your firewall and other VMs. Bare metal is the most "uncomplicated" in terms of extra workload just to have your firewall up and running, but if you want to virtualize your firewall, put that VM on its own hypervisor.

 

Hello fellow Proxmox enjoyers!

I have questions regarding the ZFS disk IO stats and hope you all may be able to help me understand.

Setup (hardware, software)

I have Proxmox VE installed on a ZFS mirror (2x 500 GB M.2 PCIe SSD) called rpool. The data (VMs, disks) resides on a separate ZFS RAID-Z1 (3x 4 TB SATA SSD) called data_raid.

I use ~2 TB of all that, 1.6 TB being data (movies, videos, music, old data + game setup files, ...).

I have 6 VMs, all for my use alone, so there's not much going on there.

Question 1 - constant disk writes going on?

I have a monitoring setup (CheckMK) to monitor my server and VMs. This monitoring reports a constant write IO operation for the disks, ongoing, without any interruption, of 20+ MB/s.

I think the monitoring gets its data from zpool iostat, so I watched it with watch -n 1 'sudo zpool iostat', but the numbers didn't seem to change.

It has shown the exact same read/write operations and bandwidth for the last minute or so (after taking a while writing this, it now lists 543 read ops instead of 545).

Every 1.0s: sudo zpool iostat

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data_raid   2.29T  8.61T    545    350  17.2M  21.5M
rpool       4.16G   456G      0     54  8.69K  2.21M
----------  -----  -----  -----  -----  -----  -----

The same happens if I use -lv or -w flags for zpool iostat.

So, are there really 350 write operations constantly going on? Or does it just not update the IO stats very often?
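(Side note while writing this: if I read the zpool iostat docs correctly, without an interval argument it prints statistics averaged since the pool was imported, which would explain why the numbers barely move under watch. Passing an interval should print actual per-interval activity instead, e.g.:)

```shell
# statistics averaged since pool import (what watch keeps re-reading)
sudo zpool iostat data_raid

# live statistics: a new line every 5 seconds with that window's activity
sudo zpool iostat data_raid 5
```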

Question 2 - what about disk longevity?

This isn't my first homelab setup, but it is my first own ZFS and RAID setup. If somebody has any SSD-RAID or SSD-ZFS experiences to share, I'd like to hear them.

The disks I'm using are:

Best regards from a fellow rabbit-hole-enjoyer.

 

Dear GrapheneOS community,

I recently switched to GrapheneOS with my new Pixel 9a. All in all it works well, but there are still one or two things I just cannot get to work.

Whenever I start GPS navigation, I hear a voice say a single sentence and then just stop; silence for the rest of the drive.

I tried the following apps:

  • Osmand
  • Organic Maps
  • CoMaps

I have installed RHVoice as TTS software.

When starting navigation, Osmand tells me how long my journey will take and how much distance I have to drive, and that's the last thing I ever hear from Osmand voice navigation.

Organic Maps navigation tells me the first thing I need to do on the drive (e.g. "turn right in 400m") and then not a single word for the rest of the drive.

CoMaps seems to be the same.

If I enable the Osmand Development Plugin in Osmand, I can test voice output, which works perfectly. It just does not work when I need it, and I have no idea why.

Does anyone know what I'm doing wrong?
