buedi

joined 2 years ago
[–] buedi@feddit.org 1 points 3 weeks ago

The .10 or .20 suffix just tells Docker to create that specific sub-interface automatically. In my example, ip link will show new interfaces called br0.10 and br0.20 after the macvlan networks for VLAN IDs 10 and 20 have been created. You do not need to adjust your Netplan config when doing it this way. I would even assume that you are not allowed to also define VLAN IDs 10 and 20 in Netplan in that case; I would expect that to cause issues. Also see https://docs.docker.com/engine/network/drivers/macvlan/ in the 802.1Q trunk bridge mode section.
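As a sketch (assuming a bridge named br0 as the parent, as in my example, and with the redacted 192.x addresses kept as placeholders), the sequence would be:

```shell
# Docker creates br0.10 as a tagged VLAN sub-interface of br0 automatically
docker network create -d macvlan \
  --subnet=192.x.10.0/24 --gateway=192.x.10.1 \
  -o parent=br0.10 vlan10

# The new sub-interface should now show up on the host,
# without any corresponding entry in the Netplan config
ip link show br0.10
```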

There are probably multiple ways to do all of this, but this is how I did it, and it has worked for me for a few years without touching it again. All VLANs are separated from each other, and no VLAN has access to the LAN side. Everything is forced through tagged VLANs via the switch to the firewall, where I then create rules to allow or deny traffic to and from all my networks and the Internet.

For me, this setup is very simple to re-implement should my host go down: no special Netplan configuration is needed, I only have to recreate the Docker networks and start my stacks again.

[–] buedi@feddit.org 4 points 3 weeks ago (2 children)

I can't see your full setup / config from here, but a) you are not overengineering this. Using VLANs to segment networks is very good practice. And although neither Docker nor Podman allows macvlan when running rootless, my gut feeling tells me that segmenting my network takes priority over running rootless, because I think attacks that traverse networks are much more common than breaking out of a container into the host. But this is just my gut feeling. b) I think I run what you want to achieve here, so I will try to explain what I did.

My setup is similar to yours: OPNsense (OpenWRT before that), a VLAN-capable switch, and an Ubuntu server with a single NIC that hosts all the Compose stacks.

  1. You already configured your VLANs in OPNsense, so I will just mention that I created mine via Interfaces -> Devices -> VLAN on the LAN interface of my OPNsense and then used Assignments to finally make them available. On the OPNsense, each one gets a static IP from the respective network I defined for the VLAN.
  2. On the Docker host, I configured the single NIC I have as a bridge in Netplan. I cannot remember if that was necessary or if I was just planning ahead for a possible 2nd NIC later, so I would not have to reconfigure the whole networking again. Of course that bridge sits in my LAN, and the Netplan config looks like this:
network:
  ethernets:
    eno1:
      dhcp4: no
  version: 2
  bridges:
    br0:
      addresses:
      - 192.x.x.3/24
      nameservers:
        addresses:
        - 192.x.x.x
        search:
        - my.lan
        - local
      routes:
      - to: default
        via: 192.x.x.1
      interfaces:
        - eno1
  3. So that Docker containers can use the VLANs, I had to create Docker networks of type macvlan like this:
docker network create -d macvlan --subnet=192.x.10.0/24 --gateway=192.x.10.1 -o parent=br0.10 vlan10
docker network create -d macvlan --subnet=192.x.20.0/24 --gateway=192.x.20.1 -o parent=br0.20 vlan20
  4. For a container to make use of those networks, you have to define them as external in the Compose stack, like this:
services:
  my-service:
    image: blah
    ...
    networks:
      vlan10:

networks:
  vlan10:
    name: vlan10
    external: true

In step 4 you have the option to not define an ipv4_address in the networks section; Docker will then pick addresses itself when the containers start. Letting OPNsense assign IP addresses dynamically in such a VLAN is something that did not work for me. So either you let Docker pick the IPs when starting a stack, or you define the IP addresses in the stack. If you do the latter, you have to do it for every stack that ever joins that VLAN; otherwise Docker might pick an IP that you already assigned manually, and that stack will not start.
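A static address in step 4 could look like this (a sketch; the service name and the .50 address are made up, and the 192.x placeholder follows my example above):

```yaml
services:
  my-service:
    image: blah
    networks:
      vlan10:
        ipv4_address: 192.x.10.50   # must stay unique across every stack joining vlan10

networks:
  vlan10:
    name: vlan10
    external: true
```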

I also wanted to have some services running directly in the LAN via Docker. This setup is a bit more involved and requires you to create a shim network, because otherwise the Docker host itself cannot reach containers running in the LAN network. This was the case for my Pi-hole, for example, which I wanted to have an IP in my LAN network and which had to be reachable by the Docker host itself, too. There is a very good post about macvlan and shim networks on this blog: https://blog.oddbit.com/post/2018-03-12-using-docker-macvlan-networks/
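Roughly, the shim from that blog post boils down to something like this (a sketch with hypothetical addresses and names; the idea is that the host gets its own macvlan interface on the same parent, plus a route covering the container IPs):

```shell
# Create a macvlan interface for the host itself, on the same parent as Docker's network
ip link add lan-shim link br0 type macvlan mode bridge

# Give it an otherwise unused address from the LAN subnet and bring it up
ip addr add 192.168.1.250/32 dev lan-shim
ip link set lan-shim up

# Route the address range used by the LAN containers through the shim,
# so host <-> container traffic no longer goes via the parent interface
ip route add 192.168.1.224/27 dev lan-shim
```

These commands need root and do not survive a reboot on their own; the blog post describes how to make them persistent.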

I hope this helps. Do not give up. Segmenting your Networks is important, especially if you plan to publish some services over the Internet.

[–] buedi@feddit.org 0 points 1 month ago

Just a thought about the Switch: maybe you could plug it into a power socket that is monitored in HA, like a NOUS A1-T? If the power draw goes up, the Switch is on. This could easily be "hacked" by the kids by plugging the Switch into another power socket. But if the Switch has a reliable standby power draw, that could be monitored, too: if it drops to zero, then someone is cheating :-)

Monitoring the Switch when it is not docked could maybe be done via WiFi? Check whether its MAC address (or its IP, if fixed via DHCP) is online. Of course this only helps if the Switch is always online when in use. I do not have one, so I do not know.

[–] buedi@feddit.org 2 points 2 months ago (1 children)

Since you are German, don't forget the Impressum. I think it is mandatory, and some people are dicks.

[–] buedi@feddit.org 3 points 7 months ago

I have been back playing for a few weeks after being absent for 5 years. I finally got myself Odyssey, I am doing exobiology, and you can build your own colonies now!

[–] buedi@feddit.org 2 points 8 months ago

Thanks for pointing out SimpleX Chat, I did not know it existed. It looks very interesting, but reading more about it, they will have to implement some kind of business model in the future. My fear is that, even when self-hosting, some features will end up behind a paywall, so it is not a solution I would switch to. Switching to a new messenger is a long-term endeavour: it is hard to convince friends to move over, too, let alone switch again every few years; that is near impossible. But the technology behind SimpleX looks really interesting, and reading through the docs, it gives the impression of being very polished.

[–] buedi@feddit.org 2 points 8 months ago (1 children)

Thank you very much for the technical insight. It makes clear why things are the way they are, and it is good to see that you can host ActivityPub services on subdomains... so the issue I thought existed is not that big of an issue after all. I also love the discussion under your post, very interesting!

Thanks also to everyone else who replied!

 

Hi there,

I went through the documentation of GoToSocial, and there are some pieces of information that confuse me. For example, in the deployment considerations they state that once you have hosted a particular Fediverse service on a domain, you cannot switch to another technology. Further down, in the "Domain name" section, the article even gives the impression that switching technologies on the same domain will in fact cause issues across the whole Fediverse.

Two questions came up when reading through this:

  • Are the ActivityPub protocol and the technologies that depend on it really that fragile? Switching technologies on the same domain is something I would have done without a second thought until I found the technology I want to use for years (and which I might still swap out for another one many years down the road).
  • It is not clear from the documentation whether you can get around this by hosting the service I want to try under service1.example.com instead of example.com. The documentation states that you can host your users under user@service1.example.com while still serving the API under example.com. That does not solve the root issue, right?

Getting a new domain for each ActivityPub service I might want to implement and test / use does not really sound great to me. Maybe I just did not understand all of this properly and there is no issue?

[–] buedi@feddit.org 1 points 1 year ago

Sure, ESXi would have been interesting. I thought about it, but I did not test it because it is no longer interesting to me from a business perspective. And I am not keen on using it in my homelab, so I left it out and used that time to do something relaxing. It's my holiday right now :-)

 

I spent a few days comparing various hypervisors under the same workload and on the same hardware. This is a very specific workload, and results might differ for other workloads.

I wanted to share it here because many of us run very modest hardware, and getting the most out of it is probably something others are interested in, too. I also wanted to share it because maybe someone will find a flaw in the configurations I ran that could boost things further.

If you do not want to go to the post or read all of it, the very quick summary is that XCP-ng was the quickest and KVM the slowest. There is also a summary at the bottom of the post with some graphs, if that interests you. For everyone who reads the whole post, I hope it gives some useful insights for your self-hosting endeavours.