
  • Vyatta and Vyatta-based routers (EdgeRouter, etc.), I would say, are good enough for the average consumer. If we’re deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs Tailscale, I think we’re certainly past accepting a budget consumer router as acceptable for these and other needs.

    Also, you don’t need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place to switch WireGuard profiles based on the network SSID. At home, all traffic is routed locally; outside of my network, everything goes through DDNS/port forwarding.

    If you’re really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself using an external VPS as a relay. That way you don’t have to open anything directly, and internal traffic still routes when you don’t have an internet connection at home. It’s basically what Tailscale is, except in this case you control the keys and have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
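    As a rough sketch of the relay idea (the 10.10.0.0/24 overlay, vps.example.com, and the key placeholders are all hypothetical, not a drop-in config): the VPS runs a WireGuard interface with IP forwarding enabled and lists each device as a peer, every device points its endpoint at the VPS, and peers then reach each other through the relay without any port forwarding at home.

      # /etc/wireguard/wg0.conf on the VPS relay (hypothetical overlay network)
      # Requires net.ipv4.ip_forward=1 so the relay passes traffic between peers.
      [Interface]
      Address = 10.10.0.1/24
      ListenPort = 51820
      PrivateKey = <relay-private-key>

      # home server
      [Peer]
      PublicKey = <home-public-key>
      AllowedIPs = 10.10.0.2/32

      # phone
      [Peer]
      PublicKey = <phone-public-key>
      AllowedIPs = 10.10.0.3/32

      # On each device, the only peer is the relay, e.g.:
      # [Peer]
      # PublicKey = <relay-public-key>
      # Endpoint = vps.example.com:51820
      # AllowedIPs = 10.10.0.0/24
      # PersistentKeepalive = 25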

  • Fail2ban and containers can be tricky because, under the hood, you’ll often have container rules automatically inserting themselves above host rules in iptables. The Docker documentation has a good write-up on how to solve it for their implementation:

    https://docs.docker.com/engine/network/packet-filtering-firewalls/

    For your use case specifically: if you’re using VMs only, you could run it within any VM that is exposing traffic, but for containers you’ll have to run Fail2ban on the host itself. I’m not sure how LXC handles this, but I assume it’s probably similar to Docker.

    The simplest solution would be to just put something physically between your hypervisor and the Internet (a Raspberry Pi-based firewall, etc.).
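    For the container case, the usual pattern from that write-up is to land your ban rules in Docker’s DOCKER-USER chain, which is evaluated before the forwarding rules Docker creates for published ports (container traffic goes through FORWARD, so bans in INPUT don’t catch it). A minimal sketch of a host-level Fail2ban jail doing that; the jail name, filter, and log path here are made up for illustration:

      # /etc/fail2ban/jail.local on the host (hypothetical jail)
      [myapp]
      enabled  = true
      port     = http,https
      filter   = myapp
      logpath  = /var/log/myapp/access.log
      # Point the iptables ban action at DOCKER-USER instead of INPUT so
      # bans take effect before Docker's published-port forwarding rules.
      chain    = DOCKER-USER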

  • There is also the argument that systemd is more complicated under the hood and harder to troubleshoot, particularly because of its inherent parallelism and dependency-tree design, whereas SysV init was inherently serial. It was much more straightforward to pick the order in which services started and shut down on a SysV init system.

    For example, say I write a service and I want it to always be the first service stopped during a shutdown, with all other services waiting for it to stop before shutting down themselves. That was trivial to do on a SysV init system; it’s basically impossible on systemd.

    For those wondering: yes, I did run into this situation. My solution was clobbering the shutdown, poweroff, and restart binaries with scripts earlier in the PATH search that stop my service, verify that it has stopped, and then hook back to systemd to do the actual power event.
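    A rough sketch of what such a wrapper can look like (the service name and install path are made up, and the real setup would cover the reboot/shutdown wrappers the same way):

      #!/bin/sh
      # Hypothetical /usr/local/sbin/poweroff, found before the real binary in $PATH.
      # Stop the critical service first and wait until it is actually down,
      # then hand control back to systemd for the real power event.
      systemctl stop my-critical.service
      while systemctl is-active --quiet my-critical.service; do
          sleep 1
      done
      exec systemctl poweroff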

  • How has nobody in this thread said check_mk yet?

    It’s free and you host it yourself. It’s built on Nagios, compatible with Nagios plugins, and supports SNMP or agent-based checks. It can email, SMS, Slack, or Discord you when something breaks, and you can write your own custom checks in any language that can output to a local console… I could never imagine even looking for something else.
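    To give a sense of how simple those custom checks are, here’s a made-up example of a local check (the path and thresholds are purely illustrative): the agent runs any executable dropped into its local/ directory, and each line of output is status, service name, perfdata, then the summary text.

      #!/bin/sh
      # Hypothetical local check, e.g. /usr/lib/check_mk_agent/local/myapp_queue
      # Output format: <status> <service_name> <perfdata> <summary>
      backlog=$(ls /var/spool/myapp 2>/dev/null | wc -l)
      if [ "$backlog" -gt 100 ]; then
          echo "2 Myapp_queue backlog=$backlog CRIT - $backlog jobs queued"
      elif [ "$backlog" -gt 10 ]; then
          echo "1 Myapp_queue backlog=$backlog WARN - $backlog jobs queued"
      else
          echo "0 Myapp_queue backlog=$backlog OK - $backlog jobs queued"
      fi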