• 0 Posts
  • 234 Comments
Joined 1 year ago
Cake day: June 30th, 2023




  • vMix popularity exploded during the pandemic. A lot of conferences became a blend of Teams/Zoom/Google Meet and vMix.

    Might be hardware based, like a multi-M/E video mixer (Blackmagic make cheap ones), or maybe more of a screen manager (like a Barco E2 or Analog Way LiveCore). But, unless there are production requirements, vMix is much more likely. It’s (now) proven, and much cheaper!

    OBS can absolutely do it. There are other open source programs that can do it.
    I’ve seen people bastardise Resolume into something that looks decent.
    There are also some online studio systems, so everything you do is virtualised. StreamYard used to be like this, until it was bought by Hopin (I think it was Hopin).


  • You can do a reverse proxy on the VPS and use SNI routing (the requested domain is sent in clear text during the TLS handshake, even for HTTPS), then use Proxy Protocol to attach the real source IP to the forwarded TCP connection.
    This way, you don’t have to terminate HTTPS on the VPS, and you can load balance between a couple of wireguard peers for redundancy (or direct them to different reverse proxies, or whatever).
    On your home servers, you will need an additional frontend that accepts Proxy Protocol from the VPS (the Proxy Protocol header isn’t standard HTTP/S, so a reverse proxy that isn’t expecting it will drop the connection as unknown/broken).
    This way, your home reverse proxy knows the original client IP and can attach it to the decrypted HTTP requests as an X-Forwarded-For header. Or you can do ACLs based on the original client IP. Or whatever.

    I haven’t found a firewall that pays attention to Proxy Protocol headers, but that hasn’t really been an issue for me; I don’t have a use case for it.
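
    A minimal nginx sketch of both ends (the stream module on the VPS, the realip module at home); all domains, tunnel IPs and upstream names here are made up for illustration:

    ```
    # --- On the VPS: route by SNI without terminating TLS ---
    stream {
        map $ssl_preread_server_name $backend {
            app.example.com    wg_peer_a;
            cloud.example.com  wg_peer_b;
            default            wg_peer_a;
        }
        upstream wg_peer_a { server 10.8.0.2:443; }   # wireguard peer 1
        upstream wg_peer_b { server 10.8.0.3:443; }   # wireguard peer 2

        server {
            listen 443;
            ssl_preread    on;       # peek at the TLS ClientHello for the SNI name
            proxy_pass     $backend;
            proxy_protocol on;       # prepend the original source IP/port
        }
    }

    # --- On the home server: the frontend that accepts Proxy Protocol ---
    http {
        server {
            listen 443 ssl proxy_protocol;    # TLS is terminated here, not on the VPS
            set_real_ip_from 10.8.0.1;        # only trust the VPS's tunnel address
            real_ip_header   proxy_protocol;  # $remote_addr becomes the real client IP
            # ssl_certificate / ssl_certificate_key / proxy_pass etc go here as usual
        }
    }
    ```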







  • Any older disc-based console also required a memory card.
    Pretty sure the N64 controller was the first to have an analogue joystick.
    I think a lot of the quirks of the N64 were because they were essentially first drafts. A lot of firsts, a lot of ground-breaking tech.
    Nobody knew what they were doing at the time: nothing was wrong.


  • It’s not a workaround.
    In the old days, if you had 2 services that were hard-coded to use the same network port, you would need virtualisation or a separate server, and you’d have to make sure the networking for those was correct.

    Network ports allow multiple services to share the same network adapter; a port is like a “sub” address.
    Docker being able to remap host ports to container ports (e.g. -p 8080:80 maps host port 8080 to container port 80) is a huge feature.
    If a container doesn’t need to be accessed from outside the Docker network, you don’t need to expose the port at all.

    The only way to have multiple services on the same port is to use either a load balancer (for multiple instances of the same service) or an application-aware reverse proxy (like nginx, HAProxy, or Caddy for web things; I’m sure there are application-aware reverse proxies for other protocols too).
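
    As a rough idea of the latter: one nginx listener on port 443, routing by hostname to two hypothetical backends (names and ports are illustrative):

    ```
    http {
        server {
            listen 443 ssl;
            server_name app.example.com;             # service A's hostname
            # ssl_certificate / ssl_certificate_key omitted for brevity
            location / { proxy_pass http://127.0.0.1:8080; }
        }
        server {
            listen 443 ssl;                          # same port, different vhost
            server_name cloud.example.com;           # service B's hostname
            location / { proxy_pass http://127.0.0.1:8081; }
        }
    }
    ```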





  • If they are on the same subnet, why are they going via the router? Surely the OS knows the destination is a local address within its subnet and will send it directly (via ARP), as opposed to not knowing where to send the packet and handing it to the default gateway.

    I’m assuming you are using a standard 24-bit subnet mask, because you haven’t provided anything that indicates otherwise, and the issue you describe would be indicative of the local link being used.
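
    As a quick check (the addresses here are hypothetical), ip route get shows exactly which path the kernel would pick:

    ```
    $ ip route get 192.168.1.20
    192.168.1.20 dev eth0 src 192.168.1.10 uid 1000
    # no "via" hop: same /24 as the host, so it's delivered on the local link

    $ ip route get 8.8.8.8
    8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.10 uid 1000
    # off-subnet: handed to the default gateway
    ```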


  • For me, after looking over the docs, it’s close enough to JavaScript that it might as well adopt more of that syntax (for example, conditionals and loops don’t use parentheses). It also has some similarities to Python, but again, not enough to be Python.

    Feels like an in-between language that has enough similarities to seem easy, but enough gotchas to regularly catch you out.
    And then there are extra features like the if-chaining, which doesn’t contain the keyword if or switch at all, so you have to know that the structure implies an if/switch conditional.

    Especially for something like bash scripting, which devs probably don’t spend as much time on as Python or JS. So it would probably take them longer (and break their brain more) than just writing the script in Python/JS, or dealing with bash directly.

    It’s an improvement over bash, and it’s nice that it transpiles to bash.
    I might have to play around with it and see how it actually feels to use


  • So, is public accessibility actually required?
    Does it need to be exposed to the public internet?

    Why not use WireGuard (or another VPN)? Even easier is Tailscale.
    If you are hand-picking users (i.e. it doesn’t actually need to be publicly accessible), then a VPN is the most secure option; just run a reverse proxy behind it for convenience & certs.
    Or set up client-certificate authentication, so only users that install a certificate issued by you can connect to the service (dunno how that works for 3rd-party Immich apps).
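
    For what it’s worth, client-cert auth is only a couple of directives in nginx. A rough sketch, with made-up paths, assuming you issue the client certs from your own private CA:

    ```
    server {
        listen 443 ssl;
        server_name photos.example.com;
        ssl_certificate     /etc/nginx/certs/server.pem;
        ssl_certificate_key /etc/nginx/certs/server.key;

        ssl_client_certificate /etc/nginx/certs/my-private-ca.pem;  # CA that signs client certs
        ssl_verify_client      on;   # anyone without a cert you issued gets rejected

        location / { proxy_pass http://127.0.0.1:2283; }  # e.g. an immich instance
    }
    ```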

    Like I asked, what is your actual threat model?
    What are your requirements?
    Is public accessibility actually required?


  • That got a bit long.
    Reading more into BunkerWeb:

    Things like the “limit” feature are going to doink people on CGNAT or large corporate networks. I’ve had security stuff tripped by a company using my software, and it’s a PITA because all the requests from legit users come from only a few IP addresses.

    Antibot isn’t going to be helpful for things like JS requests, because cross-origin fetch requests don’t include cookies by default (the app has to opt in with credentials: 'include') - so the application needs to be specifically built for this (at which point, do it at the application level so it can scale more easily?).
    And captcha. For whatever that is worth these days.

    Reverse Scan is going to slow down every request (it scans the connecting client for suspicious open ports, with a 500ms delay by default).

    Country is just GeoIP filtering.

    Bad Behaviour is just rate limiting (although with a 24h ban). Sucks if a few corporate/CGNAT users all hit a 404 and suddenly that entire company’s/ISP’s IP is blocked for a day.

    This seems like something to use when running a Tor service or something, where security is more important than user experience. Like, every feature seems to punish legit users.


  • LE certs can always be “side-loaded” by acme.sh or certbot or whatever, and the reverse proxy restarted to use the new certs. So the whole “pro subscription to use specific certs” thing shouldn’t be a factor, except a little more work/config (so, money vs time).
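
    Something like this, with acme.sh (the domain and paths are illustrative):

    ```
    # issue a cert via the webroot challenge
    acme.sh --issue -d example.com --webroot /var/www/acme

    # copy the cert where the proxy expects it, and re-run the reload
    # command automatically on every renewal
    acme.sh --install-cert -d example.com \
      --key-file       /etc/nginx/certs/example.com.key \
      --fullchain-file /etc/nginx/certs/example.com.pem \
      --reloadcmd      "systemctl reload nginx"
    ```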

    Now for my opinion…

    For base security, all a reverse proxy is doing is looking at whatever you tell it to look at in an HTTP request, and forwarding/dropping/blocking accordingly.
    HAProxy is well battle-tested. Nginx is well battle-tested. Traefik and Caddy are comparatively newer contenders, but considering their adoption they are probably well battle-tested too.
    Which means an established reverse proxy is only going to be as secure as the software it’s forwarding traffic to.

    If there happens to be some mental TLS-handshake RCE that comes up, chances are they all use the same underlying TLS library, so all of them will be susceptible…
    But at least an attacker only gets access to the reverse proxy server. Which is why it’s worth having that in a locked-down, isolated VM, ideally built in a way that is extremely easy to rebuild (declarative configs like docker-compose and some scripts, or even something like NixOS for an immutable OS).

    As for add-ons… Most WAFs only look for things like XSS injection, SQL injection, or exploitative HTTP request formats. Very, very basic attack vectors that any decent HTTP stack and reasonably built software shouldn’t even have to worry about.
    As for DDoS protection: a DDoS is more likely to saturate your network connection, which (for self hosting) a WAF isn’t going to be able to do anything about.
    I’m not sure how good they actually are against a DoS attack caused by bugs/inefficiencies in the application. Maybe they monitor for long/increasing response times and block further requests to those endpoints? Might cause a lot of false positives for your users.

    So, the only real benefit - that I see - is zero-day exploit protection… and that only matters if it’s built around near-realtime updates, like CrowdSec is. I don’t know how it compares to Cloudflare’s WAF, tho.
    Any zero-day protection that isn’t managed and updated in near-realtime is about as effective as you watching the news for your installed services/programs and updating them regularly - because you’ll likely update your WAF and your apps at the same time you hear about an exploit, or regular scheduled updates will deal with it before you even learn about it.

    I guess there is security in layers, and if layered security is more important than CPU consumption/response time/requests per second (i.e. you have an abundance of processing power, are servicing few users, etc), then it might be a no-brainer.

    The only other time I can see a generic WAF being useful is if you have rolled your own framework and HTTP stack and are running your own software. Because you won’t get that right… so might as well have the extra protection of a WAF.

    Or, I guess, with really old unsupported software.
    But surely there is a newer take or fork of it?

    There is also the “am I worth it” factor.
    Like, what is your actual threat model?
    Defend against the usual script-based attacks (i.e. the low-hanging fruit), only expose/forward the ports that are actually required, use some sensible segregation that isolates more exposed systems (i.e. a proxy) from more sensitive ones (i.e. a database or storage), and update regularly on stable/LTS branches.

    Edit:
    I just googled BunkerWeb.
    First we had firewalls. Then we got web application firewalls. Along came next-generation firewalls. Now we have Next Generation Web Application Firewalls, with paid features like “Pay per protected services” and “Best effort support included”.

    Maybe I’m just salty