• 19 Posts
  • 1.07K Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • Okay. Do you want to debug your situation?

    What’s the operating system of the host? What’s the hardware in the host?

    What’s the operating system in the client? What’s the hardware in the client?

    What does the network look like between the two, including every cable and switch?

    Do you get an acceptable experience if you stream just a single monitor instead of multiple monitors?
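
    For the network question, here is a rough Python sketch for measuring round-trip time between the client and the host over plain TCP. The host address and port are placeholders I made up, not anything from this thread; point it at whatever address and port your streaming server actually listens on.

    ```python
    # Rough RTT probe: repeatedly opens a TCP connection to the streaming host
    # and reports connect latency. HOST and PORT are placeholder values --
    # substitute your own host address and your server's actual port.
    import socket
    import statistics
    import time

    HOST = "192.168.1.50"   # hypothetical address of the GPU host
    PORT = 47989            # placeholder port; use your server's real port
    SAMPLES = 20

    rtts = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        try:
            with socket.create_connection((HOST, PORT), timeout=2):
                pass
        except OSError as exc:
            print(f"connection failed: {exc}")
            continue
        rtts.append((time.perf_counter() - start) * 1000)  # milliseconds
        time.sleep(0.1)

    if rtts:
        print(f"samples: {len(rtts)}")
        print(f"min/median/max: {min(rtts):.2f} / "
              f"{statistics.median(rtts):.2f} / {max(rtts):.2f} ms")
    ```

    If the median sits well under a frame time (about 16.7 ms at 60 Hz), the network itself is unlikely to be the bottleneck.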


  • Remember, the original poster here was talking about running their own self-hosted GPU VM, so they’re not paying anybody else for the privilege of using their hardware.

    I personally stream with Moonlight on my own network and have no issues; from my perspective it’s just like being at the computer.

    If it doesn’t work for you, fair enough, but it can work for other people, and I think the original poster’s idea makes sense. They should absolutely run a GPU VM cluster and have fun with it; it would be totally usable.


  • Fair enough. If you know it doesn’t work for your use case, that’s fine.

    As demonstrated elsewhere in this discussion, GPU HEVC encoding only adds about 10 ms of latency, and the stream can then transit fiber-optic networking at very low latency (a rough end-to-end budget is sketched at the bottom of this comment).

    Many GPUs have HEVC decoders on board, as do most cell phones. Most newer Intel and AMD CPUs have a hardware HEVC decode pipeline as well.

    I don’t think anybody’s saying a self-hosted GPU VM is for everybody, but it does make sense for a lot of use cases. And that’s where I think our schism is coming from.


    As far as the $2,000 fiber transducer goes… it’s doing the exact same thing, just with more specialized equipment and maybe a little lower latency.
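
    To put rough numbers on the latency argument, here is a back-of-the-envelope budget in Python. The ~10 ms encode figure is the one cited above; the propagation, switching, decode, and display figures are illustrative assumptions, not measurements.

    ```python
    # Back-of-the-envelope latency budget for streaming a desktop from a
    # self-hosted GPU VM over a short fiber link. The ~10 ms encode figure is
    # from the discussion above; every other number is an assumed,
    # illustrative value -- substitute your own measurements.

    LIGHT_SPEED_IN_FIBER_KM_PER_MS = 200      # roughly 2/3 of c in glass
    LINK_LENGTH_KM = 0.1                      # assumed 100 m run in a home/lab

    budget_ms = {
        "capture + HEVC encode (GPU)": 10.0,  # figure cited in the thread
        "fiber propagation": LINK_LENGTH_KM / LIGHT_SPEED_IN_FIBER_KM_PER_MS,
        "switching / packetization (assumed)": 1.0,
        "HEVC decode (GPU/SoC, assumed)": 5.0,
        "display / compositor (assumed)": 8.0,
    }

    total = sum(budget_ms.values())
    for stage, ms in budget_ms.items():
        print(f"{stage:38s} {ms:7.3f} ms")
    print(f"{'total':38s} {total:7.3f} ms")
    print(f"frames at 60 Hz this adds up to: {total / (1000 / 60):.2f}")
    ```

    Under these assumptions the whole pipeline comes out to roughly a frame and a half at 60 Hz, with the fiber propagation itself contributing well under a millisecond, which is why the local-network hop is rarely the limiting factor.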