• 0 Posts
  • 34 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Thanks for the writeup! So far I’ve been using ollama, but I’m always open to trying out alternatives. To be honest, it seems I was oblivious to the existence of alternatives.

    Your post suggests that the same models with the same parameters generate different results when run on different backends?

    I can see how the backend would have an influence on handling concurrent API calls, RAM/VRAM efficiency, supported hardware/drivers, and general speed.

    But going as far as having different context windows and quality-degradation issues is news to me.
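    One way the context-window point can matter: a toy sketch (hypothetical, not ollama's or any backend's actual code) of how two runtimes loading the same model with the same sampling parameters can still feed it different input, simply because they apply different default context windows and truncate the prompt differently.

    ```python
    # Toy illustration (hypothetical): two backends with different default
    # context windows truncating the same prompt.
    def truncate_to_context(tokens, context_window):
        # Many runtimes silently drop the oldest tokens when the prompt
        # exceeds the window, so the model never sees them.
        return tokens[-context_window:]

    prompt = list(range(10))  # stand-in for 10 prompt tokens

    backend_a = truncate_to_context(prompt, context_window=8)
    backend_b = truncate_to_context(prompt, context_window=4)

    print(backend_a)  # [2, 3, 4, 5, 6, 7, 8, 9]
    print(backend_b)  # [6, 7, 8, 9]
    # Same model, same sampling parameters, different effective input --
    # hence potentially different generations.
    ```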



  • yes: sntx.space, check out the source button in the bottom right corner.

    I’m building/running it the homebrewed, unconventional route. That is, I have just a bit of HTML/CSS and other files I want to serve; I use Nix to build that into a usable website and serve it from one of my homelab machines via nginx. That is made available through a VPS running HAProxy and its public IP. The Nebula overlay network (VPN) connects the two machines.
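    The VPS side of a setup like this could be sketched roughly as follows (a hypothetical HAProxy fragment; the names and overlay address are illustrative, not the actual configuration):

    ```
    # Hypothetical HAProxy config on the VPS: pass TLS traffic through
    # to the homelab machine over the Nebula overlay network.
    frontend https-in
        bind *:443
        mode tcp
        default_backend homelab

    backend homelab
        mode tcp
        # Illustrative Nebula overlay address of the machine running nginx
        server web01 192.168.100.2:443 check
    ```

    In TCP mode HAProxy just forwards the encrypted stream, so TLS can terminate on the homelab box and the VPS never needs the certificates.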



  • This might very well become a setting for an upcoming P&P campaign of mine.

    The hook: You have been mysteriously receiving packets from a host calling itself <the-connochaetes>. The packets hint at the existence of an “ΛℵϚ∃⌊ζ” cult running large-scale brainwashing schemes. Will you heed the call and free the world? Or will you use the chance to increase your influence? It all starts with you joining this IRC: <REDACTED>