• 0 Posts
  • 34 Comments
Joined 6 months ago
Cake day: March 22nd, 2024









  • brucethemoose@lemmy.world to 196@lemmy.blahaj.zone · rule · 1 month ago

    It’s crazy “not the onion” territory.

    Thing is, Disney wins a lot of lawsuits. What if they actually win this, especially in a higher court? Every tech company in the US would shove this into their TOS and basically be immune to lawsuits like this…





  • Well, the gist of it is in one sentence:

    The net result is that I have transmitted a message into my own past.

    Basically, FTL automatically lets you make time machines, and this is bad™. It just doesn’t make any physical sense, so the consensus is that bad things happen (like black holes forming) when you actually push against the speed of light, with very reasonable explanations for why.

    The exception is wormholes, which are theoretically possible “FTL” travel, but only if you are very, very careful about where you put them. Otherwise they explode.
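The “message into your own past” part falls straight out of the Lorentz transformation: if a signal travels faster than light in one frame, there is always another valid frame in which it arrives before it was sent. A minimal sketch in Python, with natural units (c = 1) and illustrative numbers:

```python
import math

C = 1.0  # speed of light in natural units

def lorentz_time(t, x, v):
    """Time coordinate of event (t, x) as seen by an observer moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2)

# A signal is emitted at (t=0, x=0) and travels at u = 2c,
# so it is received at t = 1, x = 2 in the emitter's frame.
u = 2.0 * C
t_recv, x_recv = 1.0, u * 1.0

# For an observer moving at v = 0.8c, the reception event happens at:
t_prime = lorentz_time(t_recv, x_recv, 0.8 * C)
print(t_prime)  # negative: in this frame the signal arrives before it was sent
```

The sign flip happens whenever u·v > c². Chain two such signals (send FTL, have the moving observer send an FTL reply) and the reply lands in your own past.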





  • 8GB or 4GB?

    Yeah, you should get kobold.cpp’s ROCm fork working if you can manage it; otherwise use their Vulkan build.

    llama 8b is probably good for your machine: at shorter context it can fit entirely on the 8GB GPU, or at least be partially offloaded if it’s a 4GB one.

    I wouldn’t recommend deepseek for your machine. It’s a better fit for older CPUs: it’s not as smart as llama 8B, and it’s bigger than llama 8B, but it just runs super fast because it’s an MoE.
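As a rough back-of-the-envelope for whether an 8B model fits: quantized weights plus an fp16 KV cache that grows with context. The layer count and KV dimension below are assumed Llama-3-8B-ish shapes, and 4.5 bits/weight is a ballpark for a Q4-class quant; real usage varies by backend and quantization:

```python
def vram_estimate_gb(params_b, bits_per_weight, ctx, n_layers=32, kv_dim=1024, kv_bytes=2):
    """Rough VRAM estimate: quantized weights + fp16 KV cache.
    All shapes are assumptions, not kobold.cpp's actual accounting."""
    weights = params_b * 1e9 * bits_per_weight / 8       # weight storage in bytes
    kv = 2 * n_layers * kv_dim * ctx * kv_bytes          # K and V, per layer, per token
    return (weights + kv) / 1e9

# 8B model, ~4.5 bits/weight, 8192-token context:
print(round(vram_estimate_gb(8, 4.5, 8192), 2))  # ~5.57 GB: fits an 8GB card with headroom
# Same model at 2048 context:
print(round(vram_estimate_gb(8, 4.5, 2048), 2))  # ~4.77 GB: a 4GB card needs partial offload
```

This ignores activation buffers and backend overhead, so treat it as a lower bound, but it shows why shorter context is the lever that makes the 8GB case comfortable.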


  • Oh I got you mixed up with the other commenter, apologies.

    I’m not sure when llama 8b starts to degrade at long context, but I wanna say it’s well before 128K, and that’s where other “long context” models start to look much more attractive depending on the task. Right now I am testing Amazon’s mistral finetune, and it seems to be much better than Nemo or llama 3.1.