You wouldn’t steal a car…
At least the fandom has a big body of work to meme from.
The Avatar fandom has about 48 hours of total runtime. And novels that, tragically, no one memes.
Off topic, but not capitalizing on Korrasami is one of the stupidest things Viacom has ever done, lol. Which is saying something.
How could they possibly think “No one will like this, quietly sweep it under the rug…”
I don’t think that could be done at a scale that matters, because it doesn’t make you any money.
TBH the bigger threat is the corporate bots that already post in “human” subs. They’re destroying the site already, but Reddit doesn’t really care about that either, lol.
Does the Internet Archive scrape Reddit? Or have they?
Nothing, because that doesn’t cost Reddit short-term money, so they don’t care?
Probably longevity? Maybe heat?
Charging the last 20% is really hard on the battery. Also, Samsung has a history of (ahem) exploding batteries they overcharged, and exploding earbuds would be quite a meme.
It’s crazy “not the onion” territory.
Thing is, Disney wins a lot of lawsuits. What if they actually win this, especially in a higher court? Every tech company in the US would shove this into their TOS and basically be immune to lawsuits like this…
They have always hosted AVC videos. They have to for old/incompatible devices, because literally any toaster or piece of software on earth can play it.
But sometimes they are funny about what gets encoded to what, and of course they will always try to default to AV1/VP9 and Opus.
Surely it would fall back to mp4? Are some videos AV1/VP9 only?
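If anyone wants to check a specific video, yt-dlp will list every format YouTube actually serves for it. A quick sketch using its Python API (the URL is a placeholder, swap in whatever video you’re curious about):

```python
import yt_dlp  # pip install yt-dlp

URL = "https://www.youtube.com/watch?v=..."  # placeholder, any video works

# download=False only fetches format metadata, nothing is downloaded
with yt_dlp.YoutubeDL({"quiet": True}) as ydl:
    info = ydl.extract_info(URL, download=False)

# Collect the video codecs on offer; vcodec == "none" marks audio-only streams
codecs = {f["vcodec"] for f in info["formats"] if f.get("vcodec") not in (None, "none")}
print(codecs)  # usually a mix of avc1.* (H.264), vp09.*/vp9, and often av01.* (AV1)
```

Per the point above, you should basically always see at least one avc1 stream in that set.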
WTF, what was the justification for that rule before they changed it?
Well, the gist of it is in one sentence:
> The net result is that I have transmitted a message into my own past.
Basically, FTL automatically lets you make time machines, and this is bad™. It just doesn’t make any physical sense, so the consensus is that bad things happen (like black holes forming) when you actually push against the speed of light, with very reasonable explanations for why that happens.
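For anyone who wants the actual relativity behind that quote, it’s the standard “tachyonic antitelephone” argument, nothing exotic:

```latex
% Lorentz transform of the interval between sending and receiving a signal:
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right)
          = \gamma\,\Delta t\left(1 - \frac{u v}{c^2}\right),
\qquad u \equiv \frac{\Delta x}{\Delta t}
% If the signal is FTL (u > c), then any observer moving at c^2/u < v < c
% (i.e. perfectly subluminal) measures \Delta t' < 0: the signal arrives
% before it was sent. Have the receiver bounce an FTL reply back the same
% way, and the answer reaches you before you sent the question.
```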
The exception is wormholes, which are theoretically possible “FTL” travel, but only if you are very, very careful about where you put them. Otherwise they explode.
See Orion’s Arm’s FTL FAQ:
https://www.orionsarm.com/xcms.php?r=oa-faq&topic=FTL
And related concepts, like a wormhole’s failure mode (e.g. they immediately collapse if ever positioned in a way that allows for actual FTL travel):
https://www.orionsarm.com/eg-article/48545a0f6352a
https://www.orionsarm.com/eg-article/4754be03eb3bc
Orion’s Arm is really cool because it’s set in the far future, but it tries to limit all its engineering to what’s theoretically possible under current physics (just not with current engineering), and they have good explanations for it all. For example, warp drives are a thing, and theoretically plausible, but they do not allow for FTL travel.
If FTL is impossible (as is likely the case), there is a point where a better ship can’t catch up, even if it’s going like 0.9c.
A person of culture, I see.
8GB or 4GB?
Yeah, you should get kobold.cpp’s ROCm fork working if you can manage it; otherwise, use their Vulkan build.
Llama 8B at shorter context is probably good for your machine: it can fit entirely on the 8GB GPU, or at least be partially offloaded if it’s a 4GB one.
I wouldn’t recommend DeepSeek for your machine. It’s a better fit for older CPUs: it’s not as smart as Llama 8B, and it’s bigger than Llama 8B, but it runs super fast because it’s an MoE.
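For the 4GB case, here’s a back-of-the-envelope way to guess how many layers to offload. All the numbers are rough assumptions (quant size, overhead), so treat the output as a starting point for kobold.cpp’s --gpulayers setting, not gospel:

```python
# Ballpark VRAM math for partially offloading Llama 8B (all figures approximate)
model_gb = 4.9    # Llama 3.1 8B at Q4_K_M weighs roughly this much
n_layers = 32     # transformer blocks in the 8B model
vram_gb  = 4.0    # the GPU in question
overhead = 1.0    # KV cache + compute buffers; grows with context size

per_layer_gb = model_gb / n_layers                 # ~0.15 GB per layer
fit = int((vram_gb - overhead) / per_layer_gb)     # layers that should fit
print(f"try --gpulayers {min(fit, n_layers)}")     # ~19 on a 4GB card
```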
Oh I got you mixed up with the other commenter, apologies.
I’m not sure when Llama 8B starts to degrade at long context, but I wanna say it’s well before 128K, which is where other “long context” models start to look much more attractive depending on the task. Right now I’m testing Amazon’s Mistral finetune, and it seems to be much better than Nemo or Llama 3.1.
> 4-core i7, 16GB RAM and no GPU yet
Honestly as small as you can manage.
Again, you will get much better speeds out of “extreme” MoE models like deepseek chat lite: https://huggingface.co/YorkieOH10/DeepSeek-V2-Lite-Chat-Q4_K_M-GGUF/tree/main
Another thing I’d recommend is running kobold.cpp instead of ollama if you want to get into the nitty-gritty of LLMs. It’s more customizable and (ultimately) faster on more hardware.
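Once kobold.cpp is up, it’s also trivial to script against. A minimal sketch assuming the default port (5001) and the KoboldAI-style /api/v1/generate endpoint; double-check your build’s API docs if it differs:

```python
import requests

# kobold.cpp exposes a KoboldAI-compatible HTTP API, by default on port 5001
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "Explain mixture-of-experts models in one sentence.",
        "max_length": 100,    # tokens to generate
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["results"][0]["text"])
```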
Can you afford an Arc A770 or an old RTX 3060?
Used P100s are another good option. Even an RTX 2060 would help a ton.
27B is just really chunky on CPU, unfortunately. There’s no way around it. But you may have better luck with MoE models like deepseek chat or Mixtral.
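The intuition for why MoE helps so much on CPU: token generation is memory-bandwidth-bound, and an MoE only reads its *active* parameters per token. Very rough numbers below; the bandwidth and quant size are assumptions, and DeepSeek V2 Lite activates about 2.4B of its ~16B params:

```python
# Crude tokens/sec estimate for CPU inference (bandwidth-bound regime)
bandwidth_gbs = 50.0   # assumed dual-channel DDR4 desktop, ballpark

def tok_per_s(active_params_b: float, bytes_per_param: float = 0.56) -> float:
    """Rough upper bound: bandwidth / bytes read per token (~Q4 quant)."""
    return bandwidth_gbs / (active_params_b * bytes_per_param)

print(f"dense 27B:        ~{tok_per_s(27):.0f} tok/s")   # ~3 tok/s, chunky
print(f"DeepSeek V2 Lite: ~{tok_per_s(2.4):.0f} tok/s")  # ~37 tok/s
```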
Then the Lemmy title is misleading, no? Isn’t that against the rules?