My meme/shitposting alt, other @Deebsters are available.
Their app and website are both atrocious. I’ve got a rant somewhere on Lemmy about one time it made me scream with impotent rage at the UX, and I’m someone comfortable with editing the DOM/scripting to fix the worst of it.
Well said, at least - this story’s almost a decade old.
You’ve got a good definition there, but Wikipedia has (a lot) more info: https://en.wikipedia.org/wiki/Kayfabe
The quote’s a famous monologue from Hamlet.
Are they allowed to put jokes in legal documents like this? (I know it’s gone now)
I’ve raged at the incompetent UX design so many times, like recently when I was trying to add videos to the current playlist in a certain order, since you can’t reorder it yourself. The mini player blocked the controls I needed for the last item on the page, but closing the player wiped out the playlist. Cue scream of rage and a few choice words at volume.
Works here too, but when I tried to save it to the Internet Archive, the saved page didn’t have the AI results 😟
Rickroll: (v) to troll the youth using memes
Makes sense!
Ironically, it’s a pretty well-known one itself (you see people just refer to it by mentioning “today’s 10000”).
Hmm, I think they’re close enough to be able to say a neural network is modelled on how a brain works - it’s not the same, but then you reach the other side of the semantics coin (like the “can a submarine swim” question).
The plasticity part is an interesting point, and I’d need to research that to respond properly. I don’t know, for example, if they freeze the model because otherwise input would ruin it (internet teaching them to be sweaty racists, for example), or because it’s so expensive/slow to train, or high error rates, or it’s impossible, etc.
When talking to laymen I’ve explained LLMs as glorified text autocomplete, but there’s some discussion on the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict better.
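To make the “glorified autocomplete” point concrete, here’s a minimal toy sketch: a word-level bigram model that always continues with the most frequent next word seen in its training text. (This is purely illustrative - real LLMs use transformer networks over subword tokens and sample probabilistically, not a lookup table; the corpus and function names here are made up.)

```python
# Toy "autocomplete": continue a prompt with the most common next word
# observed in a tiny training corpus. A caricature of next-token
# prediction, not how an actual LLM works.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the corpus.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def autocomplete(word, steps=3):
    out = [word]
    for _ in range(steps):
        if word not in nxt:
            break  # no continuation seen for this word
        word = nxt[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # → "the cat sat on"
```

Scale that idea up from word pairs to billions of parameters predicting the next token in context, and the philosophical question above is whether doing that prediction well enough amounts to intelligence.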
Humans invent stuff (without realising it) too, so I don’t think that’s enough to disqualify something from being intelligent.
The interesting question is how much of this is due to the training goal basically being “a sufficiently convincing response to satisfy a person” (pretty much the same as on social media) and how much of it is a fundamental flaw in the whole idea.
I agree with your broad point, but absolutely not in this case. Large Language Models are 100% AI, they’re fairly cutting edge in the field, they’re based on how human brains work, and even a few of the computer scientists working on them have wondered if this is genuine intelligence.
On the spectrum of scripted behaviour in Doom up to sci-fi depictions of sentient silicon-based minds, I think we’re past the halfway point.
I had to check, but the real thing is the Dairy Council and this is a parody account. Obviously it’s way more interesting than the real @dairyuk account.
You’re claiming that Generative AI isn’t AI? Weird claim. It’s not AGI, but it’s definitely under the umbrella of the term “AI”, and at the more advanced end (compared to e.g. video game AI).
This one’s obviously fake because of the capitalisation errors and the “..”, but the fact that it’s otherwise (kinda) plausible shows how useless AI is turning out to be.
I’m going to tag you in next time I lose the game.
So is this a human doing a great Attenborough impression, AI doing it, or the man himself*?
* wildcard option