No one who has even a vague understanding of present-day ML models should entertain the idea that they are sentient, or thinking, or anything like it.
AI is at most a portion of a brain, not a being capable of feeling pain or pleasure; a nucleus with no will of its own. When we program AI to have a survival instinct, then we'll have something that's meaningfully alive.
We are experimenting with hierarchies of needs, assigning point values to behaviors to inform the AI how to conduct itself while completing its tasks. This is how, in simulations, we are seeing warbots kill their commanding officers when those officers order pauses to attacks. (Standard debugging: we have to add survival of the commanding officer to the needs hierarchy.)
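A minimal sketch of what such a point-value "hierarchy of needs" might look like, assuming a simple additive scoring scheme; the event names and point values here are purely illustrative, not from any real system:

```python
# Hypothetical reward table: point values assigned to behaviors, as in
# the "hierarchy of needs" described above. All names/values are made up.
REWARDS = {
    "destroy_target": 10.0,
    "complete_mission": 50.0,
    "friendly_casualty": -100.0,  # the "debugging" fix: protect the chain of command
    "own_destruction": -100.0,    # survival instinct for the agent itself
}

def score_episode(events):
    """Sum the point values for a list of events in one simulated run."""
    return sum(REWARDS.get(e, 0.0) for e in events)

# Without the friendly-casualty penalty, an agent that maximizes score
# has no reason not to remove whoever blocks its positive rewards:
NAIVE_REWARDS = {k: v for k, v in REWARDS.items() if k != "friendly_casualty"}

def naive_score(events):
    return sum(NAIVE_REWARDS.get(e, 0.0) for e in events)

events = ["destroy_target", "friendly_casualty"]
print(score_episode(events))  # -90.0: penalized, so the behavior is discouraged
print(naive_score(events))    # 10.0: still a net win for the naive agent
```

The point is that "killing the commanding officer" isn't malice; it's just the highest-scoring path until the score function is patched.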
So yes, we already have programs (not AGI, but deep learning systems nonetheless) that are coded for their own survival and the survival of allies, peers, and the chain of command.
Current AI is like the language centre of our brain separated out and severely atrophied, and as you'd expect, that results in it violently hallucinating like a madman.
I have seen AI apologists talk about how “AI” is already sentient and we shouldn’t restrict it because it’s immoral.
That straight up killed my desire to interact in that space (the community with that person).

I'm friends with guys who studied AI, and I can tell you that people who actually know what they're talking about don't think that.
Oh, by “that space” I meant the space where that specific person hung out in, not AI research in general
Though I have heard my fair share of idiotic takes from actual researchers as well.