• Lvxferre@mander.xyz
    5 months ago

    I’m reading your comment as “[AI is] Not yet [an existential threat], anyway”. If that’s inaccurate, please clarify, OK?

    With that reading in mind: I don’t think that the current developments in machine “learning” lead towards some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, and that looks like a tech dead end - sure, it might see some applications, but I don’t think it’ll progress much past the current state.

    In other words I believe that the AI that would be an existential threat would be nothing like what’s being created and overhyped now.

    • CanadaPlus@lemmy.sdf.org
      5 months ago

      Yeah, the short-term outlook doesn’t look too dangerous right now. LLMs can do a lot of things we thought wouldn’t happen for a long time, but they still have major issues and are running out of easy scalability.

      That being said, there are a lot of different training schemes and integrations with classical algorithms that could be tried. ChatGPT knows a scary amount of stuff (inb4 Chinese room); it just doesn’t have any incentive to use it except to mimic human-generated text. I’m not saying it’s going to happen, but I think it’s premature to write off the possibility of an AI with complex planning capabilities in the next decade or so.

      • Lvxferre@mander.xyz
        5 months ago

        I don’t think that a different training scheme or integrating it with already existing algos would be enough. You’d need a structural change.

        I’ll use a silly illustration for that; it’s somewhat long, so I’ll put it inside spoilers. (Feel free to ignore it, though - it’s just an illustration; the main claim is outside the spoiler tag.)

        The Mad Librarian and the Good Boi

        Let’s say that you’re a librarian. And you have lots of books to sort out. So you want to teach a dog to sort books for you. Starting with sci-fi and geography books.

        So you set up the training environment: a table with a sci-fi book and a geography book. And you give your dog a treat every time that he puts the ball over the sci-fi book.

        At the start, the dog doesn’t do it. But then, as you train him, he’s able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books? You test this out by switching the placement of the books and asking the dog to perform the same task; now he’s putting the ball over the geography book. Nope - he doesn’t know how to tell sci-fi and geography books apart; you were “leaking” the answer through the placement of the books.

        Now you repeat the training with a random position for the books. Eventually, after a lot of training, the dog is able to put the ball over the sci-fi book regardless of position. Now the dog recognises sci-fi books, right? Nope - he’s identifying books by their smell.

        To fix that you try again, with new copies of the books. Now he’s identifying them by colour: the geography book has the same grey/purple hue as grass (from a dog’s PoV), and the sci-fi book is black like the neighbour’s cat. The dog would happily put the ball over the neighbour’s cat and ask “where’s my treat, human???”, if the cat allowed it.

        Needs more books. You assemble a plethora of geo and sci-fi books. Since sci-fi covers typically tend to be dark, and the geo books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and say that the 30% error is the dog “hallucinating”.

        We might argue that, by now, the dog should be “just a step away” from recognising books by topic. But we’re just fooling ourselves: the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns. What the dog is doing is still somewhat useful, but it won’t go much past that.

        And, even if you and the dog lived forever (denying St. Peter the chance to tell him “you weren’t a good boy. You were the best boy.”), and spent most of your time on that training routine, his little brain wouldn’t be able to create the associations necessary to actually identify a book by its topic - that is, by its content.
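
        (If you prefer code to dog stories: here’s a toy, entirely made-up sketch of the same failure. The numbers and the “darkness” feature are invented for illustration; the point is only that a learner that never sees the text can still score somewhere around the dog’s 70% by latching onto cover colour.)

        ```python
        # Toy, made-up illustration of the dog's problem: a "classifier" that only
        # ever sees cover darkness, never the text, can still do okay-ish on topic.
        import random

        random.seed(0)

        def make_book(topic):
            # Invented spurious correlation: sci-fi covers tend to be darker
            # than geography covers, with plenty of overlap.
            mean = 0.8 if topic == "sci-fi" else 0.4
            return {"topic": topic, "darkness": random.gauss(mean, 0.35)}

        books = [make_book(random.choice(["sci-fi", "geography"])) for _ in range(1000)]

        # "Training": pick the darkness threshold that best separates the two topics.
        def accuracy(threshold):
            hits = sum((b["darkness"] > threshold) == (b["topic"] == "sci-fi") for b in books)
            return hits / len(books)

        best = max((i / 100 for i in range(101)), key=accuracy)
        print(f"threshold {best:.2f} -> accuracy {accuracy(best):.0%}")
        # Lands somewhere around 70%: useful, but it "recognises" cover colour,
        # not topic, and more of the same training can't fix that.
        ```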

        I think that what happens with LLMs is a lot like that. With a key difference - dogs are considerably smarter than even state-of-the-art LLMs, even if they’re unable to speak.

        At the end of the day, LLMs are complex algorithms associating pieces of words based on statistical inference. This is useful, and you might even see some emergent behaviour - but they don’t “know” stuff, and this is trivial to show, as they fail to perform simple logic even with pieces of info that they’re able to reliably output. Different training and/or algos might change the info that they output, but they won’t “magically” go past that.
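
        (For illustration only - a real LLM is a transformer over subword tokens, nothing like this - here’s the “associating pieces of words by statistics” idea boiled down to a word-level bigram toy, with an invented corpus. It produces fluent-looking continuations without there being any “knowledge” behind them, only co-occurrence counts.)

        ```python
        # Grossly simplified stand-in for "associating pieces of words by statistics":
        # a word-level bigram model. The training signal is the same flavour as an
        # LLM's - predict the next piece of text from what came before, nothing more.
        import random
        from collections import Counter, defaultdict

        corpus = (
            "the dog puts the ball over the sci-fi book "
            "the dog puts the ball over the geography book "
            "the librarian gives the dog a treat"
        ).split()

        # Count which word follows which.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def generate(start, length=8):
            word, out = start, [start]
            for _ in range(length):
                options = follows.get(word)
                if not options:
                    break
                # Sample the next word in proportion to how often it followed this one.
                words, counts = zip(*options.items())
                word = random.choices(words, weights=counts)[0]
                out.append(word)
            return " ".join(out)

        print(generate("the"))
        # e.g. "the dog puts the ball over the geography book" - fluent-looking,
        # but there's no fact of the matter behind it, only word co-occurrence.
        ```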

        • CanadaPlus@lemmy.sdf.org
          5 months ago

          Chinese room, called it. Just with a dog instead.

          I have this debate so often that I’m going to try something a bit different. Why don’t we start by laying out how LLMs do work. If you had to explain, as fully as you could, the algorithm we’re talking about, how would you do it?