A partnership with OpenAI will let podcasters replicate their voices to automatically create foreign-language versions of their shows.

  • sudoshakes@reddthat.com · 1 year ago

    A large language model took a 3-second snippet of a voice and extrapolated from it the entire spoken English lexicon in that voice, in a way that was indistinguishable from the real person to banking voice-verification algorithms.

    We are so far beyond what you think of when we say the word AI, because we replaced the underlying thing that it is without most people realizing it. The speed at which large language models are progressing right now is mind-boggling.

    These models, when shown fMRI data for a patient, can figure out what image the patient is looking at, and then render it. The patient looks at a picture of a giraffe in a jungle, and the model renders it, having never before seen a giraffe… from brain-scan data, in real time.
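    The published image-decoding work (e.g. the diffusion-model papers linked further down this thread) broadly fits a regression from fMRI voxel activity to the latent space of a pretrained image model, then renders from the predicted latent. A minimal toy sketch of that idea, with random numbers standing in for real scans, latents, and the diffusion decoder (all names and sizes here are illustrative assumptions, not the papers' actual pipeline):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: each "image" has a small latent vector; simulated voxel
    # activity is a noisy linear readout of that latent. The real systems
    # regress voxels onto the latent space of a pretrained diffusion model
    # and then run its decoder; here we just recover the latent and pick
    # the nearest known image.
    n_images, n_latent, n_voxels = 50, 8, 200
    latents = rng.standard_normal((n_images, n_latent))   # one latent per image
    readout = rng.standard_normal((n_latent, n_voxels))   # brain's unknown encoding
    voxels = latents @ readout + 0.1 * rng.standard_normal((n_images, n_voxels))

    # Ridge regression (closed form): voxels -> latent.
    lam = 1.0
    W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                        voxels.T @ latents)

    # "Decode" a fresh scan of image 7: predict its latent, then identify
    # the image by nearest neighbor (a diffusion decoder would instead
    # render a picture from pred_latent).
    scan = latents[7] @ readout + 0.1 * rng.standard_normal(n_voxels)
    pred_latent = scan @ W
    best = int(np.argmin(np.linalg.norm(latents - pred_latent, axis=1)))
    print(best)
    ```

    The point of the sketch is only the shape of the pipeline: scan in, latent out, image from latent; the fidelity debate below is about how much detail survives that middle step.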

    Not good enough? The same fMRI data was examined in real time by a large language model while a patient watched a short movie and was asked to think about what they saw in words. The sentences the person thought were rendered as English sentences by the model, in real time, from the fMRI data.
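    The language-decoding preprint linked below works in an indirect way worth noting: the decoder does not read words out of the scan; it has an encoding model that predicts brain activity for any candidate word sequence, and it searches for the sequence whose predicted activity best matches the recording. A toy sketch of that search, with random vectors standing in for real semantics and scans (vocabulary, embeddings, and noise levels are all invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    vocab = ["giraffe", "jungle", "movie", "watch", "think", "words"]
    n_voxels, dim = 60, 5
    embed = {w: rng.standard_normal(dim) for w in vocab}  # toy word semantics
    enc = rng.standard_normal((dim, n_voxels))            # toy encoding model

    def predict_activity(seq):
        """Predicted voxel pattern for a candidate word sequence."""
        return sum(embed[w] for w in seq) @ enc

    # The "thought" sentence and its noisy measured scan.
    thought = ["giraffe", "jungle"]
    scan = predict_activity(thought) + 0.1 * rng.standard_normal(n_voxels)

    # Greedy search: extend one word at a time, keeping the candidate whose
    # predicted activity lies closest to the measured scan. (The real system
    # uses a language model to propose candidates and a beam search.)
    decoded = []
    for _ in range(len(thought)):
        best_w = min(vocab, key=lambda w: np.linalg.norm(
            predict_activity(decoded + [w]) - scan))
        decoded.append(best_w)

    print(decoded)
    ```

    This also explains why the published reconstructions capture the gist rather than verbatim wording: the output is whatever candidate text best explains the scan, not a transcript.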

    That’s a step from reading dreams, and that too will happen inside 20 months.

    We are very much there.

    • Pete90@feddit.de · 1 year ago

      I don’t think what you’re saying is possible. Voxels used in fMRI measure in millimeters (down to one millimeter, if I recall) and don’t allow for such granular analysis. It is possible to ‘see’ what a person sees, but the image doesn’t resemble the original too closely.

      At least, that’s what I learned a few years ago. I’m happy to look at new sources if you have some, though.

      • sudoshakes@reddthat.com · 1 year ago

        Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

        High-resolution image reconstruction with latent diffusion models from human brain activity: https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3

        Semantic reconstruction of continuous language from non-invasive brain recordings: https://www.biorxiv.org/content/10.1101/2022.09.29.509744v1

      • sudoshakes@reddthat.com · 1 year ago

        I like how I said the problem is that progress is moving so fast you don’t even realize what you don’t know about the subject as a layman… and then this comment appears, saying these things are not possible.

        Lol.

        How timely.

        The speed at which things are changing and redefining what is possible in this space is faster than in any other area of research. It’s insane to the point that if you are not actively reading white papers every day, you miss major advances.

        The layman has an idea of what “AI” means, but we have no good way to keep the word aligned with its meaning and capabilities when we change what it means underneath so fast.

    • Not_mikey@lemmy.world · 1 year ago

      Interesting and scary to think AI understands the black box of human neurology better than we understand the black box of AI.