Then: Google fired Blake Lemoine for saying AIs are sentient

Now: Geoffrey Hinton, the #1 most cited AI scientist, quits Google & says AIs are sentient

That makes 2 of the 3 most cited AI scientists:

  • Ilya Sutskever (#3) said they may be (Andrej Karpathy agreed)
  • Yoshua Bengio (#2) has not opined on this to my knowledge. Anyone know?

Also, ALL 3 of the most cited AI scientists are very concerned about AI extinction risk.

ALL 3 switched from working on AI capabilities to AI safety.

Anyone who still dismisses this as “silly sci-fi” is insulting the most eminent scientists of this field.

Anyway, brace yourselves… the Overton Window on AI sentience/consciousness/self-awareness is about to blow open

  • @BigMuffin69 (OP) · 48 points · 7 months ago

    It’s true. ChatGPT is slightly sentient in the same way a field of wheat is slightly pasta.

    • @Ashtefere@aussie.zone · 20 points · 7 months ago

      As someone who learned about AI in uni and now works in AI, this shit is straight-up bullshit and it's infuriating.

      The most obvious sign that this is all bullshit is that LLMs don't have their own idle, emergent “thought”. They are purely reactive, so not sentient. Case closed, for fuck's sake.

      • @BigMuffin69 (OP) · 17 points · 7 months ago
        • Barges in
        • Insists that somewhere between randomly initializing the model weights and finishing training, sentience magically emerges
        • Refuses to elaborate
        • Leaves Google

        • @froztbyte · 12 points · 7 months ago

          Ah, but we all know that Plato's cave is an allegory about the shadows cast by the basilisk upon all our mental theaters.

          (That Twitter clip was amazingly unhinged; I wonder what the full context was.)

          • @zogwarg · 7 points · 7 months ago

            And those shadows are just as sentient as we are: even if they don't depict the world, they convey a perception of a hypothetical world in which they are accurate!

            Trying to grapple with the meaning of consciousness through input/output is so close to being philosophical-zombie-style interesting, and yet what he actually says is so far off and so vacuous that it could apply to dice picking which color the sky is today. Also pretty hilarious that we would choose being WRONG as a baseline for outrospection (because LLMs are so bad), instead of using the more natural cooperative nature of language. (Which machines fail at, which is maybe also why.)

          • @BigMuffin69 (OP) · 7 points · 7 months ago

            Like a model trained on its own outputs, Geoff has drunk his own Kool-Aid and completely decohered.

      • @Amoeba_Girl · 21 points · 7 months ago

        Honestly, I reckon a field of wheat would be more sentient than a chatbot. It can sense its environment and it doesn’t even need a prompt to do its thing.

        • @BigMuffin69 (OP) · 9 points · 7 months ago

          ngl, I’d sooner believe slime mold had mental states than a sequence of matrix multiplications & ReLUs.

        • @V0ldek · 8 points · 7 months ago

          If you put GPUs into an MRI, it would definitely be a sight to behold.