• @zbyte64
    22 months ago

    Humans predict things by assigning meaning to events and things, because in nature we’re constantly trying to guess what other creatures are planning. An LLM does not hypothesize about your plans when you communicate with it; it’s just trying to predict the next set of tokens with the greatest reward value. Even if you were to use literal human neurons to build your LLM, you would still have a stochastic parrot.
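    To make “predicting the next set of tokens” concrete, here’s a minimal sketch of greedy next-token decoding. The vocabulary and logit values are made up for illustration; a real model scores tens of thousands of tokens, but the mechanism is the same: score every candidate, normalize with softmax, emit the most probable one.

    ```python
    import math

    # Hypothetical toy vocabulary and raw model scores (logits) for the next token.
    vocab = ["the", "cat", "sat", "mat"]
    logits = [1.2, 0.3, 2.5, 0.1]

    # Softmax turns raw scores into a probability distribution over the vocabulary.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Greedy decoding: pick the highest-probability token as the continuation.
    next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
    print(next_token)  # prints "sat"
    ```

    There is no model of the speaker’s intentions anywhere in this loop, just a probability distribution over continuations.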

      • @zbyte64
        2 months ago

        Why should I need to prove a negative? The burden is on the ones claiming an LLM is sentient. LLMs are token predictors; do I need to present evidence of this?

        • @sunbeam60@lemmy.one
          12 months ago

          I’m not asking you to prove anything. I’m saying I haven’t seen evidence either way, so for me it’s too early to draw conclusions.