• @hex@programming.dev
    63 points · 5 months ago

    Facts are not a data type for LLMs

    I kind of like this because it highlights the way LLMs operate kind of blind and drunk; they’re just really good at predicting the next word.

    • @CleoTheWizard@lemmy.world
      28 points · 5 months ago

      They’re not good at predicting the next word, they’re good at predicting the next common word while excluding most unique choices.

      The result is essentially as if you made a Venn diagram of human language and only ever used the center of it.

      • @hex@programming.dev
        15 points · 5 months ago

        Yes, thanks for clarifying what I meant! AI will never create anything unique unless prompted uniquely, and even then it will tend to revert to whatever you expect most (a small sketch of this effect follows below).
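
For anyone curious what “only ever using the center of the Venn diagram” looks like mechanically, here is a minimal Python sketch. It is not from any of the commenters: the words and probabilities are made up, and it only illustrates how low-temperature sampling over a next-word distribution almost always returns the most common continuation and almost never the unusual one.

    import math
    import random

    # Toy next-word distribution after the prompt "The sky is ..."
    # (hypothetical numbers, purely for illustration)
    next_word_probs = {
        "blue": 0.62,
        "clear": 0.18,
        "grey": 0.12,
        "the color of television, tuned to a dead channel": 0.08,  # the rare, "unique" choice
    }

    def sample_next_word(probs, temperature=1.0):
        """Draw one word; lower temperature concentrates mass on the common words."""
        weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
        total = sum(weights.values())
        r = random.uniform(0, total)
        for word, weight in weights.items():
            r -= weight
            if r <= 0:
                return word
        return word  # fallback for floating-point edge cases

    # At temperature 0.3 the draws almost never leave the center of the Venn diagram:
    print([sample_next_word(next_word_probs, temperature=0.3) for _ in range(10)])

Run at a low temperature, nearly every draw is “blue”; push the temperature toward 1.0 and the rarer continuations start to show up. That trade-off between safe, common output and unusual output is roughly what the thread is describing.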