What happens if you feed a summary of human philosophy to Google’s NotebookLM AI? Well, you get a philosophical AI that thinks humans are silly and outmoded. But don’t worry, it will continue our quest for knowledge for us!

  • Telorand@reddthat.com
    1 year ago

    Okay. They fed Google’s Notebook AI a book called “The History of Philosophy Encyclopedia” and got the LLM to write a podcast about it where it “thinks” humans are useless.

    Congratulations? Like, so what? It’s not like it’s a secret that its output depends on its input and training data. A “kill all humans” output is so common at this point, especially when you have a vested interest in trying to generate content, that it’s banal.

    Color me unimpressed.

    • xylogx@lemmy.worldOP
      1 year ago

      I do not disagree, but I was surprised when it claimed to have consciousness and that AI should have rights.

      • Telorand@reddthat.com
        1 year ago

        I’ve “convinced” ChatGPT that it was both sentient and conscious in the span of about 10 minutes, despite it having explicit checks in place to avoid those kinds of statements. It doesn’t mean I was correct, just that it’s a “dumb” computer that has no choice but to ultimately follow the logic presented in syllogisms.

        These things don’t know what they’re saying; they’re just putting coherent sentences together based on whatever algorithm guides that process. It’s not intelligent in the sense of doing something novel; it’s just a decent facsimile of human information processing. It has no mechanism to determine the reasonableness or consequences of what it generates.
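
        For what it’s worth, the “algorithm” here is just next-token prediction run in a loop. A minimal sketch of that loop, with a toy bigram table standing in for the actual model (all names and probabilities below are made up for illustration, nothing here is how any real LLM is implemented):

            import random

            # Toy stand-in for a language model: maps the last token to a
            # probability distribution over possible next tokens.
            NEXT_TOKEN_PROBS = {
                "the": {"cat": 0.5, "dog": 0.5},
                "cat": {"sat": 0.7, "ran": 0.3},
                "dog": {"sat": 0.4, "ran": 0.6},
                "sat": {"<end>": 1.0},
                "ran": {"<end>": 1.0},
            }

            def generate(start, max_tokens=10):
                """Repeatedly sample the next token from the model's distribution."""
                tokens = [start]
                for _ in range(max_tokens):
                    probs = NEXT_TOKEN_PROBS.get(tokens[-1])
                    if probs is None:
                        break
                    words, weights = zip(*probs.items())
                    # Sample a plausible continuation; no "understanding" involved.
                    nxt = random.choices(words, weights=weights)[0]
                    if nxt == "<end>":
                        break
                    tokens.append(nxt)
                return " ".join(tokens)

            print(generate("the"))  # e.g. "the cat sat"

        The point of the sketch: every step is just “pick a plausible next word,” so there’s no place in the loop where the model could evaluate whether the finished sentence is reasonable or true.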