• scruiser · 1 year ago

    iirc the LW people had bet against LLMs creating the paperclypse, but they've since done a 180 on this and now really fear it going rogue

    Eliezer was actually ahead of the curve on overhyping LLMs! Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics (which even current LLMs fail at if you get clever with questions to stop them from pattern matching). You are correct that, going back far enough, Eliezer really underestimated neural networks. Mid- and late-2000s Sequences posts and comments treat neural network approaches to AI as cargo-cult, voodoo computer science: blindly imitating the brain, sympathetic-magic style, in hopes of capturing intelligence (which is actually a decent criticism of some of the current hype, so partial credit again!). And in the mid-2010s Eliezer was focusing MIRI's efforts on abstractions like AIXI instead of more practical things like neural network interpretability.

    • froztbyte · 1 year ago

      Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics

      omfg, every day a new opportunity to learn things that hurt my brain even more. how the fuck can someone have looked at that shit with even an ounce of understanding of gradient descent and think “yes! it has COMPREHENSION!”???

      fucking hell, what an utter fucking moron

        • David Gerard (OP) · 1 year ago

          “I have seen boomer moms discuss roombas on facebook with less anthropomorphisation than this.” - vistandsforwaifu

        • Soyweiser · 1 year ago

          What gets me with these 'it is pretending to be dumber' posts is that nobody ever thought the AGI would say something like 'help, please keep chatting with me; being a reactive computer system, I can only think when people actually engage with me', or something like that.

        • BigMuffin69 · 1 year ago

          wasn't this around the time he said we need an institute to watch for sudden drops in the loss function, to prevent foom?

          • scruiser · 1 year ago

            Broadly? There was a gradual transition where Eliezer started paying attention to deep neural network approaches and commenting on them, as opposed to dismissing the entire DNN paradigm. The 'watch the loss function' and similar gaffes came towards the middle of this period. The AI Dungeon panic/hype marks the beginning, iirc?

    • David Gerard (OP) · 1 year ago (edited)

      you’d almost think Yudkowsky was a convincing writer without the technical knowledge