• @200fifty
    9 months ago

    I’m confused how this is even supposed to be demonstrating “metacognition” or whatever. It’s not discussing its own thought process or demonstrating awareness of its own internal state; it just said “this sentence might have been added to see if I was paying attention.” Am I missing something here? Is it just that it said “I… paying attention”?

    This is a thing humans already do sometimes in real life and discuss – when I was in middle school, I’d sometimes put the word “banana” randomly into the middle of my essays to see if the teacher noticed – so pardon me if I assume the LLM is doing this by the same means it does literally everything else, i.e. mimicking human phrasing about a situation like this, rather than suddenly developing radical new capabilities it has never demonstrated before, even in situations where they would be useful.

    • @Soyweiser
      9 months ago

      I’m also going by the other post, which said that this is all simply ’90s-era algorithms scaled up. But with that kind of neural net, wouldn’t we expect minor mistakes like this from time to time? “Neural net suddenly does strange unexplained thing” is an ancient tale.

      It doesn’t even have to be doing the ‘are you paying attention’ thing because it is aware (which would show so many levels of awareness it is weird (but I guess they are just saying it is copying the test idea back at us (which is parroting, not cognition, but whatever))); it could just be an error.

    • @Amoeba_Girl
      9 months ago

      Yup, it’s 100% repeating the kind of cliché that is appropriate to the situation. Which is what the machine is designed to do. This business is getting stupider and more desperate by the day.