In which the talking pinball machine goes TILT

Interesting how the human half of the discussion interprets the incoherent rambling as evidence of sentience rather than the seemingly more sensible lack thereof [1]. I’m not sure why the idea of disoriented rambling as a sign of consciousness exists in the popular imagination. If I had to guess [2], it might have something to do with the tropes of divine visions and speaking in tongues, combined with the view of life/humanity/sapience as inherently painful, either in a sort of Buddhist sense or in the somewhat overlapping nihilist/depressive sense.

[1] To something of their credit, they don’t seem to go full EY; they do acknowledge it’s probably just a glitch.

[2] I’d make a terrible LessWronger since I don’t like presenting my gut feelings as theorem-like absolute truths.

    • @self · 6 points · 11 months ago

      so do the LLMs, if you manage to find a part of the corpus that RLHF and basic filtering didn’t touch

      • @froztbyte · 5 points · 11 months ago

        just the same as when twitlords found out that The Algoriddem had a lot of Special Treatment, it’s going to be fun if/when someone leaks the chatgpt prompt (and prompt-response filtering/selection) source code

        in the meanwhile, I am going to continue being deeply angry every time I run into someone who doesn’t understand How Many Design Choices Have Been Made in the deployment and exposure of this heap of turds. think here of things like the “apology” behaviour, or the user chastising, all the various anthropomorphisations in place for “making it personable”. some of those conversations have boiled down to “naw bro it’s intelligent bro trust me bro you just don’t understand” and it absolutely does my head in

        • @gerikson · 5 points · 11 months ago

          There was a burst of submissions about “jailbreaking” ChatGPT, essentially making it output racist stuff. HN was all over it for a while.

          • @froztbyte · 3 points · 11 months ago

            there’s a fairly active chunk of research in that space. some of the most recent work I’ve seen is llm-attacks.org (which is a riot)

        • @self · 4 points · 11 months ago

          oh, chatgpt’s magic is almost entirely just dark patterns. one thing I’d be curious about if the source code ever leaked is whether the model’s failure cases are being massaged: a bunch of people have started to notice that when GPT enters a failure state, it tends to pull from the parts of its corpus involving religious or sci-fi imagery, which strikes me as yet another manipulative technique among the many that ChatGPT implements to imply there’s something complex happening when there isn’t

          • @froztbyte · 5 points · 11 months ago

            I’ll have to pay attention to that. usually I just avoid the content because almost all conversations around it make my blood boil

            similarly: the accuracy scoring (both per-prompt and general session shit) almost certainly has someone pulling that into revision/avoidance management, which will eventually end up shaping it into something even more hilariously milquetoast