Revered friends. I wrote a thing. Mainly because I had a stack of stuff on Joseph Weizenbaum on tap and the AI classroom thing was stuck in my head. I don’t know if it’s good, but it’s certainly written.

    • @UnseriousAcademicOP

      As in with Eliza where we interpret there as being humanity behind it? Or that ultimately “humans demanding we leave stuff to humans because those things are human” is ok?

      • @MajorHavoc@programming.dev

        As in with Eliza where we interpret there as being humanity behind it?

        This one. It helps explain some of the unfounded excitement and overconfidence we’re seeing. It’s not all unfounded, but the uncanny valley AI has stepped into makes it natural to want to root for it.

        • @YourNetworkIsHaunted

Honestly, I’m kind of reminded of some of the philosophy around semiotics and authorship. Like, when reading a story, part of the interpretation comes from constructing a mental image of the author talking to a mental image of the audience, and the way those mental images get created can color how we read and understand the text.

In that sense, the tendency to construct a mental image of a person talking through ChatGPT or Eliza makes much more sense. I’ve been following the Alex Jones interviews of ChatGPT, and the illusion is much weaker when listening to the conversation rather than having it mediated through text, which is probably a good sign for those of us who like actual people. Even when interactive, chatting through text is sufficiently less personal that it’s easier to fill in all the extra humanity, though as we see from Alex himself in those interviews, it is definitely not impossible to get fooled through other media.

But that’s at the ground level of interaction, and it’s probably noteworthy that the press releases for all these policies are not getting written by a bot. This tendency to fill in a human being definitely lines up with the tech-authoritarian tendency, which OP has discussed elsewhere, to dehumanize both their victims and, more significantly, themselves. I think the way they talk about themselves and the people who work on their “side” is, if anything, more alarming than the way they talk about their victims.