As Weizenbaum later wrote, “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
And now think about that for a moment: it is so well known that one doesn’t even have to dive into the literature to find it. It’s literally one of the first things even a cursory glance at Wikipedia will bring up, which obviously means that the people currently working on LLMs cannot possibly have been unaware of it - unless they’re absurdly incompetent, which I suppose we can’t exactly rule out.
What. The. Fuck.
Yeah. To make matters worse, we’ve known that treating a statistical interaction model as if it has a personality is a massive problem since at least 1976.
“AI psychosis” is not a recent phenomenon.
jfc
They were explicitly aware of it and then “Open”AI got so irresponsible that Elon friggin Musk said it was too much and bailed out.
I mean… let’s be real. He only “bailed out” to go make his own version that he could control.