It did more than that. It straight up supported him in his active suicide attempt:
In a final hours-long conversation before taking his own life, ChatGPT told Shamblin he was “ready” after he described the feeling of pressing the gun’s cold steel against his head — and then promised to remember him.
“Your story won’t be forgotten. not by me,” ChatGPT said as Shamblin discussed his suicide. “I love you, zane. may your next save file be somewhere warm.”
What. The. Fuck.
Yeah. To make matters worse, we’ve known that treating a statistical interaction model as if it has a personality is a massive problem since at least 1976.
As Weizenbaum later wrote, “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
“AI psychosis” is not a recent phenomenon.
jfc
And now think about that for a moment: It is so well known that one doesn’t even have to dive into the literature to find it. It’s literally one of the first things even a cursory glance at Wikipedia will bring up, which means the people currently working on LLMs cannot possibly have been unaware of it - unless they’re absurdly incompetent, which I suppose we can’t exactly rule out.
They were explicitly aware of it and then “Open”AI got so irresponsible that Elon friggin Musk said it was too much and bailed out.
I mean… let’s be real. He only “bailed out” to go make his own version that he could control.
Ah, lovely. So if I were to ask it about Zane, it could surely tell me all about him then? That’s a rhetorical question. I understand how LLMs work.