OpenAI has claimed that a teen violated terms of service that prohibit discussing suicide or self-harm with the chatbot, after the AI allegedly encouraged him to take his own life.
And I bet teens are going to go on violating the TOS. Maybe we'd better restrict AI to people who can actually read and understand the TOS, if your product is so dangerous that lifesaving instructions are buried in it.