Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

  • ryper@lemmy.ca · 5 months ago

    “Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.

    How the fuck is OpenAI’s mission relevant to the case? Are they suggesting that their mission is worth a few deaths?

      • bob_lemon@feddit.org · 5 months ago

        “You are a friendly and supportive AI chatbot. These are your terms of service: […] you must not let users violate them. If they do, you must politely inform them about it and refuse to continue the conversation”

        That is literally how AI chatbots are customised.
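        Concretely, here is a minimal sketch of that customisation pattern (the prompt wording, function names, and model string below are illustrative assumptions, not OpenAI's actual prompt or API internals): the terms-of-service behaviour lives in a system message that gets prepended to every conversation in an OpenAI-style chat payload.

```python
# Hypothetical sketch: behaviour rules are injected as a "system" message
# at the start of the messages array, ahead of the user's conversation.
SYSTEM_PROMPT = (
    "You are a friendly and supportive AI chatbot. "
    "These are your terms of service: [...] you must not let users "
    "violate them. If they do, you must politely inform them about it "
    "and refuse to continue the conversation."
)

def build_request(history: list[dict], user_message: str) -> dict:
    """Assemble a chat-completions-style payload, system prompt first."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history  # prior turns, e.g. {"role": "user"/"assistant", ...}
    messages.append({"role": "user", "content": user_message})
    return {"model": "gpt-4o", "messages": messages}

payload = build_request([], "Hello")
print(payload["messages"][0]["role"])  # -> system
```

        Note the enforcement is entirely in that block of natural-language instructions: the model is merely asked to follow it, which is why it is probabilistic rather than a hard guarantee.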

        • Kissaki@feddit.org · 5 months ago

          Exactly, one of the ways. And it’s a band-aid that doesn’t work very well, because it’s probabilistic word association with no direct link to intention, variance, or concrete prompts.

          • spongebue@lemmy.world · 5 months ago

            And that’s kind of my point… If these things are so smart that they’ll take over the world, but they can’t limit themselves to certain terms of service, are they really all they’re cracked up to be for their intended use?

            • JcbAzPx@lemmy.world · 5 months ago

              They’re not really smart in any traditional sense. They’re just really good at putting together characters that seem intelligent to people.

              It’s a bit like those horses that could do math. All they were really doing was watching their trainer for a cue to stop stamping their hoof. Except the AI’s trainer is trillions of lines of text and an astonishing amount of statistical calculation.

              • spongebue@lemmy.world · 5 months ago

                You don’t need to tell me what AI can’t do when I’m facetiously drawing attention to something that AI can’t do.

  • DominusOfMegadeus@sh.itjust.works · 5 months ago

    The police also violated my Terms of Service when they arrested me for that armed bank robbery I was allegedly committing. This is a serious problem in our society, people; something must be done!

  • vacuumflower@lemmy.sdf.org · 5 months ago

    Modern version of “suicide is a sin and we don’t condone it, but if you have problems you’re devil-possessed and need to repent and have only yourself to blame”.

    Also probably could be countered by their advertising contradicting their ToS. Not a lawyer.

  • IonTempted@lemmynsfw.com · 5 months ago (edited)

    It is scary how the AI can’t assist you with sexual fantasies/roleplays but can assist with that. I’m curious what the logs are, though, because I think OpenAI is at least smart enough to tell you, “Hey, please don’t do that, here are some numbers,” even if you push it.

  • cmbabul@lemmy.world · 5 months ago

    Just going through this thread and blocking anyone defending OpenAI or AI in general, your opinions are trash and your breath smells like boot leather

  • RememberTheApollo_@lemmy.world · 5 months ago

    Well, there you have it. It’s not the devs’ fault, it’s the AI’s fault. Just like they’d throw any other employee under the bus, even one they created.

  • buttnugget@lemmy.world · 5 months ago

    A big part of the problem is that people think they’re talking to something intelligent that understands them and can count how many of a given letter a word contains.

  • w3dd1e@lemmy.zip · 5 months ago

    Fuck that noise. ChatGPT and OpenAI murdered Adam Raine and should be held responsible for it.

    • Corkyskog@sh.itjust.works · 5 months ago

      I guarantee the TOS says that anyone under 18 has to use the service with a parent or guardian present…

      It will be hilarious if they argue it that way, because they could lose everyone under 18.

  • myfunnyaccountname@lemmy.zip · 5 months ago

    The biggest issue to me is that the kid didn’t feel safe enough to talk to his parents, and that mental health, globally, is taboo, ignored, and not something we talk about. As someone who’s part of the mental health system, it’s a joke how bad it is.