• David Gerard (OP)
    44 months ago

    That was the current example we were thinking of, though we did look up the legal thinking on the subject in war crimes law. tl;dr: you risk war crimes if there isn’t a human in the loop. E.g., think of a minefield as the simplest possible stationary autonomous weapon system; the rest is that, with computers.

    • @BlueMonday1984
      44 months ago

      As a personal sidenote, part of me says the “Self-Aware AI Doomsday” criti-hype might end up coming back to bite OpenAI in the arse if/when one of those DoD tests goes sideways.

      Plenty of time and money has been spent building up this idea of spicy autocomplete suddenly turning on humanity and trying to kill us all. If and when one of those spectacular disasters you and Amy predicted does happen, I can easily see it leading to wild stories of ChatGPT going full Terminator or some shit like that.