• @bitofhope
    6
    1 year ago

    td;lr

    No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem?

    While I agree this article is TL and I DR it, this is not an abstract. This is a redundant lede and attempted clickbait at that.

    Oh wait I just noticed the L and D are swapped. Feel free not to tell me whether that’s a typo or some smarmy lesswrongism.

    • @kuna
      5
      edit-2
      1 year ago

      deleted by creator

      • @selfMA
        6
        1 year ago

        too damn lesswrong