  • @gerikson · 11 points · 10 months ago

    I didn’t read this, but I’m confident it can be summarized as “how many hostile AGIs can we confine to the head of a pin?”

  • Sailor Sega Saturn · 11 points · 10 months ago

    I remember role-playing cops and robbers as a kid. I could point my finger and shout “bang bang, I got you,” but if my friend didn’t pretend to be mortally wounded and instead just kept running around, there was really nothing I could do.

  • @Evinceo · 8 points · 10 months ago

    Nobody tell these guys that the control problem is just the halting problem, and first-year CS students already know the answer.
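
    for anyone who did skip that lecture: a minimal sketch of the classic diagonalization argument, in Python. `halts` is the hypothetical oracle here, and the whole point is that no correct, total version of it can exist.

    ```python
    def halts(program, arg):
        # hypothetical oracle: True iff program(arg) eventually halts.
        # assumed for the sake of contradiction -- not implementable.
        raise NotImplementedError

    def diagonal(program):
        # do the opposite of whatever the oracle predicts
        if halts(program, program):
            while True:
                pass  # oracle said "halts", so loop forever
        # oracle said "loops forever", so halt immediately

    # diagonal(diagonal) contradicts the oracle either way: if
    # halts(diagonal, diagonal) is True, diagonal(diagonal) loops
    # forever; if it's False, it halts. no correct, total halts()
    # can exist, and a perfect "will this arbitrary program
    # misbehave?" checker falls with it.
    ```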

    • David Gerard · 8 points · 10 months ago

      remembering how Thiel paid Buterin to drop out of his comp sci course, so he spent all of 2018 trying to implement plans for Ethereum that only worked if P = NP

      • @self · 4 points · 10 months ago

        it’s kind of amazing how many of the “I’ll never use CS theory in my career ever” folks end up trying to implement a fast SAT solver without realizing it
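
        a toy illustration of what that entails -- brute-force SAT in the usual list-of-integer-clauses encoding (k means “variable k is true”, -k means false). this is correct but takes up to 2**n_vars tries; the “fast” part is the open problem:

        ```python
        from itertools import product

        def brute_force_sat(clauses, n_vars):
            # try every assignment; a clause like [1, -2] means (x1 or not x2)
            for assignment in product([False, True], repeat=n_vars):
                def lit(k):
                    return assignment[abs(k) - 1] == (k > 0)
                if all(any(lit(k) for k in clause) for clause in clauses):
                    return assignment  # satisfying assignment found
            return None  # unsatisfiable

        # (x1 or not x2) and (x2 or x3)
        print(brute_force_sat([[1, -2], [2, 3]], n_vars=3))
        ```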

    • @kuna · 4 points · edited · 7 months ago

      deleted by creator

      • @self · 4 points · 10 months ago

        …huh. somehow, among all the many things wrong with TDT, I never cottoned on to the fact that it just reduces to the halting problem

        are rats just convinced that Alan Turing never considered what if computer but more complex? cause there’s a whole branch of math dedicated to computability regardless of the complexity of the computation substrate, and Alan helped invent it. of course they don’t know about this because they ignore the parts of computer science that disagree with their stupid ideas
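
        (rough sketch of that branch’s punchline, Rice’s theorem: any nontrivial question about what a program does is undecidable on any substrate, because a decider for it would decide halting. `has_property_P` below is the hypothetical decider, same assumption-for-contradiction as before:)

        ```python
        def has_property_P(f):
            # hypothetical decider for some nontrivial behavioral property,
            # e.g. "f(0) returns 42" -- assumed, not implementable
            raise NotImplementedError

        def reduce_halting_to_P(program, arg):
            def gadget(x):
                program(arg)  # diverges iff program(arg) never halts
                return 42     # otherwise gadget(0) returns 42, i.e. has P
            # gadget has property P exactly when program(arg) halts, so
            # a decider for P would decide halting -- contradiction
            return has_property_P(gadget)
        ```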

        • @kuna · 3 points · edited · 7 months ago

          deleted by creator

    • @skillissuer@lemmy.world · 5 points · 10 months ago

      2 points for every statement that is clearly vacuous.

      3 points for every statement that is logically inconsistent.

      this could be enough

  • @bitofhope · 6 points · 10 months ago

    td;lr

    No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem?

    While I agree this article is TL and I DR it, this is not an abstract. This is a redundant lede and attempted clickbait at that.

    Oh wait I just noticed the L and D are swapped. Feel free not to tell me whether that’s a typo or some smarmy lesswrongism.

    • @kuna · 5 points · edited · 7 months ago

      deleted by creator

      • @self · 6 points · 10 months ago

        too damn lesswrong

  • Soy · 4 points · 10 months ago

    @sue_me_please Don’t think this reply will properly show up on awful.systems, but I can’t resist sneering.

    It amuses me that for a while the LW people saw Musk as a great example, and he just went “I would solve the control problem by making them human-friendly and giving the robots low grip strength. Easy peasy.” Amazed that wasn’t a crack ping moment for a lot of them.

    • @self · 5 points · 10 months ago

      the sneer looks good to me!