I didn’t read this but I’m confident it can be summarized as “how many hostile AGIs can we confine to the head of a pin?”
I remember role-playing cops and robbers as a kid. I could point my finger and shout “bang bang I got you,” but if my friend didn’t pretend to be mortally wounded and instead just kept running around, there was really nothing I could do.
Nobody tell these guys that the control problem is just the halting problem and first year CS students already know the answer.
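For anyone who wants the actual argument behind the joke, it’s the usual diagonalization. This is a sketch under made-up names (`is_safe`, `defeat`, `contrary` are all hypothetical, not anyone’s real API): assume a total decider that always halts and correctly predicts whether a program ever misbehaves, then build a program that asks the decider about itself and does the opposite.

```python
# Sketch: why a perfect "is this program safe?" decider can't exist.
# `is_safe` is a hypothetical total decider: given a program, it always
# halts and returns True iff the program never misbehaves.

def defeat(is_safe):
    """Standard halting-problem diagonalization: build a program that
    consults the decider about itself and then does the opposite."""
    def contrary():
        if is_safe(contrary):    # decider says "safe"...
            return "misbehaves"  # ...so misbehave
        return "behaves"         # decider says "unsafe", so behave
    return contrary

# Whatever verdict a candidate decider gives, it's wrong about `contrary`:
optimist = lambda program: True     # calls everything safe
pessimist = lambda program: False   # calls everything unsafe

assert defeat(optimist)() == "misbehaves"  # judged safe, yet misbehaves
assert defeat(pessimist)() == "behaves"    # judged unsafe, yet behaves
```

Any decider you plug in gets defeated the same way, which is the whole first-year punchline.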
remembering how Thiel paid Buterin to drop out of his comp sci course so he spent all of 2018 trying to implement plans for Ethereum that only required that P=NP
it’s kind of amazing how many of the “I’ll never use CS theory in my career ever” folks end up trying to implement a fast SAT solver without realizing
deleted by creator
deleted by creator
…huh. somehow among all the many things wrong with TDT, I never cottoned to the fact that it just reduces to the halting problem
are rats just convinced that Alan Turing never considered “what if computer but more complex?” Because there’s a whole branch of math dedicated to computability regardless of the complexity of the computing substrate, and Alan helped invent it. Of course they don’t know about this, because they ignore the parts of computer science that disagree with their stupid ideas
deleted by creator
:chefkiss:
Word Count needs to be added to the crackpot index.
2 points for every statement that is clearly vacuous.
3 points for every statement that is logically inconsistent.
this could be enough
td;lr
No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem?
While I agree this article is TL and I DR it, this is not an abstract. This is a redundant lede and attempted clickbait at that.
Oh wait I just noticed the L and D are swapped. Feel free not to tell me whether that’s a typo or some smarmy lesswrongism.
deleted by creator
too damn lesswrong
@sue_me_please Don’t think this reply will properly show up on awful.systems, but I can’t resist a sneer.
It amuses me that for a while the LW people saw Musk as a great example, and he just went “I would solve the control problem by making them human-friendly and making the robots have low grip strength. Easy peasy.” Amazed that wasn’t a breaking point for a lot of them.
the sneer looks good to me!