Why did the Alignment community not prepare tools and plans for convincing the wider infosphere about AI safety years in advance?
Did you not read HPMOR, the greatest story ever reluctantly told to reach the wider infosphere about rationalism and, by extension, AI alignment???
Why were there no battle plans in the basement of the pentagon that were written for this exact moment?
It’s almost like AGI isn’t a credible threat!
Heck, 20+ years is enough time to educate, train, hire, and surgically insert an entire generation of people into key positions in the policy arena, like sleeper-cell agents, specifically to accomplish this one goal. Likely much, much easier than training highly qualified alignment researchers.
At MIRI, we don’t do things because they are easy. We don’t do things because we are grifters.
Didn’t we pretty much always know it was going to come from one or a few giant companies or research labs? Didn’t we understand how those systems function in the real world? Capitalist Incentives, Moats, Regulatory Capture, Mundane Utility, and International Coordination problems are not new.
This is how they look at all other problems in the world, and it’s fucking exasperating. Climate change? I would simply implement ‘Capitalist Incentives’. Wealth inequality? Have you tried a ‘Moat’? Racism? It sounds like a job for ‘Regulatory Capture’. Yes, all problems are easily solvable with 200 IQ and buzzwords. All problems except the hardest problem in the world, preventing Skynet from being invented. Ignore all those other problems; someone will ‘Mundane Utility’ them away. For now, we need your tithe; we’re definitely going to use it for ‘International Coordination’, by which I totally don’t mean buying piles of meth and cocaine for our orgies.
Why was it not obvious back then? Why did we not do this? Was this done and I missed it?
We tried nothing and we’re all out of ideas!
fixed
I regret I have but one upvote to give.