Why did the Alignment community not prepare tools and plans for convincing the wider infosphere about AI safety years in advance?
Did you not read HPMOR, the greatest story ever reluctantly told to reach the wider infosphere about rationalism and, by extension, AI alignment???
Why were there no battle plans in the basement of the pentagon that were written for this exact moment?
It’s almost like AGI isn’t a credible threat!
Heck, 20+ years is enough time to educate, train, hire, and surgically insert an entire generation of people into key positions in the policy arena specifically to accomplish this one goal, like sleeper-cell agents. Likely much, much easier than training highly qualified alignment researchers.
At MIRI, we don’t do things because they are easy. We don’t do things because we are grifters.
Didn’t we pretty much always know it was going to come from one or a few giant companies or research labs? Didn’t we understand how those systems function in the real world? Capitalist Incentives, Moats, Regulatory Capture, Mundane Utility, and International Coordination problems are not new.
This is how they look at all other problems in the world, and it’s fucking exasperating. Climate change? I would simply implement ‘Capitalist Incentives’. Wealth inequality? Have you tried a ‘Moat’? Racism? It sounds like a job for ‘Regulatory Capture’. Yes, all problems are easily solvable with 200 IQ and buzzwords. All problems except the hardest problem in the world, preventing Skynet from being invented. Ignore all those other problems; someone will ‘Mundane Utility’ them away. For now, we need your tithe; we’re definitely going to use it for ‘International Coordination’, by which I totally don’t mean buying piles of meth and cocaine for our orgies.
Why was it not obvious back then? Why did we not do this? Was this done and I missed it?
We tried nothing and we’re all out of ideas!
buying piles of anime for Aella’s masked naked parties full of querulous discussion
fixed
I regret I have but one upvote to give.
It makes me feel better that at least they don’t feel like they got too much attention and credibility, because I sure do.
No amount of attention or credibility would ever be enough.
or, for that matter, warranted
They’re so enamored with the individualism of a lone-genius org coming up with the solution all on its own, and so opposed to any form of collective solution requiring trust (because of their troubled childhoods?), that the only acceptable thing has to be a technical whizbang solution. Only now have enough eyeballs seen the problem and realized the obvious: no such whizbang solution can exist (if there’s even a well-defined problem!)
A choice quote from the comments:
I am not as gifted at persuasive writing as (say) Eliezer,
Don’t sell yourself short, kid.
Clearly, they didn’t spend enough money on printing copies of Harry Potter fanfiction.
Mikhail Yagudin ($28,000): Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020
I’m in the wrong line of work. Were these fucking gold-plated? Did they give every recipient 1024 copies as a symbol of the simulated torture they’ve earned? Am I going to find a copy of HPMOR in the nightstand at the next cheap hotel I stay at?
JFC what a crappy prize.
EGMO 2019 is apparently the European Girls’ Mathematical Olympiad (2019’s venue was Kyiv, Ukraine)
Despite the name it seems to be an international competition, with participants from the US, KSA, Peru, and Mexico: https://www.egmo.org/egmos/egmo8/scoreboard/
If we define “winners” as those with gold medals, it’s unclear to me whether those winners can read HPMOR in English: there are 3 from the US, 1 from the UK, and 3 from Latin America. The rest are from former Warsaw Pact countries.