Original LessWrong post: The alignment problem from a deep learning perspective
arXiv paper submitted to ICML: The alignment problem from a deep learning perspective
Whining on Twitter: https://twitter.com/RichardMCNgo/status/1652042195803987968?cxt=HHwWgIDTrfnnne0tAAAA
An ongoing problem for the rationalist community is that their beliefs are a collection of weird, science fiction-inspired religious myths about a robot apocalypse. This makes gaining traction in the respectable strata of society a bit challenging.
In 2022 Richard Ngo, currently an “AI governance researcher” at OpenAI, tried out a neat rhetorical trick: what if you had someone with a college education rewrite rationalist doctrine in terms of contemporary machine learning jargon? The resulting LessWrong post slaps a fresh coat of academic phrasing on top of the old, hackneyed Yudkowsky mythology. Now that it was communicated with the proper shibboleths, the robot apocalypse would surely gain purchase in the minds of educated professionals.
LessWrong’s reach is limited to people who already fear the coming of the robot god, though, so the impact of this LW post among respectable society was muted at best. Richard Ngo needed a better audience; where do educated professionals and other respectable sorts spend their time?
How about ICML, one of the most prestigious machine learning conferences in the entire world? Being published there would grant rationalist mythology the undeniable imprimatur of the machine learning elite, as well as a world-wide audience of influential ML professionals.
For people unfamiliar with the professional ML landscape, ICML is a big deal. They receive a lot of paper submissions - acceptance can be a crapshoot even for good research - and presenting a paper at a conference of this caliber can be a career-making move for young researchers. Opportunities for competitive research and industry jobs often favor people who participate in venues like this.
The reviewing criteria for publication at ICML dictate that submissions “can be either theoretical or empirical” and that a paper’s results will be “judged on the degree to which they have been objectively established and/or their potential for scientific and technological impact”.
Unfortunately for Richard Ngo, his paper did not meet this bar. Rationalist mythology is neither theoretical nor empirical, so its claims can neither be objectively established nor credited with any potential for scientific impact. Richard laments his rejection thusly:
[They] told us that although ICML allows position papers, our key concerns were too “speculative”, rendering the paper “unpublishable” without more “objectively established technical work”.
Not only does this mean that rationalism has been denied a promising debut, but it also means that Richard Ngo has been denied a good excuse to go on a company-funded, all-expenses-paid tropical vacation. ICML is being hosted in Hawaii this year.
One sneering tweet suggests that this line from the paper’s abstract did not improve its odds of acceptance:
[deleted]
This proves that current review practices in science are fatally flawed, a problem that can only be solved by proper application of Bayes Theorem, prediction markets, and a large monetary donation to the Rationalist non-profit of your choice.
Kind of shows how the ICML review process isn’t ideal that the paper got that far in the first place. A lot of what you end up being judged on is how flashy you sound, not what you actually did.
Strange how they never seem to make a serious effort at getting published in philosophy/ethics journals, where I can find tons of wack shit at a moment’s notice.
How much did it lean into “timeless decision theory”?