r/SneerClub archives
Richard Ngo submits rationalist mythology as ICML conference paper, whines on twitter when it gets rejected (https://www.reddit.com/r/SneerClub/comments/133h9u8/richard_ngo_submits_rationalist_mythology_as_icml/)

Original LessWrong post: The alignment problem from a deep learning perspective

arXiv paper submitted to ICML: The alignment problem from a deep learning perspective

Whining on Twitter: https://twitter.com/RichardMCNgo/status/1652042195803987968?cxt=HHwWgIDTrfnnne0tAAAA

New duds for old myths

An ongoing problem for the rationalist community is that their beliefs are a collection of weird, science fiction-inspired religious myths about a robot apocalypse. This makes gaining traction in the respectable strata of society a bit challenging.

In 2022 Richard Ngo, currently an “AI governance researcher” at OpenAI, tried out a neat rhetorical trick: what if someone with a college education rewrote rationalist doctrine in terms of contemporary machine learning jargon? The resulting LessWrong post slaps a fresh coat of academic phrasing on top of the old, hackneyed Yudkowsky mythology. Now that it was communicated with the proper shibboleths, the robot apocalypse would surely gain purchase in the minds of educated professionals.

LessWrong’s reach is limited to people who already fear the coming of the robot god, though, so the impact of this LW post among respectable society was muted at best. Richard Ngo needed a better audience; where do educated professionals and other respectable sorts spend their time?

ICML submission

How about ICML, one of the most prestigious machine learning conferences in the entire world? Being published there would grant rationalist mythology the undeniable imprimatur of the machine learning elite, as well as a world-wide audience of influential ML professionals.

For people unfamiliar with the professional ML landscape, ICML is a big deal. They receive a lot of paper submissions - acceptance can be a crapshoot even for good research - and presenting a paper at a conference of this caliber can be a career-making move for young researchers. Opportunities for competitive research and industry jobs often favor people who participate in venues like this.

The reviewing criteria for publication at ICML dictate that submissions “can be either theoretical or empirical” and that a paper’s results will be “judged on the degree to which they have been objectively established and/or their potential for scientific and technological impact”.

Bitter rejection

Unfortunately for Richard Ngo, his paper did not meet this bar. Rationalist mythology is neither theoretical nor empirical, and as such it cannot be objectively established as having any potential for scientific impact. Richard laments his rejection thusly:

> [They] told us that although ICML allows position papers, our key concerns were too “speculative”, rendering the paper “unpublishable” without more “objectively established technical work”.

Not only does this mean that rationalism has been denied a promising debut, but it also means that Richard Ngo has been denied a good excuse to go on a company-funded, all-expenses-paid tropical vacation. ICML is being hosted in Hawaii this year.

One sneering tweet suggests that this line from the paper’s abstract did not improve its odds of acceptance:

> We outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world

The glow up makes it more obvious, not less, lmao.
I thought server rooms still had Big Red Buttons


[frankly](https://twitter.com/matvelloso/status/1065778379612282885) the field is getting this stuff because it already accepted the marketing term "AI" for 20th-century predictive models fit on obscene amounts of data
You shouldn't underestimate the pressure that selective funding puts on academic naming conventions. Also, a large part of the public perception of the field is actually being controlled by industry, not academia. Calling it AI makes it easier to get things funded, and a lot easier to get your management to greenlight something.
I figure that if the field takes this stuff mainstream then we've got bigger problems at hand than rationalism seeping in. If the brain rot gets that advanced then it means there's something seriously wrong with the institutions themselves. The whole point of gatekeepers is that they're supposed to keep the barbarians out.

This proves that current review practices in science are fatally flawed, a problem that can only be solved by proper application of Bayes Theorem, prediction markets, and a large monetary donation to the Rationalist non-profit of your choice.

Can I also donate in kind? I've got this leftover castle lying around...

Kind of shows the ICML review process isn’t ideal when something like this got that far in the first place. A lot of what you end up judged on is how flashy you sound and not what you actually did.

Strange how they never seem to make serious effort at getting published in philosophy/ethics journals where I can find tons of wack shit at a moment’s notice.

> A lot of what you end up judged on is how flashy you sound and not what you actually did.

I hope that's what happened here! My interpretation was more ominous: that the reviewers understood what they were looking at *and they approved of it*.

How much did it lean into “timeless decision theory”?