r/SneerClub archives
Hey, why didn't MIRI submit any papers to this leading conference on AI and machine learning? Should someone remind Yud? (https://medium.com/criteo-labs/neurips-2020-comprehensive-analysis-of-authors-organizations-and-countries-a1b55a08132e)

For that matter, why didn’t the ratsphere come to the defense of Dr. Gebru, one of the few academics akshully researching AI safety?

Oh.

IIRC her work was in algorithmic bias, which a lot of rationalists seem reluctant to admit is a thing. Which is weird, because they could be rubbing it in people’s faces all day every day otherwise.
I really don't understand why the AI risk people aren't jumping all over the algorithmic bias research. "System develops in ways that are contradictory to our explicit values because we don't fully understand it" is exactly the thing they are saying we need to worry about. They should be using this as an opportunity to test their theories about how to ensure AIs are aligned, and to build public concern about AI alignment. Actually, I lied, I totally understand why the MIRI people don't care about this: AI safety is a grift, and insofar as they do care about this situation, they are on Google's side. But if they believed what they say they do, this should be a huge opportunity for them.
Seriously, you'd think the fact that AI is acting "unfriendly" and harming people *already right now* would be a very persuasive point to jump on. I think part of the problem is that a lot of these people *want* an AI to be put in charge of society (a "friendly" one), and realising that there are serious societal issues and not just programming issues makes that vision look a lot less inviting.
There's also some...unusual definitions of "friendly" out there. You have your generic transhumanists looking forward to fully automated luxury gay space communism, which is fine, I guess, so long as there's a stop button on my undead simulator. But you also have folks *very deeply* invested in the idea that "rationalism" is the final bulwark against "wokeism". With their culture evolved beyond recognition, their leaders cancelled, and no higher God to appeal to, the emergence of AI authority -- rational by definition -- is their chance to prove they were on the right side of history after all. And if that AI has to break a few eggs en route to securing a future for ~~white children~~ Western civilization, so be it.
The FAI will finally be able to enforce KJV-onlyism and save Christendom!
"AI vs SJW", as Andrew Hickey [put it.](https://www.reddit.com/r/SneerClub/comments/77dxyo/just_out_the_basilisk_murders_by_andrew_hickey/)
having done a little professional research on AI security vulnerabilities: the remarkable thing about deep neural networks is just how *alien* they are. There are some [very strange](https://arxiv.org/abs/1905.02175) emergent behaviors that show up across architectures / datasets. We've got some hypotheses about why they arise, but nothing conclusive yet. i had a slight come-to-jesus moment working on the abstract for a paper i was writing. "Hey guys maybe we should figure out if these abstract optimizing machines have interests / a conceptual basis actually aligned with ours." *Wait, shit, am I becoming a MIRI guy??* ...i take heart in the fact that none of the MIRI people will actually read that paper.
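(for the curious: the textbook example of that alienness is the adversarial example. here's a rough PyTorch sketch of the classic FGSM attack, not code from that paper; the untrained model and random data are just toy placeholders. the point is that a perturbation capped at eps per pixel, basically invisible to a human, is often enough to flip a trained model's predictions.)

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    # Fast Gradient Sign Method: take one signed-gradient step that
    # *increases* the loss, i.e. nudges the input toward a misclassification.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# toy stand-ins: an untrained linear "classifier" and random 28x28 "images"
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, eps=0.1)

print((x_adv - x).abs().max())                               # perturbation <= eps per pixel
print((model(x).argmax(1) != model(x_adv).argmax(1)).sum())  # how many predictions flipped
```

on a real trained classifier the unsettling part is how reliably this works, and how often the *same* perturbations transfer across totally different architectures and training runs.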
I don't think anyone here has the stance that we should never worry about harmful AI; I mean, algorithmic bias proves that AI systems are *harmful now*. Any leftist should be concerned about how AI tech will be abused as it gets more powerful. The problem with the MIRI types is that they put way too much stock into the AI-goes-foom-and-becomes-omnipotent scenario, with some of them thinking the god-AI will be here in like, 20 years. The idea that somehow a bunch of Yudkowsky followers futzing over Newcomb's paradox is gonna do anything at all is also somewhat laughable. In truth, the actual "AI safety" research is going to be very close to regular AI research, as "getting the computer to do what you want it to do" is already what AI developers are trying to achieve.
> with some of them thinking the god-AI will be here in like, 20 years

There was a thread relatively recently about taking short timelines to AGI or whatever seriously. One of the key suggestions was to try and remedy your physical health if you had RSI. The very fact that RSI was noted near the top of the list tells you a hell of a lot about these people.
I'm sorry, what is RSI?
Repetitive strain injury. Most commonly acquired by people who sit around at a computer all day and night and do nothing to alleviate it.
I mean the actual question there as you posed it *is* a super interesting and important one. The problem with MIRI is just everything about how they try to go about tackling it and how they evangelise it.
that's a great title, and so far the paper is fascinating.
fwiw it's not my paper but i like that paper a lot; the [rebuttals](https://distill.pub/2019/advex-bugs-discussion/) are also online and interesting
[deleted]
uhhh i have like 100 pages of notes in an emacs org file (swear im not an emacs guy but org is great)

[On Evaluating Adversarial Robustness](https://arxiv.org/pdf/1902.06705.pdf) is a good introduction to the field; also check out [this](https://arxiv.org/abs/1909.08072) survey paper from last year. You might also want to check out the book [Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/).

Interpretability and adversarial robustness are actually very closely related. One of the papers that got everybody looking at adversarial examples was actually an [interpretability paper](https://arxiv.org/abs/1312.6199) trying to understand what makes models tick. in fact i'd argue that the adversarial people are basically just doing interpretability with well-defined goalposts. But sometimes you want to let your goalposts be fuzzy, especially with epistemologically perplexing stuff like DNNs... see e.g. [Fundamental Tradeoff Between Invariance and Sensitivity to Adversarial Perturbations](https://arxiv.org/pdf/2002.04599.pdf), which calls into question some core tenets of the "adversarial defense" viewpoint. (Namely, it shows that limiting how much input perturbations can affect an output is not necessarily the same thing as making a model that thinks the way a human would.)
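(to make the "evaluating adversarial robustness" bit concrete: in practice most of it boils down to measuring accuracy under a bounded attack, usually PGD. rough PyTorch sketch below, assuming some generic image classifier and data loader; the function names and hyperparameters are illustrative, not taken from any of those papers.)

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    # Projected Gradient Descent: repeated signed-gradient steps, each time
    # projecting back into the L-infinity ball of radius eps around the input.
    x_orig = x.clone().detach()
    x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, eps=0.03):
    # fraction of examples the model still gets right *after* the attack
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

the recurring lesson of the evaluation literature is that if you only test against weak attacks, or the wrong threat model, you can convince yourself a "defense" works when it really doesn't.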
> swear im not an emacs guy but

oh no, the emacs guy butt
https://www.lesswrong.com/posts/NTwA3J99RPkgmp6jh/an-62-are-adversarial-examples-caused-by-real-but
oh hey, interesting. there's some actual decent discussion there too.
Don't forget that the harm is against minorities, and the suggested remedies include more diverse hiring and testing to ensure that you're not perpetuating harmful assumptions - and that doesn't sit well with a bunch of white dudes who are determined to believe that they got where they are with their own talent alone.
I would prefer if you didn't lump everyone working on safety in AI into the same sphere. Many of us are doing exactly what you're talking about.

> They should be using this as an opportunity to test their theories about how to ensure AIs are aligned, and to build public concern about AI alignment.

Personally, my own research in alignment and explanation aims to do exactly this kind of thing. Algorithmic bias is central to my work. I also recommend checking out the work done at CHAI and the DeepMind safety team. Stuart Russell is the person to listen to, not Yud.

I think the problem is that the MIRI-esque side are just so caught up with their utility monster that everything else becomes trivial. To them, if we really solve the alignment problem and build superintelligence then all problems will simply vanish...
In my experience it’s just not sci-fi enough. People love the imaginary shit so much they only want to engage with that genre of stuff even in their career. Which is understandable but frankly a bit basic at best, and certainly not fruitful; just people talking about how things they half remember from Philip K. Dick books are totally real and so on.
She was also recently let go from Google, which caused some controversy.
lmao yeah google already created an AI that turns people into nazis; we really should have more people trying to stop that

akshully I think you’ll find it’s too dangerous for them to submit papers any more, an impenetrable wall of steel has been built around MIRI’s latest work for the safety of all humankind. otherwise people might make fun of it, errr I mean might discover things that accelerate the coming of mean AI

[deleted]

man, whatever happened to the coding language they worked on because they needed a specific tool to create an AI?

Enochian already exists.

Maybe there’s a point to getting advanced degrees if you want to make meaningful contributions to a field.

Maybe.

I’d be interested to ~~sneer at~~ read the reasons they give for this discrepancy, if anyone has a link to something relevant?

https://www.lesswrong.com/posts/cKD85YRn7fy95WkmN/reasons-for-siai-to-not-publish-in-mainstream-journals
> Articles in mainstream journals take a relatively large amount of time, money, and ***expertise*** to produce.

Pretty good reason for MIRI not to publish

Is this NSFW because to read it is to witness a murder?

LOL nah it's a tradition on here to tag "serious" / substantive posts as NSFW.

have they ever published at neurips?