r/SneerClub archives

LOL based on the upvote patterns former rationalists really have taken over this subreddit

You guys made fun of us for our ideals of open-mindedness and willingness to listen to opposing views. Seems as though those ideals might have bit your sub in the ass :P
> our ideals of open-mindedness and willingness to listen to opposing views

*Virtue signaling* while being crypto-fascist reactionaries who benefit from listening to fascist views? hmmmmmmmm https://www.urbandictionary.com/define.php?term=Fish%20Hook%20Theory
Seems like this word "fascist" is doing a lot of work. Why not just paste the label on anyone I don't want to listen to?
What does this mean? Openmindedness bit SneerClub in the ass??
Sorry bro, this is SneerClub. Not gonna help you with your poor reading comprehension if you don't believe in the principle of charity ;)

This is a terrible sneer…this article is actually measured and reasonable unlike much of the AI hype.

Did you even read it?

It puffed Yudkowsky up and tried to play the golden mean fallacy when it comes to the AI safety debate. I mean, it's not the worst rationalist piece and has some decent points, but it's also the piece that's going to be taken the most seriously.
It's useful to have their strongest arguments in one place. I hate to see Yud's name in here, but Sam Harris, Elon Musk, and Grimes have already done yeoman's work getting him into the mainstream.
Haha but at that point what are you sneering about? "Look at these rationalists and their reasonable concerns about AI safety, OUTRAGEOUS!"
tbh rationalists' concerns about AI are not reasonable almost by definition. Worrying about the pompous scifi scenario of 'AI exterminating humanity because then it will be able to compute a number with higher confidence' is ridiculous when all we have is shitty machine learning, and when shitty machine learning has dangerous uses such as racial profiling and mass surveillance that are already being implemented, and rationalists are conspicuously silent about that 🤔
Hey! I'm the article author. (Feel free to let me know if I'm not supposed to be here; I support y'all and your subreddit's thing, and it's fine if that works better without anyone showing up to argue.)

There's a reason I talk about both racial profiling and dangerous future scenarios in my article: I think they're the same core problem. ML systems aren't transparent or interpretable, and they do what worked best in their training environment, regardless of whether that's what we want. To deploy advanced systems safely, we need to understand their behavior inside and out, and we need to stop using approaches that will fail if their inputs were biased (as in criminal justice) or fail if the thing they were taught to do in their training environment doesn't reflect everything we value (again as in criminal justice, where US law prohibits treating otherwise-identical black people and white people differently, most of us are horrified at systems doing so, the authors of the system probably didn't intend that behavior, and yet algorithms do it). The failures will get more dramatic as the systems that are deployed become more powerful and are deployed on more resource-intensive problems, but it's the same fundamental failure.

As for mass surveillance, in a [different Vox post](https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks) I've strongly criticized a paper for suggesting that mass surveillance would improve law enforcement (I argue it'll just make for more selective enforcement).

I think I might be wrong about a lot of things. That's why I write about them, so people understand my arguments and can point out their flaws. I think I am pretty much never conspicuously silent on things.
I'm not gonna read the article because I don't especially care, but you're perfectly welcome to be "supposed to be here" until such a time as I start caring, Merry Christmas
I am more worried about realistic near term or medium term risks like nuclear war or climate change, but I think AI could also become a danger farther in the future, and it's not necessarily a waste of time to start thinking about it now. You're definitely not going to please everyone in sneer club but most of us don't think your article was that bad.
> AI exterminating humanity because then it will be able to do capitalism better

FTFY, those silly engineers are focused on the mechanics, not the motive
Wow this is sort of the rationalist's parody of their detractors. The way machine learning often replicates existing prejudices is problematic, but not really in the same league of potential issue as "an AI becomes the dominant entity on Earth and human survival depends on its benevolence." I don't know how likely that is to happen but it is genuinely possible and thus obviously not something that's *by definition* unreasonable to be concerned about.
Worrying about omnipotent AI harvesting the atoms from my body to turn them into paperclips is imo in the same league as worrying about an Independence Day-style alien invasion. You could think up thousands of technically plausible doomsday scenarios and then tell people to donate money to your research institute NOW. At some level I'm sympathetic to people worrying about shit like this because I have anxiety and a fatalistic disposition too, but the xrisk field consists almost exclusively of grifters, and it's them I'm sneering at, not people having panic attacks about impending doom at home, because in fact I do that all the time myself. And ML-based prejudice is something that actually affects people's lives right now, while AI armageddon is a theoretical scenario that at best might happen hundreds of years from now, so I think it's completely fair to care more about the former.
> Worrying about omnipotent AI harvesting the atoms from my body to turn them into paperclips is imo in the same league as worrying about an Independence Day-style alien invasion.

Well, remember the paperclip maximizer thing is a thought experiment demonstrating how intelligence can be put to use pursuing any arbitrary goal, not the actual scenario anybody is anticipating. Obviously agreeing that AI safety is a real issue doesn't imply an endorsement of any particular effort to work on the issue, but it's not like the whole concept is just something Yudkowsky made up to grift people. The likely eventual creation of a true AI really will be the biggest thing that's ever happened. Anticipation of that sort of thing does bring out the grifters, but the presence of those grifters doesn't mean it's not something to think about.
It's by dedicated cultist theunitofcaring.

The writer of this article, Kelsey Piper, runs a pretty popular rationalist Tumblr and got that job at Vox recently to write about Effective Altruism. Her blog is actually pretty good, and if anyone in rationalism has to write for a big media site, I prefer it to be her instead of anyone else.

But her views on AI really are the most frustrating IMO.

> got that job at Vox recently to write about Effective Altruism

Neoliberals are desperate to virtue signal and pretend like they don't have a cancerous amoral ideology that is literally ruining the world
As someone who was involved in one of these meetups, I think that's a fairly sinister interpretation. Like, a lot of the folks I've met who were involved in EA are some of the most genuinely compassionate and kind people I know.

And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

What’s that other system that’s like this that “Rationalists” never critique?…

Capitalism? Well, they do critique capitalism. However, too many of them want to replace it with some variant of autocracy/oligarchy run by socially challenged male nerds.
Moldbug has become a pseudo religion to some of these nerds. Search for Moldbug in any culture war thread and he pops up more than almost anyone. They were into Charles Murray for a while after the Sam Harris debacle, but it looks like Moldbug is back on top in his rightful place in the nerd hierarchy.
"actually I'm a brave dissident hero for having no class consciousness and not knowing anything about the world except muh (((cathedral)))"