r/SneerClub archives
"EA is increasingly focused not on alleviating suffering, either of those presently living or in the future, but merely increasing the probability of a human-controlled future." (https://reddit.com/r/EffectiveAltruism/comments/74sm0a/my_favorite_quote_about_the_effective_altruism/)

Not sure what’s worse, the “actually alleviating suffering is pointless because the singularity is coming, donate to MIRI” style or the “actually alleviating suffering is pointless because suffering is part of a well-balanced life according to a theory of eudaimonia me and my equally wealthy and well-fed colleagues invented over dinner” style.

Think I hate the latter more.

I *definitely* hate the latter more.
> suffering is part of a well-balanced life

But remember, we're not Christianity with robots. That's just pattern-matching.
"Suffering is good", say the people who don't suffer materially. It's like they're drawing a dotted line across their necks and begging for a guillotine sometimes.
They're like the Edwardian factory owners who argued that child labor builds character.
> "Suffering is good", say the people who don't suffer materially. Where are you getting this from...? Because it's sure doesn't look like it's from the linked thread
Fortunately A-M can hate them both for us.
[deleted]
You are underestimating the galaxy brain of Brian Tomasik here. In fact, he supposes that we should prevent the death of humans, [but only because humans cause the mass death and extinction of animals](https://reducing-suffering.org/malaria-foundation-reduce-invertebrate-suffering/), which is the greater good. The saving of human lives is a necessary evil.
> EA and rationalist spaces are crawling with negative utilitarians like Brian Tomasik who think a huge nuclear war would be a wonderful thing

Well, they can't all be wrong all the time I suppose 👽☢️🐬
[deleted]
If you find that your movement has, over time, come to focus less on efficiently helping other people and more on arguing the case for human suffering against negative utilitarians so you can better justify donating to MIRI, it's time to leave. Or at the very least push back hard. Were EA more like what Scott's quote describes and less like the "ackshyuallee, we don't really go in for that whole 'being good people' thing any more" replies, it would be a pretty damn good thing. As it is, well, it can still divert SOME tech industry capital from machine-god propitiation to fighting malaria, so hey, better than a slap in the face with a wet fish.
> and less like the "ackshyuallee, we don't really go in for that whole 'being good people' thing any more" replie***s***

TBH I feel your use of the plural there is overstating things - there's one person saying it, with multiple people pushing back.
Which do you hate more: an argument that makes perfect sense depending on how you evaluate a bunch of factors regarding how the singularity is going to play out from here on, or guys philosophizing about suffering being meaningless (which I don't know how is hateworthy, but whatever)? Why do you hate MIRI, what's wrong with the argument? It's not like they chose to work for MIRI at bad pay to try and convince people to donate so that they could steal money at a wage probably worse than what their qualifications would earn at a real company? Or maybe they did and you have sources?

Their efficiency is military efficiency. Their cooperation is military discipline. Their unity is the unity of people facing a common enemy. And they are winning. Very slowly, WWI trench-warfare-style. But they really are.

Why are people so attracted to the military aesthetic? Like, this is supposed to make me see EA proponents as heroic, but all it achieves is making me cringe.

it's because sitting around eating awful food for a year and then getting shot is a cool and trendy career
My cynical answer: it's hard to organize a large number of humans toward a single goal, except in war. When it comes to killing, we're really good at organizing things. Otherwise, we pointlessly squabble. (It might be added that war is the worst kind of pointless squabble, but yeah.)
Okay, so, I'm absolutely certain that this answer is cynical. My question is: is it true? There are a lot of stereotypes surrounding "military efficiency," but there are also a lot of accounts of armies being largely dysfunctional groups, made of people essentially looking out for themselves. Notably, he mentions WWI trenches, which are, AFAIK, more reputed for being wastes of time, money, and more importantly human life, rather than for being carefully organized killing operations.

That being said, I have practically zero actual knowledge of army-related stuff (and TBH, if you've seen some of my comments, you've probably realized by now that I don't have much knowledge of *anything* except some very specific stuff), so I'm kinda waiting for someone else to weigh in on this one.
The military doesn't tend to be more *efficient* than voluntary organization; it is essentially a command economy, after all (a lot of, e.g., Soviet communist organization was explicitly seen as drawing conclusions from World War I, and they were, uh... hardly a paragon of efficiency). What military-style organizations can do, though, is mobilize a lot of resources, at least temporarily: 10 people making 10 gadgets is still less than 1000 people making 500 gadgets.
The actual answer is the [Californian Ideology](http://www.metamute.org/editorial/articles/californian-ideology), the wellspring of LessWrong. (And Extropianism, and neoreaction, and bitcoin.) There's a great essay, which I can't find quickly, specifically linking Silicon Valley thinking, and the culture of tech workers, to military-themed science fiction (Heinlein to Vorkosigan) and the meritocracy-aspiring aspects of the military. Anyone know the one I mean? It'll be something like that.
they're emasculated nerds

Seems to be going in the wrong direction. Clearly helping people is bad because you could end up increasing total suffering by increasing their lifespan. Thus the only true altruism is effecting a global genocide. Our coming AI overlord is going to figure that out eventually, but we could beat it to the punch and just implement the final solution ourselves. I suggest we start with EAs and rationalists, who will best understand the necessity.

Actually the true golden path involves nuclear war. Doesn't matter who starts it, but it's essential. Then the survivors will bring about true socialism (alongside their dolphin comrades), paving the way for us to be welcomed into the wonders of fully automated gay space luxury communist galactic society.
go back in your hole posadas

It’s plausible that AI safety work reduces future suffering, but it seems fairly unlikely that other forms of existential risk reduction do so. (It’s still logically possible, but it requires very strong assumptions.) Indeed, I would expect that most non-AI x-risk work increases future suffering (as well as future happiness). Whether this is a good or a bad thing depends on your value system and how you make trade-offs between happiness and suffering.

Absolutely astonishing take.

Why?

they need to focus more on reducing the suffering of the electrons they torture into forming their words