r/SneerClub archives
To the racists who have infiltrated EA: fuck you (https://imgur.com/a/Tcw0l61)
64

It’s amazing how the most effective form of altruism just so happens to coincide with their racism. 🙄

They also apparently have missed the fact that if society's reaction would be negative, it's not actually an effective intervention, no matter how good or terrible it is in a vacuum. If you can't get people to carry out your plan, your plan is bad. Literally 5 minutes of actual thinking would make this clear if you didn't have the answer written down at the start. Fuck that shit, they make it harder for the rest of us to actually make things better.
There were a few people who brought that up, and I have screenshots of that as well if people are interested. I just put up the first four that I stumbled across, but yeah, I feel a little guilty because I don’t want to throw the entirety of EA under the bus (especially because I don’t think this is particularly representative of the whole), but groups like this shouldn’t exist, and if this puts some pressure on whoever organized this then that was my goal.
Just from the outside, it seems like EA is on the path towards a major split between the rationalist+AI-risk people and the more mainstream third-world+animal rights people. Frankly I hope it does, I really think the latter group is helpful and the former is a massive waste of time and energy.
Yeah, the biggest gap is between the long-term (AI-risk) and short-term (global poverty/animal rights) position, but suggesting fucking eugenics as a “solution” to global poverty is so obviously sacrilegious to both of the words “effective” and “altruism”.
There is real AI risk but the MIRI folks are completely incapable of dealing with it. Note also how you've left global warming out of either axis.
That's true, I was intentionally describing the two EA positions on issues of importance. I don't know many EAs who consider it effective to focus on real AI risk or global warming. Given a utilitarian framework (which I don't myself hold), I don't think that real AI risk is a particularly effective focus of altruism. It's definitely important and needs to be given a lot of thought, but it's probably a more efficient use of money (if you think that money is "the unit of caring") to donate to people in third world countries. Same applies to global warming. It's definitely important, but I think it's hard to solve it or make significant strides towards doing so in a quantifiable way. Or at least that's what people in EA say, I haven't looked into it. And yes, I'm aware that the exact same thing goes for AI-risk (except unlike AI-risk, we have evidence to show that global warming is actually a problem).
So I actually do think global warming is a hugely important issue, but I would put it in a third category that EA doesn't really deal with that well: Political struggle. Global warming isn't well suited to being tackled by individual donations in my opinion, instead requiring shifts in public belief, mass mobilisation, ultimately resulting in political action on a governmental scale. For various reasons I don't think the EA toolbox is well suited to political battles, so I try and do both political and personal action.
Oh, yeah I would definitely agree with that. I've been reading some intro level political philosophy recently, because I'm extremely uninformed about that sort of thing, and it's an area that EA at best is unequipped to tackle, at worst opposes as "rhetoric".
This follows directly from a utilitarian approach. If you are convinced that there is some metaphysical unit of morality that can be calculated and distributed, why bother yourself with political debates? Politics is a huge waste of time if you believe the answer is right there in the utilons.
> why bother yourself with political debates?

If it generates more utilons, of course!
> fucking eugenics as a “solution” to global poverty is so obviously sacrilegious to both of the words “effective” and “altruism”.

Fucking eugenics doesn't address the root cause of poverty. They're holding "low IQ people" responsible for disabilities that are likely not a result of their actions. Who wants to bet this sentiment is from the same crowd bitching that pedestrians and city streets aren't readable enough for self-driving cars? Never trust anyone -- especially in engineering or any kind of applied science -- whose "solution" to a problem is to wish the problem away or blow everything up. The fucking commonality in such careers is that they involve solving problems! Also, I've said it before but it's worth repeating: never trust anyone who won't bear the negative consequences of imposing their visions on society. Calling eugenics "bad optics" is a tacit admission they wouldn't be the ones absorbing the negative consequences of the world they envision.
Sounds like the same split atheism had - between the normal people who weren't religious and the weirdos who wanted Richard Dawkins installed as Grand Poobah of the world.
I feel that's a really weird take on why New Atheism ended up dashed against the metaphorical rocks.
> I really think the latter group is helpful and the former is a massive waste of time and energy.

I'm surprised that there's any argument about this. Get out of the bubble - you're extremely right and there's no contest.
But they seem to believe they can convince people if they simply argue logically enough in a nice tone (peaceful ethnic cleansing anyone?).
yep, their ability to handle arguments that don’t consist of copy-and-pasted bullshit from hbd propaganda is telling: https://imgur.com/a/fljw1sG
I mean, that's the standard fascist tactic: exploit norms of politeness to push their agenda. If a group has the expectation that all arguments need to be taken charitably, then it's relatively easy to flood the group with dozens of people "just asking questions" about whether certain races are inferior, whether poor people should be sterilised, etc. Then it basically becomes an hbd discussion group. The only solution is to ban the fuckers (which thankfully it seems most EA groups are doing).
That plus their beliefs about investing in their friends' research into acausal robot gods.
underrated comment lol
i mean just do the math: infinite cost * non-zero probability. but also there is no god because pascal's mugging oh look over there and read more bad essays about how 0 isn't a probability.
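For anyone who hasn't seen it spelled out, the expected-value move being sneered at here is roughly the following (a paraphrase of the argument, not a quote from anyone in the thread):

```latex
% Paraphrase of the Pascalian expected-value argument mocked above (illustrative, not from the thread).
% With probability p of the catastrophe and cost C if it happens, the expected loss is
\[
  \mathbb{E}[\text{loss}] \;=\; p \cdot C ,
\]
% so if C is treated as effectively infinite, then any p > 0 -- however tiny --
% makes the expected loss unbounded and dominates every finite consideration.
```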
> infinite cost * non-zero probability

This same calculation applies to global warming, except the probability is more like 100%
yep but wait 1 isn't a probability either
infinite cost, times, like, a hundred
We’re not talking 10^(-40)% here, we’re talking like >3%
More like < and then you have to consider that you need to factor in the chance they will have anything meaningful to do about it, which, after seeing who they are and what they do, I'd take the under on that as well. I do think that AI risk of some form is a problem, but, again, these people are singularly incapable of doing anything about it.
Wow, such precision! Care to show the math that leads you to making estimates about the likelihood bounds of future AI-gods down to individual percentage points?
Sure! This one survey I saw once says [5%](https://www.webcitation.org/6YxiCAV0p?url=http://www.fhi.ox.ac.uk/gcr-report.pdf), but some people seem very worried about it and some people seem unconcerned about it. Especially, humanity as a whole seems unconcerned about it - if the chance was very large (>10%?), this might have been mainstream back in Turing's day, and countries might consider it a national security problem. Intuitively, the arguments for AGI being a risk seem good, so I'm not going to drop the probability very far, so I just halve the odds to 3% and stick a > sign in there because it might be bigger. Surprisingly fast advancements in AGI, or seeing an increased national security focus towards AGI, would increase this. Evidence that AGI is intractable or impossible would lower this. Very convincing evidence that AGI is impossible would drop this far enough to make it Pascalian, and I would want resources to be diverted away from AI safety.
uuuh, that looks like a survey from the "global catastrophic risk conference", which seems a little bit like it might be biased towards AI alarmists. What justification do you have for halving the odds, instead of reducing by a third or a tenth or any other number? Do you admit you are basically pulling the number out of your ass based on existing preconceptions? In which case why bother with the number?
Yeah, I should have probably opened with the disclaimer that it was a rough order-of-magnitude guess and that I can't actually see the future. I could have said >1% or >6% or ~10% or 1-10% or π/100 or "not epsilon vs infinity, but 1/100 vs. 10 billion real human lives". The number is there at all because words are hard to do and numbers are easy to do, and because the appropriate intuitions about what probability-words mean are a bit off for very large-scale problems. Human extinction being "likely" could mean anything from "stop planning for the future at all" to "fund research a bit more". Human extinction being "unlikely" could mean anything from "fund research a bit more" to "no action necessary".
Right, so my contention would be that giving a number like 3% instead of something like "around 1 in 100" gives a false impression of precision in your estimate. I think this sort of thing is a contributing factor to the overconfidence I perceive among the rationalists. It's actually a concern I have with surveys like the one you linked. The uncertainty range for something like AI-xrisk is many, many orders of magnitude, and I think asking for a number out of 100 biases the results towards the upper end of that range. They should be asking "are the odds closer to 1 in 10, 1 in 100, 1 in 1000 ... etc"
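To make that elicitation point concrete, here is a minimal sketch (hypothetical numbers, not taken from the linked survey) of how a linear "give a number out of 100" prompt and a log-scale "1 in 10 / 1 in 100 / 1 in 1000" prompt centre on very different answers when a respondent's honest uncertainty spans several orders of magnitude:

```python
import math

# Hypothetical respondent whose honest uncertainty about the risk spans
# several orders of magnitude: anywhere from 1 in 100,000 to 1 in 10.
low, high = 1e-5, 1e-1

# A "pick a number out of 100" prompt pulls answers toward the arithmetic
# middle of that range (roughly 5%, i.e. the upper end in log terms)...
linear_centre = (low + high) / 2

# ...while a "1 in 10, 1 in 100, 1 in 1000, ..." prompt centres on the
# geometric middle, the midpoint in orders of magnitude (roughly 1 in 1000).
log_centre = math.sqrt(low * high)

print(f"linear-scale centre: {linear_centre:.3%}")  # ~5%
print(f"log-scale centre:    {log_centre:.3%}")     # ~0.1%
```

The gap between those two centres is exactly the "biases the results towards the upper end" effect being described.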

[deleted]

who's gonna make the inevitable "thanos brain" joke?
[deleted]
I mean... have you ever heard of "negative utilitarianism"? it's not a common position, but there are a few people who actually believe this.
> it's not a common position

???
amongst EAs? I was under the impression that most wanted to maximize utils, not minimize bad utils.
I imagine it's not that clear-cut. But negative utilitarianism is certainly a perfectly popular opinion amongst philosophers, some of whom are EAs.
Oh, yeah I'm definitely not speaking for philosophers in general, just from my experiences with people in EA (which might not be representative).
This is already almost the real philosophical position of antinatalism, which states that bringing new lives into the world is unethical, and that humans should all self-sterilize, live to a ripe old age, and go extinct. One person in the linked thread even brings it up.
Have you ever heard an antinatalist elaborate on why s/he hasn't committed suicide out of moral obligation yet? It's always some inferior other that has to stop existing. I'm morbidly curious how they'll justify that their current existence isn't the burden they see others as.
Anti-natalists don't precisely argue that people should commit suicide. They argue that, on the whole, existence is worse than nonexistence and that we shouldn't create any new life. So the ideal antinatalist future is one where all humans live out the remainder of their lives as happily as they can, but nobody ever has a child and humanity goes extinct. They don't argue a moral obligation to suicide. They argue a moral obligation to not procreate. If you want a deeper dive, I've heard good things about David Benatar's *Better Never to Have Been*.
> existence is worse than nonexistence

I'm new to this. What metric for "worse" do they postulate?
I'm not especially familiar with antinatalist thought (and there are different varieties, so you're unlikely to get one single answer). Wikipedia is [probably a good choice for an overview](https://en.wikipedia.org/wiki/Antinatalism) - usually the Stanford Encyclopedia of Philosophy is your best bet, but it doesn't seem to have an article. For a deep dive, I've heard good things about both David Benatar's *Better Never to Have Been* (seems to be a common text) and Thomas Ligotti's *The Conspiracy Against the Human Race* (Ligotti's first non-fiction work after a long career as a horror author). On a lighter note, there's also a [WikiHow article](https://www.wikihow.com/Live-As-an-Antinatalist).
> Maybe the real singularity was the friends we made along the way.
This is genius.
It’s the only logical conclusion of utilitarianism, as life is clearly composed of more net suffering than pleasure.
I have had someone say that to my face. I told him it was a good thing that he would never actually be in any position of power in any possible future world, otherwise people would kill him first.

Apparently willingness to consider the most rational solution where others would not means playing the eugenics card at every available opportunity. No other possibilities. Just genocide and forced sterilization.

here’s more. I tried to distinguish between the different threads by separating them with pictures that were posted in the groups: https://imgur.com/a/CLbZ3ed

I’m not sure if this is the right group for this kind of thing, but how would you all feel about having an AMF fundraiser for this sub? I know it’s small, but it seems like such a satisfying “fuck you” to rationalists.
I am so down for this you have no idea. My yearly batch is due soon anyway.
I think a fundraiser of some kind would be nice and if people wanted to pick AMF I'd bow to the will of the sub, but one of my big problems with EA is the politically inert version of ethics it pushes. I would rather donate to a good slate of progressive candidates in America, or Black Lives Matter, or the prison strikers.
That would definitely be cool as well. I suggested AMF mostly because of the irony, but we could take a vote or have multiple charities that people could choose to donate to, whatever people prefer. I totally get your frustration though. it drives me mad how many people in EA seem to think of any other version of ethics as not just wrong, but inconceivable. "ah, you might think that you simply follow a different ethical system, but ACTUALLY you're just running on broken hardware" -some lesswrong article
I don't have enough of a sense of humor about altruism to pick AMF out of irony, alas.
What's AMF?
I’m a rationalist and I will spit and rave as much as you want if it means people will be helped. You fricking fricks! 😁

“infiltrated”

There’s nazis at Electronic Arts?

It’s really obvious that none of these people have ever successfully led people from different backgrounds… which makes them EXTRA qualified to dictate the trajectory of society! /s

Corollary: The # of times a person brags about their superior test results is inversely proportional to the extent of their actual accomplishments.

Logged on when I should have logged off to point out that:

The brazen confidence of the “because” in that second screenshot is magnificent.

[I had a bit about Scott Alexander and his fans not understanding the difference between “magnificent” as in “beautiful” vs “sublime” that was going to have a whole bunch of parentheses and maybe even a footnote, joking about implicitly imitating Stewart Lee’s speech patterns, in the vague hope that an intellectually soporific amateur eugenicist from his subreddit might take some sort of umbrage to it - and yes Scott, it is your subreddit, no matter how much you’d like to be able to distance yourself from the opinions there expressed that you privately, constitutionally can’t stop yourself from accepting as your own as long as they’re worded a bit nicer than what a woman you found a video of on the internet had said; shouting her own articulate defence of her thought-out and historically grounded opinions about the subjugation of women and minority races a bit louder than you find comfortable from the climate-controlled comfort of the psychiatric office in which you treat yourself by pretending to give therapeutic advice to people you fear and despise because you’re more frightened of the concept of the opposite sex than you are of the notion that somebody might try to do harm to them.

The women. The ones that pose an existential threat to people who once read an internet blog about how to misunderstand basic statistics. Going out all night and shagging. With big men who wear vests Scott. Black men, sometimes, with bigger, even balder heads.

Yeah, bitches. Which bit I didn’t end up writing because I should have logged off instead^1]

  1. But I just did.
> with bigger, even balder heads.

Finally discovered my flair

Has EA ever dealt with the Marxist argument that in our current economic system populations have to be kept “proletarianised”, and that it also requires a reserve army of labor?

Dark EA Admin here.

Fite me.

Taking questions. Accepting constructive criticism.