r/SneerClub archives
A Startlingly Frank Discussion Between Two Rationalists About The Nature Of The Effective Altruism Movement, And The Disconnect With How It Markets Itself (https://cptsdcarlosdevil.tumblr.com/post/174068756613/plain-dealing-villain-cptsdcarlosdevil-milk)

oh wow

I don’t think I’m trying to do milk before meat, I think it’s more like… you know when you’re gay, and you want to come out to someone, but you don’t know what their reaction would be and you’re kind of scared, so you start off by saying “hey, if, hypothetically, you knew a gay person or something, do you think you’d support their rights? just hypothetically. I’m just interested because I read a story in the news about the gay. but if you somehow ever met a gay person in real life then, entirely hypothetically, …”

who knew that being an AI doomsday cultist was so much like being a closeted gay person

When I meet a new person, I might want to talk to them about AI risk. But I don’t know what kind of person they are! For all I know, they are the kind of person who will say “ew, you are talking about weird computer things, that sounds like something NERDS are interested in, ewwww, did you not know that nerds are terrible? aren’t you ASHAMED about how you’re LITERALLY KILLING CHILDREN by caring about dumb robot stuff instead of children with cancer? ewwwwww you’re an awful person and you smell”

who knew that the only reason people are critical of AI doomsday hysteria is that they’re mean bullies who hate nerds

So it’s nice to test the waters a bit first, and be all, “so, hypothetically, if I was into doing charitable utilitarian stuff and saving children, like if I cared about getting data and evidence before I donated money, then how would you feel about that?”

  • and if they say “well in that hypothetical, then you would be a bad person and you would smell, because the correct ethical stance is to give the government all your money and go to protests about queer rights” I can attempt to disengage and talk to someone else as painlessly as possible

how charitable.

This also makes lots of sense because the right thing to do genuinely does seem to depend on what kind of person you are. If you are a math genius then you should do AI risk.

no you shouldn’t.

> > This also makes lots of sense because the right thing to do genuinely does seem to depend on what kind of person you are. If you are a math genius then you should do AI risk.
>
> no you shouldn't.

Read it in reverse and it becomes a valid point: If you're not a math person then you definitely should not do AI risk; if you are, then you are equipped to consider technical (not tumblr) discussions and are not the target audience of incredibly annoying and stupid wait-but-why style fear-mongering. If doomsday cultists and non-cultists could agree on that point, we would have moved the conversation forward a lot. The fact that outreach towards non-specialists is emphasized so much really pisses me off.
Outreach *from* non-specialists *to* non-specialists. The blind leading the blind.
You're right. This one pisses me off in a different way, but is just as bad and sadly common.
From the outside, this looks like Scientologists or Mormons debating whether to strategically downplay the weird shit, the better to ensnare unsuspecting innocents. From the inside, it feels to true believers like an argument over how much to play respectability politics. The Civil Rights movement had that argument about interracial marriage (and socialism). Then the Gay Rights movement had that argument about trans rights (and socialism). The fact that EAs are having that argument about AI risk (and ~~socialism~~ fully automated luxury gay space communism) doesn't constitute evidence that they are wrong or bad, only that they have different factions and differences of opinion about strategy, just like every other ideological movement.
> From the outside, this looks like Scientologists or Mormons debating whether to strategically downplay the weird shit, the better to ensnare unsuspecting innocents.

the person who floated the comparison to Mormonism and the "milk before meat" strategy was an insider, not an outsider. so no, it doesn't just look like that from the outside. (said person also mentioned the words "manipulate" and "dishonest")

In short: Effective Altruism markets itself as being centered on addressing global poverty, when in fact it is centered on manipulating people into believing in the imaginary AI doomsday.

Which brings me to one of the major flaws with a lot of criticism of EA: too often critics take EAs at their word about what the community is, and criticize charity or charity evaluation. That isn't the best tactic, since charity and charity evaluation are overall pretty good; the much more important criticism is that the EA community is lying about being centered around charity in order to manipulate people into joining a doomsday cult.

> charity and charity evaluation are overall pretty good

Nonsense, right-wing nutjobs in general will point to the existence of charity and use that as an excuse to defund social programs. They have a long and dishonorable history of doing that in America. The question of whether charity is good isn't quite that straightforward. I'd argue that next to broad social movements, charities are nearly irrelevant for effecting long-term good. Agreed otherwise though.
There are still a huge number of preventable deaths in the third world that could be prevented by just a small increase in giving from middle and upper class westerners. I agree that in the long term structural factors make a bigger difference, but these lives could be saved (or vastly improved) right now, and I think it's wrong to neglect that.
It's not as simple and straightforward as simply giving more. When I was volunteering in Tanzania, I was at an orphanage and they had a big room full of donations from overseas. Thing was, the people running the orphanage never bothered to take the toys out because they didn't care about working with the kids, so the kids never even got to play with them. Anyway, a lot of donations end up being neglected, not maintained (in the case of physical equipment), or not done in a sustainable way.

Donations of things like clothes can put local tailors out of business. I was at one local market where all the clothes being sold were donations from overseas. I'm pretty sure this is not what the original donors had in mind, but they must have destroyed the Tanzanian textiles industry. In Kenya, textiles used to employ half a million people, and now it employs what, twenty thousand? That's bullshit and that's fucked up.

I expect something like MSF or famine relief doesn't have the same kind of potential for secondary harm, and I do support some charities in that vein, but it pays to be careful and humble about how much can be reasonably accomplished. And first, do no harm.

I think you're also underestimating how much changing structural factors could improve things fairly rapidly. For example, the US and the EU both dump subsidized produce onto African markets, thus putting a lot of farmers out of business. It would be a massive game-changer if they stopped doing so overnight.

Anyway, I'm probably a bit too down on charity, but it makes me furious to see Americans and Europeans suggest donating overseas when probably something like 80% of the problem comes from politicians at home. With EA, I despise how they'll research the perfect charity to give to, but they won't ask why people are poor in the first place. And the answer is, colonialism isn't over yet.
I think your Tanzanian argument is actually a point in favor of the charity ranking scheme- don't give toys and clothes, try to make donations that are maximally helpful rather than feel-good. I also agree about the importance of structural factors- government / policy changes could have massive positive impact. But so long as they're not happening, there is good that can be done through charity, provided it is well-considered.
See, my interpretation of "well-considered, maximally good charity" leads me to agitate for electing politicians who would do things like forgive the debt of sub-Saharan countries, or not help Saudi Arabia blockade Yemen. There is not nearly enough dialogue about fixing terrible foreign policies, and that's one of the biggest reasons these structural factors are not changing. So it's not a bad thing to give to the best charities on GiveWell, and obviously better than knee-jerk giving, but if you really think you've accomplished the maximal possible good by sticking to just charity, you're bad at thinking about humanitarian problems. Basically, anyone who thinks their duty to humanity ends at charity, and neglects civic engagement, is being a crappy global citizen.
That's why a lot of the charity rankers put direct cash transfer at or near the top.
If only we could make really huge payments and rename them "reparations."
I'm surprised this isn't a component of white genocide conspiracy theories yet.
"bankrupt the whites to feed the high birthrates of the Others" you mean?
(((Globalism)))
Isn't that an anti-semitic dogwhistle?
Yeah hence the parens.
I mean, the whole point of effective altruism is to try and direct donations away from feelgood crap that doesn't work or is actively harmful, like the examples you gave. (I mean, it should be; in reality it's being hijacked by the AI cultists.) The good half of the EA movement is actually in agreement with you. But it doesn't change the fact that there are actually good causes like malaria nets that legitimately save lives with very few negative effects. And I completely agree that it is stupid to ignore politics in the local world or neglect structural issues. But your charity money should still go overseas to effective charities, because they need it.
> And I completely agree that it is stupid to ignore politics in the local world or neglect structural issues.

That's one part where I think EA people are generally lacking. You disagree? Are there good EA people who do care about structural issues? I've never seen Peter Singer say, "vote more in midterm elections." I've also never seen him say, "maybe capitalism is bad." "Kill handicapped babies," yes, though.

Like look at this sentence here from https://www.givingwhatwecan.org/about-us/frequently-asked-questions/: "Some charities are as much as 1,000 times more effective than others." That's straight from an EA site. That's the kind of bullshit I'm objecting to. Like I say, I have zero problem donating to charities that I am convinced are actually effective, but I do have a strong problem with going WHEEE I did 1000x as much good as you did! I feel like this seems like an overly pedantic quibble even to the nerdy people who frequent this sub and that's why nobody's getting it, but whatever.

Anyway, I don't know what country you're in, but I do know that in America, right in the heart of Silicon Valley, even, there are people living in Third World conditions, and helping them doesn't require sorting through deworming studies and worrying about whether I'm aware enough of local cultural conditions to be able to help. (I just checked GiveWell and deworming is still one of their top charities, even though it has been debunked as a means of increasing educational outcomes.) The experts, when it comes to development in Third World countries, have a really shitty track record.
FWIW, I think the official line is something like: structural solutions and politics are very hard (both in the ideological / mind-killing sense and in the sense of genuine conflicts of interest). There ought to be a space for doing the obviously right thing, as effectively as possible, separate from those conflicts. That space is supposed to be EA. Just like we allow medical services to continue during war; the price for this is that medical personnel must be neutral while acting in their capacity of treating people. From this viewpoint, I think EA's commitment to non-structural solutions / strict Pareto improvements makes a lot of sense. On the other hand, EA people telling others that "political work does no good" with a smug face is rather sneer-worthy.
I also have a problem with unjustified precision in estimates (someone once tried to claim that donating to MIRI would save 8 lives per dollar lol), but the statement that many charities are next to useless while others are highly effective is completely correct, as you yourself pointed out. And yeah, there are people in the west living in "third world conditions", but obviously it's nowhere near as many as in the actual third world. Just because evaluating these things is difficult doesn't mean it's not important; there are about a billion people in extreme poverty and it's insane to give up on them just because they're far away.
I don't think we disagree on much, then. If you think that what I've been saying is that we should stop giving overseas altogether, you're wrong. I haven't given up on donating overseas, as I've mentioned...a number of times. I'm very annoyed that you're implying I'm 'insane.' I have a couple charities to which I've given thousands of dollars, but it took a lot of work and research to find them. I'm against irresponsible giving where the outcomes could do more harm than good. I'm not in favor of giving up on anyone, but neither am I in favor of simply doing 'something' when I truly don't think there's anything I can do.

Whenever you mess with a complex system, there are always going to be unanticipated second-order side effects. I think that there's a temptation to believe that these can always be studied and compensated for, but I'm skeptical of that. If you're interested, here's a good article about how one economist thinks aid _usually_ makes things worse: https://www.washingtonpost.com/news/wonk/wp/2015/10/13/why-trying-to-help-poor-countries-might-actually-hurt-them/?utm_term=.2dd0030b40ab If aid usually makes things worse, then it'd certainly be better to abstain from giving overseas. Here's another good one, where they tried really hard to anticipate possible problems with a water pump, but it ended up being successful in only a few places: newrepublic.com/article/120178/problem-international-development-and-plan-fix-it

When the consequences of fucking up include causing people to lose their livelihoods, worsening their political situation, or even killing them, and the people making the fuckups include so-called experts, it is better for uninformed people to do nothing than something. In complex situations, you are more likely to make things worse than to make them better. I think this is fairly uncontroversial. Particularly when the people in question have had a long history of Westerners trampling their autonomy. The frequency with which aid from well-meaning Westerners who don't know very much about the country they're trying to help ends up hurting locals or being wasted leads me to be humble about how and where I give. These well-meaning Westerners who are ignorant of local conditions don't just include you or me--it's also the people on the ground themselves.

The failure rate is high enough that if you're serious about giving well overseas, I think you should pick one problem in one country, learn everything about it that you can, and find an NGO that is working specifically on that problem, that has as few foreigners involved as possible. And make sure that it's something that locals themselves _actually want._ That's what I did--I happen to care a lot about indigenous rights,1 and I support an advocacy group where the board is entirely Tanzanian. They have a couple Western volunteers, but everyone else is a local. It was founded by locals and the locals call the shots. That's what giving effectively looks like.

Sticking to a focused, small NGO probably won't feel good, because there's going to be this voice in your head saying, oh no, but shouldn't it be possible to find an NGO working at a bigger scale? Well, if you want to avoid unintended side effects, no, probably not. As that NR article says, local conditions can vary so highly that seeking highly-scalable solutions is probably not going to work.
1 Speaking of which, one of many other reasons I don't like EA is that saving 100 people from being genocided off of their ancestral land would be of less utility than saving 200 people from malaria, and therefore my giving to the former would be suboptimal. I think the value of saving an entire culture is unquantifiable, but I don't think that would fly with Singer and his buddies.
Yes, I think we are mainly in agreement there about structural issues being important to address as well, and apologies if i came off as insulting! I see a lot of people saying that we shouldn't care about other countries because we should "help our own" and it makes me mad, but I understand that that isn't your position. I guess what I'm trying to point out is that you are also in much agreement with the sane half of the EA movement. There aren't that many people that a) care about foreign lives as much as those of their own nation, and b) care about checking whether or not their aid is actually helping people. I think those ideas are worth spreading, even if you don't agree with all the specifics of how they do the checking. And I think it's a damn shame that the only large group which seems to be spreading those ideas is filled with AI cultists.
Thanks for the apology, much appreciated. I'm sure there are good EA people who acknowledge a number of the things I pointed out, but that doesn't look recognizably like EA anymore IMHO. Good EA is watered down to "do a lot of research and pick more effective charities" then, which is...kinda boring to talk about.
Here, maybe this article will also help explain why I think the balance of helping the third world should be way more towards getting politicians to stop the IMF/World Bank than towards donating better: https://www.theguardian.com/business/2002/oct/29/3 https://www.alternet.org/story/152335/food_emergency%3A_how_the_world_bank_and_imf_have_made_african_famine_inevitable

> Since 1981, when these lending policies were first implemented, Oxfam found that the amount of sub-Saharan Africans surviving on less than one dollar a day doubled to 313 million by 2001, which is 46 percent of the population. Since the mid-1980s, the number of food emergencies per year on the continent has tripled.

The IMF/World Bank are the shining paragons of Westerners swooping in with policies that were developed without much concern or knowledge of the countries they're operating in. When they fucked up, millions of people died or were thrown into poverty. I am a huge fan of forgiving African countries their debts. This would effectively be a massive dose of foreign aid, while hugely empowering to the countries themselves, and all it requires is that the West stop meddling. If all the people who cared about giving overseas would also lobby their politicians to pressure the WB/IMF to do so, this would get way more good done, with probably far fewer negative side effects than most any other aid intervention.
Peter Singer personally has been involved with politics. He even ran for the Australian senate with the Australian Greens.
Fair enough, but that was twenty years ago, and I see nothing on the EA site about the importance of structural solutions. It seems fair to say that EA as Singer et al describe it is solely about donating properly.
Frankly, the largest number of preventable deaths in the third world could be prevented by middle and upper class westerners directing their governments to export somewhat less war. But that is a discussion those dweebs will not have in a thousand years (or about 50 MIRI papers).
what if charity, social programs, and social movements are all capable of effecting positive long term change, and operate best in tandem with each other. also there isn't really a clear divide between charity and social movement- consider the black panther breakfast program. also what if right-wingers will attempt to defund social programs regardless of the presence or absence of charity, and rendering charity nonexistent wouldn't actually slow down their attempts to do that by any significant degree.
Those are all good points, but you know that rationalists aren't the type who would consider structural solutions to problems. And saying 'charity is good and we should do it better; we all agree and cannot argue on that point' is kind of vacuous. There are a lot of smart people trying to improve things, and they make plenty of mistakes worth criticizing. edit: come to think of it, the line between charity and social movements is a bit clearer than I thought. The Black Panther Party lifting up other black people is more like equals trying to help each other out, whereas charity is strictly the powerful offering help to the weak. Charity's rarely about trying to reorganize social hierarchies.
okay but i feel like criticizing rationalists/effective altruists for using charity as a smokescreen to manipulate people into joining a doomsday cult should take priority over whatever criticisms of actual charity there might be.

edit:

> whereas charity is strictly the powerful offering help to the weak

not really, if you check the economic demographics of percentage of income given to charity, the people who give the largest percentage of their income are the people in the lower economic brackets: https://www.forbes.com/sites/katiasavchuk/2014/10/06/wealthy-americans-are-giving-less-of-their-incomes-to-charity-while-poor-are-donating-more/#124cfc661264 https://www.npr.org/templates/story/story.php?storyId=129068241
Fair enough, those are good links, though I still think charity isn't about reorganizing society. And someone a bad day away from being homeless is still better off than someone actually homeless. I guess the reason I don't find criticizing rationalists for the AI doomsday thing to be compelling is because Peter Singer, the founder of the EA movement, is not a rationalist. EA isn't a rationalist baby. It's just a thing that happens to appeal to rationalists for the usual technocratic reasons. So accusing the rationalists of hypocrisy to take down EA is beside the point. Though if you want to attack Peter Singer on grounds of hypocrisy, you could do so. He's stated that people should donate (their total income - $30k) to charity, when he clearly doesn't do it himself. Plus, he's a trust fund baby.
It's more complicated than that. While ideas surrounding charity evaluation have been explored by Peter Singer as early as the 70's, to say he's the "founder" of the effective altruism movement isn't quite correct: a community calling itself "effective altruists" didn't emerge until the late 2000's, mostly as an offshoot of lesswrong, and the term "effective altruist" itself was coined by Yudkowsky, though the similar term "effectively altruistic" was used by a user with the name Anand on a mailing list run by Yudkowsky prior to the formation of lesswrong.

at any rate, by focusing criticism of EA on the ideas of Peter Singer you're buying into their narrative. EA as it exists in actual fact is not so much based on Peter Singer's ideas (which, while flawed, overall have some merit imho) but rather it exploits his ideas as a cover for recruiting people into their doomsday ideology. and like this isn't idle speculation, the linked discussion is explicitly about how the EA community uses global poverty as a cover to manipulate people into swallowing the AI doomsday narrative.

edit: i do agree that on the whole, charity generally isn't about reorganizing society, however it can and should be used in tandem with campaigns to reorganize society. (another example of synergy between charity and political action that comes to mind is Food Not Bombs [tho i think the black panther breakfast program is overall a much more effective example]).
Where would you say the divide between EA, minus the AI part, and Singer's utilitarianism is? I suppose it's technically true the EA community is mostly rationalists, but Singer still associates with the movement, and GiveWell is not, to my knowledge, a front for the AI cult. Anyway, I don't see how focusing criticism of EA on EA itself is buying into their narrative. I don't believe you can apply utilitarianism very well to something as complex as the real world, and EA ignores the need for social change. These two criticisms would still prevent me from falling into the AI cult. The tumblr posts from those rationalists are definitely very sneer-worthy, and entertaining.
> and GiveWell is not, to my knowledge, a front for the AI cult

well, the Open Philanthropy Project, which was an offshoot of GiveWell, gave $1,782,000 to MIRI. on the other hand, i've heard rumblings that this precipitated a split between OPP and GiveWell, tho details are murky, but i think this shows the degree to which MIRI has saturated the movement. not everyone in the movement is in thrall to MIRI, but at this point, the people who aren't are acting as cat's paws for those who are. given that, i don't think that asking about EA "minus the AI part" is particularly useful.
I think this was intentional: the OPP was split off from GiveWell with the explicit purpose of insulating GiveWell from the doomsday cult. Complaining at an upstream project that a fork is misbehaving is somewhat nonsensical; it's a fork. Sneering at Open Philanthropy for funding OpenAI and MIRI is fair game. Complaining that cultists have taken over the movement is fair game, as long as you judge them as functionaries, not persons (i.e. a person can behave sensibly in their capacity of distributing malaria nets while privately believing that this effort is mostly futile because the end is near and AI will ~~kill us all~~ torture us by dustspecks for all eternity).
i wouldn't say i'm "complaining at" GiveWell. but i think that if they need to resort to such drastic measures to attempt to insulate themselves from the doomsday cult, that's indicative of the degree to which the memetic infestation has spread, and suggests, as i posited, that while "not everyone in the movement is in thrall to MIRI", "at this point, the people who aren't are acting as cat's paws for those who are"
The vast majority of EA funds are not directed to MIRI or equivalent research. Most of it goes to things like mosquito nets and deworming.
> #just acknowledge it's dishonest #to present a movement that's plurality ai risk #as one that's plurality global poverty
> I guess the reason I don't find criticizing rationalists for the AI doomsday thing to be compelling is because Peter Singer, the founder of the EA movement, is not a rationalist. EA isn't a rationalist baby. Yudkowsky named it. Singer thought it was great his ideas were being actioned and enthusiastically joined in. He's still what they have for a philosophical heavyweight.
Plus, he thinks it's morally acceptable for me to murder my child.
>rationalists aren't the type who would consider structural solutions to problems AI *is* the Rationalist structural solution to problems. At least for a lot of them; as for those less inclined towards utopian singularitarianism, their political intuitions seem to draw them towards libertarianism, anarcho-capitalism, or neoreaction as "structural solutions," so it's probably just as well if they want to wash their hands of politics. As an aside, after reading that post I started scrolling through some of Ozy's other recent posts and [this one](https://cptsdcarlosdevil.tumblr.com/post/174054462943/so-theres-this-thing-in-the-lw-community-where) kind of reminded me of some of the criticisms folks in this sub make of rationalism. I wonder how much ideological overlap there is between SneerClub and Ozy-brand rationalism. I was a little surprised by >To be clear, I believe that the case for AI risk can be made on the merits. as I think I recall Ozy expressing more skepticism about AI risk in the past. That was probably at least a couple years ago though, so maybe they changed their mind. Anyway, Ozy's very into social justice and once referred to the SSC-comments community as "dumpster-fire rationalism."
> To be clear, I believe that the case for AI risk can be made on the merits.

I think there is a problematic loss of precision in terminology when people talk about "the case for AI risk". I mean, let's group viewpoints like this:

1. The entire question of AI risk / AI alignment is bogus BS.
2. This is an interesting, eventually important, and nontrivial problem. Should become a respected small sub-sub-field of CS.
3. The end is near!

Accusing MIRI of being doomsday cultists a la (3) makes sense. You can reject (3) while still subscribing to (2), which Ozy appears to do (and I myself do as well). It appears to me that a lot of people subscribe to (1) because they think doomsday cults are silly, and subconsciously skip over option (2). It appears to me that a lot of (3) adherents skip option (2) as well.
> AI is the Rationalist structural solution to problems. I giggled, for real. Very true. Very true. Kinda pathetic, but true.
> As an aside, after reading that post I started scrolling through some of Ozy's other recent posts and this one kind of reminded me of some of the criticisms folks in this sub make of rationalism. nah this sorta "why we suck" that's never actioned is pretty normal for the rationalist subculture. one of the replies gets it right: "A lot of this stuff is empty insight porn that appears smart but is actually incoherent. See the whole conflict-theorist thing."
As a sneer on the side, I cannot stand this writing style where people just go on without punctuation. I don't know whether it's supposed to be casual or convey a sense of urgency, but to me it just looks like breathless, mindless ranting. Only thing I hate more is using italics to *really emphasize* your point because your thoughts are *important*.
it's tumblr poetry it's endemic to the site i use it a lot that it's irritating as hell is probably a feature ymmv ofc
Imho the main risk with charity-centric approaches is a political crowding out effect. Just like all the debates about development aid de facto (not necessarily intentionally) tend to supplant debates about economic imperialism and unequal exchange.
[deleted]
[hey neat, remember when the givewell offshoot open philanthropy project gave $1,782,000 to MIRI.](https://intelligence.org/topcontributors/)

edit: Also I do appreciate that you edited out the part where you called me, in all caps, a FUCKING LIAR. I do hope we can be cordial to each other in our discussion, and i do appreciate your willingness to self-criticize, and to realize and correct oneself when one falls short of respectful discursive norms.

edit:

> May I ask where you're getting the info for accusations like this: "when in fact it is centered on manipulating people into believing in the imaginary AI doomsday"

where i am getting my info is by looking at the linked post in which two rationalists frankly discuss the way that the narrative of global poverty and charity is used to manipulate people into buying into the AI narrative. and also by looking at how EA affiliated groups conduct themselves, such as the Open Philanthropy Project giving $1,782,000 to MIRI.

edit: ooh i see you edited out the question you asked at the end as well. i'll be keeping my answer up as i feel it is relevant to the discussion.
> These mention AI risk, but not as the #1 cause:

why are they mentioning it at all when AI Risk is imaginary nonsense, which shouldn't have any place in what is supposedly a community based around empirical analysis of the merit of charitable groups? also the very linked post, which is, as i should emphasize, a discussion between two insiders in the community, mentions a disconnect between the public PR of EA and their internal operations.

> It sounds like your complaint is about MIRI, not EA.

my complaint is that EA at this point has become hijacked by MIRI, and that the segments of the community which aren't in thrall to AI doomsday hysteria are being used as cat's paws by the segments which are.
[deleted]
> I wouldn't call it manipulation though.

it would seem the Rationalist in the linked post sees things differently. note the tags:

> #and god maybe if the world is going to end in twenty years it's worth it #the people you **manipulate** will thank you when they aren't paperclips #we can't use a functional ea movement if we're all paperclips #this is what every fundamentalist christian thinks but you can't do meta-level reasoning forever #just acknowledge it's dishonest #to present a movement that's plurality ai risk #as one that's plurality global poverty

edit: i appreciate that you edited out the part where you said more than 90% of EA money goes to non AI risk stuff. it's good to be self-aware and to realize and correct oneself when one makes possibly incorrect specific claims.

> If you have arguments that he hasn't considered, can you share them?

the entire concept is utter timecube-level lunacy.
[deleted]
> I stand by the >90% number

prove it.
[deleted]
also, the percentage of EA money which goes toward non-AI risk stuff would be at 100% if EA were anything close to a functional charity evaluation community.
This is exactly the phenomenon that Ozy is complaining about. No, if you just look at the home pages of various EA organizations, you definitely *won't* find very much info about AI risk, just like the homepage of Scientology doesn't say anything about Xenu and body thetans: every cult worth the name knows you don't open with the batshit crazy stuff so you don't scare off the new meat. You have to be embedded in the space for a while and know about the previous decade of history with EA before you know that the majority of them *do* consider AI risk the primary goal, that they have gradually pushed out and silenced EA people who don't care about AI risk, and that the global poverty stuff is mostly considered an on-ramp to get people into AI risk. I do not have a citation for this other than looking at the percentage of people on EA forums who talk about nothing else, but the linked post is two EAs starting their discussion from the baseline assumption that it's true, so...
[deleted]
listen, want_to_want, here's some advice for you.

* don't piss on me and tell me it's raining
* especially don't piss on me and tell me it's raining when i've spotted members of your community discussing how they strategically piss on people and tell them it's raining
* *certainly* don't tell me "actually, over 90% of the liquid pouring on you is non-piss content," because that just raises the question "*why is that number not 100%*"
[deleted]
Except they're ignoring actual, current, real AI risks like the fact that [tech companies are trying to make pre-crime a real thing]( https://www.newscientist.com/article/mg23631464-300-biased-policing-is-made-worse-by-errors-in-pre-crime-algorithms/). I'll take the AI risk thing more seriously if they start looking at real life rather than pretending to be in the plot of Terminator, but they're not going to do that, and I wouldn't be surprised if some of them are making money off stuff like PredPol.
[deleted]
Good thing he's calculated the expected utilons of mitigating current, real AI risks against the hypothetical future ones there. Or maybe he really is just all about the warm fuzzies.
[deleted]
For the sake of argument, say there is actually an "x-risk." (This is ceding a massive amount of territory, but the argument is so weak that you can do that and get away with it.*) Now look at AI risk from things like PredPol which are actively causing damage right now. This is one example, but you can also look at things like [risk from high-speed trading](https://en.wikipedia.org/wiki/2010_Flash_Crash), [court use of AI](https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/), etc. So there is the problem of how many resources we ought to allocate to promote ethical use of AI now, versus heading off skynet in the future. AFAIK, this calculation has not been done and the "x-risk" is generated through jury-rigging the numbers via Pascal's mugging thought experiments. I mean, I think this calculation is ultimately gibberish, but show me where this calculation has been done and I'll at least give them credit for being internally consistent.

*A further caveat: there is definitely an x-risk if an AI, say, could accidentally fire a nuke, but this is not usually what the rationalists are referring to when they talk about x-risks.
[deleted]
> I just did a napkin calculation...

> So it probably beats your "ethical use of AI" proposal without breaking a sweat.

This was my whole point to begin with, you didn't actually do any calculation on the counterfactual ethical use of AI case. Why is ethical use of AI not factored into the utilon calculation? Unlike futuristic "x-risks," the probability of it happening is 100%, because it's already happening.
the risk of the plotline of "i have no mouth and i must scream" or "terminator" coming true is small.
[deleted]
> Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

actually it's nearly identical in every important respect
[deleted]
are you seriously suggesting that Skynet and AM weren't smarter than human intelligence
[deleted]
okay but you are definitively wrong when you say that they don't portray greater than human intelligence. like that was just an overtly incorrect statement. moreover, AM's access to nukes was hardly the limit of its power- most of its power emerged from its own vast intelligence. also "AI with access to nukes does bad shit" is vastly more plausible than "AI gains god-like powers purely through its own intelligence". a terminator-esque scenario is thus actually *more* plausible than the one described by Bostrom.
[deleted]
you think that AI code on someone's *laptop* is going to gain exponentially increasing intelligence and become god
[deleted]
you think that someone's going to accidentally make a program on their laptop that steals their own credit card edit: and uses that credit card to become god
[deleted]
so to clarify: you think that someone's going to accidentally make a program on their laptop that steals their own credit card
[deleted]
how often has it happened before
[deleted]
i asked you a question. how often has it happened before. if the answer is "never" (and it is) then i'd question how exactly you can say with any certainty that it's a "likely scenario"

because you see, your little scenario has 3 stages:

* someone inadvertently writes a program on their laptop which steals their own credit card
* said program uses that credit card to buy computing power to make itself more intelligent
* this leads to it becoming a god

the implausibility here starts high, and only gets higher with each step. why should i take step 3 seriously when neither step 1 nor step 2 has ever occurred, nor shows any signs of being likely to occur. get back to me when step 1 occurs with some kind of frequency. until then, this hypothetical is even less plausible than the sci-fi scenarios it's obviously drawing from.
[deleted]
instrumental convergence in AI is purely hypothetical. saying something seems plausible because it's a function of something which is purely hypothetical isn't a strong argument. moreover, in your scenario, in order to gain human or greater intelligence the program would need to steal the credit card to buy more computational power- but now you're saying that it would need to be a Strong AI- ie, an AI of *human or greater intelligence*- in order to be able to do this. you realize this produces a catch-22 which renders the whole scenario you're describing not only implausible, but impossible. as you dig yourself deeper, your argument is just getting weaker and weaker.
[deleted]
weak response. give me a real argument, not canned platitudes and buzzwords. edit: oh wow completely changed your response. Strong AI means "human or greater intelligence" so don't bullshit me with this "it doesn't need to be very smart at first" nonsense. if it's not "very smart", or more specifically, not at least human intelligence, it's not a Strong AI.
[deleted]
you said that no one has inadvertently written a program that stole their own credit card because we haven't had Strong AI yet. if it needs to be a Strong AI to do this, how would it become a Strong AI in the first place, since buying more computational power is instrumental in it becoming a Strong AI. don't backtrack or try to obfuscate the self-defeating nature of your argument.
[deleted]
the first resource grab is the credit card number. you said "It hasn't happened yet because strong AI hasn't happened yet." you're contradicting yourself.
> But it leads to strong AI. That's why we haven't seen it: we'd be dead. LMAOOOOOOOOOO so any resource-grabbing automatically leads to strong AI, which automatically kills everyone. lmao. edit: ooh, nice move, you deleted the "we'd be dead" bit, in what is- dare i dream it?- perhaps a moment of self-awareness about how ridiculous and shrill your argument is becoming.
[deleted]
to re-iterate: > the first resource grab is the credit card number. you said "It hasn't happened yet because strong AI hasn't happened yet." you're contradicting yourself.
[deleted]
there's absolutely no reason for that to be the case. no reason for a program with a credit card number to immediately buy more computing power, no reason buying more computing power would automatically lead to it reaching full superintelligence status. moreover, "steps 1-3 can occur quickly"- how would you know? this is **all hypothetical** and isn't based on anything that's ever really happened. for all you know the process of reaching super-intelligence through self-improvement could take decades, or be impossible. call me when someone accidentally writes a program that steals their own credit card. until then, this is all fantasy nonsense.
[deleted]
> I agree that the chance of that world isn't very high, but I also don't have strong arguments why it would be very low.

i don't really care whether you have a "strong argument" for why it would be low. do you need a "strong argument" for why the probability of, say, the tooth fairy stealing my credit card, becoming god, and killing everyone would be low? it's always possible to imagine fantasy scenarios which have no relation to any observed actual events and frighten ourselves because we don't yet have arguments to prove they *couldn't* happen, but there's no good reason to spend time on that when we could be spending time on concerns which are empirically quantifiable. which was *supposed* to be what EA was about.
[deleted]
i claimed that the risk of *the plotline of "i have no mouth and i must scream" or "terminator" coming true* is small, and you *agree*. moreover, the onus is on *you* to prove this is in any way likely if you want this cause to be taken seriously as an effective use of charity.
[deleted]
i think the best argument for its implausibility is your own rambling, self-contradictory argument for its plausibility.
[deleted]
one final question, tho i'll forgive you if you don't respond. should we treat the possibility of alien invasion equally seriously, simply because there isn't really an argument as to why it *wouldn't* happen? or should we deal with reality, and not unfalsifiable hypotheticals?
[deleted]
getting good at go is quite a different thing than becoming god, or even merely stealing a credit card, and the idea that one indicates the other is just as speculative as alien invasion or tooth fairies.

edit: actually, i'd like to hone in on the alien invasion example- the tooth fairy example was largely just taking the piss, but the alien invasion example is a closer analogue. plenty of people have scared themselves just as silly about aliens as you have about AI, with similar mathematical flights of fancy- comparing even the most minuscule probability of intelligent life emerging on a planet (and mind you, unlike the possibility of an AI deity, we *know* the possibility of intelligent life emerging on a planet isn't 0, because it already happened here on earth) with the vast size and age of the universe, and concluding that there *must* be intelligent life which emerged somewhere in the universe a million or even a billion years before us, and that if their technological development continued unhindered for that million or billion years, they would be as gods to us. why should we take your mathematical flight of fancy any more seriously than theirs?

edit: also, on what basis do you conclude that a program being good at go is a "warning sign" that a program is going to steal your credit card and become god?
[deleted]
> Fast capability growth in one domain is a warning sign for other domains when there are mathematical similarities between domains. what's the "mathematical similarity" between playing go, and stealing a credit card and becoming god show your work, i want to see these calculations!
[deleted]
nonononono, you said "there are mathematical similarities between domains"- you haven't elaborated on how to mathematically represent playing go, how to mathematically represent stealing a credit card (and becoming god), and you haven't done a comparison between these two equations. show your work!
[deleted]
i looked at the wikipedia page for deep learning, and i didn't see anything that convinced me that being good at go is mathematically similar to becoming god. i don't care if there are mathematical similarities between "many domains", i want to know what the mathematical similarities are between *these* domains (playing go, stealing a credit card and becoming god). show your work! show the equations!
[deleted]
> real world decision making real world decision making=/=becoming god, i should know, i've made all kinds of real world decisions, and yet, am still not god.
[deleted]
you're just giving me canned soundbites and not addressing my main point, which is that there's a massive leap of logic in your assumption that being good at go means being able to become god, and you have as of yet offered no credible argument for why that would be the case.
[deleted]
emphasis on "your best". i didn't ignore it at all, i simply didn't find it particularly convincing. you just keep coming back to "ah, but you see, computers have played games, and done other mundane things. according to this interview on Lesswrong, this is a Warning Sign" sorry but you're gonna have to point to something a little more credible than that. you don't seem to have grappled with just how massive a leap it is to go from, well, really *anything*, to becoming a deity, and haven't given any credible reason for me to think that the relatively meager accomplishments of contemporary weak AI are a "Warning Sign", and instead you just keep insisting that i should be shaking in my boots at the prospect that these meager accomplishments mean AI doomsday is immanent. i'm sorry but i just don't find it credible. being able to learn how to navigate different environments isn't the same as becoming god. and you didn't really answer the question anyways- i asked what the mathematical similarity between playing go and becoming god was, and all you did was point to more examples of computers playing games. the reason for focusing on the phrase "real world decision making" is that all you credibly made an argument for was that the applicable skills for playing go and other games might be applicable for real-world decision-making, which seems not *completely* implausible, but it's still an absolutely massive leap between mere real-world decision making and becoming a deity.
[deleted]
the idea that becoming a god is a "pretty small leap" is so utterly ludicrous that it's hard to know how to respond. also i can't help but notice the goalpost-moving, from the omnipotent superintelligence usually posited in these scenarios, to an AI that's merely "as good as a team of people" whether or not something "sounds scary" to you isn't really the determining factor in whether something should have a place in what's *supposed* to be a movement based around empirically quantifiable charitable donation. this is supposed to be about rational thinking, not acting on fear over hypothetical scenarios. vampires sound scary to me, but donating to a charity to give everyone a wooden stake to protect themselves from a hypothetical vampire singularity doesn't seem like effective altruism to me.
[deleted]
except the narrative presented by MIRI has been about omnipotent AI. this isn't about how *you* think about AI x-risk, it's about how MIRI does. also, why the assumption that being as smart as 100 people would automatically give someone the power to kill everyone on earth. wouldn't someone that smart presumably be smart enough to know that's wrong?
[deleted]
the idea of an AI which myopically tries to achieve one goal with which it was programmed, but also has the kind of autonomy necessary to steal its own creator's credit card, is absurd. and you're not addressing the rest of my post.

also, this thread actually started with you belligerently accusing me of lying (tho, to your credit, you later edited your wording to be less belligerent) despite that there was a linked post demonstrating everything i was saying to be true. and your insistence that we "hug" the question of "whether AI-risk is small" closely is just a pathetic attempt on your part to avoid the much more relevant question of "what the fuck is MIRI doing in a movement *ostensibly* about empirically evaluating charitable causes when they are not a charity and have no empirically evaluatable results they can point to of any value?"
[deleted]
> I'll keep replying only while it's about AI x-risk, sorry.

lol
> It hasn't happened yet because strong AI hasn't happened yet.

> It doesn't need to be very smart at first,

🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔
Why can't it be both? If we are truly trying to make *utilitarian* calculations as to where the money should go, which is what EA claims to do, we should factor them both in: a small probability of AI X-risk multiplied by a small probability that we can do anything about it would yield a minuscule number of utilons gained from donating to AI risk research, and certainly no marginal utilons above making more mosquito nets. And it *is* both: with high probability, none of the current state-of-the-art techniques (neither the deep learning that industry uses nor MIRI's weird-ass formalism) are going to yield strong AI; it will require a new breakthrough that hasn't happened yet (if and when it does, I will be happy to update my beliefs). And the fact that we can't do much about it until the breakthrough comes follows logically from this.
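To make the shape of that comparison concrete, here is a minimal sketch in Python. Every number in it is a made-up placeholder, not an estimate from this thread or from any charity evaluator; the point is only that the answer is driven entirely by the probabilities you assume, which is the "unjustified precision" complaint raised upthread.

```python
# Toy sketch of the expected-value comparison described above.
# All figures are illustrative assumptions; change them and the conclusion flips.

donation = 1_000.0                 # dollars given

# Bednet side: assumed cost to avert one death (placeholder figure).
cost_per_life_nets = 4_000.0
ev_nets = donation / cost_per_life_nets            # expected lives saved

# AI x-risk side: P(doomsday scenario is real) * P(this particular donation
# changes the outcome) * lives at stake. Both probabilities are pure guesses.
p_xrisk_real = 1e-4
p_donation_matters = 1e-8
lives_at_stake = 8e9

ev_ai_risk = p_xrisk_real * p_donation_matters * lives_at_stake

print(f"bednets: ~{ev_nets:.3f} expected lives per ${donation:,.0f}")
print(f"AI risk: ~{ev_ai_risk:.3f} expected lives per ${donation:,.0f}")
```

Under these particular assumptions the bednets win by a wide margin; under different assumed probabilities the AI-risk term dominates, which is exactly why the estimates are doing all the work.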
[deleted]
"At that point it will be too late", in lieu of actual facts about effectiveness, is exactly the kind of emotional scaremongering EA was supposed to rise above. Surely if you're an EA you scoff at the ads with the sad music for charities with low effectiveness, but like I was saying elsewhere in this thread, these principles are quietly dropped whenever AI risk gets involved. "Rigorous evaluation for thee, but not for me" is how AI riskers view EA and that's why I bailed out. > That's doable now. No it's not, that's my point. To the extent they're accomplishing anything at all, MIRI and OpenAI are building a Maginot Line.
> "Rigorous evaluation for thee, but not for me" is how AI riskers view EA and that's why I bailed out. Were you in on the EA stuff? Does it carry through to the rest of the EA world? Have all the Dylan Matthews been driven out by the EA cultists? etc, etc? I'll have to at least mention this stuff in my prospective book, and there's disconcertingly little handy in the way of smoking guns.
[deleted]
::googles logical uncertainty:: ::virtually all results are from MIRI, few if any mentions from anyone outside the robocult:: Top Kek
I sort of thought the whole point of **Effective** Altruism was that only causes/organizations whose efficacy can be independently empirically verified deserve funding. If you're a charity that can't prove you're actually helping some particular cause-- even if the cause is uncontroversially good and important!-- you're not supposed to get EA money. And for every cause except AI risk, they stick to their guns pretty well. But, for some reason (spoiler alert: >!nepotism and funneling money to the ingroup!<), when it comes to AI risk, that dictum immediately goes in the garbage. MIRI has never demonstrated that anything they do is reducing the risk of intelligence explosion, because we still have no idea what the architecture of strong AI will look like. *Even if* you accept that intelligence explosion is a real risk, that is not sufficient to prove that donating to AI risk causes is worthwhile for EA.
[deleted]
OpenPhil [evaluated MIRI's effectiveness in 2016](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support) and found it sorely lacking. I'll give you the money quote if you don't feel like digging through it (emphasis mine):

> Based on that review process, it seems to us that (i) MIRI has made relatively limited progress on the Agent Foundations research agenda so far, and (ii) **this research agenda has little potential to decrease potential risks from advanced AI** in comparison with other research directions that we would consider supporting. We view (ii) as particularly tentative, and some of our advisors thought that versions of MIRI's research direction could have significant value if effectively pursued. In light of (i) and (ii), we elected not to recommend a grant of $1.5 million per year over the next two years...

Which leads into what really irks me about the AI X-riskers: they're a cancer that is gradually metastasizing over all of Effective Altruism, which is a philosophy I theoretically like and support. They perpetually push for proportionally more money from EA's total funding to go to MIRI et al (you are correct that *at the moment* the share of the total EA budget that goes to X-risk is small, but that amount increases every year and I do not have *any* confidence that they will ever stop of their own volition; this is a one-way ratchet), and anytime anyone expresses skepticism about whether this is a worthy cause, they are immediately subjected to immense needling and social pressure to change their mind. (Which works, because they're all in the same social circles in the same 5-mile radius of Berkeley.)

I did not make my Scientology comparison in the parent comment lightly: the Church did the same thing to the IRS when it did not like that it wasn't getting tax-free status, until the IRS finally gave in and called it a religion. It's not and the IRS was right the first time, but they had to make the pressure stop. Hence why OpenPhil revised their opinion a year later in favor of more money for MIRI, with another report that was **much** less rigorous and a lot heavier on the warm fuzzies and "well, it might be important, so we might as well" than the one I linked. Holden's new opinion on AI risk is a lot less rigorous and a lot more pie-in-the-sky than his old one as well.
[deleted]
MIRI had been producing work for many years prior to the OpenPhil report and all of it was considered unsatisfactory; I am deeply skeptical that the logical induction paper alone completely changed OpenPhil's mind "on the merits", especially since it has gone completely unnoticed by the broader AI research community (it has no citations except in other MIRI papers AFAICT, and no one appears to be talking about it outside the Bay Area rat crowd). It is much more likely that the publication of this otherwise unimportant paper gave OpenPhil cover to change their mind because of the aforementioned social pressure.
> If you think it's obviously wrong, state your argument.

only if you'll explain why donating to Santa Claus by putting money in an envelope and addressing it "the north pole" isn't Effective Altruism.