r/SneerClub archives
Surprised the Global Priorities Institute isn't talked about more here. (https://www.reddit.com/r/SneerClub/comments/tsm2r1/surprised_the_global_priorities_institute_isnt/)

I’m quite surprised you guys don’t sneer at the GPI more, it seems to me that they are the philosophical core of what EA and rationalist-adjacent people espouse and preach. Not to mention that a lot of their researchers explicitly state that they post on other sites like LessWrong.

They have papers defending fanaticism, some about AI risk, and others by Nick Bostrom which in and of itself is a pretty big red flag. Plus all the stuff I’ve read there is very big on reducing everything down to math. Some of it seems at least minimally sneerworthy to me.

Do you guys think they’re sneerworthy?

As long as it isn’t another round of bagging on anyone who isn’t a breadtube communist who hates maths and Peter Singer, because the jokes get so bad when people here don’t actually really know what they’re talking about

Nah, this really has nothing to do with that. I'm just saying that there might be stuff worth sneering at, or that we can find content for the sub with GPI.
[deleted]
Look man, I would ask this at badphil so I could get some help with figuring out exactly what's wrong with it, but I can't, and you know exactly why. So I came here and used the appropriate tone for the sub, so if it seems I'm grasping at nettles, it may be because I can't ask this in the appropriate context. If you'd like, we can discuss some of this stuff over DM so I can show my stance more clearly.
Is Bostrom himself even 'sneerworthy'?
For me? Yeah, and more so every day, but he’s not on the semi-official list that isn’t there
he sorta is really, he and Yud are all up in each other's stuff, I'd consider anything Bostrom a solid on-topic personally
I mean yeah, I just haven’t been paying much attention for maybe the last four years, and the shape of the manifold between this sub and who it’s aimed at has changed a bit in that time. I’m certainly happy with having any or most Bostrom stuff I see of late

If so, a ton of mainstream philosophy is also sneerworthy… which isn’t necessarily something I’m opposed to, but still

I just think that within the scope of this sub, these guys should be included.

EA is obviously not a sufficient characteristic in and of itself for sneering

Completely agree with you, I'm not trying to dunk on EA itself, just what the philosophy has evolved into.
So why did you say EA?
I said EA because a lot of what GPI researches has become part of what I consider the 'extended' philosophy they hold. I don't mean the charity bit of EA.
So what are you objecting to that isn’t EA but is EA somehow which you are labelling EA?
Look what I'm objecting to is the sort of stuff that comes out when you look up EA in this sub, not the practice of EA. So stuff like longtermism and AI risk.
That’s a big thing at GPI, yeah?
Seems like it, another user posted one of their recently published papers, and one called 'In Defense of Fanaticism' defends the notion that we should be swayed by infinite utilities, which could lead to stuff like longtermism and AI risk. So yeah, it seems pretty big there.
That’s extremely circumstantial, what *is* wrong with fanaticism?
Probably that it leads to stuff like having to bet on impossibly small probabilities of great outcomes. Practically, that means Pascal's-wager-type scenarios have to be accepted, so we would always need to do whatever said wager says, without contention. There are caveats, obviously, but it's generally a really shitty position to defend IMO. By the way, the sort of stuff the other user posted is pretty much the status quo of research there.
I’m familiar with what they do, some of which is somewhat fantastical, and I know people at least somewhat adjacent to them. I also know personally at least one person with a reasonable profile inside that world who has specifically written *against* stuff that came out of LessWrong after finding out about it *purely by accident* and without knowing of any connection - so the link is tenuous from where I’m standing
To restate my point, I'm not trying to say that it's all sneerworthy, just the fantastical stuff, and I'm also not saying that there aren't people who write against this stuff. Just that we can probably find some sneers there.
I don’t know what’s wrong with academics working on fantastical stuff
Nothing necessarily, doesn't stop fantastical stuff from being fantastical, or the subject of a sneer.
a sneer, which this paper is worthy of, because?
Dude, it seems like you're getting hung up on me mentioning a paper in passing, I just used it as an example, probably a bad one at that.
I'm pressing you to make your case because I think you’ve just found something that looks vaguely sus to you and you don’t want to put in the work of figuring out whether or why it’s bad before you invite everyone else in to party
> practically that means Pascal's wager type scenarios have to be accepted thus we would always need to do whatever said wager says, without contention

This seems to me an unlikely summary of the paper you’re discussing
I'm not trying to summarize the paper, just the general concept. There's a reason why many people treat fanaticism as unpalatable.
So what if they do?

I mean shit, even the name is sneerworthy.

"Global Priorities Institute" sounds like it's the name of a shadowy villainous organization in a cyberpunk game.

I didn’t even know that’s a thing that exists. If you have something sneerworthy that’s in the scope of the sub, then why don’t you post it yourself?

I plan on it, just wanted to know if this may be something sneerworthy

Sure. What is Good can’t be reduced to math

Excuse me for being smooth brain, but what are you saying?
That the big problem I have with "rationalism" is that it attempts to treat questions of values and priorities as if they were straight-up utilitarianism. Utility should only be a rough guide. The logic they use is so transparently self-serving that it could be in textbooks addressing motivated reasoning. Have any of these galaxy brains asked those they would help what they want/need?

These guys do good work. Not a good subject of sneer

Still might fall under 'rationalism' and some part of the scope of this sub, so we may still find something yet.
This is just such a confusing and unhealthy approach. If you find something sneerworthy, post it. Don't go out of your way to find things to get upset about.
I was just curious about whether these guys fell under the scope of the sneer. I stumbled upon them recently and they seemed like the type of stuff we talk about here.

GPI seem pretty good to me. I’m fairly sure you know, but in case others don’t: “fanaticism” in these kinds of contexts means “taking some principle to its logical conclusion even if that ends up in really weird places”. In academia lots of types of fanaticism come up; usually for EA stuff it’s fanaticism about the idea that “expected utility maximisation is always correct even if it takes you to making bets that feel really weird”, e.g. a tiny % chance of winning huge amounts. FWIW they also have papers criticising this kind of fanaticism and trying to see what dropping it as a principle does. In these contexts “fanaticism” has nothing to do with the kinds of charged usage it has irl

Also they explicitly target economics as a big research area, so I don’t think it’s surprising or bad that a lot of their papers try to reduce things down to math - economists always try to build math models for stuff, and most good economists I’ve talked to are pretty upfront that the models are obviously a simplistic approximation of the world, but they can still be useful anyway

Here’s their Twitter: https://twitter.com/GPIOxford

This seems right up this sub’s alley.

Please let’s not have this sub be a catchall for shitting on literally anything which discusses existential risk. Personally, I quite like science fiction movies