r/SneerClub archives
Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed (https://archive.fo/MOD1w)

SBF’s willingness to commit crimes in order to funnel money towards EA was a feature, not a bug.

Get a load of this quote from the article:

> “both Will and Nick had significant amounts of evidence that Sam was not ethically good. That puts you in really murky territory: what are you supposed to do with that information?”

Apparently it is only feasible to use rationality for informing ethical decisions when you're not at risk of concluding that you need to turn down large amounts of money.
One thing the article doesn't mention is that MacAskill and his band of EAs resigned from the FTX fund *immediately* after the allegations against SBF were made public. If they had merely harbored doubts about SBF, they would have hesitated at least a little. Maybe they'd have waited until there was an investigation, maybe they'd have hoped that even if there was some fraud, the majority of the money was still earned in good faith. But no, the whole group resigned unanimously on the same day. They knew the tap had run dry and that the whole thing was irreparably rotten. They were just waiting until it became public to pull the plug.
I bet their excuse is the same one that all corrupt people give: "just because bad people gave us money doesn't mean that they influenced our decisions". Given that they are hyper-rational masters of their own destiny, it would have been unethical *not* to accept money from bad people. For people who are such fanboys of the efficient markets hypothesis, you'd think they'd see the contradiction here. If money didn't have a powerful effect on people's behavior then *why would we use it as compensation for employment?* Of course MacAskill might retort that clearly *he* is not thusly influenced, given that he has thus far avoided participating in any semblance of real employment. Checkmate, normies.
[deleted]
It's incredible to me that EAs seem to never question the assumption that money is their most powerful tool for accomplishing their goals. I assume this is a reflection of their greed and/or limited imaginations.
As Yudkowsky said and Kelsey Piper named herself after, [money is the unit of caring!](https://www.lesswrong.com/posts/ZpDnRCeef2CLEFeKM/money-the-unit-of-caring)

Let's look at the facts, and tell me you wouldn’t have made the same choice as EA did.

Everything I read about MacAskill makes him come across as a conniving, two-faced pos with a massive self-promotion budget.

I’m looking forward to the day when someone publishes an exposé of the skeletons in his closet.

[deleted]
A good chunk of Oxford's faculty of philosophy is either transhumanist or effective altruist, so ... probably not that much.
[deleted]
Eh, you'll have to be more specific about which people you mean. I hung out with the folks at FHI a few years before MacAskill, and even though I've never met him, I can guarantee that he's the product of the environment there, not some outlier.
> This isn’t Wall Street where people have loyalty, these people are academics.

lol ["Academic politics is the most vicious and bitter form of politics, because the stakes are so low."](https://en.wikipedia.org/wiki/Sayre%27s_law)
I mean, Boris Johnson went to Oxford, so the ethical bar for alumni is very, very low.
This Time article strikes me as the kind you publish when you're still trying to get your sourcing lined up for your *really* devastating article.
I hope so!

> Three former Alameda employees told TIME he had inappropriate romantic relationships with his subordinates.

my surprised face: 😐

[deleted]

I think the people who are buying castles are into utilitarianism as a personal preference - as many utils for themselves as possible. The rest is window dressing for the grift.
"There is no 'We' in utilitarianism' but there are four I's, since it's all about me me me"
It's not very far from eugenics at all, it's logically implied: if purifying the human gene pool of harmful traits increases the chance the human species survives indefinitely into the far future, then you're oppressing or killing mere millions to secure the future of trillions or quadrillions.
You don't understand - Bostrom was just using *a slur as a hypothetical*!!!!!!!!!!!!!!!!
We were somewhere around the 8th exclamation mark when the sneer began to take hold.
Sorry I'm using GPT4 to write sneers and it's a little enthusiastic
It was in the year 14 BB (Before Basilisk) that [the hypothetical](https://www.indy100.com/viral/chatgpt-elon-musk-racial-slur) 'what if we must say slur to stop nukes' was uttered, destroying the woke mindset forever.
Eh, I don't think it's even that deep. They would have all these problems regardless of underlying moral philosophies. I think the *core* problem is simply that *talking* about rationality doesn't actually *make* you rational.

The very early premises of "rationalism" were pretty simple: a lot of "thinking" is about making predictions; the human brain is wired in a way that makes our predictions low-accuracy in certain situations; it's possible to increase awareness of that, and use different approaches to try to improve accuracy in those situations. And they even successfully identified a number of pretty good ways to identify those "problem situations" and mitigation strategies. (No, they didn't necessarily *invent* them, but they at least demonstrated awareness of them.)

The problem was that stating those approaches doesn't actually "patch" your brain to use them. It's an ongoing struggle, which they generally failed at (and even failed to acknowledge). Yudkowsky et al. are the equivalent of fitness coaches who developed a workout routine and then didn't follow it themselves. They fell into traps that they had, themselves, earlier acknowledged as traps. And now they're so deep in those traps they can't even see it.
> And they even successfully identified a number of pretty good ways to identify those "problem situations" and mitigation strategies.

They didn't. The basics of LessWrong, including the nuts-and-bolts stuff about how to reason, are deeply confused. It isn't an accident that people consistently get whacky decisions by following this stuff; the ideas it presents about how to make decisions are thoroughly whacky. Anyone interested in improving their thinking ought to work through a Critical Thinking 101 textbook, of the kind regularly taught to freshmen at university, in which they will find more of value than the sum total of everything LessWrong has produced on reasoning, and without any of the bullshit.

> Yudkowsky et al. are the equivalent of fitness coaches who developed a workout routine and then didn't follow it themselves.

No, they're the equivalent of people who tell you they're fitness coaches, but when you go to them for a fitness plan they tell you to just put crystals under your pillow.
[deleted]
Which system? Trying to make better, more accurate predictions? That seems to work really well. Again, I'm not talking about the later things like longtermism. I'm talking about the parts of early rationalism that point out things like ways to identify your own biases and correct for them, the danger of confirmation bias, etc.
[deleted]
I think you misread their comment. The "fitness coach" metaphor is fairly obviously referring to the "pretty good... mitigation strategies" for addressing some cognitive biases (that are mentioned toward the start of the comment). I think you'd need to do a lot more to justify how these "render everyone socially inoperable outside the cult" (especially given that, as said comment notes, most of them weren't invented by modern rationalists).
[deleted]
I don't understand, in that case, why you accused the person you replied to of a "duck and feint"; the things mentioned in their second comment ('trying to make better, more accurate predictions'; 'the parts of early rationalism that point out... your own biases and [ways] to correct for them, the danger of confirmation bias') are the same things (to my mind, fairly obviously) as were being talked about in their first comment. Also, in that case:

> I think you'd need to do a lot more to justify how these "render everyone socially inoperable outside the cult"

Like, are you really arguing that strategies for mitigating cognitive biases inherently limit your social abilities and/or draw you into the EA/etc fold? Am I missing something?
[deleted]
Since this is apparently a linguistic point that needs to be made: saying I "don't understand" isn't *really* saying that I'm confused. It is, in fact, an idiomatic and (albeit only slightly) more polite way of saying that I think your point was bad and you were wrong to make it, and an invitation for you to clarify or make a better one. I'm replying to your comments because I think you made a bad argument on the Internet, and I both a) value good argument (and try to live up to that ideal, even if I certainly don't always achieve it) and b) enjoy arguing with people on the Internet, even if I know it usually doesn't lead to much. You're not obligated to explain anything, I suppose, if you're not interested in defending your point.

As for "directly accus[ing]" you: In my first comment, I wasn't accusing you of anything; I genuinely thought you might be interested in substantiating your point with actual evidence. After your reply contained none, I then critiqued your argument further (because at no point did the person you replied to perform a "duck and feint," notwithstanding your ridiculous most-recent reply that one shouldn't read that as an accusation on your part), and I re-raised a critique I'd made of your argument because you didn't address it.

In short, I originally merely thought your argument was both wrong and badly made, and I raised specific points against it. Rather than replying to those points in any way, you've instead said I took a "vicious" reading that's "motivating" me and that you think I think you've committed "bad behavior." I think that speaks for itself.

edit: upon rereading, this comment came across as really condescending. apologies for that. that being said, I still stand by the content of it.
[deleted]
"Somebody honest"? Jesus. My tone may have been increasingly sarcastic, but I've done my best to argue coherently and in good faith here. Since we're apparently done with that: Does snarkiness constitute 'dishonesty,' in your mind, or is it more of a generalized 'anyone-who-disagrees with me' sort of thing? I mean, for heaven's sake. You're clearly an intelligent person. I don't understand why "please give any actual reasons for your claim" was apparently too much to ask.
> and in good faith here

then you're very bad at it and should consider a different approach
Obviously I discarded it for that comment, which I specifically note so I don't give the opposite impression ("since we're apparently done with that"). That being said, as an outside observer -- seriously, am I going crazy? Going back to the original comment that started this, they seemingly misread the comment they replied to, and pulled a claim from their ass, and when I asked them to elaborate they said instead that I was "missing a lot indeed" and implied I had bad motives. What?
[deleted]
I wasn't playing dumb, jackass, I was giving you the benefit of the doubt to explain your ass-pull of a criticism. I read your original comment (for reasons I explained) as responding to something that *wasn't present* in the comment you responded to. I guess this isn't a place where we challenge people for making shit up? Sounds just like the so-called rationalists.
[deleted]
Yeah, excuuuuuse me for butting in on that *private conversation* you were having on the online discussion website.
[deleted]
The first person gave reasons for what they said (and I'll note that I think "successful system" maybe has connotations that overstate what they claimed, but anyways). You didn't. You then went on to say that you felt like they were performing a "duck and feint" by... staying consistent to what they'd said, the first time? Seriously, I don't even necessarily disagree with your actual take on the issue, I have no real opinion on that. I disagree with you, or anyone else, responding to a well-reasoned argument with fuck-all besides the equivalent of "I think you're wrong :) :)" and acting like that's a reasonable response, or saying it's a "duck and feint" to try to further engage with said vacuous statement.
[deleted]
Really? Did you really read "I don't understand why you accused X of Y because [X didn't say Y]" and think, "oh, huh, this guy must not have any problem with the argument, they just sound genuinely confused!" I really feel it's reasonable for the "your point is bad and you are wrong" to be left implied (at least until they responded by accusing me of having "bad motives" and such, but y'know). Like, yeah, obviously the section you quoted was me being assholish, I'm not denying that. But that's because the person I was arguing with, as far as I can tell, *did* have (or at least chose to express) the above reaction (quote, *"that you “don’t understand” a phrase I used in a conversation with somebody else is hardly an accusation which reflects poorly on me"*). Which is kind of unbelievable to me, to be honest.
It's not an intentional duck and feint, I'm just not sure what you would think is so bad about their original ideas, so I assumed you were talking about later wacky stuff. A few personally notable examples for me - [belief in belief](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief), the [taboo](https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words), the [affective death spiral](https://www.lesswrong.com/posts/XrzQW69HpidzvBxGr/affective-death-spirals). Notably, the entire wacky direction of modern "rationalism" can, I think, be described exactly by that last one.
"Belief in belief" isn't good. It doesn't talk explicitly about the robot apocalypse or any of that nonsense, but you can easily see the wackiness of Rationalism in it. *It consists entirely of Eliezer Yudkowsky making things up about human psychology in the context of imaginary thought experiments.* That's pretty much his entire schtick, and it's never changed: he conjures thought experiments and then makes up bullshit about them. It's exactly the same heuristic that he uses to decide that we're all going to get killed by Skynet.
The problem goes far deeper, since rationalism is ultimately self-defeating due to the is-ought problem. Rationalism can be regarded as what happens when a mf loves empiricism but never reads Hume.

I can understand the people who are under investigation staying quiet, but Karnofsky not responding does not make him look good.

I think most sensible people realized SBF was a scammer as soon as he got involved with crypto. Anyone who’s still into Bitcoin after 2015 or so is either an idiot or a grifter or both

Someone needs to adapt *The Secret History* but as an EA farce rather than a classics farce.

[*The Basilisk Murders*](https://www.reddit.com/r/SneerClub/comments/77dxyo/just_out_the_basilisk_murders_by_andrew_hickey/)?
Ooooooooh thank you for the rec!
everyone who comments here should read it basically, it's about our very good friends

game recognise game

(and here, game = grift)

Ha! The kicker to this is absolutely perfect; chef’s kiss