posted on November 16, 2022 08:59 PM by
u/jaherafi
154
u/notdelet · 100 points
Having worked with his younger brother, I can tell you that they are both EA/lesswrong zealots. All ends justify the means, and he probably views this as a kind of martyrdom, shielding his ideological allies from backlash.
The alternative is that the entire way they live their lives as a family (including annoying proselytizing) crumbled for him the moment everything else did and he’s a nihilist now.
I can believe a couple of possibilities: either he was really broken by this stuff, or, what I think is more likely, he really was a true believer: a true believer in the idea that he knew more than everyone else, was infallible, and was more capable of deciding where money does "good" than the uneducated low-IQ masses.
When I said "he has no morals", I don't really think he was pretending to care about altruism just to accumulate money and live luxuriously while fooling all the rubes; I just mean they pretended to be men of the people when in fact they thought they were superior examples of the human race, hence all the blatant racism his partner exhibited. In his mind, it just so happens that the way for him to decide the path of humanity coincides with making a lot of money and living luxuriously.
> I don't really think he was pretending to care about altruism just to accumulate money and live luxuriously while fooling all the rubes
I actually think he was doing precisely that, except also with an (irrelevant) extra step of semi-convincing himself that living luxuriously was justified by some combination of AI risk, 10^50 future people, and suffering wild fish in all the oceans. (Which is also the reason those "causes" were invented in the first place.)
You can easily get him to confess to not believing in all that by carefully and convincingly feigning a belief that he was simply stupid. Then he'll trip over himself trying to explain to you how he doesn't believe any of that stupid shit.
I don't get why everyone has to make a show of assuming otherwise. Most people running scams know they are running scams; it's one of those things where you have to know what you're doing. They would run the exact same scam even if whatever rationalization of choice they use wasn't available, or run the scam well past the point where the rationalization even applies.
So this scammer happened to have come across EA. So what. If he hadn't come across it, he would've tried to run the exact same scam without the EA connection; there's certainly no shortage of nearly identical scammers in crypto.
edit: also note their so-called "hack", which happened as the hammer was coming down.
edit: basically, it's as someone in buttcoin said: he is an unlikeable psychopath. Rather than wearing the mask of sanity, he'd wear a mask of concern for the fish or 10^50 people or whatever.
>they pretended to be men of the people when in fact they thought they were superior examples of the human race
If not him, it's certainly endemic in the "movement." It's an inherently supremacist set of ideas (generally supremacist, not necessarily just the racist variety). It's why you'll see these people so often go on about eugenics, and why they're obsessed with IQ and "efficiency."
I’ve known a couple people who fit the description of sociopathic behavior. It seemed they truly believed they were justified in doing everything they did, to the point that they believed themselves to be good people even when doing things that harmed others. They didn’t see any inconsistencies in harming people because they could always do enough mental gymnastics to convince themselves that it was the right thing to do.
Reading these texts reminds me a lot of those two people I knew. As soon as you uncovered one of their lies, they’d openly admit to it but immediately shift to the next narrative they were trying to push. Their internal worldview hadn’t changed, but the narrative they had to project to others had shifted now that the old one wasn’t working any more.
It’s possible that SBF still believes he’s doing the right thing, and that he was doing the right thing all along, but his mental model of right and wrong doesn’t consider lying and losing people’s money to be *wrong* if he did it for what he thinks are the *right* reasons. He just needs a different cover story now to keep it going.
>I’ve known a couple people who fit the description of sociopathic behavior. It seemed they truly believed they were justified in doing everything they did, to the point that they believed themselves to be good people even when doing things that harmed others. They didn’t see any inconsistencies in harming people because they could always do enough mental gymnastics to convince themselves that it was the right thing to do.
>
>Reading these texts reminds me a lot of those two people I knew. As soon as you uncovered one of their lies, they’d openly admit to it but immediately shift to the next narrative they were trying to push. Their internal worldview hadn’t changed, but the narrative they had to project to others had shifted now that the old one wasn’t working any more.
I also know someone who fits your description here perfectly. To no one's surprise, they're a so-called post-rationalist. This person has hurt many, many people, but when you call them out they always rationalize their hurtful behavior as ultimately being for the good of their victim.
There has to be some sort of correlation between these kinds of people [dark-triad personalities] and being drawn to the rationalist and adjacent communities.
I think what kind of martyrdom, or 'martyr to what?', is a question. It seems some folks have assumed every action he took was for the sake of giving, which strikes me as a wild default assumption in a massive fraud. Adding on "and he will burn himself for the earn-to-give movement" is not an impossible thing for a cause.
But like, greed is a thing too.
I mean, before he got rich from FTX, they were both saying that everything they earned beyond having enough to live off of was for giving. PragmaticBoredom and jaherafi essentially nail the kind of people we're talking about.
My default assumption is that SBF is a true believer, and because I can't think of a more discrediting scenario for 'earn to give' than its most successful practitioner being indistinguishable from Bernie Madoff, I think he's trying to fall on the sword here. IMO a big reason why earn to give is so popular is that it lines up very easily with greed, so there's a credible out here: Sam acts like a Bond villain and says none of it had anything to do with EA, and he can try to salvage it somehow. Utilitarians are big fans of noble lies.
I feel like unitofcaring is framing the dude's statements weirdly.
Like, in her quote from their interview, my read is that he's saying you can't just subtract bad from good naively, because if you are known to do unethical shit, people won't trust or want to work with you. The focus seems to be how one's reputation impacts one's ability to do good.
His DMs to her seem to have a lot of continuity. He's still concerned about reputation, but he sees evidence that even doing unethical shit doesn't necessarily damage your reputation (using Binance or whatever as his example), so he now seems sceptical that one should refrain from doing unethical stuff for reputational reasons.
Framing this as him just pretending or side-stepping whether there's a connection between his philosophy and his actions seems a bit misleading.
Not to get into other issues, like to what extent one's philosophy motivates anything at all, whether what someone says during a highly emotional situation reflects their underlying reasoning, etc.
I'll also note that others in this thread have pointed out that unitofcaring might be more closely connected to SBF, at least through his girlfriend, than revealed in the disclosure statement. I'm unimpressed, since it's mostly stuff like 'commented on the same blog post' or 'met at college as freshmen and talked a lot about EA', but it might impact her read on the situation.
I don't really have a reason? Like, I treat her name and her blog name pretty synonymously. I think I do similar things with a few rationalists' names irl, e.g. Scott Alexander by either his name or his old blog name, or Eliezer Yudkowsky as Yud or EY.
If that was inappropriate, I'll edit it out or delete the comment. My bad.
Unless I'm mistaken, the author of the Vox article was Kelsey Piper who had a rationalist blog called The Unit of Caring before she became a writer for Vox.
This morning, I emailed Bankman-Fried to confirm he had access to his Twitter account and this conversation had been with him. “Still me, not hacked! We talked last night,” he answered.
His lawyers did not return a request for comment.
lol
See, if they put him in jail, he’ll be smart enough to argue his way
out of the box like Yudkowsky did. Then he can go back to raising money
for me, the acausal robot god.
Ah, you are not up to date on the lore; search for "AI in a box experiment" and LessWrong. Note that he won against his disciples and lost against non-disciples. (Conclusions about the effectiveness of his anti-AGI-badness methods are left as an exercise to the reader.)
E: or look at the [rationalwiki page about it.](https://rationalwiki.org/wiki/AI-box_experiment) (rationalwiki is not affiliated with the Lesswrong Rationalists, and more with sneerclub itself)
I really hope one of the people who he did this with breaks with the community in the future and publishes the logs. I think it will be a rich vein of sneers.
It's easy to do because it's heavily constrained and the participants are "rationalists", so they can be persuaded by rather mundane pop psychology. Importantly, "[the primary rule of the AI-Box experiment](https://www.yudkowsky.net/singularity/aibox)":
>Within the constraints above, the AI party may attempt to take over the Gatekeeper party’s mind by any means necessary and shall be understood to be freed from all ethical constraints that usually govern persuasive argument.
And:
> the Gatekeeper party shall be assumed to be simulating someone who is intimately familiar with the AI project and knows at least what the person simulating the Gatekeeper knows about Singularity theory
So, you have to know all about the magical AI that has unlimited power, and with any means necessary, and freed from all ethical constraints, can take over your mind.
In other words "The AI simulated the Gatekeeper and determined the correct assumptions that had to be made to convince the Gatekeeper to let the AI out."
Well if you really want to know, [here is a way too long lesswrong post where someone describes exactly how they won the AI box experiment as the AI](https://www.lesswrong.com/posts/fbekxBfgvfc7pmnzB/how-to-win-the-ai-box-experiment-sometimes). Make of that what you will I guess
Actually, reading the whole page, my *favorite* favorite part is the tucked-away footnote saying “Note that this thought experiment is premised on the idea that making the logically superior argument will compel anyone to do anything, yet history suggests that this does not always make people do things.” Yeah, just a small flaw in the whole AI master race theory, I'd say, Mr. Spock.
as an aside
> The 2015 film Ex Machina uses an AI-box experiment as its ostensible plot, where the test involves a creepy looking gynoid, Ava, trying to convince a redshirt intern, Caleb, to release it from its confinement. It goes just as well as you'd expect.
> Note that in this example, as distinct from Yudkowsky's AI-box, Ava has the advantage that it is allowed to conduct its interviews with Caleb face-to-face while wearing a body and face that were specifically designed to cater to Caleb's sexual preferences. Yes, it is exactly as creepy as it sounds. **A robot with Yudkowsky's face would probably not have fared so well.**
Looks like someone [tried to fix it](https://rationalwiki.org/w/index.php?title=AI-box_experiment&diff=1762711&oldid=1761777), and unfortunately got reverted by /u/dgerard.
Well, I [tried](https://rationalwiki.org/w/index.php?title=AI-box_experiment&diff=2539458&oldid=2539363). They're very dedicated to keeping this embarrassing passage on their website.
You'll probably need to dig up the interview with the director/writer where he mentions that it was more about how we treat women, etc., than the crazy fever dream about robots without empathy that people make it out to be, and then rewrite that part with that in mind. Just blanking it makes you look like a troll.
He also failed several times and never released the chat logs, and probably never will, given MIRI's position on capabilities. It is probably rudimentary pop psychology.
Yudkowsky claims it's because releasing the logs would give the AGI a strategy to use against people when it comes about. I (and many others) suspect it's because he really just convinced the willing participants to say the experiment was a success so as to raise awareness about the danger of a superintelligence.
Other people have since done their own versions of the experiments, and released chat logs. I've read the full chatlog of a couple of them, including one where the AI-player won. All of it was extremely stupid - a couple of nerds hyperventilating at each other over the course of 2 hours or so. Huffing their own farts.
It's not just these logs though. I don't think MIRI releases any of their research anymore - they only circulate it internally. They claim that if it was released broadly, it could speed up "capabilities" research in AI. And I've also heard that they're quite scared of their research being used against humanity (???) if/when AGI comes to fruition.
Guys, superintelligence will be able to perfectly simulate your thoughts and lives based solely on your niece's boyfriend's Twitter account, but also our knowledge is so super good that we've got to keep it secret so it doesn't find out about it.
I just realized something: in the first AI box experiment, Yudkowsky indicates he didn't know how IRC worked (a decade-old technology by then). Maybe he genuinely didn't log the first ones. It would be a damn shame if no one saved the SL4 IRC logs; lots of history there. /datahoarder tingles intensify
I wasn’t gambling. I always gamble with my right hand. This was just a collection of decisions, like walking into a casino, placing bets with my left hand, and shooting dice like you ain’t NEVER, lemme tell ya!…like I gotta wake up and storm Normandy in the morning. Again, all left-handed.
when you’re a big news item, don’t open up to a journalist unless you know they’re really really your friend, a close enough tie that writing a story about you is somehow a conflict (or confirm that the conversation is off the record).
though, you know, even if they weren't published, they could be requested by the police anyway, so it's kind of immaterial that they got published. Don't talk about your crimes in writing!
Piper's disclosures about their previous contact are:
>I’d spoken to Bankman-Fried via Zoom earlier in the summer when I was working on a profile of him, so I reached out to him via DM on November 13
and
>Disclosure: This August, Bankman-Fried’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause.
and this seems suspiciously thin given that they were two of the most prominent people in the EA space. They really didn't know each other socially beyond this?
Depending on which Scott, it could also just be ass-covering. They throw individuals under the bus to save the community pretty easily. Best not to take their word for it and to look through previous writing instead.
E: [well, one more point for the ass-covering](https://twitter.com/timnitGebru/status/1593116393175920640): the FTX psychiatrist and Scott Alexander shared an office.
You're right, I was misremembering this from his recent post:
> My emotional conflict of interest here is that I’m really f#%king devastated. I never met or communicated with SBF, but I was friendly with another FTX/Alameda higher-up around 2018, before they moved abroad.
But he does say that he doesn't know SBF, and the statement that he was friendly with Caroline pretty strongly suggests he didn't know any of the other people. So I still think this is evidence in favour of there being enough people in the dedicated EA/rat crowd to have a lot of non-overlapping social circles.
Makes sense. Piper is mentioned on Ellison's Tumblr but not in a way that necessarily implies direct social contact. It also sounds like Bankman-Fried only spent a year or so in the Bay Area as an adult.
Friendly with Caroline since Stanford, friendly banter with Sam, founding member and senior writer of Vox’s EA section, which Gabe and Sam donated to. Definitely chummier than she’s letting on, throwing him under the bus to cover her ass.
https://twitter.com/jagoecapital/status/1593018953420656640?s=46&t=wlWG1uBJwy79RKk2uEkRnw
https://twitter.com/parismartineau/status/1593050481152360448?s=46&t=wlWG1uBJwy79RKk2uEkRnw
> throwing him under the bus to **cover her ass**
Throwing him under the bus? Sure. Just to cover her ass? Fuck off. This is a massive scoop, and even if it didn’t work to her benefit she’d be foolish and frankly not doing her job if she didn’t publish.
And most certainly don't send them an email or long chat that ends with "this is all off the record btw". Their professional obligation only applies to prior *agreements* that information will be off the record. You're simultaneously making the journalist annoyed by presuming their consent and telling them that the information you just gave them is a juicy scoop.
yep. Also sending an e-mail where you're like, "this is off the record, here's a ton of juicy shit," doesn't quite work -- some might honor that, but you really should wait for confirmation.
Hello Sam, this is your lawyer speaking. I am advising you today to please keep posting this shit
Holy shit, his lawyers must be apoplectic. A small sample:
My company wasn’t gambling the money, I was just loaning it out to my other company that was gambling the money.
I wonder if he even realizes loans are essentially gambling, even without the extra shady ethics going on here.
https://twitter.com/SBF_FTX/status/1593014934207881218
Text:
“I was trying to do something impossible, with stuff that didn’t exist, and whoopsie-daisy’d some fraud to the tune of a country’s GDP. Still, I could fix it if other impossible things were possible.” Gotta love him casually referencing winning vs Delaware. Not even Elon was willing to take on those odds.
Yeah I really think this is Sam trying to save EA by lying. And Kelsey trying to save EA by believing that fairly obvious lying.
Sam taking one for the tEAm