posted on March 07, 2023 05:37 PM by u/PolyamorousNephandus
89
u/SuppaCoup63
Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.
EDIT: finally read it. Very solid article overall. I think it is still too credulous when it comes to the scientific validity of “AI safety”, though. Things like this deserve more elaboration:

> Larissa Hesketh-Rowe […] says she was never clear how someone could tell their work was making AI safer.
Like many religions, the core tenets of Rationalism include beliefs about the supernatural. It’s hard to tell if “AI safety” work is productive because it consists of diagnosing and solving problems in machines that don’t actually exist and which, depending on your definition of “superintelligent”, can not exist.
That might seem like a lesser or separate problem from things like sex abuse, but I think these could be related issues. If you’re a professional computer scientist who believes impossible things about how computers work then maybe you’re going to have other beliefs that are untethered from reality too. Respecting other people’s boundaries necessarily requires identifying and connecting with a reality that is separate from your own imagination.
It’s the same problem with EA, too - they treat everything as abstractions and in doing so they become disconnected from reality.
I feel like the ending still takes Yudkowsky and Bostrom’s specific AI concerns too seriously, but this is probably the best one-stop summary of these weirdos and the specific ways they broke their brains.
> At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.

…is this Michael Vassar, bragging about Quirrelmort being based on him? I wonder if he didn’t realize Quirrel was (still) Voldemort and super evil, or if he knew but didn’t see why that’s a bad thing to be proud of.
I really like the ending. Conceptualizing modern AI safety researchers as a human example of a poorly defined paperclip maximizer is perfect. It’s both a super clear metaphor, and one they would fully understand.
If a tree falls in the forest but you didn't hear it because you only listen to sounds that come from other Rationalists, what is the tree's epistemic status?
I like how their response to an article that accuses them of myopia and insularity is to deliberately retreat into myopia and insularity.
>While I am generally interested in justice around these parts, I generally buy the maxim that if the news is important, I will hear they key info in it directly from friends (this was true both for covid and for Russia-nukes stuff), and that otherwise the news media spend enough effort to do narrative-control that I'd much rather not even read the media's account of things.
anyone with the barest amount of either self-awareness or media literacy would see the gigantic problems with this approach, but...
> FWIW, I'm a female AI alignment researcher and I never experienced anything even remotely adjacent to sexual misconduct in this community. (To be fair, it might be because I'm not young and attractive; more likely the Bloomberg article is just extremely biased.)
lmao
> the Bloomberg article is just extremely biased
what does that even mean? that all women interviewed are straight up lying?
Like, what's more likely: a male-dominated community full of socially awkward nerds being full of sexual misconduct, or women just straight up making shit up?
cult-like levels of cognitive dissonance going on
“[Bankman-Fried] who invested close to 00 million in related causes before dismissing effective altruism as a dodge once his business fell apart.”

Good gravy. I knew he’d invested a lot, but that is really silly money for a group that has produced close-to-nothing. No wonder they’re buying castles.
Damn. Dill and Ziz are both far from reliable sources so I’m skeptical…but this is nuts if true. I knew Eric a little, and it would be incredibly tragic if somebody triggered his latent schizophrenia on purpose.
hats off to those responsible for the decision to release this on the eve of international women’s day (a fact no doubt lost on the ratsphere)

…and, how deliciously ironic, they keep talking about bayes and base rates of abuse, as though “holup, it’s not like our abuse is significantly worse than the rest of the world generally” is exculpatory even if true. keep digging, bros
HPMOR scared me off within a few chapters by the author’s need to keep sprinkling rape references into his fanfic based on a relatively tame middle-grade novel series. My friend who was promoting HPMOR defended it by saying that the story makes it clear that rape is BAD.

I also picked up on a subtext that good men are rational and good women think like rational men. Not that “rational people have common attributes regardless of their gender” or that “rationality is good in any gender,” but that “rationality is masculine and good.”

The Gnostic Gospel of Thomas, rejected as an authoritative text in the codified New Testament, includes this verse:

> Simon Peter said to him, “Let Mary leave us, for women are not worthy of life.” Jesus said, “I myself shall lead her in order to make her male, so that she too may become a living spirit resembling you males. For every woman who will make herself male will enter the kingdom of heaven.”

I swear, Rationalists have similar assumptions about gendered cognition.
Admittedly I have a hard time taking AI doomsday concerns seriously but I really can’t imagine someone or multiple someones getting wound up enough about it to have a psychotic break – that’s got to be mostly the drugs’ doing right??
If they’re only talking/thinking about doomsday all day, living together, working together, maybe not taking breaks or eating/sleeping enough, I could easily see this happening to vulnerable people even without drugs.
I have a stressful tech career and I can absolutely put myself in a pretty dark place emotionally if I don’t take care of myself. When work is your hobby/passion you can get really in your head about it, and it’s even harder if your friends are your coworkers and equally passionate.
That being said, they’re definitely all on drugs.
In my experience on places like /r/askphilosophy, it's not super rare to find people, sometimes quite up front about being diagnosed with an anxiety disorder, who will grab onto some skeptical and/or doomsdayish speculative theory which becomes a fixation of their anxiety. Some person is fixed on the possibility of being a Boltzmann brain, another that they die every time they sleep a la teletransport paradox, etc. Sometimes fixation on such an idea is not the cause of a mental crisis but an expression of one.
holy FUCK
For those without a bloomberg subscription: https://archive.ph/sLihW
this article goes in fucking hard, and it doesn’t even get to the race and IQ stuff
[deleted]
This is a pretty good sneer and excellent summary of Yudkowsky’s contributions.
I think you mean “co-favorite Basilisk”.
From a rationalist: https://fredwynne.medium.com/an-open-letter-to-vitalik-buterin-ce4681a7dbe
I really hope this stuff gets followed up on.