r/SneerClub archives
Is it possible that Yud is overestimating his own importance? (https://i.redd.it/sm9vycuz4ch91.png)

Underestimating it imo. Man is too modest. Like, he should give himself proper credit for writing a harry potter fanfic.

A harry potter fanfic that's the milk-before-meat onramp onto some blog posts about cognitive biases, that's the onramp onto some blog posts about "correct language" and "correct thought", that's the onramp onto unsubstantiated claims about exponential general growth in ML (...that's the onramp onto "give me x% of your income") (edit: "... so Satan doesn't kill you")

There is a fascinating potential take on recent Yud – that he truly believes he, personally, has fucked up AI alignment and doomed humanity, and he’s constantly grappling with reconciling this immensely narcissistic personal failure with the hedonistic desire to just write more fanfic for his math pets.

There is the temptation to hope that his full-on “the end is nigh” shtick of the last few months is real, and that it’s making it impossible for him to enjoy the lifestyle he’s carved out of all of the bullshit that led him to this point over the years
He's literally Ted Faro lmao
Oh no not hedonism noooo Wtf is wrong with writing stories why would you describe writing stories like that

at the center of my own observables

On TikTok, they call this “main character syndrome.”

That's also what they call it in parts of Twitter that don't feel perpetually compelled to reinvent the linguistic wheel because they think that's the same thing as being smart.
True, not to mention that redefining language is a technique cults use to separate their followers from the rest of the world. It's also often used to introduce cliches and teach thought-stopping techniques.
Which is ironic, since Yud actually talked in his early blog posts about thought-terminating cliches being bad, and about how thinking through concepts without specific language and only narrowing down terminology later on is useful for the "Rationalist project". All while using Capital Letters, thou shalt / shalt not commandments, and communal social norms to guide the cognition of his readership, including in the very same blog posts where he talks about how that kind of thing doesn't lead to the sort of thinking you need if you want to be "rational".

The only other person I can think of with the level of self-regard required to say “I happened” is a fictional serial killer:

“Nothing happened to me, Officer Starling. I happened. You can’t reduce me to a set of influences.” – Hannibal Lecter

That was my first thought upon reading "I happened".
Has he mentioned Hannibal Lecter before? With the AI escape experiments he did, I'm pretty sure the conceit of genius he draws upon as an idealized self-image/public persona has a lot of Hannibal mixed in with the von Neumann.

the actual answer is Peter Thiel, with Yud as his first avatar

Totally different avatar, but Yudkowsky has proven mastery of the four primordial elements of the Bay Area:
- Expected utility
- Thielbucks
- Garbage Twitter takes
- Berkeley undergrads
I had a girlfriend a few years ago who was an undergrad at Cal, and she left me for a guy who woke up at 5am every day to meditate and subsisted entirely off of Soylent. So based on that single anecdote and absolutely nothing else this is surprisingly accurate.
That fucking sucks man. Hopefully now they've both transcended the soylent samsara or whatever the fuck so that you don't have to see them again
I never had to see them again, thankfully. As a side bonus, I haven't been cornered into any conversations about Ethereum since then, so I consider the way it turned out to be a net positive on my life.
Given the context, I feel compelled to link [this Avatar clip](https://www.youtube.com/watch?v=fVeAEwrL1Ts). More seriously, happy things are looking up.
That might be the most relevant thing anyone has ever sent me in response to an internet comment.
Lol. It was just too good of an opening not to use the clip.
I'M THE EGOTAR YOU GOTTA DEAL WITH IT
As someone who went to college in the bay area (not Berkeley) in the 00's, this comment makes me sad.
What’s the Berkeley undergrad part about?
Tbh, I have no clue.
A lot of the longterm AI safety stuff has strong ties to Berkeley (like Stuart Russell's org). If you get to the Berkeley kids while they're young and malleable, who knows what you can do? (I also just needed a fourth element that wasn't as obvious as paperclips)
Math pets, Berkeley undergrads, bad kink practices, and Yud. Don't you hate that this comment can be read that way, entirely seriously?
This is why San Francisco needs to build a wall
A new fanfic-based cult is born from this comment.
What’s the Berkeley undergrad part about?
See my reply above
There is no Grand Vampire but Thiel, and Big Yud is his ~~blood boy~~ profit.

“Possibly I overestimate my own importance”

E: two months ago “the world is literally doomed because there’s only one of me.” (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)

Man, there's so much weird in that, even trying to break off little pieces just ends up going down bizarre paths.

> The metaphor I usually use is that if a textbook from one hundred years in the future fell into our hands, containing all of the simple ideas that actually work robustly in practice, we could probably build an aligned superintelligence in six months.

Like, what? Where did that come from? "If we sent back one (1) textbook to the year 1922, in six months they could build a gaming laptop."

> I figured this stuff out using the null string as input

Rationalists, and EY in particular, are endlessly inventive at coming up with ways to say, "I have planted my ass in this armchair and here I will do my reasoning."
Because the idea that "super intelligence", or even AGI in general, requires more than a Turing Machine is anathema to them. It goes hand in hand with the idea of their doomsday scenario, the fetishization of logic that can deduce all possible things ex nihilo, without needing any practical experience or interaction at all. The idea that you have to actually stand on the shoulders of giants is anathema to their worldview. The idea that doing things in the real world takes time and coordination simply doesn't enter into their calculations.
> the fetishization of logic that can deduce all possible things ex nihilo, without needing any practical experience or interaction at all.

This is a logical necessity when you feel a deep-seated need to have all the prestige and egotism that (they perceive) intelligence brings them, but none of the cleverness, insight, motivation, and persistence to actually be an accomplished person. If getting what you want is hard but *pretending* you got what you want is easy, then an inevitable next step is concocting a reason that the hard route to your goal is actually the wrong route.
Maybe I misunderstood, but are you saying that AGI or superintelligent AI or whatever could require something "more" than a Turing machine as its foundation? Genuinely curious as to what you meant.
What do you mean by "more"? What do you mean by "foundation"? A Turing machine and its equivalents are logically equivalent, but only given an infinite amount of time and no necessity for coordination with temporal events. They can all theoretically figure out the proper sequence of electrical impulses to send to a robotic bat swinger that will hit a home run, but that doesn't hook them up to the robot, and strike three was called 1000 years ago.
> Because the idea that "super intelligence", or even AGI in general, *requires more* than a Turing Machine is anathema to them.

This is the "more" I was asking about, but I see what you meant now: not more as in "computationally more than a TM" but more as in "hypothetical computers not always being proper real-life problem solving tools" (a real shocker, I guess). My bad.
No worries. I also believe that analog processing may be a necessity to achieve sentience/consciousness. I suppose you can achieve a simulacrum of analog processes by using digital processing of sufficiently fine grain, but it's not actually the same. The analog signal contains the information. The digital signal does not, and that seems to me to create certain complications. But that's just personal rumination that's not informed by any research.
I personally think there wouldn't be a theoretical necessity for analog simulations, as I don't think brains themselves are endlessly fine-grained in a meaningful way, especially considering how naturally noisy they seem to be. That being said, analog computing is a really cool concept and it does seem to mesh well with various AI stuff, so maybe there'll be a lot more of that in the future.
I'm not saying that I believe it's necessary. I'm saying that it's a possible requirement that we can't rule out. Once you've fixed the precision in your digital system, you have an extra cost to make changes to your system. You have to change not just the use of the signal, but also the measurement of that signal. It's worth considering that neural network development is now exploring the use of analog signal processing in custom ML processing cards, simply because processing the weights in an analog manner is so much more efficient.
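Tangent, but here's a minimal sketch (plain NumPy, made-up numbers, nothing from the thread) of the fixed-precision point: once you commit weights to an n-bit grid, the set of representable values is locked in, so any change to the signal's range means re-measuring and re-quantizing everything rather than just reusing it.

```python
# Toy illustration (hypothetical numbers): quantizing "analog" float weights
# onto a fixed n-bit grid and measuring what the rounding throws away.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1000)   # stand-in for analog-valued weights

def quantize(x, bits=8):
    """Snap floats onto a fixed grid of 2**bits levels spanning x's range."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    step = (hi - lo) / levels
    codes = np.round((x - lo) / step)   # the integer codes a digital system would store
    return codes * step + lo            # the only values that grid can represent

for bits in (4, 8, 16):
    err = np.abs(weights - quantize(weights, bits)).max()
    print(f"{bits:2d}-bit grid: worst-case rounding error ~ {err:.1e}")
```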

Transhumanism already had a bit of a privileged white person problem before Yud came along.

before Thiel started showering dollars on the chuds in the movement, the technoprogressives were privileged white men (with an unfortunate fondness for gene therapy eugenics), but they did tend to believe they were there to bring the rest of humanity along with them
Well, I recall the Mores' ~~sponsored~~ (e: sorry, they ran it, no idea where the money comes from) Humanity Plus magazine once posting a 'we must take care to bring the poors along with us' article, which was then quickly removed before it could be waybackmachined (it did remain in their RSS feed, which is how I read it), and then the h+ article flow dried up.

And a lot of other USA transhumanists are Ron Paul libertarian types. People who say in principle they support poor people and progressive causes, but in practice never do shit about anything, because the importance of causes goes: becoming immortal > AGI > other science fiction things > ... > solving real problems with science fiction things > woo spirituality > some cool high tech which might solve real world problems. And well, one of the Mores is a privileged white woman. (All of this is my experience of the more important people (financial and political influence wise) in the movement, others' mileage may vary.)

E: bit of a late edit, but as an example, be careful with transhumanists who say they support trans people but later mention stuff like being biological materialists. They will put crazy barriers before their actual support for trans people (like you can only say you are a different gender if you do DNA changes/chromosome changes/brain changes etc etc, bullshit like that). This is how transhumanist fiction can have been transgender supportive for decades now while the people who support and popularize transhumanist anti-aging science are transphobes in 2020.
Immortality-type transhumanism is just so *boring* tbh, and pretty funny when computers reliably break down much faster than the human body does.

Much more fun and interesting to center your transhumanism around, like, hotswappable (and even custom!) genitals, replacements for disabled organs, that kinda thing. Letting the furries have functional tails sounds a lot less technologically and morally questionable than "let's upload our brains and become immortal skynet", and I will never stop being disappointed in the priorities of (supposedly) a lot of transhumanists.
Agree with you there.

Man, these observables ain’t shit.

Just wait another 10 years bro the observables will kick in then bro I swear bro by 2030 shit's gonna be crazy bro

I mean, the LW junk is rather off-putting to those who are actually smart… I found out about said trash by seeking out other people who were interested in transhumanism and was left with nothing but disappointment and disgust.

possibly. It’s possible.

Yeah the more I think about it the more possible I think it might be
But is it *probable*

It doesn't make any sense when you go so far as simulating the electrical signals between neurons, though. Like, where is the consciousness?