r/SneerClub archives
Most-Senior Doomsdayer grants patience to fallen Turing Award winner. (https://www.reddit.com/r/SneerClub/comments/13yanmm/mostsenior_doomsdayer_grants_patience_to_fallen/)
67

https://preview.redd.it/qwwm0q9ggl3b1.png?width=1080&format=png&auto=webp&v=enabled&s=bf23d9774fae579e7df38ad3c871d61ce3c4f5a4

https://twitter.com/bitcloud/status/1664157917648666626

“senior alignment researcher”

If he grinds out a few more years he might finally make it to "principal alignment researcher," but everyone knows that's a hard promo to land
Easier when your field and title are both make-believe
Forget promo, yuddo is headed straight for a PIP. At least spray an amazon echo with vinegar or something, jesus. These AIs aren’t gonna align themselves
He's got to make it to lead alignment researcher first
Especially hard when it requires a recommendation letter from another principal or higher outside your management chain.
Unaligned seniors are a serious problem
Just look at The Villages and all of the fucking and sucking and resulting STDs that the singletons get into.
That's typical of occultists. Aleister Crowley invented titles for himself willy-nilly. I think Ipsissimus is after Senior Alignment Researcher but I'm not sure. It's been a while since I read the books of Thelema.
"His Excellency, President for Life, Field Marshal Al Hadji Doctor Idi Amin Dada, VC, DSO, MC, Lord of All the Beasts of the Earth and Fishes of the Seas, and Conqueror of the British Empire in Africa in General and Uganda in Particular"
You know, ultimately, I have to wonder why they don't think we've already lost this war. People keep talking about how reality is a simulation, well, how do we know that reality isn't the output of some kind of ChatGPT? A mess of patterned nonsense with no actual meaning behind it? Every time we try to figure out what's going on we stumble on the sheer incoherence of this puzzle. Really makes you think, doesn't it...
"How do we know this isn't all the dream of a butterfly?" is another way of asking that question
The lineages are a fucking mess.

My favorite part is that the person was asking LeCun the question

My favorite part is that in his autobiography Yud claimed he did away with all egoism and yet he can’t help but compulsively trawl Twitter for even the remotest reference to himself.

Sure he won a Turing Award, but he didn’t win a Yudkowsky Award. We all know which one is more valuable.

If I were being extra charitable, this might be a Yud attempt at humor, as much as it irks me that everything he does is shrouded in as much plausible deniability as he can muster (“it was a joke… or was it?”). But even if it was a joke, Yud is too dense to realize the optics do more harm than good by further undermining his credibility, just like wearing fedoras to interviews…

Occam’s razor tells me he’s just huffing his own farts again.

You almost have to admire his method-acting level commitment to trolling and his own fictional eschatology. I get the impression he has broken through the fourth wall of reality and is genuinely able to believe his mythos with his whole being while simultaneously not actually committing to the truth of anything he says because, hey, there is always a non-zero probability that he could theoretically maybe possibly be not not wrong.
He definitely, actually believes this. A lot of rationalists seem to be eager to jump on board the *"we're the mainstream now!"* train in light of the credulous media coverage about their beliefs. They've been exulting so much in their echo chambers that they don't realize that some credulous media coverage is not the same thing as real credibility.

Yudkowsky isn’t alone, I’ve seen internet commenters in many places declaring victory for the robot apocalypse. They apparently think that a bit of credulous media coverage means that their beliefs are mainstream and they can now get away with summarily dismissing naysayers.

In fact, I bet that Yudkowsky is getting way ahead of himself here because he can’t resist the idea that this is the moment he’s waited for his entire life: now he’s legit, and it’s his rivals who are the crackpots!

What none of them have realized yet is that people who aren’t crackpots vastly outnumber them, but the non-crackpots aren’t as loud as the rationalists are because they’re busy doing real things rather than dedicating their lives to doing PR for a cult.

It reminds me of the first half of a standard TV show trope: the lonely nerd suddenly becomes very popular due to some kind of strange circumstance or coincidence, and then they go overboard in taking advantage of their improved social stature. This leads to public revelations about the nerd's character flaws, which in turn brings them right back to being socially ostracized again.
Hmm I wonder what all these retained lawyers are about? 🤔

“researcher” yeah right. Speculation is not research.

New here. Is this Yuddsy guy for real?

The entire sub is basically about mocking him, his friends, and the members of the movement he started.
Thanks for the friendly heads up. I'm not *that* new (always a bit bemused by LessWrong and their fellow aren't-we-brilliant travelers), but the opportunity was there, and I took it.
Congratulations, you are now a "senior alignment researcher researcher"
Do I get a patch?
Depends on which version you are running. ([My face right now](https://www.youtube.com/watch?v=xdfCdQzmPEM))
Oooh. That was sharp.
Sorry. Then yes, he's "for real", at least in the sense that he's been consistently like this over the last... 25 years or so.
> Is this Yuddsy guy for real?

You can put this on my tombstone.
He’s lost Peter Thiel as a donor as a result of his doomerism, so he probably believes his own doom predictions (as opposed to being a pure grifter). He has been consistent with his prediction of doom if AI alignment isn’t solved (where a solution looks something like a complete mathematical specification of ethics programmed into an AI’s goal system with the level of reliability of provable programming). He originally had the goal of solving AI alignment himself (or as leader of a team), and when his “research institute” predictably fell well short of that impossible goal even as deep learning and transformers got more and more impressive\*, he shifted into doom predictions with no ideas\+ other than “shut it all down”.

\* In fact, their research had been focused more on decision theory, policy, and abstract concepts like AIXI. This work was (theoretically) intended to be used to develop a good old-fashioned symbolic AI. They mostly ignored the potential of neural networks even as deep learning took off in 2012. Also, they didn’t bother putting their “research” through peer review, other than one or two failed attempts, and their rate of generating papers (especially considering they weren’t subject to peer review) was anemic, more comparable to a decent grad student or mediocre postdoc than a top-tier researcher.

\+ Well, he’s had other ideas, but they are wacky sci-fi ideas even he admits are wild long shots. Ideas like using “experimental nootropics” to augment a team to solve alignment.
> In fact, their research had been focused more on decision theory, policy, and abstract concepts like AIXI. This work was (theoretically) intended to be used to develop a good old-fashioned symbolic AI. They mostly ignored the potential of neural networks even as deep learning took off in 2012.

This part will never not be funny to me. For people whose entire grift is based on the premise that they, uniquely, are able to foresee the inevitable problems with this technology and solve them before they happen, Yud and company have a dogshit track record on anticipating changes in the field. This does not appear to have ever caused them to doubt the correctness of their other insights. Amazing.
> He’s lost Peter Thiel as a donor as a result of his doomerism, so he probably believes his own doom predictions (as opposed to being a pure grifter).

One thing I will say for Yud is that I do think he is sincere in his beliefs. I don't think he's sincere in wanting to do much about them himself, but I think he absolutely does believe in what he's peddling.
> One thing I will say for Yud is that I do think he is sincere in his beliefs. I don't think he's sincere in wanting to do much about them himself, but I think he absolutely does believe in what he's peddling.

I think he's sincere about wanting to do something about them himself but doesn't see how his approach is incapable of producing anything like results.
Yeah, he was, and still is, very unappreciative of the peer review, publication, and collaboration processes. As flawed as the peer review process is, it still provides sanity checks and suggests related work to cite and contextualize your work. And the publication process’s gatekeeping of status and prestige might not be the best, but if your priority is “save the world” and not (putting Eliezer’s motives absurdly charitably) “make a principled choice to bypass a flawed gatekeeper”, the status and validation are valuable (especially if you aren’t putting out working code as proof of concept). And collaborating with algorithmic bias researchers and/or interpretability researchers would let him both illustrate the “need” for and application of AI alignment (at a reduced, simplified scale)…

I suppose it’s for the best he didn’t do any of these things, because if he had, real practical immediate concerns would get conflated with at best highly speculative concerns and at worst sci-fi nonsense. But maybe if he had, he would have developed a more realistic viewpoint in the first place…
I'm not even talking about the methodology (whether to go the academic route of publication, peer review, etc., or to go the NGO route or whatever), I'm talking about the very basic, "It's not clear how the work he has actually been doing is in any way related to a solution to the problem he is worried about." Even if he were right that alignment as he conceives of it is the major safety problem in AGI, nothing in his approach does *anything* to get us closer to solving it!
MIRI did a small amount of work in the direction of trying to develop a formal abstract concept along the lines of AIXI… but didn’t get very far with that, and even if they had it’s not clear the result is something that could guide any actual AI development (as opposed to being a somewhat interesting mathematical/philosophical concept to guide thought experiments). The fact that MIRI didn’t get as far as a detailed extension of AIXI reflects poorly on their ability to actually do research…
> MIRI didn’t get as far as a detailed extension of AIXI

I'm pretty critical of the fact that they even wanted to do that. "Let's numerically approximate a non-computable procedure" should never have seemed like a sensible research direction to begin with.
Bayesian update: at least going full doomer and advocating we shut it all down IS something that would get closer to a solution to his posed problem. Credit where it's due...
He strikes me as sincere the way any cult leader is sincere; he absolutely believes his nonsense until he needs to pivot in which case he’ll absolutely believe the opposite nonsense, rinse and repeat ad infinitum.
Most of the really cumbersome aspects of academia are just quality control that we haven't figured out how to improve on, IMO. We've had hundreds of years and loads of suggestions. It's not a perfect system, but judging evidence and arguments is actually really hard. Peer review, thesis defenses, and multiple rounds of revision are analogous to jury trials. What would you replace juries with that you are *certain* would be an uncontroversial improvement? It says something that "rationalists" hate "the cathedral" exactly because of those systems.
eLife is experimenting with a new process where all papers are published after peer review, and the author has the choice of whether to take the paper down, revise and resubmit, or leave the paper as is, with the reviews and author response (if any, in addition to or instead of revisions) made available. eLife’s intent is to get the immediacy and openness of preprints with the scrutiny of peer review. Although it preserves a lot of key components, it’s still a big change. It’s also really recent; they just implemented it this year. Presumably, if various stakeholders come to trust this process and find the openness worth the tradeoffs, it could spread… Lots of journals have adapted more moderately to online preprint archives and all-digital formats… I think the peer review system will change over time, but Eliezer’s ideas tend to be radically disconnected from what the various interested stakeholders would accept and what is remotely practical.
I'm aware. I think it's better that preprints and publications are separate. We can see the before and the after and the improvement. I don't care what EY thinks right now. If he wants me to, and wants to tell me what to do, then he is free to go to college at any point.
> where a solution looks something like a complete mathematical specification of ethics programmed into an AI’s goal system with the level of reliability of provable programming

How can anybody be stupid enough to believe this is even possible, much less feasible?
The best theory I’ve seen in sneerclub is that he consciously or subconsciously picked an impossible goal in anticipation of his eventual failure. This way, his own ego is protected from failure by the fact that the goal is impossible.
[( ͡☉ ͜ʖ ͡☉)](https://www.reddit.com/r/HPMOR/comments/1jel94/hate_for_yudkowsky/cbemgta/?context=100)
What a terrible day to not be Jared, 19
He's had *ten years* to delete that and he still hasn't done it? I don't know how to interpret that.
You sweet summer child

lil bro think he a scientist :skullface:

If he’s a senior alignment researcher for reading sci-fi and staring at his belly button afterward, what does that make the sci-fi authors whose work he’s reading?

I call him LeCunt

In conclusion, alignment.

You know, I opened up the Sequences today and was profoundly annoyed. This is metaphysics. It’s metaphysics that puts on an almost obnoxious pretense of materialism, and it is metaphysics that is almost absurdly interested in positioning itself as not metaphysics, but it is still metaphysics.

This is false objectivity. But false objectivity is key to religion, nearly every religion puts on a pretense of false objectivity, that it has found an external authority which can precisely define truth. The fact that they’ve done this entirely in material, corporeal form doesn’t make this not a religion. There have been plenty of religions which were entirely corporeal in nature. (Hobbes interestingly had an almost bizarre theology where he interpreted Christianity entirely along corporeal lines, thinking that God was a material creature living in the physical universe who does miracles entirely through the laws of nature).

So imo this is basically a theologian who calls the clergy of his religion “scientists”, talking down to an actual scientist with actual knowledge in a truly natural field.