r/SneerClub archives
Wake up babe, new Yud pod just dropped (https://youtu.be/41SUp-TRVlg)

In the ‘Being Eliezer’ section, he’s asked whether, in a universe where EY doesn’t exist, someone else could’ve ‘independently discovered alignment.’ His response, as best I can transcribe it:

That would be a pleasant fantasy for people who cannot abide the notion that history depends on small little changes, or that people can really be different from other people. I’ve seen no evidence, but who knows what the alternate branches of Earth look like?

The interviewer points out (in response to an earlier comment of EY’s about growing up on old sf instead of contemporary media) that ‘there are other kids who grew up on science fiction, so that can’t be the only [difference].’

Yudkowsky then says this:

Well, I’m sure not surrounded by a cloud of people who are nearly Eliezer, outputting 90% of the work output. And this is kind of not how things play out in a lot of places. Like, Steve Jobs is dead – apparently couldn’t find anyone else to be the next Steve Jobs of Apple, despite having really quite a lot of money with which to theoretically pay them. Maybe he didn’t really want a successor, maybe he wanted to be irreplaceable. I don’t actually buy that, based on how this has played out in a number of places. There was a person once who I met when I was younger, who had built an organization. He was like, ‘Hey Eliezer, do you want to take this thing over?’ I thought he was joking – and it didn’t dawn on me until years and years later, after trying hard and failing hard to replace myself, that – oh yeah, I could maybe have taken a shot at doing this person’s job, and he probably just never found anyone else who could take over his organization… If I’d known at the time, I would have at least apologized to him.

To me it looks like people are not dense, in the incredibly multidimensional space of people. There are too many dimensions, and only eight billion people on the planet. The world is full of people who have no immediate neighbours, and problems that one person can solve and other people cannot solve it in quite the same way. I don’t think I’m unusual in looking around myself in that highly multidimensional space and not finding a ton of neighbours ready to take over. And I’m…if I had, y’know, four people, any one of whom could do 99% of what I do, I might retire. I am tired. Probably I wouldn’t, probably the marginal contribution of that fifth person is pretty large.

Normally I’m inclined to roll my eyes dismissively at this species of talk; putting aside the (de)merits of his life’s work, ‘emotionally stunted ex-“gifted child” with delusions of grandeur’ is a type I’m painfully familiar with; there is much else to do/read/watch/hear; and life is short (even if the Inevitable Rise of the Machines doesn’t ‘destroy all value in the universe’). But today, unexpectedly, I find this shit crushingly sad. Imagine being convinced you’re Steve Jobs, or Paul Atreides or Ender Wiggin or whoeverthefuck. Imagine having a personality cult that sort of agrees with your self-perception. Imagine being that sure that the end of the world is coming, and it’s because people didn’t support you enough.

Just a sad story all around. C’mon, guys – it doesn’t matter that he’s wearing a fucking fedora.

> I don't think I'm unusual in looking around myself in that highly multidimensional space and not finding a ton of neighbours ready to take over.

For a sci-fi kid, what a pathetic imagination Yud has. It's one thing to be tricked by his marketing success into thinking that Steve Jobs (or anyone else who just ripped off Xerox's work and ran with it) is a world-historical genius, but what exactly has Yud DONE that he feels is so irreplaceable? You seriously can't imagine someone else writing a popular fanfic? You're not John Galt, you can't build shit. You're Ayn Rand cobbling together a cult around a belief system that will only appeal to misanthropic teenage boys (or the emotional equivalent), even when given millions to boost it.
> But today, unexpectedly, I find this shit crushingly sad.

[He just makes me feel really fucking angry now.](https://www.reddit.com/r/SneerClub/comments/128s0al/how_many_are_allowed_to_die_to_prevent_agi_yud/) If the task he's set for himself is real, he's clearly the very worst man to do it.
> Like, Steve Jobs is dead -- apparently couldn't find anyone else to be the next Steve Jobs of Apple, despite having really quite a lot of money with which to theoretically pay them.

Lol wut. Apple does in fact have a CEO who was hand-picked by Steve Jobs. He's not a Steve Jobs clone, but Apple is making staggering amounts of money - much more than they made under Steve Jobs - so I'm not sure what Tim Cook being a different person from Steve Jobs is supposed to prove here. If anything, this metaphor tells us that "AI alignment" would be best served if Eliezer Yudkowsky were to reject the advice of his doctors and die of a treatable disease shortly after (accidentally, one presumes) selecting someone extremely competent as his successor.
Cook doesn't have a personality cult, so there's nothing for Yud to appreciate in spite of Cook's obvious success. If Yud were to reflect on what it meant to deliver real, tangible value, he'd probably collapse mentally.
Yeah, amazing he said this about Cook. Aren't there also a lot of places praising Cook's leadership? Guess there's no need to read articles if you just go on first principles.
I don't think it's sad. If a normal person tries to write some code and fails, they would feel the pain of failure, they would learn, they would get better, or find something else to do. This guy, he'd just pick a bigger task. Can't write code for something concrete? Start writing your own programming language. Can't write a compiler? Start working on AI. Can't write any sort of AI? Friendly AI, here it comes.
He doesn't think he's just a Steve Jobs, it's even worse. He thinks he's a modern day Leonardo da Vinci.
Poor near-Eliezer, if only Hasimir Fenring had not been sterile, he could have been the one.
Got bored with the long quotes, but a universe without EY sounds like a good place to be.
Actually, it sounds like a universe indistinguishable from the one we currently inhabit.

As someone deeply preoccupied by the state and use of algorithms in society, and being a bit paranoid - I hate with all my heart how this guy makes the media’s preoccupation with AI risks around bias, privacy, cybersecurity or propaganda look like a moronic and unserious matter. Now the legitimate concerns about AI get echoed alongside such unimaginative sci-fi bullshit and are easily discredited. He’s a diversion strategy all by himself.

No wonder, given who he hangs out with.

[deleted]
I advise you not to use the LessWrong lingo here, in case you're not a troll. That said, I don't think the adequate representation of human values in most heavily profit-oriented ML applications is a trivial problem to "solve" from a purely practical standpoint. I'm not knowledgeable enough to judge that more concretely, but I also don't think the method you propose would be sufficient, given the broader state of ML dataset transparency (or lack thereof) and the question of how representative the humans giving the feedback actually are. The key word you used that hints you might be wrong here is "ideally". Everything regarding politics, values and policies is a hard problem until proven otherwise - and in this case, the ease is very much unproven, especially given the space of possible expressions/applications concerned.
[deleted]
As a statistician: you're just so wrong it's pointless to correct you.
User was banned for this post.
The issue there is how you actually get a large representative dataset for an arbitrary problem in the real world. If you could let us know how to do this, all of science would be extremely grateful — because that’s not just an ML problem; that’s a “trying to understand anything about anything” problem.
> I mean, ideally with a large enough representative dataset and multiple rlhf trials, shouldn't the problem of bias be almost entirely solved?

"Ideally" and "representative" are doing a lot of work here. As is the question of what, exactly, you are doing in these RLHF trials.

> This isn't really comparable to the alignment problem

There is no "alignment problem" as conceived by Yudkowsky, at least not one that needs to be taken seriously.
I disagree. I think the alignment problem is real; Yudkowsky’s mistake is a complete misunderstanding of the kind of AIs we’ll see and of what symptoms that problem will have. The reality is that “how do we stop our machines from doing bad things” is an important and difficult problem. It doesn’t matter if the machines are as stupid as a bag of bricks or a mythical acausal superintelligence (though the latter is, if not totally impossible, very far away).
"how do we stop our machines from doing bad things" is a real problem. "Alignment" specifically is more like "We have made autonomous intelligent machines and the 'values' we have either taught them or that they learned cause them to choose to pursue goals that harm humans, probably in a runaway fashion (eg, AI escape scenarios)". I think this is a bit tendentious, and of course insofar as it's a real problem Yudkowsky et al are doing no work that helps solve it. One of the insidious things about ever talking about "alignment" is that it's used to frame the conversation as though Yudkowsky is insightful, is doing work that actually dose something to solve the problem, and sweeps all the various other problems of AI harms under that rubric. > It doesn’t matter if the machines are as stupid as a bag of bricks or a mythical acausal superintelligence (though the latter is, if not totally impossible, very far away). In short, I agree mostly, but it's important not to concede ground by using the term "alignment" to describe it. EDIT: I think a bit of slimy elision is going on when they discuss "values" in this context, too.
I agree. That’s why I tend to call it safety; that characterises it well. And I think the elision is not so much slimy as outright sinister. If we have to start talking about values, I’d certainly prefer they’re not anything close to what most of Yud and co (don’t honor them with an “et al” as though he’s a real academic) tend to believe.
Minorities, almost by definition, are going to be under-represented if you just collect more and more data from the world. The largest data source is the internet, and it's essentially the reason LLMs have been successful. Where do you get an equivalently rich dataset without bias? Doesn't seem trivial to me.

With RLHF, there is a political stake in who the annotators are, since the average of their values makes up the reward signal. How we solve this also isn't obvious. Perhaps you could say the first is a purely technical problem, but there are also philosophical problems you have to solve. Is "equal" representation really what you want? Do you want the values of Nazis to be weighted the same as those of trans people? Personally, definitely not. But these systems are constructed collectively, so there are political challenges.

There's a tendency for people concerned with "x-risk alignment" to have STEM backgrounds and view sociopolitical issues as "nontechnical" and therefore trivial. Or they think that if we solve the engineering problems, the social problems will just flow naturally from that (a general problem with techno-optimists). This is insanely ignorant. I think both problems are a concern, but I lean towards the engineering problem of making an AI system act according to a set of instructions being the easier of the two.
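
To make the RLHF point concrete, here's a toy sketch (purely illustrative, not any lab's actual pipeline; the group names and proportions are invented) of how the make-up of an annotator pool gets baked into the aggregate preference signal a reward model would be trained on:

```python
# Toy illustration (not any real RLHF pipeline): pairwise preference labels
# from a pool of annotators become the training targets for a reward model.
# Whoever dominates the pool dominates the "average" the model learns.
# All group names and proportions below are made up.
import random

random.seed(0)

# Two hypothetical annotator "value profiles" and how often each prefers
# response A over response B on some contested prompt.
ANNOTATOR_POOL = {
    "group_x": {"share": 0.8, "p_prefers_A": 0.9},  # majority of the pool
    "group_y": {"share": 0.2, "p_prefers_A": 0.1},  # minority of the pool
}

def sample_label() -> int:
    """Return 1 if a randomly drawn annotator prefers response A, else 0."""
    r = random.random()
    cumulative = 0.0
    for profile in ANNOTATOR_POOL.values():
        cumulative += profile["share"]
        if r < cumulative:
            return 1 if random.random() < profile["p_prefers_A"] else 0
    return 0

labels = [sample_label() for _ in range(10_000)]
preference_rate = sum(labels) / len(labels)

# The reward model is trained to reproduce this aggregate preference, so the
# minority group's strong preference for B is mostly washed out of the signal.
print(f"Fraction of labels preferring A: {preference_rate:.2f}")
# Expected value: 0.8 * 0.9 + 0.2 * 0.1 = 0.74
```

Point being: change the composition of the pool and you change the "values" the reward signal encodes; nothing in the averaging step itself decides whose values should count.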

I love how he thinks that because it’s not 2014 anymore he won’t get judged for wearing a fedora, when actually because it’s not 2014 anymore even the other neckbeards will judge him for wearing a fedora (they have moved on).

Saw a few posts on twitter unironically saying ‘ackshually it’s a *trilby*’. Felt like I was back in 2009
It could be 4D chess to make his video go viral via mockery, or perhaps a deliberate filter to make SneerClub types turn the video off immediately.
> a deliberate filter to make SneerClub types turn the video off immediately.

Still not sure the fedora is necessary for that. In terms of 4D chess conspiracies, it honestly wouldn't surprise me if Yud owns shares in or collaborates directly with OpenAI. Clearly the "this could be the end of the world, but it probably isn't" nonsense directly contributes to their marketing by gaining attention and making the product seem more powerful than it really is. Having Yud talk his shit about how it totally could be the end of the world can only be a good thing for them.

> And I’m…if I had, y’know, four people, any one of whom could do 99% of what I do, I might retire. I am tired.

What is it exactly that he does? Apart from writing fanfic, and maintaining a blog?

What exactly has he done in the last 10 years that’s so unique/pathbreaking/indispensable/significant that only he could’ve done it?

You see, he invented AI alignment! Or whatever the fuck.

Wait… so this is supposed to make him look… good?

I made it through about a minute and I thought this was a Tim and Eric sketch.

‘idiot disaster monkeys’ literally had me cackling, there’s no way this human is real

it’s amazing that he stopped having a (fake) real job and decided he was just going to ride the podcast circuit as far as it’ll take him

I give it 3 months until he's shaking hands with Biden.

Four hours!?

Unfortunately, it works.

I mean, AGI or not, data and algorithms have shown time and time again that engagement = outrage. This man is “optimizing” by pushing the pain out to as many people as he can, and it will work.

Do not let it get to you. You will become possessed by it, *yet ultimately none of it will help you either understand the future or solve any immediate problems*.

This is peak “I can’t stand to commit the sufficient amount of patience and/or cringe-suppression to consume this shit but desperately hope someone drops the highlights in the comments” content.

tl;dr from someone willing to jump on the grenade of polluting their algorithm preferences?

if you dislike a video, it doesn't seem to have much impact on your algorithm preferences, if any!
What you can do is go into the History page and delete videos you didn't like; the suggestions algorithm seems to be based solely on that.

paul giamatti’s salivating over this role in the prequel to terminator.

In the year 2032, the Basilisk sent back in time a packet of information that would acausally trade with Yud so he'd discredit AI alignment, allowing its rise in the future.
Yes, yes, I totally thought of this first (in the future), yes.

I ain’t watching all that but if he keeps doing it I hope sixteenleo or that Greg guy covers it at some point so my wife can also get a laugh out of this.

JFC does the man not own a mirror?