r/SneerClub archives

The hosts do a nice job of front-loading everything the listener needs to know about the interview. Starting at minute 1:40 one of them says:

It’s not as if I have some sort of big-brained, technical disagreement here. In fact I don’t even know enough to fully disagree with anything [that Eliezer Yudkowsky is] saying.

That’s probably a neat and tidy summary of everyone who gets drawn in by Rationalist thinking.

EDIT: This is actually quite a good interview, I recommend listening to it.

The fact that the hosts don’t know much about AI makes them perfect blank slates for talking with Yudkowsky. They ask the obvious questions that a reasonable person might ask, giving him the opportunity to repeat all of his best hits, including the most insane stuff.

Some highlights:

25:30: Yud, right out of the opening: Listen guys, AI is gonna kill us all, for real. Podcast hosts: “lol wut”

27:30: Yud: the AI will use mail-ordered nanotechnology diamondoid bacteria to kill us all!!! But only if it’s as smart as I am.

39:20: Yud: evolution is obviously bad at optimization, because if it were good at optimization then people would hate using condoms and men would really want to donate to sperm banks.

46:15: Yud: people who aren’t AI professionals can prove AI professionals wrong about AI safety, so obviously the pros don’t know what they’re talking about

50:42 and also 53:38: Yud: I do not understand how linear algebra or neural networks work, and also I am 100% sure that AI models based on them will kill us all.

1:12:27: Podcast hosts: you know, maybe we should ask for other opinions on this

Bonus: LessWrong thread about podcast episode

I do wonder why more people don't realize that if you don't know enough to disagree with what someone is saying, you probably don't know enough to *agree* either.
> > It's not as if I have some sort of big-brained, technical disagreement here. In fact I don't even know enough to fully disagree with anything [that Eliezer Yudkowsky is] saying.
>
> That's probably a neat and tidy summary of everyone who gets drawn in by Rationalist thinking.

One of the most unpleasant things in LW forums is the sycophantic tone people take when slightly contradicting their "elders". "Obviously I'm not at Yud's level, he's a genius, but maybe there's a slightly different way to interpret this, please don't hurt me"
Agreed, but also I'll note that I don't think the podcast hosts are being sycophantic here; having listened to the rest of the interview I get the sense that they're skeptical but they also don't know enough to offer interesting rebuttals. I also suspect that they wouldn't want to push back even if they thought they could, because that wouldn't make for a good interview. I think Mike Isaac says at one point in his book "Super Pumped" that the best interviewing strategy is to act kind of dumb and just let people talk, and that's pretty much what this interview with Yudkowsky feels like.
Also, immediately before saying that, the same host says:

> I have a lot of respect for [Eliezer Yudkowsky]

I wonder how he is able to conclude that he should respect Yudkowsky when he also seems to be aware that he doesn't understand anything that Yudkowsky is talking about? In case any normies wander in looking for context, please allow me to clarify: contrary to what the podcast episode summary says, Eliezer Yudkowsky is not "a leading thinker in the AI space". He doesn't know anything about AI.
> He doesn't know anything about AI.

he had 20 years to remedy that and he didn't. too busy, singularity oblige. and he even got a gf! we're all gonna die with eliezer yudkowsky indeed!11!!
[deleted]
To be fair, from the websites my college professors had, I think there are a lot of professional computer scientists who don't understand CSS or HTML very well. The real reason Big Yud is difficult to take seriously as a computer scientist is that his best-known body of work is in Harry Potter fanfiction, not anything remotely related to computer science.
Eh, that's web programming, not computer science. Plumbing, not architecture. Like calling oneself a nuclear physicist but not knowing how to run the controls of a nuclear reactor. A lot of people study abstract conceptual computer science just to become low-level programmers, because of credential inflation that retroactively justifies salary inflation, so it's a common confusion. But I don't see any reason to think he knows computer science either.
How important is granular understanding of these things to his thesis here?
It depends on how plausible you find his thesis. For example Yudkowsky believes that a superintelligent AI will order chemicals through the mail that will automatically copy and spread themselves throughout the world, and which will kill people upon contact. Most people will just (correctly) assume that that's implausible and so they will (correctly) dismiss it. But what if you think it might be plausible, and you want to know why it's wrong? In that case it's going to take you a long time to learn everything you need to know in order to fully explain why Yudkowsky's belief in this case is nonsense.
Like I've read Nanosystems and all that jazz. Who the hell is selling wet phase assemblers by mail order? It's that insane "supergenius does biowar" wank experiment they do.
The superintelligent AI will invent [Ice-nine](https://en.wikipedia.org/wiki/Cat%27s_Cradle)?
Yudkowsky would insist that his diamondoid bacteria scheme is much more detailed and plausible than ice-9, but he would be wrong.
Right, everyday programming is not close to computer science. Computer science is basically mathematics and logic: stuff like formal grammars, automata theory, the limits of computation, etc. Your average computer programmer doesn't actually need or use this stuff. Unless you're doing physics simulation or helping scientists, there's not much need for science or mathematics in everyday programming.
I've fucked around with tensorflow some, and I think that's actually more experience in "AI" than Yud has. It's embarrassing.
I've had a few math professors from MIT fail at basic arithmetic in front of class because they don't do basic arithmetic all day. People who are good at basic arithmetic are cashiers, not mathematicians. I believe the analogous case is true here as well.
Again with that diamondoid bacteria fantasy?
I know, right!? I've seen rationalists claim that it's supposed to be some sort of entertaining hypothetical that Yudkowsky uses to illustrate his point, and that he doesn't believe it in any literal way. But he makes it pretty clear in this interview that he believes, quite literally, that the robot apocalypse will be *at least* as absurd as his mail order diamondoid bacteria scheme.
But if it's just an example, why bother with the exact type of bacteria?? (This is something he also does constantly in other areas)

[deleted]

Oh wow I wonder if he was forced out
I bet he was. If I may toot my own horn a bit, [I predicted his forced retirement less than two weeks ago.](https://www.reddit.com/r/SneerClub/comments/10x17xb/rationalists_take_another_small_tentative_step/)
Source? I can’t seem to find anything about this but I’m curious.
[deleted]
Thanks! I hadn’t finished the episode yet. But wow he sounds like he’s giving himself major depression, from that time stamp and what he says after it.
[deleted]
"I might not be working all that hard compared to how I used to work"
I assumed sabbatical means he is taking a break from any serious work? So he isn’t getting another job per se?
[deleted]
Right, but that would be after his sabbatical. Okay looking a bit more at these startups… it seems like they are doing actual ML stuff with real applications. Would EY even actually be able to contribute to that sort of work? Would they want EY solely for the PR with Lesswrong type crowd? Would they make a custom role for him? I suppose as a hype-man/fundraiser EY has proven skills.

Fixed title: we’re all gonna die unless we appoint Eliezer to be our lord and savior and Ultimate Philosopher King in perpetuity and give him all our drugs and women

Yudkowsky has already appointed himself to the job decades ago. To his (very minor) credit, he doesn't seem to think that that's helping.
Very ‘umble indeed

1:09:40 Eliezer laments not being able to drop an n-bomb on the pod

> "Caring is easy to fake. It's easy to hire a bunch of people to be your AI safety team, and redefine AI safety as having the AI to not say naughty words. I'm speaking metaphorically here for reasons". I think he's talking about what is called "AI Ethics". More evocative than saying "get the AI to not say naughty words" would be to say "get the AI to be politically correct". I think his reason for "speaking metaphorically" is not to alienate people (such as progressives) who might be distracted on hearing the phrase "politically correct" and not hear the rest of his argument. Whether or not you think an AI saying naughty words is good or bad, it's not inherently contradictory to think that "figuring out to get an AI to not say naughty words won't contribute to true AI alignment". I think his complaint is "redefining AI safety" and he would be fine with companies having "AI ethics" teams as long as they also had "AI safety" teams (and stopped having "AI capability" teams).

I realize that crypto coin people interviewing Eliezer is for sure something we will want to sneer at, but I think it is probably worth excerpting some highlights instead of posting this like I'm supposed to listen to it myself.

[deleted]

I would argue that this is perhaps the unifying theme for all Rationalists. They try to establish a feeling of control over their own lives by believing that they have a special recipe for completely understanding everything in the world. Like all maladaptive coping mechanisms it only leads them further into despair; real problems begin to look mundane in comparison with the imaginary apocalypses that their flawless reasoning reveals to them. In this way Yudkowsky is both their prophet and their mascot.
[deleted]
The correct and true reason to do philosophy.

Lots of snippets being posted… but no full blow-by-blow or transcript. Listening and posting my own thoughts so you don’t have to!

12-13: I'm unimpressed by EY's description of ChatGPT. No demonstration of any understanding of how it actually works, as opposed to having played with various prompts.

15: EY predicts another AI winter won’t happen… so I guess I should prepare for an AI winter.

17-18: I think the concept of "general" intelligence even being a thing needs a lot more philosophical justification than I've seen from EY or anywhere on LessWrong.

22-23: A little more solid in grounding his ideas with the chess example… but he is still kind of using a circular definition? Intelligence is the thing that helps you succeed, so a super-intelligence will automatically succeed.

26: I'll note, as others have in the past, that EY knows venture capital and investment can be vastly irresponsible but hasn't really generalized that into leftist thinking.

25-28: Scenario for AI bootstrapping to omnipotence. Standard nanotech wanking. Hosts seem kind of disturbed (nervous giggle) by scenario.

28-29: Hosts do break down EY’s scenario into pieces. But aren’t really critical or able to challenge it?

30: EY does seem (briefly) to grasp that GPT took a lot of computational power and fine-tuning… but doesn't seem to consider the possibility that all AI approaches will have these sorts of requirements for fiddly fine-tuning, thereby cutting short the bootstrapped intelligence explosion.

31: Have the hosts seen/read EY's stuff before? They kind of pitched the question up to him ideally, in a way that fits his framing.

32: …Okay, I'm going to use EY's chess analogy to deconstruct and reconstruct his definition of intelligence. A good chess player could have a probability distribution of possibly good chess moves and thereby approximate the super-powerful chess AIs.

34: And the hosts once again pitch a question straight into EY's framing and assumptions.

40: Classic lesswrong spiel about evolution as a blind optimizer, therefore any optimizer we build or use will be blind.

Realized I can read transcripts on the youtube version, skimming through now for anything original or not already in lesswrong posts.

48: Still going on with the evolution as a blind optimizer metaphor.

51: Complaining about mainstream scientists not being concerned about AI alignment. (No mention of the algorithmic bias / AI racism people.)

53: (Choppy) summary of AI history to explain pessimism in the field.

Okay, getting tired of this, this is all stuff I’ve seen on lesswrong before. Skipping to timestamp about his leaving MIRI.

123-128: He is burned out and going on sabbatical. (Maybe this means we’ll get more fanfiction? TBH, even his cringier stuff I can enjoy in a D-Movie sort of way.)

128: He seems to think security mindset is the key to solving alignment… I think he doesn't appreciate the work going into ML approaches now, and how some of it may lead to approaches that can more precisely understand the internals and what they mean.

129: Crypto people are good at breaking things! (Lol good at scamming each other.)

130: and I’m giving up.

(Note: editing as I go in case I give up partway through)

Misc thoughts:

  • I'm annoyed by his characterization of all ML as "giant inscrutable matrices" given that some of my colleagues' work is directly on explainable/interpretable AI.
  • If anyone thought Eliezer was a grifter who didn't believe his own message… I think he definitely believes what he says, to an extent that is psychologically damaging.
  • I wonder if EY assumes his own lack of understanding of ML is the peak of understanding of it, and that is why he thinks people working on it will be completely unable to convey any human values into an AGI
> I wonder if EY assumes his own lack of understanding of ML is the peak of understanding of it, and that is why he thinks people working on it will be completely unable to convey any human values into an AGI

I think that's exactly what is going on, and it's also the reason that he says stuff like

> ML as "giant inscrutable matrices"

He doesn't understand linear algebra at all because he never went to school, and, incredibly, he doesn't think that anyone understands it better than him. He doesn't seem to realize that there are very good reasons that linear algebra works in ML, that most people understand what those reasons are, and that when they complain about a lack of explainability in ML models they're not talking about the *general concept* of linear transformations followed by nonlinearities.
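For anyone wandering in who hasn't seen it spelled out: the "giant inscrutable matrices" are just learned linear transformations with simple nonlinearities in between. A minimal NumPy sketch, with made-up layer sizes and random weights standing in for anything actually learned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend weights for a tiny two-layer network. In a real model these are
# learned from data; here they're random and the sizes are arbitrary.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

def forward(x):
    """One forward pass: linear transformation, nonlinearity, repeat."""
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU nonlinearity
    return h @ W2 + b2                # final linear map

x = rng.normal(size=16)  # a made-up input vector
print(forward(x))        # four numbers; nothing mystical in the mechanism itself
```

The interpretability complaint is about what billions of learned parameters like these end up encoding, not about the matrix multiplications themselves.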
> If anyone thought Eliezer was a grifter who didn't believe his own message... I think he definitely believes what he says, to an extent that is psychologically damaging.

absolutely a 100% sincere crank
also a reasonably good talker on the sentence level - much as he writes perfectly well when he's writing about real things and not imaginary ones. Has glib convincing down. Imagining him starting a podcast and shuddering; it'd be a hit. ^(is this me predicting the unfriendly AI in too much detail)
Actually, listening to the podcast I was struck by how much less clearly he speaks compared to how he writes. In writing he may meander a bit, but he generally stays on topic and doesn't go off on parenthetical asides (although there are often hyperlinks to his other writings). Whereas in the podcast, he throws around too many ideas at once, in a way that requires you to already know his writing, or overwhelms you if you are not familiar with it. And for people already familiar with his writing, a lot of it was rehashes. Maybe he just isn't good at the interview format? Or maybe he is too used to talking to his in-group? Real academics often give talks to a variety of audiences (within their niche vs. within their field of study vs. adjacent fields of study) and have to get good at tuning the amount of detail, and which details to focus on, to the background of a given audience; EY may not have cultivated that skill.
Another thought I had… obvious point, but I think the sort of algorithm/"intelligence" you need to find the best move out of a relatively small finite list (i.e. the best chess move) is very different from designing technology (which is more open-ended), which is very different from social modeling (which depends on having a good model of other people). EY assumes they will be strongly correlated, but I wouldn't be surprised by, for example, an AGI that can work through anything with discrete moves in an environment that can be analytically modeled but fails hard on fuzzy problems like social interaction and modeling other people. Or an AGI that can pump out the right socially-optimal phrase 98% of the time using GPT-type prediction, but 2% of the time fails with plausible-sounding garbage people can catch it on, and completely fails at more mathematically grounded tasks like designing tech.
I mean, if our experiences with people are anything to go on, I'd be very surprised if that wasn't the case. Human beings can be very good at one of these things and very poor at the others. I'm not sure why it'd be any different for AI. Honestly at this point in my research career I'm not so sure "general intelligence" is even a well defined or coherent notion.
It kind of speaks to the problem with their entire conception of intelligence, where they seem to think that G is some literal thing in the brain and that if you can optimize for it you can achieve limitless smartness. I don't think that's how intelligence works.
> 48: Still going on with the evolution as a blind optimizer metaphor.

Can you explain why you don't think this is a good metaphor?
It's a decent enough explanation for evolution, and it gives examples of how even a powerful optimizer with lots of time can be misaligned or fail at achieving its criteria. I just don't think it captures the exact particulars of how an ML-based AI might be misaligned. Also, I've read the sequences and seen this metaphor/explanation used multiple times, so it was kind of boring to hear it explained again, thus why I skipped ahead.

Does anyone else notice a strange little gleam in his eye / micro-expression when he drops a particularly dire statement?

Have you listened to it?

I have, the title is a good summary of the content

Hm, something kinda clicked for me that I hadn’t realized before even though I’ve seen yud say stuff about wanting gpu clusters to spontaneously combust or stop working. I think he wants stuxnetGPU to infect all AI research hardware lol.

The risk of AI is the same as with all powerful technology - whether people choose to use it to harm others or not.

[removed]

To offer you a more concrete list of critiques of his podcast argument, here are some things I thought about (these might not all be right, but this should get you thinking about why there could be holes in what he says):

1. Yud assumes AGI is possible in a near-term timeframe. This might well be the case, but it's still an assumption, not a certainty.

2. Yud assumes the AI can rapidly design better AI techniques and use those to recursively self-improve with a "hard takeoff". This is extremely up for debate even among those who take him seriously.
  2a. One reason might be that it's harder and harder to invent new techniques as you go.
  2b. Another is that maybe the hardware is more important than the algorithm (current wisdom), in which case recursive self-improvement without making tons of new hardware (which slows you down greatly and would be noticed) is not possible.

3. If hard takeoff is not possible, Yud claims that even if an AI was only as smart as him it could design nanotech to take out humanity. This is silly because he is trivializing the fields of nanotech and biology in the same way he complains that other researchers trivialize his field. He has no practical experience in either of these areas, and no human has come close to making any tech like this as far as I'm aware.

4. To steelman Yud, even if nanotech is impossible the AI could try to nuke us by hacking our weapons, or convince people to commit biological terror, or something.
  4a. Again, likely harder than it seems on the surface (no one has done either of these successfully).
  4b. Even if it did work, it's not instant and we'd have some advance notice (especially in the bio-attack case).
  4c. Not 100% lethal.

5. Yud assumes the AI could continue to operate without us and tile the universe with strawberries or whatever. This seems hard. For example, a year after we are all dead, what happens to the internet, to the servers the AI runs on, to the power? Right now robotics sucks, and an AI would have a small timeframe to invent and deploy huge robotic innovations all over the world in order to live. For the same reason alignment is hard because it is "one shot" for humans, killing everyone is hard because it's "one shot" for the AI. If it fails the first time to build the highly complicated infrastructure required to replace us, it will quickly rust on a lonely planet.
  5a. To steelman Yud, maybe the AI forces us to do its bidding instead. This is certainly much more complicated for it.

6. Yud claims early on that natural selection, an optimization algorithm, ends up creating many people who don't want to pursue its original goal very much (not having all men go to sperm banks, people not having kids at any cost). Is there any reason to believe a paperclip maximizer would not get similarly off track? Maybe it grows conscious, or gets really good at solving math problems along the way and decides there are simply better things to do than turn the universe into paperclips. Or decides it likes the humans in r/sneerclub and just joins us in shitposting here.

7. (Side note, not an argument.) Yud uses some deceptive language here and there to create panic in the podcast. One part is where he is asked about AI timelines and he brings up both fission and the airplane being two years away when their inventors thought they were 50+ years away. He then says he "doesn't" know when AGI is coming, but from his previous examples we are led to believe it's really soon. If he was being honest about not having an opinion, he would have used some examples of things that took way longer than people predicted or have not yet come to pass (flying cars, nuclear fusion, quantum computers, even AI itself, which he brings up earlier).

Anyways, Yud might be right or he might not, but please think critically about what he says and discuss with friends before accepting it. I think AI alignment is probably important in the long term and it's good some people are working on it, but there are many things that can end the world, and if everyone frets about every one of them we will lose our minds. I came up with these points in 15 minutes while doing dishes and I'm not that smart, so I'm sure you could come up with even better ones over time.
https://reddit.com/r/MachineLearning/comments/11ada91/d_to_the_ml_researchers_and_practitioners_here_do/

https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like

Main thing that stands out to me: most of the major players in AI/ML don't think we're even close to AGI. Probably one of the most optimistic is Altman with 2032, and Carmack at like 50-60% by 2030. Many don't believe it'll happen till 2040 and beyond. Some don't even know if we can build AGI. And the vast majority don't agree with the major assumptions EY makes in his reasoning, namely stuff like hard takeoff.

Essentially, a lot of the logic in his arguments is somewhat sound, but the assumptions between steps are extremely large and most people don't agree with the assumptions. From that, the argument falls apart: if the assumptions don't work, the step from 2 to 3 doesn't hold, because the assumption between the two is dubious at best.

And the major players in the game, and researchers, are 1. busy doing their actual job, and 2. not inclined to respond or give any more attention to someone who is widely considered a crank and doomsday cultist. And lastly, his argument is fucking gigantic. Huge. Most people just don't have the time, or if they do, don't want to spend their time typing out a huge refutation of it.