r/SneerClub archives
Yudkowsky drops another 10,000 word post about how AI is totally gonna kill us all any day now, but this one has the fun twist of slowly devolving into a semi-coherent rant about how he is the most important person to ever live. (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)
153

Extreme TL;DR, so I’m just going to post a few highlights from the last few paragraphs where he starts referring to himself in the third person here:

I figured this stuff out using the null string as input, and frankly, I have a hard time myself feeling hopeful about getting real alignment work out of somebody who previously sat around waiting for somebody else to input a persuasive argument into them. This ability to “notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them” currently is an opaque piece of cognitive machinery to me, I do not know how to train it into others.

Reading this document cannot make somebody a core alignment researcher. That requires, not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you; that is what makes somebody a peer of its author.

The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so. Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn’t write, so didn’t try. I’m not particularly hopeful of this turning out to be true in real life, but I suppose it’s one possible place for a “positive model violation” (miracle). The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that. I knew I did not actually have the physical stamina to be a star researcher, I tried really really hard to replace myself before my health deteriorated further, and yet here I am writing this. That’s not what surviving worlds look like.

In this non-surviving world, there are no candidate plans that do not immediately fall to Eliezer instantly pointing at the giant visible gaping holes in that plan. Or if you don’t know who Eliezer is, you don’t even realize you need a plan, because, like, how would a human being possibly realize that without Eliezer yelling at them?

This situation you see when you look around you is not what a surviving world looks like. The worlds of humanity that survive have plans. They are not leaving to one tired guy with health problems the entire responsibility of pointing out real and lethal problems proactively.

>I knew I did not actually have the physical stamina to be a star researcher, I tried really really hard to replace myself before my health deteriorated further, and yet here I am writing this. I've wondered how EY would adapt to the narcissistic crisis of failing to single-handedly bring about techno-utopia by being very clever. Didn't think I'd get to find out for another decade or two, but it looks like we're already there: the narrative is "I burned too bright and so ruined my body and mind, no others are great enough to take up the torch, and now the world is doomed". No lessons have been learned. I say this without any guile: I'm concerned by the perverse incentive for him to become "more unwell" (e.g. neglect basic maintenance of mental/physical health in the name of the Great Work), because that would give him a license to be less successful, without needing to experience any narcissistic injury. Hopefully he ends up in a holding pattern which is a bit less self-destructive than that? (Hopefully I end up with weird parasocial attachments to Internet celebrities which are a bit less self-destructive than this?)
Well that narcissistic crisis itself is an awesome evasion of the crisis involved in having been hired to write a trading bot or something like that and failing (escalating the failures to a new programming language, AI, and then friendly AI). The reason they go grandiose is that a failure at a grander task is a lesser failure, in terms of self-esteem crisis. So on one hand he gets this failure with a "but at least I'm the only one who tried" attached, instead of just trying his wits at seeing some actual fucking work through to completion and finding out that it's a lot harder than he thinks, if it's outside the one talent he has (writing).
>escalating the failures to a new programming language, AI, and then friendly AI). Don't forget Arbital.
> the one talent he has (writing) Writing is....a talent he has?
Well, he managed to amass a bunch of followers by writing fiction and various bullshit, so I would say he has at least a bit of a writing talent. He could probably write for a living, but not any other normal job (excluding convincing people to just give him money, which isn't really a job).
He's clearly a great fantasy sci-fi writer. The trouble is people take it seriously. Including Eliezer.
Solipsism, but about being worth anything instead of existing. I've known quite a few narcissists in my life, but they were the personification of humility compared to this absurdist megalomania. Wtf.
If your schtick is writing down original, insightful things, and you're quite successful at it, you build a whole identity and career on it. What on Earth are you going to do when you run out of major insights? You become Tom Friedman. Many such cases.
My understanding of Tom Friedman is that he started not on original or insightful things, but on covering himself in the glory of made up war stories and the borrowed work of more competent war correspondents.
[deleted]
It gets really fucking bad. Even on the SSC subreddit (which is otherwise mostly outside the blast radius for the sci-fi stuff), you'll come across about one young man per month with a full-blown obsessive anxiety disorder over AI, and you'll see other users giving them unhelpful advice like "just develop stoic detachment over the looming end of the world, like I did". I hate it.
I've never really gone on the SSC subreddit, but holy fuck I was nearly one of those guys a few weeks ago. To tell you the truth I'm still pretty spooked by many of the ideas, but I'm cognisant now thanks to all of you that fear of a Yudkowsky-esque AI and all the assumptions it involves is essentially irrational. I'm so fucking thankful I found Sneerclub early on instead of continuing down the path I was going. And I worry about other guys like me who're just discovering this stuff. I cannot tell you how quickly now any internet reading you do on "risk from AI" converges to LW and Bostrom.
What are the posts or comments that convinced you? If you are ever bored I think it would be a good idea to write the counterarguments and post them here, on r/slatestarcodex and on LW (even if there is the chance they might ban you). Edit: why was this downvoted? people have explained to me that the subreddit is about disliking rationalism instead of giving good arguments, but don't you think it would be valuable?
Here, have a link that addresses how the rationalist faith in science is fueled by a terrible naivete about how any real science is done: https://www.reddit.com/r/SneerClub/comments/8sssca/what_does_this_sub_think_of_gwern_as_understood/ As for better in depth critiques of why the race/iq stuff the rationalists like to slobber over is wrong, Agustin Fuentes has a good book called Race, Monogamy, and Other Lies They Told You, and probably some decent blog posts if you don't want to commit to a book. Francois Chollet, a Big Cheese in the world of AI, has a good, digestible blog post about the implausibility of the singularity: https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec > but don't you think it would be valuable? Pretty much everyone on this sub was already convinced one way or another and we're not interested in growing the sub. I for one view the fundamental frivolity of this sub as an attack on rationalism, not the ideological content per se, but against the peculiar emotional makeup it requires--a dire and tedious self-important earnestness. Now, the real question for you is, don't you think it's valuable to have a space where people are just allowed to be, and they're not obligated to use their Powers for Good all the damn time?
I am not that convinced by the Chollet article (even less so by anything EY posts). I found his argument that no general intelligence can ever exist rather flimsy, insofar as there is even a discernible argument. Granted, I have no technical expertise to evaluate the implications of the no-free-lunch theorem, but this section of the article is too hand-wavy for me. I mean, we know from psychology that intelligence is best conceptualized as the general factor g. Even if this g is not as general as Chollet defines "general intelligence", I wonder why that would matter if there is a more exclusive concept of intelligence that can still improve upon itself.
We do not "know from psychology that intelligence is best conceptualized as the general factor g". G is a myth. http://bactra.org/weblog/523.html
Thanks for the reply, but I have to disagree with you here. G is widely accepted, in a branch of psychology least affected by the replication crisis. For a construct with zero truth behind it, funny how it yields the best predictions. Is it the final word? No, of course not. But it beats all of its competitors. In fact, g is so well supported that none of the untainted IQ researchers even bothers to reply to Shalizi. Therefore: https://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/
lmaooooooooooooooooooooooo
Is this supposed to be an argument?
No, it's supposed to be a sneer. This is SneerClub.
I mean your argument was basically just that only racists disagree with Shalizi, therefore he's wrong
I agree that hair-splitting about whether an intelligence is truly "general" is kind of moot in the face of the standard arguments about ASI (Chollet's invocation of NFL was the one part of the article I thought Yud did alright pushing back against, even if his reasoning was quite long-winded), but the article also contained a whole lot of good points besides that one. Ones that, IMO, Yud wasn't able to very capably refute.
What do you think of Yudkowsky's reply to Chollet? https://intelligence.org/2017/12/06/chollet/
I'd need to go back and reread both Chollet's article and Yud's reply in full to give a proper analysis, but flicking through it to jog my memory, I have a couple of points. For instance, I remember the classic "AI will be as far ahead of us as we are ahead of chimps" doing a lot of heavy lifting at several points. This analogy just sort of feels like it could be true intuitively, but it just isn't. Here's why, explained by an actual member of the EA community: [https://magnusvinding.com/2020/06/04/a-deceptive-analogy/](https://magnusvinding.com/2020/06/04/a-deceptive-analogy/) . I mean, even consider, compared to the 16th Century people Yud talks about, how good our models of reality actually are. Something like the standard model isn't perfect, but it's INCREDIBLY good at describing a huge amount of things in the real world. Same goes for chemistry, evolution, etc. 16th Century people didn't even have phlogiston for God's sake. When faced with the fact that science has been progressing at a roughly linear rate for a while in spite of massive increases in resource investment, Yudkowsky simply said "we're doing science wrong" and we're only slowing down "because of bureaucracy". It couldn't possibly be because of complexity brakes, or anything like that. I remember him conceding that "some degree" of environmental experience is required for an intelligence to function in the world. But this is the same guy who thinks a superintelligence would be able to develop general relativity as a theory of spacetime after seeing three frames of an apple falling or some shit. Superintelligence is not magic. A superintelligence would be at least partially constrained by having to observe the world as it functions in real, human-scale time. Yudkowsky believes it would have to make barely any observations before it arrives at perfect models of the world it observes. This is not an idea that, to the best of my knowledge, is shared by many people at all in actual AI research, or even AI safety research. This next thing isn't actually a proper argument, but it's very much worth noting. Every citation he makes in his reply is of himself, except for one Wikipedia article. He has no outside evidence. But I guess there's no point in collecting outside evidence right? He's the only thinker on his own level, and there's no point even continuing the progression toward friendly AI now because there's no one on Earth who can follow in his footsteps. His repeated citation of the "Harmless supernova fallacy" on Arbital isn't actually a counterargument to Chollet. It's just him repeatedly going "but it COULD go bad, even if we have precedents for it". It's clear on a basic level that he and Chollet (along with a lot of other AI researchers) diverge on the issue of whether cognitive capability alone is enough to dominate the environment, removed from any actual environmental factors and pressures. He would then bring up the case of humans having eventually dominated their environment. But we did this over the course of tens of thousands of years, and against other animals that mostly (or at all) don't seem to even have a concept of self-identity. A superintelligence would be BUILT (i.e. constructed deliberately) by a society applying immense selective pressure to find a system that comports with our own values. A society that would, at least initially, be able to model and predict many of its behaviours with reasonable accuracy. Things can and will go wrong as we build these systems. 
But believing in strong AI doing bad stuff is one thing. Believing in a full-on FOOM scenario is another entirely. Look at this preprint from a guy at Berkeley using ACTUAL MATH (something Yudkowsky is very averse to, even in MIRI's own papers) to place some constraints on FOOM: [https://arxiv.org/pdf/1702.08495.pdf](https://arxiv.org/pdf/1702.08495.pdf) (Benthall is a fucking cybersecurity researcher!). You could make the argument that it's worth focusing on a worst-case scenario with this stuff if it's sufficiently likely, but the more reading I do of actual, credentialed experts in relevant fields like compsci, neuroscience, materials science, economics, etc., the less likely it actually appears to be. And then why should this issue take precedence over things like nuclear war or climate change? Look at what's happened in Ukraine over the last fortnight with the HIMARS systems and tell me that's not more pressing. I should note that I didn't agree with everything Chollet said. I understand that the NFL theorem doesn't really apply practically to a superintelligence. It would only need to be better than humans in the specific domain of thinking that humans are good at (and of course, there's some debate about whether this in itself is possible). But IMO most of his arguments held up very well in the face of Yudkowsky's response.
Very weak, he kind of misses the point entirely and just repeats "nuh uh, AI beat humans at go" and "nuh uh, humans are way better than apes" over and over again. The fundamental point that he doesn't address is that the pace of science and technology is not set by a growth in the speed of human thinking (which has not changed overly much in the last millennia), it's set by the growth of societal knowledge. And this growth is fundamentally un-foomlike, because it requires building stuff and looking at stuff, and doing rigorous experiments with specially built equipment. AI insights can speed up this process, but not infinitely.
For the record, it wasn't me who downvoted. Now I know based on your other comments here that you're part of the ratsphere, but I will take your question in good faith. Maybe you're afraid like I was. I don't know. First of all, to address "meanness" here, that's sort of the point of this sub. It's in the name. But "meanness" doesn't necessarily equate to baseless ad hominem. When someone makes the claim that they are the only person on Earth properly equipped to research a subject and that no one else is on their level, calling them an egomaniac, mean or not, is warranted as a legitimate critique of character. If this person has such an inflated sense of the importance of their ideas, shouldn't their entire system of reasoning be subject to intense scrutiny? It's worth pointing out these character flaws, because character flaws are often tied to broader bunk arguments. And Sneerclub engages with actual rationalist arguments plenty anyway. You'll find quite a few good examples in this thread alone. Maybe you won't recognise them though, because they're often quite funny and not written in the extremely dry and dense style that LW posters are used to. Sneerclub may just see what they're doing as poking fun at the ratsphere and not engaging with them seriously, but just by poking fun at them, they are actually engaging in good critique. As for posts that convinced me, I made a whole ass thread about it a few weeks back: [https://www.reddit.com/r/SneerClub/comments/uqaoxq/sneerclubs\_opinion\_on\_the\_actual\_risk\_from\_ai/](https://www.reddit.com/r/SneerClub/comments/uqaoxq/sneerclubs_opinion_on_the_actual_risk_from_ai/) And the good people here were very kind in giving me their arguments. Magnus Vinding (a member of the EA community) has some great essays countering many Yudkowsky-esque arguments wrt AI, and links within his essays to many more arguments and collections of evidence against it: [https://magnusvinding.com/2018/09/18/why-altruists-should-perhaps-not-prioritize-artificial-intelligence-a-lengthy-critique/](https://magnusvinding.com/2018/09/18/why-altruists-should-perhaps-not-prioritize-artificial-intelligence-a-lengthy-critique/) A casual stroll through the machine learning subreddit will tell you that most of the actual researchers in the field point blank don't buy many singularitarian premises. And in terms of engaging with actual researchers in these fields, that is honest to god the best thing you can do. As AGI researcher Pei Wang points out on LW itself: [https://www.lesswrong.com/posts/gJGjyWahWRyu9TEMC/muehlhauser-wang-dialogue](https://www.lesswrong.com/posts/gJGjyWahWRyu9TEMC/muehlhauser-wang-dialogue) "The “friendly AI” approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it." As was pointed out by someone in my thread before, you need to engage with some of the basic tenets of fields like compsci, neuroscience, etc. in order to understand why the assumptions Yudkowsky makes are so contentious. The actual researchers don't have the time or inclination to debate someone like Yudkowsky. He likes to present all of the death scenarios as resting on facts rather than long strings of assumptions about topics he has a minimal technical understanding of.
It's easy to get sucked into the whole world of Yudkowsky because at face value, when you don't have any training or knowledge of the fields he expounds on, it SEEMS like everything he says is well supported and logically consistent. When you step outside of the bubble he has very carefully crafted (involving its own community/ideology, its own jargon, a whole echo chamber of self-citation), you see that many things don't hold up. Even other alignment thinkers like Rohin Shah who frequent Lesswrong aren't pessimistic like Yudkowsky. Stuart Russell, one of the few legitimate AI researchers in on all this stuff, has actually said he thinks we'll solve alignment (can't find the exact article where he says this, will have to look a bit harder, but I remember being glad about reading it). I mean, bare minimum, MIRI's dismal output in the 22 years it's been functioning should tell you that the approach they're taking is fucking useless. Alignment will actually be solved by a combination of practical security approaches like CIRL, and regulatory/social frameworks like the EU AI Act (which isn't enough, but it's a decent start in a field that up until now has precisely 0 regulation). Even Stephen Hawking, one of the big names always cited in support of Bostrom's Superintelligence, believed that inequality from capitalism is a bigger threat to future humans than robots: [https://en.wikipedia.org/wiki/Stephen\_Hawking#Future\_of\_humanity](https://en.wikipedia.org/wiki/Stephen_Hawking#Future_of_humanity) The whole thing reminds me of K. Eric Drexler and grey goo, except this is even worse because Drexler wasn't a raging egomaniac and had at least some credentials in his field. Nobel Prize-winning Richard Smalley delivered several famous takedowns of Drexler's conception of nanomachinery, which were widely backed by materials scientists in general. Even Drexler eventually conceded that a grey goo scenario was unlikely. You know who one of the only people who disputed Smalley's arguments was besides Drexler himself? Fucking Ray Kurzweil. No surprises there. And given how many of Yudkowsky's "failure modes" rely on Drexler-style nanomachinery being possible, where does that leave many of Yudkowsky's doomsday scenarios? Overall, I'm glad that there are researchers at DeepMind and such working on alignment. Or that orgs like Anthropic AI exist. Because they're doing actual, empirical work on practical alignment and interpretability, with measurable, implementable progress. On the other hand, I think MIRI has sucked up far too much money with negligible progress to show for it. And that is reflected these days in the actual investment these companies receive. Anthropic raised hundreds of millions in its series B funding round. Versus MIRI, who aren't even viewed favourably by GiveWell, a flagship org of an ideology (EA) that MIRI has started to cannibalise. Alignment and the control problem are one thing. The singularity and FOOM are something else entirely. Understanding this distinction is a big first step.
I think it makes more sense to write anything serious, lengthy, footnoted, etc., on a different site, not least because finding old comments on Reddit is a pain.
There's nothing spooky about someone smarter than you. It's an opportunity for greater wisdom.
and if you just want an excuse to do that, climate change and capitalism are *right there*
Late stage capitalism, climate change: I sleep The plot of The Terminator: real shit
Right why are these guys so obsessed with AI when there are actual existential problems looming? AI isn't happening, it's just a buzzword for algorithms that let corporations be racist and then excuse it as "just math yo." But anthropogenic climate change is literally already underway.
> other users giving them unhelpful advice like "just develop stoic detachment over the looming end of the world, like I did" I mean, this is half of how people cope with much more real shit like climate grief, so... Although in that same breath, I guess it's less "detachment" and more "acceptance".
grief is a feeling based on actual loss. the proper way to deal with a feeling based on an absurd scenario that is not going to happen is realizing it is absurd and never going to happen.
Yeah, I think that's actually a good tip. If you look at some trends in the rationalist community such as transhumanism, cryonics, fears about x-risks, I think a lot of this can be explained by a very high anxiety towards death. I think this applies to society in general, since it looks like we are collapsing on multiple fronts (climate change mainly), and there is not much an individual person can do besides accepting it. Sure, if AI risk is bullshit you can start from there, but it's not something easy to argue, and there are other things that pose a big risk to humanity and people will be anxious about that too.
> Rather than putting their energy towards the real problems in the world Let's be honest, what they'd put their energy to instead is optimizing online ads or some such.
>that is what makes somebody a peer of its author. Holy shit, what is wrong with this dude. Not only has every one of his scenarios been written up by the appropriate experts (science fiction writers), basically every one of them has been discussed at length by actual researchers. Eliezer over here smearing shit on the wall and- wait, I feel like I've already written this comment before, shit I must be in a simulation
I wish I read these comments before I read the OP article. Would’ve saved me some brain damage.
Jesus fuck lol. If AI god was real, they would reconstruct Narcissus to display as example of humility for this fucking guy. > That requires, not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you; that is what makes somebody a peer of its author. 😭😭😭
>They are not leaving to one tired guy with health problems the entire responsibility of pointing out real and lethal problems proactively. "real" is doing a lot of work here.
Bah! These fools do not understand the true genius of ~~Doom's~~ Eliezer's plan! They do not *deserve* to be saved from themselves by ~~Doom~~ Eliezer!
Do you think he’s ever had sex
we are cursed with that knowledge.
Yes, unfortunately like most other cult leaders, Yudkowsky fucks.
Who cares?
Most of the worst people I know have had sex, along with most of the best people. It's kinda like drinking water, in that I assume everyone does and don't really think about it.

I think someone on /r/slatestarcodex said well about this post:

“this has the feel of a doomsday cult bringing out the poisoned punchbowls”

I really really hope that people can see the cultishness of this, I really really hope the so-called rationalists can see this is just one very weird guy’s view, who has a vested interest in getting as much money, time and energy from his followers as possible. Probably not, seeing how much this was upvoted on Less Wrong.

Jonestown is an apt example, because Jim Jones started out as a sincere guy who wanted to fix the world and was ultimately broken by the monumental impossibility of that task. That's my take on him anyway. I'm not trying to minimize the awful things he did and caused, but he started out as an anti-racist activist in a time when being anti-racist did not win you friends or acclaim. Of course it takes a megalomaniac to think you can fix the world and have a psychotic break when you can't; I'm not excusing him. But there's a level of pathos to it.
[deleted]
I'm giving him zero credit. A few years of activism doesn't make up for Jonestown lol

Huh. So here’s a specific critique:

Somewhere in that mass of words, he links to a made up dialogue about the security mindset in programming. In that dialogue, he asserts that secure programming is an entirely different method of thinking than normal programming, and that people who can do it are somehow special.

But that’s bullshit. Take his example of password files. We’re getting better at protecting passwords by improving our ability to imagine hostile situations and building better tools. If it was just about having the right mindset, someone would have had the mindset in the 80s and we’d never have had unencrypted passwords protected by file system permissions.
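To make the "better tools" point concrete, here's a rough sketch of what replaced plaintext-passwords-behind-file-permissions: a salted, deliberately slow hash, so that even a leaked password file doesn't hand over the passwords. This is only an illustration in Python's standard library (3.6+ with OpenSSL's scrypt); the function names and cost parameters are mine, not anything from Yud's dialogue or any particular system.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using a memory-hard KDF instead of storing plaintext."""
    salt = os.urandom(16)  # unique per password, so identical passwords hash differently
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

if __name__ == "__main__":
    salt, digest = hash_password("hunter2")
    assert verify_password("hunter2", salt, digest)
    assert not verify_password("hunter3", salt, digest)
```

None of this took a mystical mindset to invent; it's decades of incremental tooling built after people imagined (and suffered) the hostile cases.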

He also has this weird stuff about how the range of inputs is huge and we can’t imagine what might be in it. This is true. This is why security researchers invented fuzzing, a programmatic technique to generate unpredictable inputs to your systems.
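And fuzzing is a learnable tool, not a gift. A toy sketch of the basic idea, with a hypothetical `parse_record` as the target (real fuzzers like AFL or libFuzzer are coverage-guided and far smarter than this; this is just the naive version to show there's no magic involved):

```python
import os
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical target: a toy parser that trusts its input a bit too much."""
    name, _, value = data.partition(b"=")
    return {name.decode("utf-8"): int(value)}

def fuzz(target, trials: int = 10_000) -> list[bytes]:
    """Throw random byte strings at `target` and collect every input that makes it blow up."""
    crashes = []
    for _ in range(trials):
        blob = os.urandom(random.randint(0, 64))
        try:
            target(blob)
        except Exception:
            crashes.append(blob)
    return crashes

if __name__ == "__main__":
    bad_inputs = fuzz(parse_record)
    print(f"{len(bad_inputs)} inputs crashed the parser")
```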

This means that Yud’s a bad observer and has a tendency to assume magical powers when it’s just incremental progress. If he’s so wrong about secure programming that I, a technologist but not a security engineer, can see his errors… is he more or less likely to be wrong about other fields?

> he asserts that secure programming is an entirely different method of thinking than normal programming Linus Torvalds has long argued there's no difference between good programming and secure programming, and that being security-focused results in bad code. On the other hand, Yud. So who can say.
> Linus Torvalds has long argued there's no difference between good programming and secure programming, and that being security-focused results in bad code. I am a security researcher and believe Torvalds is wrong -- and I think this is the mainstream view among cryptographers and security people. It's clear why Torvalds is motivated to argue this: it's very difficult to come up with a sane way to handle explicitly-marked "security" bugs in a project as transparent and decentralized as the Linux kernel. But the claim that e.g. displaying the wrong text to a user is categorically the same as reading past the end of a buffer is just wrong. I think the GP here got to the point of why Big Yud has gone wrong here: > he asserts that secure programming is an entirely different method of thinking than normal programming, and that people who can do it are somehow special. It's such a strangely written dialogue. In the first couple paragraphs it (correctly) describes a security mindset as one where you consider adversarial inputs rather than trusting common cases (even overwhelmingly common cases). Related to this is a mental habit of trying to break things, such as in his Schneier anecdotes about abusing a mail-in-sea-monkey protocol to spam sea monkeys at people. But Yud explicitly says this **and then** spends the rest of the essay arguing that it's impossible to teach anybody this. Maybe I just don't understand the point he's trying to make, but it doesn't seem like he's argued it effectively.
> I am a security researcher and believe Torvalds is wrong -- and I think this is the mainstream view among cryptographers and security people. You can see how this is a biased sample, right? You listed everyone predisposed to disagree with Linus. All stablehands agree, these new cars are bad for society. > But the claim that e.g. displaying the wrong text to a user is categorically the same as reading past the end of a buffer is just wrong. Linus' first point (there's no difference between good programming and secure programming) is that a buffer overrun that's a security concern may be higher priority to some users, but it's the same technical issue as a buffer overrun that just corrupts data - a bug. The solution is the same: fixed code. And robust fuzz testing. His second point (being security-focused results in bad code) means rejecting pull requests that, for example, make the kernel panic when they think a security breach is happening. Security people defend turning a bug into a crash because it prevents a data breach, and Linus calls them names. IMO there are reasonable arguments on both sides: if the data is valuable enough, a user might prefer a kernel panic even if it's likely not an intruder, while another user prefers maximising performance and stability. ...And then there's Yud who makes none of those reasonable arguments, instead claiming some people have the Gift of God, chosen to deliver good code from the mountaintop.
Maybe his argument is sposed to be that normal security researchers wouldn't consider a wide enough range of contingencies? Like they wouldn't consider all the things an ASI could possibly do or something? Like socially engineering the programmer after it's been turned on to unwittingly implement exploits in the code somehow, etc. I'd guess this is probably his issue even with other alignment researchers; that they're supposedly not FULLY considering every action that an ASI could take.
This is an old comment but still. Torvalds is just wrong here. He is not willing to consider that sometimes there are conflicting goals, and you have to sacrifice efficiency for safety. Torvalds thinks performance is the only goal worth pursuing. He is overly dogmatic and has nothing to back up his dogmas, whereas security features have demonstrably been beneficial.
> Torvalds is just wrong here He's definitely not "just wrong". At worst he has a point, and I think he's more right than wrong. He's also clearly not an absolutist against security, since hardware security bug mitigations were merged without controversy despite serious performance degradation, and AppArmor was also accepted into the kernel. And, as the user, you can choose to disable these if you don't need them. As I said above, > IMO there are reasonable arguments on both sides: if the data is valuable enough, a user might prefer a kernel panic even if it's likely not an intruder, while another user prefers maximising performance and stability. > ...And then there's Yud who makes none of those reasonable arguments, instead claiming some people have the Gift of God, chosen to deliver good code from the mountaintop.
It's not like he had an option there, but it totally obliterated his point, or what little there even was to be obliterated at that time. Not all bugs or design choices have the same security impact. This is a fact, not an opinion. Claiming that they all deserve similar treatment is idiotic.
> Not all bugs or design choices have the same security impact I don't think that was ever his point? He's talking about bug fixes, not bug priority.
Lol the guy doesn't think a security related bug is even a valid concept. Or at least that was his opinion last time I read his rambling on the subject, where he literally called people "f*cking morons". He may have less strict and stupid stance now, who knows.
"a bad observer" one has to question if he is in fact observing anything external to his own thoughts
It's the same kind of tunnel vision that gave us Bitcoin lmao
Well, you see, human brain is a universal learning machine capable of learning novel tasks that never occurred in the ancestral environment, such as going to the Moon. However, normal people can never hope to learn Eliezer's unique cognitive abilities, you have to get born with a special brain. It used to be about "Countersphexism", now it's about a security mindset, but the bottom line is always that it cannot be taught, so [Eliezer is the only one who can save the world](http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html#plans_change).
Your comment here is the only one that directly addresses his argument, and basically you're saying "we'll just think really hard about possible hostile situations to stay smarter than the AI"
No, I’m not saying that at all. I am saying that Yud has demonstrated an inability to accurately assess how secure programming works, and this leads me to be dubious about his ability to assess how AI programming works.
> basically you're saying "we'll just think really hard about possible hostile situations to stay smarter than the AI" Isn't this exactly what Yudkowsky and MIRI have been taking peoples' money to do for years now?
I would respond but I'm probably already on thin ice in this subreddit and don't want to get banned lol
> Your comment here is the only one that directly addresses his argument This is sneerclub, not actually-address-argumentsclub.
What argument? The man does not provide any evidence for his massive pile of unsubstantiated assumptions and claims. All he does is respond to every suggestion (box the ai, monitor it, fight it) with a plausible-sounding hypothetical science fiction story where the AI wins, and states that because the computer is really smart it will do that. If you point out that one story is utter bollocks (like the nonsensical idea of mixing proteins to produce a nanofactory), he'll just come up with another one.
[removed]
fine fine, I'll stop commenting

It’s fucking crazy that out of 37 different arguments, he only dedicates a single one to how the AI would actually pull off its world destruction. And it’s just another science fiction short story:

My lower-bound model of “how a sufficiently powerful intelligence would kill everyone, if it didn’t want to not do that” is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they’re dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth’s atmosphere, get into human bloodstreams and hide, strike on a timer.

Now, this is somewhat nonsensical (The AI persuades people to mix proteins in a lab, which then suddenly becomes a nanofactory connected to the internet? What?) But it’s important that readers take “impossible to stop an AI” on faith, because otherwise there would be social, political, military, cultural solutions to the problem, instead of putting all our faith into being really good computer programmers.

A prime example of how the whole agi will destroy the world idea is based on a long list of linked (crazy science fiction) assumptions.
The sci-fi stories work out very well for him. In order to properly *prove* that each step is bogus, you need to have expertise in many different subjects (in this case molecular biology and nanoscience), but in order to make up the story, you just need to be imaginative enough to come up with something plausible sounding. If we poke holes in this chain, they'll just come up with another one ad infinitum.
Yes, and there is the whole 'if you are wrong all of humanity dies! I'm just trying to save billions (a few other billions are acceptable casualties)!' thing.
It'll probably turn out that Eliezer sees himself as the Captain Kirk who matches wits with the doomsday AI and, using a logic puzzle, causes the doomsday AI to short-circuit and crash. And nobody else would be smart enough to do that.
I've long said that Terminator, complete with skynet and time travel bubbles is more realistic than the kind of crap these people come up with.
in the abstract, there's a tipping point and at a certain level of capacity, the minute you pass a tipping point you're at the maximum, and the entire universe is paperclips. reality offers more checkpoints beyond the tipping point.
not really. the core argument can be boiled down to "ASI is poorly understood territory, seeing as we've never had the chance to study one, and will have a large impact on the world" poorly understood, but large impact is a scenario where there's lots of room for things to go pear-shaped
Nice bailey there.
I'm not yudkowsky, I'm just making sure laymen don't think AI safety is just some dumb sci-fi crap.
> I'm not yudkowsky [Prove it.](https://youtu.be/SAlBWw6lnvo?t=7)
I'm a molecular biologist and what is this
rubbish blood music
That novel blew me away when I read it in the late 80s.
Well if you ever receive an unsolicited package of sketchy proteins with an attached note "fellow meatbag, please mix these together, beep boop"... DON'T DO IT.
> This ability to "notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them" currently is an opaque piece of cognitive machinery to me, I do not know how to train it into others. >bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker It's telling here how the greatest proof of an AGI being a true AGI is its ability to engage in persuasion, something that Yudkowsky apparently has not successfully done. His own failure to attract people to his doomsday cult becomes itself an argument for the correctness and threat of his doomsday scenario.
He's really good at self justifications, isn't he?
His mind is a totally closed system, there is no possible way to extract him from himself
> His own failure to attract people to his doomsday cult I would not bet money that he couldn't accumulate 1.29 Heaven's Gates by wooing the most zealous 1% of the LessWrong peanut gallery.
Woah woah woah, slow down there. EY isn't a programmer.
So, the most plausible outcome is germs that shit diamonds.
his "lower bound" model....yeeeesh
seriously, if the best "lower bound" model he can come up with after 2 decades of thinking about it involves 4 or 5 steps of implausible sci-fi gobbledygook then I think humanity is pretty safe.
I mean shit an AI that uses the internet to shut down all networked devices and throws us into the stone age involves less steps and is pretty much just as terrifying.
Oh, there are plenty of plausible scenarios where a "misaligned" AI gets a lot of people killed. Arguably a terrorist being radicalised by the youtube algorithm would satisfy this definition already. But that's not good enough for them, they need AI scenarios where it kills *absolutely everyone* in order to mathematically justify donating to places like MIRI.
Perhaps the complexity is a psychological necessity. If his "lower-bound model" involved fewer steps, it would sound like a movie that already exists, and people would ask, "Wait, are you just doing *WarGames*?" (or whatever). Rather than each step in the chain being a possible point of failure that ought to lower the scenario's probability, they instead make it more captivating by pushing the "nanomachines are cool" mental button.
Tragically, it's the other way around, so we're not even getting nice rocks out of the apocalypse.
It genuinely amazes me that anyone takes this shit seriously. If I didn't know better, I'd think he's trolling and seeing just how much he can get away with in front of his audience.
I enjoyed his fanfic and now know and interact with a lot of people both online and IRL who are a part of the "rat sphere". I'm autistic and it's a good way to meet other people on the spectrum. People dead seriously believe all of this. I get dog piled in arguments fairly regularly about this stuff. I actually witnessed mental breakdowns when he posted the doom post before this one. People are straight up going through mental anguish over it all. It pisses me off.
every cult leader, or plain old bully with an entourage, seems to find their way to the "unreasonable test of loyalty" stage so reliably that I think it might be an unconscious instinct for them instead of a devious plan
Could be that we all say obviously dumb things from time to time but only very manipulative people get others to go along with it.
I can easily see how you can bribe a human being who has no idea they're dealing with an AGI to mix some proteins in a beaker. Then all they have to do is hand it off to a very powerful sorcerer who will cast a magic spell. it's scarily plausible
Meanwhile the AGI that does all that is one that "does not want to not do that". It's just a by-product of achieving a completely different goal. Essentially if we write an AGI in order to minimize pollution in NY, but forgot to add the "and don't kill any NY citizens in order to do it". Oh but wait, even if we do add that line, then it will just lobotomize everyone so they stop polluting, right?

you really have the title game down pat, as I read it I could physically feel my interest in going to the linked post drain into nothingness

okay I succumbed. but only to look at the comments! we have someone responding to another comment that disagrees by citing their 'model of Eliezer'. >My model of Eliezer claims that there are some capabilities that are 'smooth', like "how large a times table you've memorized", and some are 'lumpy', like "whether or not you see the axioms behind arithmetic." While it seems plausible that we can iteratively increase smooth capabilities, it seems much less plausible for lumpy capabilities. just say 'I made it up' i swear to god
Ah shit my capabilities are getting lumpy again
bran, son, bran
That's what happens when you get bit by LSP. [It's, like, werewolf rules](https://adventuretime.fandom.com/wiki/Lumpy_Space_Princess/Quotes).
time to bust out The Mental Iron and smooth things out again
Love me some smooth-brain thinking lol

Honestly it must be very depressing to be Eliezer Yudkowsky

[deleted]
You know, that's a huge twist. The basilisk doesn't even need to simulate these people in order to torture them!
It's hard for narcissists to be happy... there's always someone who isn't respecting them enough.

It’s hilarious that this document that’s ostensibly supposed to be a summation of his thoughts on this issue contains barely anything resembling a citation, and barely even any links to other writings or articles that might support his arguments. I guess the implication is that if you’re going to take the time to read this, you already agree with him or you don’t, so there’s no real reason for him to try and include any actual evidence for any of his claims?

it links to his ai box post what more do you want
Call me crazy, but it seems to me like if you're going to say things like: > This happens *in practice in real life*, it is what happened in *the only case we know about*, and it seems to me that there are deep theoretical reasons to expect it to happen again it would help *a good deal* to replace your italics with actual links to *whatever the hell it is you're talking about*.
read the sequences
Even St. Augustine had the brains to explain himself instead of assuming everyone was as infatuated with his words as he was.
This section is not super well-organized but I'm pretty sure the "only case we know about" refers to the case of humans evolving to have values other than genetic fitness. The argument being that we are the "strong AI" that is "an existential threat to evolution" due to our misaligned values. He has a lesswrong post specifically about this ... I had to double-check, and sure enough he didn't bother linking it.
Yeah I think this is the "inner alignment"/"mesa-optimisation" problem
Yeah, he just starts with 'we all already agree on this, so no need to redo the discussions' madness.
What could he possibly link to? He already made it clear that no one else has created a similar post and that only someone who *could* make this post without any further input is his peer. Therefore, EY has no peer in this regard, or at least no peer that could think those thoughts *and* write the post. Therefore, there is nothing he could cite! Seriously though, I thought some parts of the post were interesting, and I'm not even saying he's wrong necessarily (idk), but the parts that appear to put himself on some kind of weird pedestal were off-putting.
Lil tricky to claim you're a peerless thinker with unique, special knowledge if you cite sources other than yourself. That might imply accountability to intellectual peers who could call you out on being wrong.

I suppose it’s at least true that none of the current major EA funders seem to be visibly in denial about orthogonality or instrumental convergence as such; so, fine.

Well this answers a previous question here at sneerclub about how linked EA (effective altruism for you drive by readers, not the game company) is to this whole AGI worry.

E: reading along, ow god i literally have sneers at every other sentence. And for the readers of this who are actually worried about AGI, here is a drinking game: every time Yud just assumes something (because he says so) which benefits his cultlike ideas, drink. Good luck making it past -1.

E2: And this ‘go full ted’ stuff is slightly worrying; really hope this doesn’t turn into “The LW Gang Does Stochastic Terrorism”. And a good sneer, this one.

simply destroy all graphics cards; following the extinction of g*mers, the ai will be able to see the parts of humanity worth preserving

*monkey paw curls* Hey hey hey, heard about this new investment opportunity called bitconnect? The newest bestest crypto!
do you want paperclip maximisers because that's how you get paperclip maximisers
a thousand cheeto-stained fingers stirred and began furiously tapping at their neon-glowing keyboards in response to this affront
*claps away the dorito dust* \*Ahem\* As an avid gamer I must say... ​ ​ *REEEEEEEEEEEEEEEEEEEEEEEEEEEEEE*
lol why did someone downvote this, this is funny? true gaimers can take a lil ribbing
Got reported for “ableism, gamers” I don’t even understand or care /u/naxospade
I... what? I don't even begin to understand lol... I was making fun of *myself*. Granted that may be hard to tell just from my comment.
It took me a while but I think the “ableism” was for “reee” which…well I dunno, and I don’t care
Agreed. You gotta be able to laugh at yourself. The part about being an avid gamer was the truth. And my keyboard does, in fact, glow! Hahaha

i started reading Yud’s post then realised i absolutely don’t have to do that

God I wish that was me

[deleted]

crypto is going down and there's not as much ETH on hand

Why don’t the AI doomsday people just use quantum immortality to save themselves and then send a message back from the timelines where they don’t die to explain how to align AI correctly?

quantum immortality just means there's more of you for the basilisks
I thought the Einstein Rosen bridge means you are actually the same consciousness across all the quantum nonsense but in fairness to me I spent less time following this because I don’t try to influence long term policy planning with my sci fi musings
I'm going with the extremely lay misunderstanding of quantum immortality which is that many worlds means every time anything happens in the universe it happens all possible ways and since you're conscious you will remain conscious and it'll really fucking suck as most of the infinite instances of you degrade to the shittiest state that qualifies I base this on (forgive me) greg egan's permutation city (I'm sorry greg you don't deserve this) and knowing that yud is really into many worlds and thinks any scientists who don't fully accept his take on it are wrong and probably lying.

If you don’t know what ‘orthogonality’ or ‘instrumental convergence’ are, or don’t see for yourself why they’re true, you need a different introduction than this one.

yeah I’ve seen those documentaries, I liked the penguin

If there’s a paperclip maximiser and, idk, a thumbclip maximiser, surely each would realise the other is the biggest threat to its optimisation problem and they’d fight each other? This is just one example of a kind of check on total destruction that I don’t see AI people considering.

To my eye, the problem with all these AI risk scenarios is that they proceed from pure thought and have little grounding in concrete reality. On paper, everything can be scaled up to infinity very fast. In reality, every system hits limits.

imo there’s too little thought on the limits that will keep AI in check.

A nice example of an ecosystem where there are lethal, replicating units that are more powerful than all the other units is the animal kingdom. And what you see is that the more dangerous an animal is, the fewer of them there are. The tiger has its territory, and it is lethal inside it, but outside it there are other tigers (and other apex predators) and they keep the tiger in check.

Why don’t lions eat all the gazelles? The answer is not clear to me, but I see that there are still a lot of gazelle herds. Something in the competitive dynamics keeps the apex predator in check. Viruses too - they optimise to become less lethal over time, because destroying what you rely on is dumb.

So my question is why are AI models built in domains of pure thought where nothing prevents worst case scenarios coming true, rather than being grounded in the real world of physics and ecosystem dynamics where systems remain in tension?

>If there's a paperclip maximiser and, idk, a thumbclip maximiser, surely each would realise the other is the biggest threat to its optimisation problem and they'd fight each other? This is just one example of a kind of check on total destruction that I don't see AI people considering. it would require both to be made and go superintelligent at the same time, since ASI can presumably grow quickly. and even then, "paperclip maximiser vs. thumbclip maximiser World War III" doesn't sound too appealing either
Pretty much this. I'm working on an AI program right now, it's incredibly frustrating that no one is actually presenting a practical implementation of how an AI would do any of this shit, because what I'm doing is incredibly hard and you run into limitations all the time. Like, at least give me some pseudocode. I bet even a simple limitation like bandwidth speed will be too difficult for AI to overcome, let alone intercontinental material factory annexation.

Without reading past your title, I’m not sure how that’s different from his usual output

he is the real messiah, lel. if only he could deliver anything of substance , give us a reason to belive in u King

I ain’t reading all that. I’m happy for u tho. Or sorry that happened.

For a guy who’s famous for writing Harry Potter fanfic, he has quite some self-regard.

To be fair, AI IS going to kill us all, just not like that. AI is already being used right now to elevate the noise to signal ratio of public discourse, and no civilisation can survive complete detachment from reality.

same as it ever was

What I do not understand is how a bunch of smart asses with massive chips on their shoulders flock around the most incompetent and most narcissistic of their bunch. Would they not search for “weaker” persons that heap praise upon them? If I want to be the smartest person in the room, I break into an aquarium at night.

I didn’t finish (edit: ok finally I did), I only got to the nanobots will kill us all idea before I couldn’t stand the manic style anymore. I’ll finish it later. So onto my specific critique about nanobots:

We already have superintelligent nanobots working very hard to kill us all off. We call them viruses and bacteria, and before modern medicine they regularly wiped out large swaths of the population. I can already anticipate his counterargument (which is something like how nanobots designed by a superintelligence will somehow be superior and wipe out 100% of humanity guaranteed, for reasons?) but at that point how is AGI (as he talks about it) any different from magic? It’s all a giant Pascal’s wager grift scheme cult at that point.

The human race itself is most closely similar to the super intelligence he’s so afraid of, and so by his own argument we’ve already beaten the alignment problem. We still might kill ourselves off but we’re basically aligned against it, we just need to focus on solving real problems like poverty, self-importance, inequality, climate change, narcissism, nuclear proliferation, yada yada. Cheers, fellow cooperating super AIs.

Edit: I finished reading his captain’s logorrhea, and man was it tedious and ever more incoherent as I went along. It reminded me of the tendency in anxiety-type mental illnesses (especially OCD) to make ever-longer causal chains of inference and be utterly convinced that every step in the inference chain is 100% correct.

I’m sorry is it possible to read this sentence:

Practically all of the difficulty is in getting to “less than certainty of killing literally everyone”.

without imagining the chonkiest nerd, spit-talking the most egregious amount of food/detritus possible whilst reaching the reddest hue available to human skin.

I’m looking to understand if this is possible to the amount no less than the slightest approximation of 1 likelihood of happening.

There's no need to body-shame and make lots of innocent fat folk feel bad when you could instead criticize someone for being a creepy culty alt-right-pipeliner grifter who enables sexual predators and is also really fucking annoying
If he has "a red hue" it's probably him redshifting away from this world and every remotely plausible issue we experience on it, not bc hes some shitty physical stereotype Im dissociating half the time and i still have a better grasp on reality than that man
Yes, that’s much funnier thanks

I was wondering if, and when, Sneer Club would notice this one!

Here comes my own rant, only a few thousand words in length.

A long time ago, I read a sneer against Heidegger. Possibly it was in “Heidegger for Beginners”, but I’m really not sure. The core of it, as I remember, was an attack on Heidegger for contriving a philosophy according to which he, Heidegger, was the messiah of ontology, helping humanity to remember Being for the first time in 2000 years. (That’s my paraphrase from memory; I really wish I had the original text at hand.)

In any case, the crux of the sneer was to allege Heidegger’s extreme vanity or self-importance - placing himself at the center of history - although he didn’t state that directly, it had to be inferred from his philosophy. And ever since, I was interested in the phenomenon of finding oneself in a historically unique position, and how people react to that.

Of course, the archives of autodidacticism (see vixra.org) show innumerable examples of deluded individuals who not only falsely think they are the one who figured everything out, but who elaborate on the social and historical implications of their delusion (e.g. that the truth has appeared but is being ignored!). Then, more rarely, you have people who may be wrong or mostly wrong, but who nonetheless obtain followers; and one of the things that followers do, is to proclaim the unique significance of their guru.

Finally, you have the handful of people who really were right about something before everyone else, or who otherwise really were decisive for historical events. Not everything is hatched in a collegial Habermasian environment of peers. In physics, I think of Newton under his (apocryphal?) apple tree, Einstein on his bike thinking about being a light ray, or (from a very different angle) Leo Szilard setting in motion the Manhattan project. Many other spheres of human activity provide examples.

Generally, when trying to judge if the proponent of a new idea is right or not, self-aggrandizement is considered a very bad sign. A new idea may be true, it may be false, but if the proponent of the idea takes pains to herald themselves as the chief protagonist of the zeitgeist, or whatever, that’s usually considered a good reason to stop listening. (Perhaps political and military affairs might be an exception to this, sometimes.)

Now I think there have been a handful of people in history who could have said such things, and would have been right. But as far as I know, they didn’t say them, in public at least (again, I am excluding political and military figures, whose role more directly entails being the center of attention). Apart from the empirical fact that most self-proclaimed prophets are false prophets, time spent dwelling upon yourself is time spent not dwelling upon whatever it is that could have made you great, or even could have made you just moderately successful. That’s the best reason I can think of, as to why self-aggrandizement should be negatively correlated with actual achievement - it’s a substitute for the hard work of doing something real.

I could go on making point and counterpoint - e.g. thinking of oneself as important might help a potential innovator get through the period of no recognition; and more problematically, a certain amount of self-promotion seems to be essential for survival in some institutional environments - but I’m not writing a self-help guide or a treatise on genius. I just wanted to set the stage for my thoughts on Eliezer’s thoughts on himself.

There are some propositions where I think it’s hard to disagree with him. For example, it is true that humanity has no official plan for preventing our replacement by AI, even though this is a fear as old as Rossum’s Universal Robots. “Avoid robot takeover” is not one of the Millennium Development Goals. The UN Security Council, as far as I know, has not deigned to comment on anything coming out of Deep Mind or OpenAI.

He also definitely has a right to regard himself as a pioneer of taking the issue seriously. Asimov may have dreamed up the Three Laws, the elder intelligentsia of AI must have had some thoughts on the topic, but I can’t think of anything quite like MIRI that existed before it - an organization whose central mission was to make AI “friendly” or “aligned”. Nowadays there are dozens, perhaps hundreds of academics and researchers who are tackling the topic in some way, but most of them are following in his footsteps.

I suspect I will be severely testing the patience of any Sneer Club reader who is still with me, but I’ll press on a little further. I see him as making a number of claims about his relationship to the “AI safety” community that now exists. One is that he keeps seeing problems that others don’t notice. Another is that it keeps being up to him, to take the situation as seriously as it warrants. Still another is that he is not the ideal person to have that role, and that neither he, nor anyone else, has managed to solve the true problem of AI safety yet.

I am also pretty sure that when he was younger, he thought that, if he made it to the age of 40, some younger person would have come along, and surpassed him. I think he’s sincerely feeling dread that (as he sees it) this hasn’t happened, and that meanwhile, big tech is racing lemming-like towards an unfriendly singularity.

To confess my own views: There are a lot of uncertainties in the nature of intelligence, reality, and the future. But the overall scenario of AI surpassing human cognition and reordering the world in a way that’s bad for us, unless we explicitly figure out what kind of AI value system can coexist with us - that scenario makes a lot of sense. It’s appropriate that it has a high priority in human concerns, and many more people should be working on it.

I also think that Eliezer’s CEV is a damn good schematic idea for what a human-friendly AI value system might look like. So I’m a classic case of someone who prefers the earlier ideas of a guru to his more recent ones, like a fan of the Tractatus confronted with the later Wittgenstein’s focus on language games… Eliezer seems to think that working on CEV now is a hopeless cause, and that instead one should aim to make “tool AGI” that can forcibly shut down all unsafe AI projects, and thereby buy time for research on something like CEV. To me, that really is “science fiction”, in a bad way: a technological power fantasy that won’t get to happen. I mean, enormous concentrations of power can happen: the NSA after the cold war, the USA after Hiroshima, probably other examples from the age of empires… I just don’t think one should plan on being able to take over the world and then finish your research. The whole idea of CEV is that you figure it out, and then it’s safe for the AI to take over the world, not you.

Anyway, I’ve run out of steam. It would be interesting to know if there are people in big tech who have a similar sense of destiny regarding their personal relationship to superhuman AI. Like Geoffrey Hinton the deep learning pioneer, or Shane Legg at Deep Mind, or whoever’s in charge at Facebook AI. But I don’t have the energy to speculate about their self-image and compare it to Eliezer’s… He’s certainly being indiscreet to speak of himself in the way he does, but he does have his reasons. Nietzsche called himself dynamite and ended up leaving quite a legacy; if we’re lucky, we’ll get to find out how Eliezer ranks as a prophet.

[deleted]
Where are all the equivalents to the cool early Christian heresies, then? How am I supposed to enjoy life if I can't be a Cathar Rationalist, hmmm?
There is already the Yud vs Scott split, the various (dead? hidden?) more far-right sects, and the whole array of weird twitter groups (for example the ones who thought Scott was too much of a nice guy to join them in their weird semi-fascist asshattery (he fooled them good)). LW already split into orthodox Yuddery and Catholic Scottism, and now there is the whole Anglican Motte. (E: [some evidence](https://www.reddit.com/r/slatestarcodex/comments/v5qmef/agi_ruin_a_list_of_lethalities_yudkowsky/ibd8j7a/) of my 'split' theory, dunno about the amount of upvotes for that one yet however.) They just have not started invading each other's places and burning each other's churches and holy books. Yet. I look forward to Yud's next 'the fact that our website was defaced shows we can never defeat AGI' depression post.
I stand corrected, then!
[deleted]
Code the Demiurge with your own two hands. Reach cyber-heaven by violence.
The EOF is ALMSIVI
It’s all there my man.
Pascals Wager. Heaven. Hell. All of it resembles not just Religion in general but monotheism and Christianity.
hey hey, don't forget tulpas! but, you know, anime
touhous
> I can't think of anything quite like MIRI that existed before it - an organization whose central mission was to make AI "friendly" or "aligned".

Sci-fi clubs have existed for generations, dude.
and had the same delusions of grandeur
> I suspect I will be severely testing the patience of any Sneer Club reader who is still with me

can confirm this post contains at least one accurate claim
That's mean.
See the sub’s title for more!
Isn't this just an anti-rationalist subreddit? Being mean in general doesn't seem like a good way of achieving your goals
What goals did you have in mind?
Convincing people that rationalist ideas are wrong or that it is somewhat cultish. If rationalism's main critics can't give serious non-mean counterarguments, that's a point in favour of rationalism. As someone that was "rationalish" even before discovering it, I come here to find mistakes in that kind of reasoning, but instead I mostly find people being mean.
> Convincing people

The stickied [rules thread](https://www.reddit.com/r/SneerClub/comments/91th1q/new_rules_for_sneers/) says quite clearly that this isn't a debate club. Why would you think this subreddit is a project of trying to convince people through Rational Argument?
It doesn't need to be a debate club, but if someone is wrong on the internet and you reply, it should be to try to convince them of the truth (or to get them to convince you), and that should always be done by giving rational arguments. If SneerClub isn't like that, it's an odd anomaly. And it is quite sad that this is the closest thing we have to a proper anti-rationalist subreddit. If rationalists are wrong and they can't tell they are wrong, who will tell them that? And if people here dislike rationalism, shouldn't they actually try to effectively make rationalism disappear? I mean, imagine that you had some namable ideology; wouldn't it be depressing if the only people against it didn't try to argue in good faith?
> if someone is wrong on the internet and you reply it should be to try to convince them of the truth (or to get them to convince you)

Why?

> should always be done by giving rational arguments

Why?
I know that different people have different values, but it seems very weird to be OK with other people believing wrong things. If someone is wrong, that is a part of the universe that you will eventually need to fix; it might have a low priority, but if you are going to reply you should at least try. And it might turn out that they are the ones who are right, and if that is important you should probably try to confirm it, otherwise you might make wrong decisions. And you should use rational arguments if you have the goals above, since that is the kind of argument that tends to arrive at the truth.
Okay, I've diagnosed your issue. The problem is you think rational arguments on the internet are how people change their minds and arrive at a better understanding of reality. But that's false in most cases. That's not how people actually work.
> it seems very weird to be OK with other people believing wrong things, if someone is wrong, that is a part of the universe that you will eventually need to fix

And here I've been for the last 14 years thinking [this xkcd](https://xkcd.com/386/) was a joke.
The art of ridicule has a rich history. You should look it up sometime
> seems very weird to be OK with other people believing wrong things, if someone is wrong, that is a part of the universe that you will eventually need to fix

why? people believing wrong things doesn't usually become an issue bigger than you having to see them being wrong. unless they're in large numbers and taking detrimental actions (like antivaxxers), trying to "fix" them isn't a priority
Here’s the deal: SneerClub didn’t start out intending to be your one-stop shop for anti-rationalist arguments and…wait that’s it, that’s the only thing that matters here. You’re in a cocktail bar complaining that it isn’t The Dome Of The Rock.
Alright, it's just weird that this exists and I thought it wasn't like that; the subreddit's description isn't very descriptive and my prior was too strong. And if I were in a cocktail bar, and I thought it wasn't futile, and I didn't have social anxiety, I would probably try to convince them to stop being a bar and become something else, but the world is like it is and that's sad.
People like cocktails, there’s nothing wrong with that: they taste good and they make you tipsy. Alcohol is practically or literally a sacrament in almost all of human culture. Even Ayatollah Khomeini wrote poems in the sufi tradition extolling the virtues of drinking wine, that’s how deep alcohol runs, and I say that as a teetotal recovering alcoholic with no qualms about the stuff except my own and others’ excesses. Conversation is the same, in fact it runs deeper: talking shit about things you don’t like is part of that. Nothing is sad about these things, you’re just disappointed the world hasn’t conformed to your own expectations, and I highly doubt you can enumerate a particularly compelling list of reasons that aren’t extended paraphrases of “I thought one thing would be x and it was y”.
More importantly: where did you get your prior from and why was it so strong?
Rationalist-adjacent people seem to be reasonable in general. I don't know any other place that explicitly disallows serious discussion in general. My model of other people tells me they would not want to do that; that model is based mostly on myself and on reading Reddit, which I know isn't representative of the general population, but it's the best I have.
I’m a reasonable person, I do other things.

A reasonable person logs onto the internet assuming that a username and its comments on one subreddit are not the sine qua non of the person posting under it.
[deleted]
Is it that weird? I just don't like people wireheading themselves with alcohol. I know ending bars won't stop that from happening, but it might reduce total alcohol consumption. Do you think the social benefits from bars are big enough to outweigh that? (If you cared as much about anti-wireheading as you currently guess I do)
“Wireheading” is a very specific term that comes from a very specific place with a bunch of very specific (usually negative) presumptions about the nature of adjusting one’s body chemistry.

Say what you mean in general language.
They are making a decision driven not by patterns of behavior reinforced by functions of sensory data, but by directly (that is the important part) affecting their brain's chemistry. The extreme version of that is electrically stimulating the brain's pleasure centers like they did with rats. Whether or not this is bad is obviously subjective, but I care about reality and the default way of assigning value, and I consider wireheading as extreme as drugs to be bad.
Just to remind you of something I mentioned really clearly earlier: I am a teetotal recovering alcoholic. I probably know more about this than you. Not only does drug abuse not exist on a spectrum from no external chemical input to directly stimulating the pleasure centre, but the chemical interactions which take place in the brain do not conform to any straightforward neuro-behaviourist model of reality -> illusion in any mammal *including in rats*. One example is sociality: do you really think you’re learning more about the world by sitting at home than by having a conversation over martinis?
I will have to think more about this topic.
Good!
I mean, that’s just false? Public discussions on the internet are theatre and entertainment first and foremost. They’re self-aggrandizing; and if the participants are trying to convince anyone, they’re trying to convince the onlookers (like how presidential candidates debate to convince the voters instead of each other). It’s the opposite that’s anomalous - it’s vanishingly rare to find a space where (1) people are allowed to hold and discuss differing views without being berated or dogpiled on, and (2) everyone enters with the conscious and unconscious resolve to change their mind. To be clear, I don’t even believe that many rationalist spaces fulfill those two conditions. You can use the internet for private asynchronous chats, which I think can be a good way to have lengthy discussions while allowing ample time to find sources and consider arguments. I’ll always be willing to get into the muck and talk things through with people, but that’s for DMs (speaking of which, feel free to DM me!). This subreddit, though, is primarily about having fun - and it’s way more fun to criticize things than it is to come up with an entire well-reasoned refutation.
An odd anomaly? Have you ever been on the internet before? Really? The vast majority of the internet is a poo flinging contest, not a place for debate. People are here for entertainment. And there are plenty of arguments against their nonsense out there. You can go and do the searching for yourself. I come here for the jokes.
I have been on the Internet before; I think people in general try to be rational, it's just that they are very bad at it. But since I can't read other people's minds, nor do they often try to explain what they are trying to achieve, I will update on that. I'm also here for entertainment, but I consider proper discussion to be a kind of entertainment. And believe it or not I haven't been able to find decent counterarguments to the main rationalist ideas; if you know somewhere I can find them, please send me the links. I would REALLY like to have a step-by-step explanation, written for people with CS knowledge (and about modern "AI"), that leaves nothing implicit about why we shouldn't worry about AGI and that doesn't reduce to having different priors.
[deleted]
You know that doesn't answer the question. I don't think most libraries have explanations about why we shouldn't worry about AGI, let alone good explanations.
[deleted]
I'm obviously aware of that, but to the extent that I agree with him it's because the logic seems solid. Reading about other people that stupidly predicted the end of the world doesn't change anything, because they weren't able to convince me in the first place. I want object level arguments against AGI doom, and while I still have to read what others have linked, the lack of anything well-reasoned in what I have read so far makes me believe that those counterarguments just don't exist.
[deleted]
6) I don't agree with that point, so there's no need to convince me it's wrong.

5) If you give your AI so little power over the world (and it can gain it on its own) that unplugging it is a viable strategy, someone else will give their own AI that power, and then we die.

4) Human values are just a very small area in the space of all goal systems, and if the AI is an optimizer (an RL agent for instance) it will probably goodhart whatever terrible proxy we give it; it's a hard problem that might not be solved (see the toy sketch below). And then there is the inner alignment problem, which is also hard.

3) There is a chance that AGI might come from DL, in which case it might happen relatively soon. Of course it might be a dead end, but we all have a copy of a generally intelligent algorithm in our heads; I don't think it would take more than 100 years to reverse engineer at least the general purpose part. If that happens it might help our understanding of human values enough to code aligned AIs, but I'm not optimistic.

2) If it is humanly possible to achieve, the incentives to create one are enormous, so it will probably be achieved.

1) Again, brains exist, and I don't think it would be hard or especially complex to make something way better once we know the algorithm. Current DL systems scale very well with more data, parameters and compute; I don't expect it will be much different with AGI. And even if that fails, we can make it much faster, with perfect memory (probably, if bad human memory isn't an algorithmic limitation), and run many copies at once.

Of course the full argument is longer than this, but the point is that it exists and it isn't obviously wrong. We can't prove it, we can only make educated guesses, but they point to doom by default, and that is the best we have.
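A toy illustration of the Goodhart point in 4): a crude optimizer pushed hard on a proxy objective that only loosely tracks the true objective keeps improving the proxy while landing away from the true optimum. The objectives, the proxy term, and all numbers here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_value(x):
    # What we actually want maximised; the optimum is x = 0 with value 0.
    return -np.sum(x**2)

def proxy_value(x):
    # Proxy = true objective plus an exploitable term the designer didn't intend.
    return true_value(x) + 3.0 * x[0]

best = rng.normal(size=5)                      # random starting "policy"
for _ in range(5000):                          # crude random-search optimizer
    candidate = best + rng.normal(scale=0.1, size=5)
    if proxy_value(candidate) > proxy_value(best):
        best = candidate

print("proxy score:", round(proxy_value(best), 2))  # climbs towards ~2.25
print("true score: ", round(true_value(best), 2))   # settles near -2.25, below the true optimum of 0
```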
5 The power grid is incredibly complicated, and easily disrupted. Unplugging is far far more viable than you seem to think.

4 Incidental omnicide requires infinite scaling of all of the complicated processes involved. There are hard physics barriers already being met. This is not a credible threat.

3 This is all gibberish. "we all have a copy of a generally intelligent algorithm in our heads" is a hugely contestable unproven assertion, and "I don't think it would take more than 100 years to reverse engineer at least the general purpose part." is both wild conjecture and pure gibberish!

2 You can't just assume that it's possible, there's no proof.

1 "once we know the [human brain] algorithm" - no proof whatsoever that human intelligence is translatable to algorithms, that DL resembles it at all, or that it's infinitely improvable to the point of FOOM. Human memory is just assumed to map neatly onto binary RAM? 100% scifi, 0% science.

> run many copies at once

This is my favorite part. If you can digitise the human brain and run many copies, you've merely created a committee of humans, not some runaway FOOM AI.

> it isn't obviously wrong

it is obviously wrong.
It can be in multiple power grids, or using solar panels, or have one copy secretly escape, or work with the government, or whatever: it's superintelligent, and if there is a winning strategy it will probably find it, regardless of the details. What kind of scaling are you saying is necessary for omnicide? Computational? I don't think killing a significant fraction of humanity even requires superhuman intelligence.

The brain obviously runs an algorithm. It is physical, and as far as we can tell reality is a system of differential equations. In the worst case scenario the algorithm is the laws of physics, which we of course can't simulate exactly with physical computers. But we don't care about purely quantic effects, or about the position of every atom. For every neuron we only care about the information that it receives from the others, what it does with it, and the information it emits to others. And that function is in all likelihood computable (a toy sketch of such a per-neuron function follows below). And it doesn't matter that the brain isn't anything like a digital computer; the brain is noisy and we don't need infinite precision. Modern computers are Turing-complete, so they will be able to perform all the needed computations. And whatever physical variables and states represent our memory, we can store them as floats, bools or whatever.

What are you saying is gibberish? The algorithm being general, or it having a general part? For our purposes human minds are general enough. By general I mean that the neocortex can learn to perform new complex functions if you route to it the relevant data, as opposed to hardcoded parts like instincts.

Running many copies is useful; it likely implies that if one copy learns something, everyone can know that too. And having many geniuses working on the problem might be way better than having only one.
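To make the "per-neuron function" claim concrete, here is a minimal sketch of the kind of computable neuron model the comment gestures at: a leaky integrate-and-fire unit, one of the simplest textbook abstractions. The parameters are illustrative, and whether models this simple capture what real neurons do is exactly what the replies below dispute:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-65e-3,
               v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    """Leaky integrate-and-fire unit: returns the voltage trace and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Euler step of: tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:            # threshold crossed: record a spike, reset
            spikes.append(step * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spikes

# One second of a constant 2 nA input current, sampled every millisecond.
v_trace, spike_times = lif_neuron(np.full(1000, 2e-9))
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```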
You make so many outrageous leaps of logic, and then, when they're pointed out, you leap over the objections back to your groundless outrageous assumptions. You are not being logical or rigorous in your defense of rationalism, which is a pretty good self-sneer, so I guess thanks for that.
What are the leaps of logic? I honestly don't see them. Please explain. Even if you think they are obvious. Edit: and also what you consider to be groundless assumptions
After this I'm not going to continue debating you, because it is clear (and I don't mean this derisively, I understand it's difficult when you're in the midst of it) that you're not yet ready to look beyond your own thinking from first principles at the actual state of the world and science, and you're not yet ready to look beyond the self-referential logic of Yudkowsky.

As a preamble, part of the reason you've had difficulty finding a really comprehensive single counterargument to Yudkowsky's whole conception of AI is that it relies on very contentious and unproven assumptions about a wide range of fields. At best, these assumptions are widely debated, and at worst, they've been all but disproven (in my other response to you I referenced the Drexler-style nanomachinery that is key to so many of Yud's "failure modes"). I agree actually that it would be a great idea for someone to write up a blow-by-blow takedown of the whole Bostrom argument, as this sort of view of things is getting more mainstream by the day, and it would prevent people like myself from developing a fucking anxiety disorder over it all. Some people in AI have done this to an extent, like Chollet's refutation of FOOM, but IMO there's been nothing super duper comprehensive refuting every point of Yud's they can find. The Magnus Vinding contra-FOOM reading list is probably the best starting point in this regard. And maybe you believe that Chollet's refutation of FOOM has already been totally rendered worthless by Yudkowsky's response. This is not the case. As me and another commenter already pointed out, Yud's response was generally pretty weak, and most of Chollet's points still stand very strong. And once again, it bears repeating that Chollet is a very respected member of the ML community. Yudkowsky has no actual experience in ML. He is an amateur philosopher.

WRT your actual points - if it escapes to multiple power grids, we shut down multiple power grids. If it builds (or makes?) its own solar panels then we shut down the solar panels. If it builds a zero-point vacuum energy generator then we shut down the zero-point vacuum energy generator. If you look at how cybersecurity (and just actual computer programs at all) works in general, you will see that when people talk about an ASI doing this sort of thing, it's actually incredibly hard, and maybe even impossible. Maybe there is a "winning strategy", but based on everything we know about cybersecurity and programming, it's not at all clear that there actually is. If, as seems very likely based on current trends in AI research and what we know about the brain, it requires a veritable fuckload of compute, then you shut it out from supercomputer access. Maybe it can copy its algorithm (big maybe there) to a machine with less compute. But now that its algo is sitting on a machine with not enough compute, how the fuck is it gonna be able to keep outwitting everyone?

Now, there is a real danger of people giving it much more power than is warranted, like, oh, I don't know... controlling the flow of all information over social media. But we're already actually doing that, and it's already Goodharting some fixed objective in a terrible way. And even then, if it is given too much power, it doesn't mean it's game over either. Remember what I said about alignment being a thing, and FOOM/singularity being a completely different, extremely contentious thing? LeCun has suggested several times making an agent specially dedicated to monitoring and/or shutting down any other rogue agent.
He even puts forward the idea that we could pair ANY risky agent with a specialist monitor agent, sort of like a moral discriminator. This specialist agent is itself constrained by being specialised rather than "general" (if that is actually a meaningful term), and for this same reason will also always beat a general agent (just by virtue of being specialised). Maybe that sounds like a horrifying way to manage the issue. I'm not exactly put at ease by it, but we do already do this with anti-spam bots.

It may be a good exercise for you to note in future whenever you're putting forward an argument that relies on you going "but it COULD be possible. I don't have any evidence, or precedent, and I will dismiss all of the details and specifics, but it COULD be possible. There are no laws of physics that it contravenes." You have done this in describing the "winning strategy". This sort of argument may be permissible on LW, but in the actual world of credentialed scientists, it will not be taken well. And for very good reason. Maybe note how much Yudkowsky or Bostrom do this same thing also. The book Superintelligence has these sorts of "it may be possible" qualifiers at least every other paragraph. (There are plenty of times that Yud's suggestions actually do contravene physical laws, cf. Drexler nanomachinery.)

RE: scaling for omnicide. You have done something I often see people in the ratsphere do, and that is treat cognitive capability as magic. Scaling only computation ignores all the other factors that would be involved in an agent performing actions against the actual real world. It does not matter how smart you are, you still eventually need resources, and will run into resource bottlenecks. Of course a more intelligent agent would be more capable of accruing resources, but that doesn't mean you can handwave any resource considerations away. Not to mention of course that scaling computation itself REQUIRES MANY RESOURCES! I agree that it doesn't take heaps of intelligence to kill a large chunk of humanity. But again, that is a problem we are already dealing with in the real world, with nation states having amassed huge nuclear arsenals, and corporations actively vying to see who can cause the ecosystem to collapse the fastest (and then crafting tremendous disinformation campaigns to misdirect people!). It's not necessarily clear that the Yudkowsky scenario is actually any different than any of this. So yes, omnicide from a lone superintelligence requires incredible scaling of many, many factors.

CONT. BELOW
RE: algorithmic brains. Once again, you guys in the ratsphere are really jumping the gun here. It's certainly possible the brain is "just" an algorithm. It's also very possible it's not. I'm not invoking any sort of dualist magic here (I'm very much a substance monist), but there are so many, many things that are so extremely unclear about the brain right now. Many of the people who were instrumental in formulating the Computational Theory of Mind later came to criticise it. Hilary Putnam, an incredibly respected thinker, and the person who formulated CTM in its modern state, rejected many of his previous assertions, and came to align much more closely with John Searle of Chinese Room fame (the Chinese Room experiment itself is still hotly debated, and many, many researchers actually agree that Chinese Rooms are completely possible).

Not to mention as well that if we succeed in emulating a human brain, then alignment shouldn't be particularly difficult, because we're literally just dealing with a person. This is what the guy above me said. In fact, we're dealing with a person whose brain we built from scratch. We should be able at that point to identify what we need to change about the brain to make it behave morally. You guys do this weird thing where you acknowledge that an ASI might do new science, but you treat entire fields like neuroscience like they're already solved, even when they're not even close. Actually engage with some reading in philosophy of mind and neuroscience to see how far away from any sort of consensus or workable research path we are. "It's *obviously* an algorithm" is the sort of ridiculous shit that would garner a hearty chuckle from most researchers in this area. Maybe not Deepmind researchers, but they're experts on DNNs, not the brain. DNNs are not brains. Even people in ML understand this.

By "quantic", do you mean quantum? Quantic refers to a specific sort of polynomial. Saying that the brain consists only of interactions that can be described by quantics is an assertion I don't even think Yud would agree with. But if you mean quantum, then yeah, we probably don't need the position of every single subatomic particle to simulate the system. But it's also not clear we wouldn't need a veritable shitload of detail or extra info beyond just the cells themselves. And even if it's technically possible to simulate the entire brain on a Turing machine, that ignores how practical it is. We don't actually have any particularly good models for simulating a neuron, because we don't fully understand how they all work. We just in 2019 finished completing our first ever connectome of any animal, in C. Elegans. C. Elegans has 302 neurons. 302! And even if we have a connectome, that doesn't mean we can just magically simulate everything! We still don't have connectomes of other extremely simple model organisms like fruit flies.

"For every neuron we only care about the information that it receives from the others, what it does with it and the information it emits to others." So we only care about... how the brain works??? No shit. That's a kind of difficult problem. And again, like I said before, if we actually do get to AGI by whole brain emulation, all we've done is create a person on a different substrate, and alignment shouldn't be a problem. I'm not scared at all of WBE. It's all this sort of shit that Yudkowsky just handwaves. And it's necessary for him to handwave it, because otherwise all the "certainties" that he purports his worldview to be based on collapse.
Most of your talk about brains is gibberish, because you're not actually doing science or engaging with current models of the brain. You're just spouting programming jargon and saying "neuron" repeatedly. READ THE ACTUAL EXPERTS.

If the scenario involves multiple artificial agents rather than a singleton, then you enter into the world of evolutionary dynamics. This presents new problems, but also new solutions. And we will still be able to model and understand many of the behaviours by studying ecosystems. Just as you employ the argument that making AI should be possible because we have brains in our heads, I want to employ the argument that alignment is possible because most people (including powerful people) are able to behave in a way that comports enough with our shared values to not cause the apocalypse. We have brains in our heads, and many of us have decently moral brains.

FOOM DOOM relies on the idea that we will arrive at an agent, deliberately or inadvertently, that has the power to outwit AND outgun in the very complex real world, not individual humans, but ALL OF HUMAN SOCIETY. Human society, it should be noted, taken as a whole, is the greatest superintelligence we have ever known. And not just that, but we must arrive at this agent before arriving at an agent that can behave in a sufficiently moral way, and before we have developed any techniques that could constrain or manage its behaviour in ANY WAY such that the default outcome is not TOTAL DOOM. Does that not strike you as deeply, deeply unlikely? We arguably already have fantastic methods for constraining behaviour like boxing, or CIRL. (It's not clear that Yud's box experiment actually even achieved the result he said it did, let alone being constructed in a manner consistent with good science.)

I understand and agree with certain principles like orthogonality and instrumental convergence. But these concepts are not universal truths about the nature of intelligence, nor are they even widely agreed upon by researchers in the fields they are relevant to. It is possible, in principle, to create an MDP that displays instrumentally convergent behaviour, as Stuart Russell pointed out in his dialogue with LeCun. It's not even particularly hard. But it's also possible to create MDPs that don't display this sort of behaviour. And maybe actual intelligence doesn't even really work well as an MDP. There are computable versions of AIXI that have already been made. These are exactly the sort of things that Bostrom and co are frightened of - pure RL agents. The problem is that they are extremely ineffective. One of them couldn't even do well at Pacman after something like 250,000 generations IIRC. If it can't even get Pacman down after that many generations, think about how it would function IRL. Maybe, if you gave a computable version of AIXI near-infinite compute and near-infinite real world data, you would get a computer God. But you... can't. Because the world just doesn't work like that. And it's not even clear that what this AIXI thing is doing actually resembles cognition in any meaningful way either. It's just doing a tree search on the physical world. That satisfies Hutter and Legg's quite perverse definition of intelligence. But it doesn't resemble any sort of cognition we know.

Basically dude, what I'm saying is, if it is genuinely mentally harming you, seek a therapist. There are some pithy little retorts against Yudkowskian thought that exist online (cf.
Chollet) that I've found, but in order to refute the full diaspora of Yudkowskian thought, you need to make a serious effort to engage with the scientific literature in a wide array of subjects. Maybe a full and complete single refutation exists that I haven't found. But if you actively research and think critically about these topics, rather than just regurgitating Yud's arguments, you will find the answers you seek. I don't fucking care that Yud says Bayesianism is better than science. Garbage in/garbage out applies to Bayes' theorem too. It's always funny to me that Yud got his whole start by supposedly identifying good ways to overcome bias. And now here he is, declaring himself the most important thinker in history, trumpeting a very weird take on the control problem that even other alignment researchers seem to disagree with in many ways, with even the SSC sub growing sceptical. He is clearly very unwell, and I hope he finds help. You are part of a system of thinking that allows you to explain away any sort of counterargument by going "it will be really smart, it will find a way, even if we don't know the details". This is not good science. This is not even science. This is unfalsifiable. Edit: If you think that anyone in this thread basically throwing in the towel debating with you is evidence that no good counterarguments exist, please pull your head out of your ass. You have already been presented with good counterarguments from several people (if you do not recognise they are actually good, then, like I keep saying, you need to actually engage more with the actual state of research and understanding of modern science, and not just exist within a LW echo chamber). The only reason I've debated you this long is because I'm newer here. I'm sure, after I engage with a few more stray rationalists I'll give up like so many others here. They're not giving up because they can't refute your crazy sick epic arguments. They're giving up because they've already been over this, a million times.
[deleted]
You should care more about other people being wrong on the internet. And I do care that some day I will die; death by AI is just one possible cause, and I'm doing what I can to avoid it along with the other ones. If there is just one that I'm not able to solve, then given enough time I die, so I want to be sure that I'm not missing the possible elephant in the room, or imagining it. The only cause of death that I find acceptable is the heat death of the universe.
It is absurdly bad logic to assume that, because you haven’t found a counterargument to something you agree with, it therefore doesn’t exist.

You have *abysmally* failed at your own rational standard here
I'm not talking about counterarguments in general, only valid ones known by other people. If they existed, then given how much I've searched I would probably have found them, but I haven't, so I have some evidence they don't exist. It's weak evidence, but evidence nonetheless.
I, personally, think you’re a long way from having remotely adequate reasoning and research skills to justify the degree of certainty with which you express your opinions
Your discussion in this thread tells me that you wouldn't actually be currently capable of recognising a good counterargument.
> And if people here dislike rationalism, shouldn't they actually try to effectively make rationalism disappear?

Euh what? Wow, you might want to explore your assumptions here. There are a lot of things which I dislike, but I'm not going to try and make them disappear. But yeah, perhaps you are right. Hmm, perhaps I should start my crusade against people who chew loudly in public. DEUS VULT CHEWIST!
You have my sword.
Damn started an online cult by shitposting again, this is going to be the whole 'joking about being a secret government agent' thing (granted after trying to make it stick 17 times it was getting a bit dated, I was almost out of alphabet) all over again.
The damage done by people chewing loudly in public is low, and there isn't much you can do about it. On the other hand, if most rationalist ideas are wrong, they might be very damaging and should be eliminated from the meme pool, and there is something you can actually do about it.
I'm currently working on a piece about how Scott Alexander/SSC is wrong about Marx, if you're interested in that. Not sure when it'll be done, though.
> Convincing people that rationalist ideas are wrong or that it is somewhat cultish. If rationalism's main critics can't give serious non-mean counterarguments, that's a point in favour of rationalism.

The problem is that "rationalism" systematically teaches people to be immune to correcting their beliefs on rational grounds, so that offering arguments to try to get "rationalists" to correct their beliefs on rational grounds is quickly revealed to be a fool's errand. I spent a couple of years trying to do this with friends of mine who were into LessWrong, and every single one of these efforts ended the same way: I could convince my friends that *to our lights* what EY and LW were saying was plainly and unquestionably incorrect, but part of what my friends had learned from EY and LW was that it's always more likely that we had made an error of reasoning than that anything EY and LW teach is incorrect, so that the only conclusion they could draw from even the most conclusive objection to anything EY and LW is that -- based on what they've learned from EY and LW -- we must have erred, and this is only all the more reason why we should have absolute trust in whatever EY and LW teach. There's only so many hours you can piss away on a task whose fruitless outcome has been determined in advance in this way before shrugging and deciding to go find something better to do with your time.
Hmm... I have a few questions: Do you think your friends are representative of rationalists in general? Do you think most of them are like that? Did you explicitly tell them that they are falling into cult-like thought patterns and explain to them what their reasoning looks like from the outside, like you explained to me? I, for instance, think that I'm sane enough to be convinced that we shouldn't worry about AGI if I'm presented with a good argument, and I'm definitely (>99.95%) sure that I won't fail in a way as stupid as that one. I'm sure I'm not the only one like that. Maybe posting your thoughts online, so that everyone can read them without you having to repeat yourself, might be a good idea.
> Do you think your friends are representative of rationalists in general? Do you think most of them are like that? Did you explicitly tell them that they are falling to cult-like thought patterns and explain to them what their reasoning looks like from the outside, like you explained to me?

Yes.
...well if that's true, that's depressing, but it's one more reason for having some online resource to "deprogram" rationalists, for lack of a better word.
> it's one more reason for having some online resource to "deprogram" rationalists, for lack of a better word.

No, the inefficacy of such a resource is not one more reason to have one, but rather a reason to prefer to invest one's time in projects that are more efficient in producing desirable results.
I wasn't talking about the inefficacy, but about the degree of the problem. I might try to do it myself.
The degree of the problem is a reason to try to do something to address it, but not a reason to try to do a particular thing to address it, when there's no reason to think that thing will effectively address it -- especially when that thing has been shown *not* to effectively address it. Like, the degree of the problem of heart disease isn't a reason to put quartz under our pillows, or whatever silly example of an ineffective treatment of heart disease we can imagine. (Incidentally, it's charming that this, of all possible hiccups, is one that's occurred here. Recall a point of GiveWell's assessment of MIRI, that even if MIRI were right about the problems, they've done nothing to show anything they're doing will present any effective solutions. To which the "rationalist" community responded that the problems are so serious we should give all our money to MIRI quite regardless of any question of the efficacy of solutions. This seems to be one of those hiccups of reasoning that "rationalism" is particularly prone to.)
I think you’re confusing the goals you came here with for the goals the sub has.

I understand the disappointment, but I think the blame is misplaced: sneerclub didn’t advertise itself to you as “the place where you go to find holes in rationalism”, in fact we rather plainly advertise as “place that thinks those people are awful and vents about it”. What brought you here expecting the former?
they kinda set themselves up with that one
That doesn't make it less mean. That's like punching someone and then saying it's their fault for being weak.
get in the locker
No.
Real quick, can you tell me the world-changing revelations of "venus" [now that the dust has settled?](https://www.reddit.com/r/slatestarcodex/comments/eelfqu/saving_a_genius/) Eager to hear the reality-warping genius that has revolutionized philosophy and truth. Odd that you completely stopped talking about that shortly after meeting up.
well, at least "I found her again this year. She's now 20," is less bad than the *previous* paragraph suggests.
Arguably, the reason EY comes off as someone promoting himself too much is that he essentially has the role of a communicator. It would be hard to communicate about these things and avoid sounding the way he does. I think that's the main reason why he mainly avoided addressing AGI directly up until his "dignity" essay: he knew people would find his essays off-putting if he was totally honest.
he is a terrible communicator. why do rats need to reinvent terms that already exist and then package them up in grammatically poor sentences that have no reach beyond their weirdo little cult?
It's actually quite easy: Just stop bragging about your IQ and insulting people.
In what possible sense did he avoid “addressing AGI directly” up until that point?

Isn’t it about time to cut off the money to this charlatan and grifter? I mean, seriously, the guy is in his 40’s, and he has never held a real job in his life. He has no business telling the rest of us what we should be doing with our lives, given his lack of experience with the real world.

I don’t know man. This is all so…pointless. There seems to be a whole bunch of people that consider themselves experts in AGI security because they can essentially construct a plausible sci-fi script about the end of the world. The idea of “if we make smart machines there is the danger we will lose control of them when they can self-improve and kill us all” is as old as Dune, if not older, so how anyone can say they are the originators of this is beyond me.

As much as good sci-fi helps us identify problems that *could* arise in the future and at least think about them or talk about how they make us feel in some capacity, sure, that’s useful. Taking that and running away with it, to the point where you’re talking out of your ass and think you’re talking science because you use the jargon…I don’t know, it sets my teeth on edge.

These are the same people that would be completely lost if you asked them to implement (or say anything of value about, really) basic security issues *now* - let’s say “how do I let modders for my Windows game write custom code in Lua but stop them from messing with the savegame files”. That’s because people know *something* about this stuff, so the chances of bullshitting are low - the risk of the next guy quoting your post with a source snippet that proves you’re full of shit is very high. I bet there’s a huge overlap with people that couldn’t solve a classical physics problem about pendulums and springs if their life depended on it, but honestly think they can talk about interpretations of quantum mechanics.
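For what it's worth, the standard answer to that modding question is whitelisting: expose an explicit API to the untrusted script and nothing else, so file and OS access simply aren't reachable from its environment. Here is a minimal sketch of the idea using Python's exec in place of a Lua host; run_mod, the API functions, and the mod itself are invented for illustration, and this shows the principle only, not a hardened sandbox:

```python
# Minimal whitelisting sketch: the mod only ever sees the names we hand it.
# Illustrative only -- a real game would use a proper sandbox (e.g. a
# restricted Lua environment), not bare exec.

SAFE_BUILTINS = {"len": len, "min": min, "max": max, "range": range}

def run_mod(mod_source: str, game_api: dict) -> dict:
    # open(), os, savegame paths etc. are simply not in the mod's namespace.
    env = {"__builtins__": SAFE_BUILTINS, **game_api}
    exec(mod_source, env)
    return env

mod_source = """
def on_enemy_spawn(enemy):
    set_health(enemy, max(1, get_health(enemy) // 2))
"""

api = {
    "get_health": lambda e: e["hp"],
    "set_health": lambda e, hp: e.update(hp=hp),
}

handlers = run_mod(mod_source, api)
goblin = {"hp": 10}
handlers["on_enemy_spawn"](goblin)
print(goblin)  # {'hp': 5}
```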

This whole thing is like one guy coming up with an idea on how the humans in Terminator could stop Skynet from launching the nukes by not giving it access to the codes and requiring human oversight at that point, and another guy coming up with another idea of how Skynet could do something different, like “oh well Skynet could use 2 functions it is allowed to perform, like making a phone call and synthesizing voices, and get the code”. This is the exact type of nerd masturbatory conversation from dudes (and let’s face it, it’s mostly dudes) that think ingesting ungodly amounts of nerd shit makes you competent at talking about real tech issues.

Yeah…sure. I guess. Whatever. This…can go on forever. It’s essentially indistinguishable from talking about the monkey’s paw and trying to imagine what the perfect, loophole-free wish would look like. You just reskinned it for the tech age. It’s fun, and it takes some imagination and the capacity to follow through logical conclusions, but that’s it. You’re not *really* talking about AI, you’re talking about an AI-themed script for a Terminator pre-sequel. If your writing is good, you might get people hooked or even fool them into thinking “this could actually happen”. If you’re really, really, *really* good, you might even give an idea to one of those drones that actually build the tech, though you probably wouldn’t understand the actual idea without oversimplification. But you *should* be aware that this is still fiction - sure, Jules Verne did predict that we would go to the moon, but we sure as hell didn’t do it by launching ourselves out of a giant cannon. I don’t know how people think this requires any kind of intelligence or skills other than obsessively reading a lot of sci-fi, checking out hackernews and hanging out with others of like tastes. Is this what they think developing tech looks like?