r/SneerClub archives
EY discovers that publicly calling community members incompetent prevents said members from contributing. From Paul C’s lethalities article (Would’ve liked to post more screens but can only attach one) (https://i.redd.it/z8vrcqgny6991.jpg)

Don’t worry about the AGI becoming self-aware, worry about Yud becoming self-aware.

Dumb of concernedcitizen64 to post this, as remember: “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES”

(E: while this is obv a joke, it also points to a past incident where Yud actually thought he was smarter than everybody else (for seeing the “risk” of the thought experiment) and took action against people, so this post also references the exact thing concerned is complaining about (it gets even better, as in the linked post Yud claims he would never apply social pushback against people (which, technically, deleting their posts isn’t, of course ;) ))).

> worry about Yud becoming self-aware.

No. The whole thing is a one-line set-theoretical statement whereby a God AI capable of anything will go super bonkers and delete the observable universe. To debunk it, or rather to address it, requires thousands of pages of text: physical limitations of the universe, energy limitations, time constraints, all to build a robust model of what an AGI could do. And the typical response is simply, "The AGI is smarter than you." This isn't even an argument; it starts in bad faith and it presumes its conclusion from the beginning. No one ever seems to have made the set-theoretical argument that the paperclip maximizer makes half as many paperclips as there are pieces of paper in the universe.
> "The AGI is smarter than you" This reminds me of the way Christians respond to the problem of evil, the problem of divine hiddeness, criticisms of intelligent design on the basis of the fact that biology doesn't seem remotely designed, etc: they just say that God is infinitely smarter than you and so he works in mysterious ways that you won't be able to understand. It's a one-size-fits-all post hoc rationalization that can literally account for any evidence whatsoever and so as a result is completely useless.
> so as a result is completely useless.

Yep, and that's why people are afraid to approach it. What they need to be told is simply not to do it: read some Solomon Feferman, get the fuck away from (AGI-related) set theory, watch this video: https://www.youtube.com/watch?v=n_PEPW91fBQ
Stop believing anything that starts with "if God function".
ngl reading through all those nested parentheses you used was a good workout for my brain's recursive abilities.
> Don't worry about the AGI becoming self-aware, worry about Yud becoming self-aware.

my god it's perfect

It really feels like the LessWrong “rationalist” community has just re-invented religion, with a thin veneer of scientism and technology over top of it. They can’t see it because it’s not supernatural in nature, but it has all of the functionality of a religion nonetheless.

I wonder if this happens anytime you think you’ve constructed the perfect method of doing epistemology? Since that seems to be the core of their movement. I’ve seen people claim that’s the case, but I’m not so sure.

[Edit: rewrote this meditation on epistemology to be more general and far clearer.]

Even if you think you’ve found the best way to come up with beliefs that are most likely to be true given your available evidence, that shouldn’t give you the kind of certainty these people speak with.

Fundamentally, all the rationality in the world won’t do much more than elaborate on the input you give it - garbage in, garbage out and all that. And what do we have to put into our rational “calculations”? The only inputs you actually have are your personal, subjective, situated experiences. You don’t actually have access to objective facts or evidence, if those things even exist - which I’m not totally clear on. Everything you have is mediated by your perspective and what you can actually experience.

Statistics and science aren’t a get-out-of-jail-free card for obtaining objective evidence, either: a lot of the time, crucial information about a situation simply can’t be communicated in scientific, and especially quantitative, measurements (such as how difficult it actually is to be homeless, to use a recent example where Scott was being an idiot). Additionally, what evidence science has available to it, and what results it produces, are often heavily influenced by the biases and perspectives of the scientists who perform it. The Mismeasure of Man is an excellent book on this topic.

This leads me into my second point: experiences can be interpreted in an almost infinite number of ways, and while some interpretations have superior intellectual virtues or are more predictive, and therefore more useful, a lot of bias and aesthetic or subjective valuation goes into choosing an interpretation of your experiences, so you have to be not only aware of that but very careful about it. A lot of people who talk a lot about logic and rationality tend to just take their biased assumptions as unearned axioms: different definitions and categories, or assumptions about how the world works, can vastly shape your interpretation of everything, and can seem so obvious to you that anyone questioning them sounds incoherent or irrational. But those assumptions can be completely unjustified and, from another perspective, untrue and useless.

All of this compounds with the fact that while different interpretations can have different practical benefits - such as being more predictive, or simpler, or more phenomenologically conservative, or more rewarding - there’s nothing that really says which of those practical benefits should outweigh the others. Personally, I prefer maximizing predictiveness, with phenomenological conservatism as a secondary criterion, but that’s not objective.

So you end up with a huge number of points of valid difference in what you can believe, where people with personal experience of something have a lot more to say and situated experience matters a lot. And somehow, out of all this, these people arrive at such absolute certainty?

> It really feels like the LessWrong "rationalist" community has just re-invented religion

Well, there is also the joke made by some parts of the Rationalist community (or just NRx, not sure about that one) that Scott is the rightful caliph. Guess that makes slatestarcodex (who don't even like lesswrong that much, it seems; themotte certainly seems dubious about them at times) Islam to Pope Yud's Christianity.
their satan? roko's basilisk
their pope? yud
tithe/charity? uncritically giving 10% of income to 'effective' altruism
their god? maybe bayes' law or IQ or something, idk what exactly, but they definitely have absolute faith in something that, while it may be real, the way they interpret it is definitely not real. Like thinking bayes' law can model every decision and problem on earth. There are other parallels, I'm sure.
Their god is Roko's Basilisk. After all, the thought experiment is literally Pascal's Wager with AGI in place of God. Plus, they think AGI and the singularity can lead us to heaven if done right. Their holy spirit is Bayes' law lol. And personally I think Bayes' law probably *is* the most general-purpose decision procedure for doing epistemology there is, since it's literally just a formulation of Hume's "apportion your measure of belief to the evidence" with a built-in explanation of what constitutes evidence for something. I just think they don't have the larger context in epistemology to realize how everything Bayes' law depends on is contextual and subjective!
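(For anyone who hasn't seen it: the machinery really is one line. Here's a toy sketch of a single Bayesian update - my own made-up numbers and a hypothetical `bayes_update` helper, nothing from their canon - just to show what "evidence" means in this framework and where all the subjectivity hides:)

```python
# A minimal sketch of one Bayesian update (toy example, made-up numbers).
# The formula is trivial; every number you feed it is a subjective input.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# "Evidence for H" just means an observation E with P(E|H) > P(E|~H):
print(bayes_update(0.5, p_e_given_h=0.8, p_e_given_not_h=0.2))  # 0.8, belief rises

# Same observation, different subjective likelihoods, opposite conclusion.
# Nothing in the formula tells you which likelihoods were "correct".
print(bayes_update(0.5, p_e_given_h=0.2, p_e_given_not_h=0.8))  # 0.2, belief falls
```

That's the whole "holy spirit": where the priors and likelihoods come from is entirely on you.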
Their god still needs to be created. The friendly acausalrobotgod who will make everybody immortal. So it is more a dualistic religion with 2 gods. The basilisk or the acausalrobotgod, which way modern nerd?
no no, you infidel. The Basilisk is the *good* AI. We get the Basilisk if the rationalist project *succeeds*.
Oh it is definitely re-invented religion. I think even the participants would acknowledge that. It plays the part of a religion in their lives, god or no god. It's almost exactly what you would get if you spec'd it out: define a belief system for atheist nerds that will fulfill the social role of a church. That makes me a little hesitant to sneer at it, honestly. It's not nice to poke fun at other people's strange but deeply-held religious beliefs. Of course like many more traditional faiths, it is not content to simply worship, but has to proselytize, to push itself on the rest of the world. That makes it fair game.

Finally, Yud doing something useful.

Seriously, I didn't think there was anything that could make a LWer stop talking.
A while back I discovered that infogalactic (the wikipedia clone created by the white supremacist vox day) uses lesswrong as a source to claim rationalwiki is bad.
aahaa, that article is a hoot: https://archive.ph/2mKje says that rationalwiki is "race denialist", citing literal neo-nazis The Right Stuff.
LW is in great company again ;).
From that article:

> Eliezer says a lot of concrete things about how research works and about what kind of expectation of progress is unrealistic (e.g. talking about bright-eyed optimism in list of lethalities). But I don’t think this is grounded in an understanding of the history of science, familiarity with the dynamics of modern functional academic disciplines, or research experience.

But once again, the LWers are using far too many words to say what they could in less than 20:

> Eliezer says a lot of concrete things. But I don’t think this is grounded in an understanding.
[deleted]
Well, that and, "even in my post criticizing Yudkowsky I can't be seen _really_ criticizing Yudkowsky, so I'll give everything so many caveats that it just seems like a minor difference of opinions even as I repeatedly note how unqualified he is".
> Extrapolating the rate of existing AI progress

Hmm, let's see: it took about 60 years to create a chatbot that could consistently form a relevant, coherent sentence.

Too many word make head hurt

Gonna go out on a limb here and predict that EY will not actually discover that at all, no matter how self-evident it is.

Let’s read any responses he had to the comments…

Yeah.

lol

lmao, even