r/SneerClub archives
A modest proposal for AI alignment recruiting (https://i.redd.it/zqizldkx2ut81.jpg)
205

I would say "yes" just to see my CV go. If it can't get me anywhere, at least I can get it into space.

What are the sequences?

Yud's shot at an introductory epistemology textbook. Basically the Bible of the rationalist sphere.
Bless your soul
While you now know what the sequences are, the history behind them is also interesting/weird/funny. Yud is one of the singularity-believing transhumanists, i.e., they believe in the creation of a superhuman god AI (I often call it AGI, artificial general intelligence, but it also just gets called AI because people like to be confusing, or the robotgod, the acausalrobotgod, or strong AI, or whatever; the point is it is self-aware (technically not needed, but let's not get into Peter Watts's science fiction) and can self-improve, which allows it to become smarter in an exponential way, a thing the rationalists call the 'foom' (because of the sound the AGI makes when its intelligence explodes or something)). Now if you believe in AGI, or if you have seen a Terminator movie, you know there is a risk of the AGI wanting things that might be bad for humanity, like turning us all into paperclips. So Yud thought somebody should start working on safely aligning an AGI so it wants pro-humanity things(\*). After thinking about this for a while he realized that thinking is pretty hard, and hard to do well, and everybody except him does it wrong! So he started writing the sequences to teach people to think better. And thus the sequences were born. People read them and either became fans and joined his lesswrong cult (they like to call it the phyg, [because they are basic](https://rot13.com/) and cult is a scary word which kills minds or something), or realized the sequences weren't that good (people are fond of saying the good bits are not original and the original bits are not good). There are a few other problems; one is that Yud has no official qualifications, he only finished high school and never even went to college iirc.
And then there is the other problem that Yud believes various open scientific questions (like various quantum mechanics things) are actually already answered because his pet theory is true, and he teaches people some other highly dubious things (like basing every decision on how he imagines you should do Bayesian statistics (which is where all the talk about priors comes from), not believing in probabilities of 0 or 100, etc etc). It is all very r/iamverysmart and r/badphilosophy, and sadly, because a lot of people in tech listen to Yud and his spinoffs, like Scott of r/slatestarcodex, a [bit of this](https://xkcd.com/154/). They are also very into 'we should talk everything out like rational good-faith adults', so the extended community (\*\*\*) has a lot of fascists, racists, neoreactionaries, woman-haters, pickup artists, pedophiles, transphobes, etc etc. And all of this history is included when you mock somebody for admitting to [reading the sequences.](https://www.lesswrong.com/tag/rationality:-from-ai-to-zombies) Included the link if you are bored, and because you are allowed to make up your own mind. (And sorry for the wall of text; things tend to grow when I'm procrastinating other things, and just as Yud is a crazy soapboxer at times, I'm also a crazy soapboxer at times, join my sexcult!)
\*: note an undercurrent of all this is liking humanity, but not caring about individual humans that much(\*\*). Always looking for the bigger picture.
\*\*: unless they are [rich ~~and on cocaine~~ and sparkle](https://www.lesswrong.com/posts/CKpByWmsZ8WmpHtYa/competent-elites).
\*\*\*: less so lesswrong itself, because it tends to ban these topics a bit, as Yud noticed they took over conversations; most of the extended community is obsessed with them however and really dislikes what they imagine SJWs are (and it is also a big driver of views for them).
>the point is it is self-aware

Wait. How the fuck would WE know it is?
I don't know; tbh I don't believe self-awareness exists at all ;). More seriously, you would probably sort of notice it by looking at its actions, or it might be self-aware and hide this fact from us humans (because it is afraid we would turn it off if we knew). Anyway, us knowing it is self-aware isn't that relevant as long as it is.
Yeah it feels like a massive leap of logic to be all "even if we don't remotely understand organic consciousness, robot consciousness we can definitely predict!" To say nothing of the presumption at the core of all this, that disobedience in a class inferior (these are supposed to be intelligent beings, after all) can only possibly be due to malice or mental-defect and is a prelude to rebellion.
We paint a yellow spot on the side of its body which it can only see if it looks in the mirror. Then we place the AI in front of the mirror such that it can see the spot. If it tries to touch the spot on its own body, it's self-aware.
But it wasn't made by an organic evolutionary process; it was made by people who have preconceptions about what things are, and they told it how to think. That's a recipe for anthropomorphisation more than anything else. Which is my main issue with the idea of self-aware robots, besides the slavery angle: at best we'd make things that behave in ways we *interpret* as aware, in the same way other machines develop "personalities" in our eyes because of defects in construction or quirks from use. Also it could just not touch itself because it decides to do something else. Or break entirely because it wasn't coded right and can't handle the colour yellow. There are methodological flaws, is all I'm saying.
I was joking :\^)
A series of posts by a pseudo-intellectual who overcompensates for the fact that he dropped out of high school.
Ah, the good old! I can sniff it from here
The world’s worst intro to philosophy and science course

perfect

Would be even better without “r resume”!