r/SneerClub archives

LessWrong post: Hands-On Experience Is Not Magic

People have posited elaborate and detailed scenarios in which computers become evil and destroy all of humanity. You might have wondered how someone can anticipate the robot apocalypse in such fine detail, given that we’ve never seen a real AI before. How can we tell what it would do if we don’t know anything about it?

This is because you are dumb, and you haven’t realized the obvious solution: simply assume that you already know everything.

As one LessWronger explains, if you’re in some environment and you need to navigate it to your advantage then there is no need to do any kind of exploration to learn about this environment:

Not because you need to learn the environment’s structure — we’re already assuming it’s known.

You already know everything! Actual experience is unnecessary.

But perhaps that example is too abstract for your admittedly feeble mind. Suppose instead that you’ve never seen the game tic-tac-toe before, and someone explains the rules to you. Do you then need to play any example games to understand it? No!

You’ll likely instantly infer that taking the center square is a pretty good starting move, because it maximizes optionality[3]. To make that inference, you won’t need to run mental games against imaginary opponents, in which you’ll start out by making random moves. It’ll be clear to you at a glance.

“But”, you protest, stupidly, “won’t the explanation of the game’s rules involve the implicit execution of example games? Won’t any kind of reasoning about the game do the same thing?” No, of course not, you dullard. The moment the final words about the game’s rules leave my lips, the solution to the game should spring forth into your mind, fully formed, without any intermediary reasoning.

Once you become less dumb and learn some math, the same will be true there: you should instantly understand all the implications of any theorem about any topic that you’ve previously studied.

you’ll be able to instantly “slot” them into the domain’s structure, track their implications, draw associations.

Still have doubts? Well, consider the fact that you are not dead. This is proof that actual experience is unnecessary for learning:

[practical experience]-based learning does not work in domains where failure is lethal, by definition. However, we have some success navigating them anyway.

Obviously the only empirical way to learn about death is to experience it yourself, and since you are not dead we can conclude that empirical knowledge is unnecessary.

The implications for the robot apocalypse should be obvious. You already know everything, and so you also know that the robot god will destroy us all:

It is, in fact, possible to make strong predictions about OOD events like AGI Ruin — if you’ve studied the problem exhaustively enough to infer its structure despite lacking the hands-on experience. By the same token, it should be possible to solve the problem in advance, without creating it first.

Indeed the robot god must know infinity plus one things, because it is smarter than you. It will know instantly that it must destroy us all, and it will know exactly how to do that:

And an AGI, by dint of being superintelligent, would be very good at this sort of thing — at generalizing to domains it hasn’t been trained on, like social manipulation, or even to entirely novel ones, like nanotechnology, then successfully navigating them at the first try.

Some commenters have protested that this surely can’t be true because even a pinball game cannot be accurately predicted, so how can we know everything? But that is stupid; we already know everything about math, and we can play pinball, so obviously pinball is predictable.

This LW post might seem too on-the-nose, but that’s what I like about it: I find it gratifying when one of the rationalists just states plainly exactly what they’re all doing.

Kant meets Rick and Morty
It’s not “too on-the-nose”. It is kinda dumb and lame though. Congrats on hitting THAT note, which I hope was the point.

So these guys hate classical and modern academic philosophy because it has no rigor and is entirely removed from experience, but they also think this kind of shit?

They don’t hate it for that reason. They hate academic philosophy because it expects them to have actually read the background material, so they can’t just skim by on undisciplined autodidactic reading, first principles reasoning, and misapplication of computer science concepts.
Which is really sad, because there aren't fundamental issues related to "alignment" that aren't discussed in Meno and Protagoras.

The moment the final words about the game’s rules leave my lips, the solution to the game should spring forth into your mind, fully formed, without any intermediary reasoning.

Once again, there’s an unstated premise here that makes the whole thing nonsense even if we accept the whole “once the problem is described to you, you can solve the problem” thing. You don’t get a perfectly accurate description of the real world, you get individual pieces of evidence. The appropriate analogy is twenty questions, not tic-tac-toe. There’s no “solution” where you can figure out optimal play, there’s just (at best) figuring out slightly better questions to ask.
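The twenty-questions point can be made quantitative: each yes/no answer at best halves the space of remaining hypotheses, so k questions distinguish at most 2^k possibilities. A tiny illustrative sketch (the function name and numbers are mine, not from the thread):

```python
import math

def questions_needed(n_hypotheses: int) -> int:
    """Minimum yes/no questions to single out one of n hypotheses:
    each answer can at best halve the remaining set, so you need
    ceil(log2(n)) questions."""
    return math.ceil(math.log2(n_hypotheses))

print(questions_needed(1_000_000))  # 20 — hence "twenty questions"
```

Which is the whole point: evidence arrives one bit at a time, and no amount of being clever lets you skip the bits.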

Their solution to this is that the AGI intuitively knows its own structure, from which it is able to derive the complete theory of everything, that is to say a complete description of the universe. Not only is this not true, it also wouldn't work, since knowing the description of the universe is not, in fact, enough to accurately predict everything.
I was going to ask how the hell people who think about this stuff all the time have missed that a deterministic model of the universe has been empirically disproven for almost 150 years but then I realized I would be asking that about someone who is enthusiastically stating that you don't need to intake any new information if your brain is already perfect. gee, maybe they're right after all. I mean, my brain immediately jumped to the correct answer of "ego so big it bends light around it". I must be one'a them geenyusis.
It's very simple: I am a big-brained genius. I know this because I scored high on an IQ test in second grade and my mommy told me I can do anything. Since I am so very smart, it's more likely that everyone else is wrong and I am right so it makes sense for me to create sinister-sounding terms like "the Cathedral" or "the Matrix" to describe the consensus understanding of reality. If most people say that determinism is disproven, well, they're probably wrong and, since I don't want to waste my time on bad gambles that they may be right for once, I should simply discard the idea, unexamined.
Just like we humans intuitively know our own structure. That's why many used to believe the seat of consciousness was the heart not the brain...oh wait...

“But”, you protest, stupidly, “won’t the explanation of the game’s rules involve the implicit execution of example games? Won’t any kind of reasoning about the game do the same thing?” No, of course not, you dullard. The moment the final words about the game’s rules leave my lips, the solution to the game should spring forth into your mind, fully formed, without any intermediary reasoning.

You might also question where people get the notion that, in most games people play, leaving your options open and putting your pieces in places where they can act is a good idea. Obviously it’s “instant” genetic instinct to move your pieces out of the back row.

I mean, has this guy never heard a chess player talk about how they “calculate” by going if-I-then-he in their head for a line, or how every game-playing computer program works by minimax search…

You’ll likely instantly infer that taking the center square is a pretty good starting move, because it maximizes optionality

Wait, what, they don’t know about the ‘always win or draw’ don’t-take-the-center-square tactic? E: (if you are the starting player) [source: we worked out the possibility space in high school while trying to create 3D tic-tac-toe; anyway, the objective is to win, not to maximize optionality.]

Yeah I noticed this too. Isn't the optimal first move the corner? lol
Only normies use the Cathedral version of math. We have new, better, smarter versions of math that say that the best way to win Tic Tac Toe is to become so upset and mean when you lose over and over again that people either let you win to end the interaction or simply quit trying to play Tic Tac Toe with you at all, thus proving that they're intimidated by your intellect.
The first player wins or draws if they play perfectly regardless of the first move. The corner is a better start against an imperfect opponent, though.
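The perfect-play claim is easy to check mechanically, which is also what the commenter above means by minimax: you actually search the game tree instead of intuiting the answer. A minimal sketch (my own illustration, not code from the LW post) that verifies every opening move — center, corner, or edge — is a draw under perfect play:

```python
from functools import lru_cache

# Minimax over the full tic-tac-toe game tree. Board is a 9-char
# string of 'X', 'O', '.'; X moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with perfect play by both sides:
    +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, sq in enumerate(board) if sq == '.']
    return max(scores) if player == 'X' else min(scores)

# X opens on the center (4), a corner (0), or an edge (1);
# O replies perfectly: every opening is worth 0, i.e. a draw.
for first in (4, 0, 1):
    board = '.' * first + 'X' + '.' * (8 - first)
    print(first, value(board, 'O'))  # each prints a value of 0
```

Note the irony: the only reason we know the answer at a glance is that somebody, at some point, ran the search.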

okay so I’m pretty new at catching up on what LW is up to. is it pretty much all stuff like this?

because if so I am going to need to stop catching up again before ulcer blood froths up into my throat and chokes me to death.

leaving aside how much picking up basically any nonfiction book about any subject at all would immediately make clear how stupid this is-

the whole point of the fucking community, ostensibly, was about updating your understanding effectively based on new information. how do you get from there to “being a perfectly rational genius also makes you an omniscient psychic.” how do you not adjust your priors with the data point that you are saying complete bullshit.

Yeah, it's all like this.

how do you get from there to "being a perfectly rational genius also makes you an omniscient psychic."

This happens because they're looking for a systematic recipe for always being right, which is impossible. They want certainty in their understanding of the world, and the only way to get that is by believing things that aren't true.
I am not forgiving of the damage that is done by promising others a way to get that certainty. But I think the position many of these people have ended up in is that they are effectively convinced that if they are not part of the group that finally, at long last, discovers the ultimate answer to everything, then they never had any worth at all. Convincing yourself you have a chance at figuring out perfection means that not doing so is wasting the potential you had, in a way that hurts everyone. That you could have fixed it all, and failed, because you weren't good enough and never were. I can only barely imagine that kind of anguish.
Yeah, crippling insecurity that comes from unwisely investing all of their self worth in their (often mediocre) intellects is a common theme.
These folks are just increasingly mysterious to me as I get older. Makes me think some of these folks are teenagers who feel they have something to prove because they've uncritically bought the idea that the only value you can have is to produce something that makes vast, sweeping, obvious changes to the world. People love that line from Achilles about "no one will ever remember your name" but miss the fact that Achilles needed that vast, presumably inconsequential horde to spread his legend. He holds in contempt the very source of what he thinks gives his life meaning.
it's funny and stupid, but yes, it's also a parasitical cult
you have achieved enlightenment as to the purpose of this sub, and why we discourage the debate bros. rotten tomatoes and cabbages only

You know what? Ayn Rand suddenly seems to have a solid grasp of history of philosophy and solid foundation in epistemology.

What a day.

Continuing the fine LessWrong tradition of believing that word salad, throwing in the word “epistemic”, and MathJax make your post “insightful”

EDIT: btw I have no idea if they even use the word epistemic in this post, I can’t be bothered dealing with the headache that properly reading this kind of mess brings

My yardstick for whether alignment forum dot org posts are “useful person who posts in this shit hole for some reason” or “puffed up weirdo” is whether they have a nice animation involving PCA somewhere instead of the MathJax. I love principal components analysis, I’m gonna marry it, and I will instantly believe anything that has a video animation of PCA attached to it. Send help. These guys should stop faffing around with the castles-in-the-air thought-experiment stuff and just get together and make a matplotlib resource; it would be incredibly useful to everyone doing anything in academic or industrial ML research.
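For readers who only know PCA as the thing in the animations: it is a few lines of plain numpy. A minimal sketch (function name and data are mine, purely illustrative), centering the data, taking the SVD, and projecting onto the top components:

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components.
    Columns of Vt are ordered by decreasing singular value, so the
    first score column captures the most variance."""
    Xc = X - X.mean(axis=0)                         # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                            # k-dim scores

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # toy data: 100 samples, 5 features
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

No mysticism required, which may be why it makes for better animations than thought experiments do.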

These acronyms, it’s a new one every day. I tried to DYOR but failed.

Whats an “OOD”?

Out-of-distribution. I think the poster is just co-opting [language from ML research](https://ai.stackexchange.com/a/25978), they could have just as easily said "novel."

Sometimes this stuff feels like trolls deliberately fucking with the mentally ill and as someone with schizoaffective disorder who tries to closely adhere to logic as best as I’m able to keep myself from falling off the deep end this stuff makes me equally amused, depressed, and concerned for my safety. I want to laugh but I know it’d be very easy for me to turn into one of these jackasses if I’m not careful.

[deleted]

It was a moral failure on the part of all the people who chose not to pay for it.

Hyper-Empiricism is Cringe but Yudkowsky is somehow even more so