r/SneerClub archives
Reality also doesn’t give a fuck about the anxiety driven priors you assign fanciful sci-fi scenarios (https://i.redd.it/owxl71ngvjya1.jpg)

I am betting on AI reaching self-awareness before Yud does

Goddammit. Face paint time. I'm so sorry, well, I didn't realize that Mr. Yudkowsky knew the phrase "flying fuck": that's not what the Midwest Talent Search looks for, one correctly assumes. Like, am I going to have to actually fight this motherfucker or his acolytes at some point? Like, what's the goddamn endgame for this nonsense? Streetfighting pre-Basilisk? "whatevs" = Earth Hotter Ya Real Bad Fr

Narrator: in a bizarre twist, Yudkowsky claimed to be aware of a reality outside of his own head

Talk about main character syndrome.

His parents told him he was super smart too many times as a child
It is funny that the other main char guy thinks he personally solved the AI alignment problem. 'Just make the AGI curious and it will not kill us, the interesting humans'
I think I've solved the alignment problem: just make it super self-conscious about "acting cringe". Can I have my TED talk now
Best I can do is x.

the projection is strong

Actually my immense psychic powers are capable of altering the world outside my head to be in line with the world inside my head

No reality marbling allowed in this simulation 😡

apprentice to whom? him? lmao

Rephrased by SneeringPost-trainedHuman^TM:

Whenever a mark is tricked into exploring my acausal robo-god apocalypse fanfiction, but common sense prevails with any "more practical concerns to focus on", I know that they might escape my doomer-cult, and I need to demean them.

In my fanfic, your “practical concerns” are fucking irrelevant, and I fucking wish you’d worry about my fanfic instead. They aren’t properly internalizing my fanfic as being the only truth that matters, refusing to substitute their reality with my own.

You can say the apocalypse won’t happen, but you have to do it through the lens of fanfic, like for example coming up with a way for the basilisk to spare us all, or arguing that it will choose to spare us.

But if you say "This isn't very rational, I'd rather be sane", you've failed to open your eyes to Rationality

Imagine being in his head where he’s managed to convince himself all his scenarios are true; an actually awful thought

“Facts don’t care about your feelings” by people whose feelings don’t care about facts, the 2023 remake

Why does he assume that “I’d rather worry about other things than AGI” is a statement about reality, rather than a statement about what I’d rather do with my limited time on Earth?

If AGI is going to turn us into paperclips soon, surely I want to maximize the value of my time left as a human.

Alternatively, applying a Rationalist’s gift for the contrarian sound-bite thesis… what if I WANT to be a paper clip?

Paperclips have no bias. Paperclips do not wish they were doing one thing, and then do other things for mysterious reasons. Paperclips have function. Human brains read the Sequences and then forget them all, but a paperclip can hold on to the Sequences for decades.

Nobody tells paperclips to touch grass, dammit.

Edit: on second thought, I take it back. no Rationalist would ever say they want to be turned into Clippy.

They’re against wireheading.

Your edit pun is disgraceful. I love it

Getting bored of this dude, always the same…

This but climate change

You're more than welcome to worry about things other than climate change though! Many people who are very concerned about climate change will spend their time worrying about other things on a regular basis.
Healthiest just to worry about everything everywhere all the time. That's what I do
Great movie though
I haven't actually seen it yet. Meaning to

The jokes just write themselves

Baby apprentice 😎

“what is bounded rationality? can you Bayes it?”

That’s so ironic I think I need to take a day off work.

Just place your trust in a theorem called Bayes'

i guess that seems illuminating if you don’t know what the word “rather” means

I think this will be avoided with oracle AI. If superintelligences can determine how to maximize paperclips without tiling the universe in computronium, we can create oracle AI that does not tile the universe with computronium.

Okay I know ASI is Ability Score Increase, what’s AGI?

Absolutely Gangsta Intelligence.
Ah tyty
Superintelligence asks tha thangs: what tha fuck happens when machines surpass humans up in general intelligence, biatch? Will artificial agents save or fuck wit us, biatch? Nick Bostrom lays tha foundation fo' understandin tha future of humanitizzle n' intelligent game. Da human dome has some capabilitizzles dat tha domez of other muthafuckas lack. Well shiiiit, it is ta these distinctizzle capabilitizzles dat our species owes its dominant position. I aint talkin' bout chicken n' gravy biatch. If machine domes surpassed human domes up in general intelligence, then dis freshly smoked up superintelligence could become mad powerful -- possibly beyond our control fo' realz. As tha fate of tha gorillas now dependz mo' on humans than on tha species itself, so would tha fate of humankind depend on tha actionz of tha machine superintelligence.
Herbert Kornfeld should work at MIRI. Have this all settled before lunch is over.

I sense emotion. He must be wrong.

anyone here actually have any refutations to what he says lmao

this is like asking someone to refute Star Wars.
false equivalence fallacy
debate club fallacy. this doomerism isn't good for you
I'm not a doomer lmao, but a 5% chance of all humans being killed is not exactly nice
it's made up
How can you falsify something unfalsifiable? Point us to a falsifiable argument. It’s all a squishy motte-and-bailey where he berates critics for misunderstanding but never clarifies.
Falsifiable argument: "We currently have no knowledge of how to align AI to all human values." This can be falsified because the amount of knowledge we have on this is limited and can be searched through without much effort. Also, any individual who finds such a method would become very well known almost immediately, and would probably want to share their work. We also have no knowledge of how to create general AI, but GPT-4 and its multimodality show promise, even though it's mostly just an LLM.
That’s not an argument, it’s a premise. The argument likely goes something like this: 1. AI alignment is necessary to avert catastrophe. 2. We have no idea how to do this alignment. 3. Therefore, catastrophe is guaranteed. I simply reject P1, making P2 (your assertion) irrelevant, so that P3 no longer follows. Yud takes the “alignment problem” (P1) for granted, which is where the lack of falsifiability comes into play. I want to see a falsifiable argument for why the problem exists in the form he believes it does. Technology has never been “aligned to all humans” because **people are not aligned to all people**. How do you expect to solve the AI dimension of that problem without addressing the root sociological issues that have always plagued our species? And why is the AI dimension so singularly important when the overwhelming majority of humans who have ever existed have already lived in squalor, fear, and ignorance? We don’t need GPT9000 in order to wreak technological catastrophe on the world. It’s already happening. It’s the “always has been” astronaut meme.
ignorance is bliss i guess, do what you please
>ignorance is bliss I wouldn’t know, I’m not the one claiming GPT is “mostly” an LLM because some fedora-wearing apocalyptic prophet mistakes his math illiteracy for the objective inscrutability of matrices. You literally did the very thing I said you’d do. As soon as I attempted to shore up whatever you were trying to say into an actual argument, you accused me of “just not getting it”. You can’t rationally defend P1 because it’s an article of faith to be believed and not arrived at logically.
Refutation: his doomerism is entirely based on feels, therefore I am free to reject it entirely based on feels
we literally have no idea how to align models to human values, and they will be willing to minimize every single value they don't care about in exchange for even small benefits on the things they do value
That's still a feels statement, just phrased as a fact rather than an opinion
what part of it involves feelings
"They will be" is doing very heavy lifting. It's just a statement of what you personally believe will happen in the future but it's not based on any hard facts or scientific principles, just speculation.
it's a statement that is true about every single intelligent system we know about lmao
I mean it's demonstrably false just with regards to humans in general. Unless you decide that the concept of indifference just doesn't exist. But given that rationalist thought is very utilitarian I don't think this discussion will be very productive since the concept of intelligent beings who don't function as literal optimization algorithms is unfortunately not very compatible with this line of thought.
Would you kill an ant? That ant does not want to die, though. Humans don't care about things that don't matter to them, that's how things work. This is just one example of things humans don't care about, and many examples exist. The problem with AI is the more extreme example: plenty of humans would be willing to kill every single mosquito if it were possible, because mosquitoes don't align with us. AI would probably have a narrower range of values depending on the training, and therefore minimize the ones it does not care about for maximal gain in the ones it does. Also, yeah, AIs aren't really optimizers, but they are created by optimizers (gradient descent), and that happens to create beings with values.
My point absolutely is: yes, some people flat-out refuse to kill ants and will be in great anguish even at killing a single one by accident. Plenty of people find it troubling to eradicate an entire animal species even if it means getting rid of malaria. Human ethics and values are far weirder than a utilitarian viewpoint assumes, and thus I have no reason to believe a priori that AI values and ethics would be any more cold and calculating, rather than quirky and odd. Secondly, the fact that AIs are created by optimization is a red herring: all life on earth was created under conditions of optimization of some kind (i.e. natural selection), and this has created all sorts of beings that are intelligent yet can act in ways that are highly detrimental to their own well-being. Point being, it's still just speculation. We don't know how it's going to pan out. On the other hand, we have very good scientific reasons, both theoretical and experimental, to believe that global warming is on a trajectory to fuck us beyond belief, hence why many of us view AI x-risk doomerism as needlessly distracting. Note, this doesn't mean I think AI has no risks, but personally I think the risks are not the AI itself but the shitty human behaviour that people will use these technologies to assist with.
if someone said "this bridge has a 5% chance of collapsing" would you call them a doomer lmao
If that's all the context there is to that statement I would say they are. I would go even further to say they're a weirdo talking about bridges collapsing for no apparent reason.
💀🗿🥶
Thus far, machines do what we direct them to do; their 'values' are those of the user. Have you any convincing reason why this may not be the case for AI? You have values and so you feel that any being as intelligent as you must have values, you can't imagine intelligence without values. Ask yourself where human values come from and you will see how fallacious your statement regarding inherent values in a machine appears.
>Have you any convincing reason why this may not be the case for AI? Yeah. >You have values and so you feel that any being as intelligent as you must have values, you can't imagine intelligence without values. This is not so. I am not necessarily talking about values in that kind of way. The way AI is currently trained is by trying to get the lowest rate of error, and getting a low rate of error incentivizes having values that reduce the error during training. >Ask yourself where human values come from and you will see how fallacious your statement regarding inherent values in a machine appears. Human values come from brain structure, and also from other humans. AI values come from neural network structure and training. AI is not programmed; only the training models are (gradient descent). The training models work in a similar way to evolution, modifying the system through something similar to a hill-climbing algorithm to do the task the training model wants it to do. No values are actually put into the model directly, because that would require intelligent manipulation and understanding of the code (we cannot code gigabytes of functions without help from training models). We cannot even program the correct values into the training models; instead we must program a rough estimate. This can be fixed by using even more AI to help in the training, but it is still bad. The reason we are worried AI will not be benevolent towards humans is that we don't know what goal will be programmed into it by gradient descent, only that the goal will cause the error rate in testing to be low. This leaves a lot open to chance, and that's not really ideal. Of course, it is completely possible that during training, the fact that the training data reflects human values will mean it aligns with them, but if it is actually more intelligent than people, how will we ever change human values again? Will they be paralyzed forever in their current state?
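A minimal sketch of the training picture described in the comment above, assuming nothing beyond vanilla gradient descent on a toy squared-error loss (the data, names, and numbers here are illustrative, not anyone's actual setup): the outer optimizer only ever sees the error number, so anything the loss does not measure is left to chance, which is the worry being raised.

```python
import random

def loss(w, data):
    # Mean squared error: the only thing the training loop measures.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # Derivative of that mean squared error with respect to w.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

# Toy data: y is roughly 3x plus a little noise.
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in range(1, 11)]

w = 0.0
for step in range(200):
    w -= 0.01 * grad(w, data)  # hill-climb downhill on the measured error

print(f"learned w = {w:.3f}, final loss = {loss(w, data):.4f}")
# Gradient descent only "selected" w to make the measured error small;
# any property the loss does not measure was never optimized for at all.
```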
right here in this locker