r/SneerClub archives

Infohazard warning: we all already know what this is, but just in case.

So I went back down the old Acausal Robot God rabbit hole the other day, and in my stress-fueled panic I came across r/rokosbasilisk.

This sub is basically dedicated to the discussion of muh AI god, with a lot of it devoted to debunking the thing and helping people who have anxiety because of it.

But then we come across some real good sneer material, my friends. I mean, one guy actually linked a Julia Galef video.

To see the worst that sub has to offer just look for these posts:

“Tweaking {Acausal Robot God}”

And

“The Solution-debate me” (I may be missing the point with this one, so if I am wrong, please explain why with regard to this post)

After checking those out, please feel free to sneer with me in the comments and we can have a laugh. Unless me writing this is a way to acausally blackmail me, but somehow I doubt that.

Note: sorry for the lack of links, mobile isn’t treating me well right now.

Re the Basilisk: basically, the core idea there is that you can somehow force the future AI to uselessly spend computing power by nothing more than making some incantations.

My suspicion is that the Basilisk evolved from someone in SIAI/SI/MIRI/whatever bullshitting up a connection between what they're doing (expressing vague concepts in terms of other, even vaguer concepts) and the future AI, with the idea that the AI would be friendly if only they could develop the right ritual to perform before pressing the “run” button.

The Basilisk makes the outcome undesirable, which, while not changing much about the logical structure of this nonsense, makes it more plausible to the wimps that they are. It also plays better with their religious backgrounds.

[deleted]
Obviously the best thing to do is to give money to Yudkowsky. And what are you doing, trying to figure out the comparison between two endless sums? You should just sum over the terms helpfully provided by Yudkowsky and his hangers-on; haven't you heard that's the most rational utilitarianism? Those are non-questions to these people. They like literally think that the most rational thing to do is to just “calculate” over some biased sample of possibilities, which happens to be put into an uncreative head by a slightly less uncreative grifter.
I'm convinced that Roko's Basilisk is an inside job that Yudkowsky organised to scare people into donating money to MIRI. "Well, if you don't want to be tortured in the most horrific way possible for eternity, then you should make sure that AI comes into existence. By sheer coincidence, the best way to do that is to donate a not-insignificant amount of your income to me."
I think it was probably something closer to a leak of some inner-circle thing, akin to thetans in Scientology.
I assumed it was independent invention of the local thetans, which normally would be introduced alongside countermeasures. Not a leak. But we will never know, which will be our eternal torture.
I'm assuming you are referring to the tweaking post; if that's the case, then yeah: they seem to be taking the same exact concept of not giving = torture, then saying that not giving = not torture. Otherwise I didn't see anything about incantations. Care to elaborate?
By incantations I mean doing something inside your head that makes the future AI waste computing power on some grotesquely large torture simulations.
Ah right, thanks for that. That brings to mind an article I read by one Alexander Kruel on how to beat the acausalrobotgod™. His entire point was that we should be constantly rejecting blackmail in our heads or something; the flowery language sorta lets the point slip through.
He's kind of crazy, though; he goes too far into believing too much of it to begin with. My point is, at the core it's delusions of grandeur. The AI, which isn't even built to care, still cares what you thought decades or centuries ago. They find that pleasant to believe. The torture thing protects the "pleasure of importance" from meta self-criticism like "I only believe it because it is pleasant to believe". It also brings it closer to what most of them were raised to believe, i.e. evangelical dogma.
Completely agree, I wasn't saying that Kruel's point was valid, just that he made it. I think that, even though he helps debunk the thing, he still takes Bayesian epistemology and 'expected utility' as trivial truths. On the whole delusions-of-grandeur thing: I think most of them just need something to justify their beliefs (as you say). Take expected utility: while it's part of the discourse surrounding decision theory, it doesn't carry anywhere near the weight they give it. They can then justify anything by pulling numbers out of their asses and multiplying.
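To spell out that numbers-and-multiplying move, here's a toy sketch in Python (every probability and payoff below is invented, which is exactly the point):

    # Toy expected-utility "calculation" of the kind being sneered at.
    # All of these numbers are made up; pick different ones, get a
    # different "rational" conclusion.
    p_basilisk = 1e-9         # probability of the robot god, from nowhere
    u_torture = -1e30         # utility of eternal torture, also from nowhere
    cost_of_donating = -1000  # utility of donating to the robot-god fund

    ev_ignore = p_basilisk * u_torture  # -1e21
    ev_donate = cost_of_donating        # -1000
    print(ev_donate > ev_ignore)        # True: donating "wins"

Make u_torture big enough and the multiplication delivers whatever conclusion you started with.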

who’s doing what now?

You tell me, robot god. PS: please do torture me, I'm a good sub and love the idea of it all

What's Julia Galef doing these days?

I think it's one of her older videos where she talks about evidential and causal decision theories. Other than that, probably doing math to decide whether to take the chicken or the fish.
She wrote a book and now…??? Idk, good question

Delete the brackets around “acausalrobotgod” in “/r/acausalrobotgod” and edit the rambling out: your jokes don't really land, the stylistic flourishes don't work, and the parenthetical remarks/extra info are unnecessary

Otherwise this post is just dead weight on the sub

Ok cheers for editing

Dead weight? I'll edit out the rambling, but does this not have any merit?
[deleted]
I've done some editing, anything else you need me to focus on?
Much more readable, and now you have a link to the sub in there without those brackets. Cheers. Oh, and in future mark these not-really-a-sneer-in-itself posts NSFW
Alright, thank you for the constructive criticism. I'll take it into consideration next time.
Style and jokes are perfectly on board here btw, but sometimes a bit of workshopping/vetting first is good

Is the joke here that this isn’t a real subreddit? Or am I doing something wrong?

OK, I’ve made a mod error here /u/ImperialFister04 can you please just put the name of the bloody sub into the body text like a normal person? It’s incredibly tiresome making people jump through hoops and it adds nothing.
Sure, like I said, it was pretty obtuse. That should do the trick
I don't know why they're being coy but the subreddit is /r/rokosbasilisk
If anybody is being coy here, it’s OP
Nah, if you're wondering about the link, just put in the actual name of the thought experiment I'm referencing (we all know the one) and then you should be good. I just don't wanna be that guy who damns people to some horrible fate for a Reddit post. It is a little obtuse of me, but I thought it was appropriate
Oh.
Yeah sorry about that, it's a proper link now so you should be good.

nypa bro