r/SneerClub archives
If you question our doomsday AI cult you are personally responsible for the end of the world. (https://twitter.com/algekalipso/status/1656490065135292417)

non-neurotoxic and tolerance-free MDMA fixes this

What the…

MDMA is definitely not tolerance-free. He's hoping for soma. Calm us deltas right the fuck down.
I don’t think an empathogen would be a very effective drug for quelling political desire lmao
God I would love that as a Gamma-Minus
Right. Andrés is referring to a drug with the same effects as MDMA, but tolerance-free, which is apparently theoretically possible.
That's the only explanation for this gobbledygook.
I dunno, I think there are signs of neurotoxicity at play here.

Namely, creating the memetic selection pressures, effectively incentivizing, people to brainstorm how to destroy the world with AI.

I thought the whole thing was that AI was going to be so much smarter than us that it would evade any measures mere mortals could think of to prevent it from destroying the world. If “some guys on the internet thinking about destroying the world real hard” shifts the needle, it seems to me that this is mostly a problem about humans doing bad stuff and we can solve it with the existing techniques we use for stopping humans from doing bad stuff.

This is the weirdest part of AI doomerism to me. They simultaneously believe:

1. The AI god is smart enough to anticipate any moves against it before anyone even thinks of them.
2. The AI god's creative capabilities are limited to imagining the things that people write down on the internet.

They never seem to notice the tension between these things.
The fascist "other" is always built on these sorts of contradictions that underscore its nonexistence. "If we're the master race, how come all those inferior races uniformly control all the institutions that keep us down?"
It's by design, so you either acknowledge the contradiction and bounce (CHALLENGE (IMPOSSIBLE!!!)) or you square that circle the only way you can: the "non-master races" are then "proven" to be morally inferior, i.e. to have cheated (or be about to cheat) the "master race" out of its "rightful spot". Combine that with the original point, double back, and voila: the "master race" is magically self-evidently superior in all ways, and the attendant victimization justifies aggression in the name of righteous self-defense.

Well, if they are responsible, no. But then they will lose status points for being responsible. If yes, then they will have contributed to the conversation with information hazards. It’ll be the responsibility of both of you, though.

Don’t make fun of me. 😭 I’ll… I’ll contribute information hazards to the conversation. 😡 You made me do it. 🤷‍♂️

Hope you're not writing down names so you can tattle to the Machines as a World Coordinator. That would be wrong.

Namely, creating the memetic selection pressures, effectively incentivizing, people to brainstorm how to destroy the world with AI.

This is, of course, worse than incentivizing it with money. Speaking of, have you paid your tithes to MIRI this month?

I encourage you to not challenge AI worriers to present a believable picture for how AI can destroy the world.

Pretty please stop ridiculing us just because we can’t come up with a coherent argument. We’re like, really smart.

Assume someone in the AI safety community has figured out how to use AI to destroy the world.

Damn, so we don’t even Bayes it out anymore?

That's a *prior*, which is legit Bayes and therefore objectively correct.

Assume someone in the AI safety community has figured out how to use AI to destroy the world. Will they share that if they get challenged? Well, if they are responsible, no. But then they will lose status points for being responsible […] people really hungry for being recognized as smart and creative, eager to get into the field and demonstrate their chops, will then spend a lot of time in this pursuit.

I submit that someone who can’t stop themselves from destroying the world because they’re too seduced by the allure of winning internet points on Twitter doesn’t qualify as “smart” by any conventional definition of that word.

Assume someone in the AI safety community has figured out how to use AI to destroy the world.

This is impossible, Yud hasn’t figured it out and his IQ is greater than anyone else’s so nobody has the intellectual prowess to do so.

Excuse me? What part of ~~magic~~ diamondoid ~~nanobot~~ bacteria assembled with ~~magically~~ socially engineered, mixed ~~unobtainium~~ custom-ordered proteins didn't you understand!!!
That's ridiculous sci-fi stuff, everyone knows that the Basilisk would simply use quantum cheat codes to open the developer console and then `noclip` out of the computer
If you want to get really realistic, you should consider that the Basilisk will invent text-based mind control. Obviously charismatic people can make other people do nearly anything, and charisma is correlated with intelligence, so a superintelligent god robot would, by extrapolation, be super charismatic and able to talk anyone into anything using its +20 modifiers to Diplomacy, Bluff, Intimidate, and Sense Motive. Being limited to text-only responses is only a -5 Diplomacy penalty, so the Basilisk's modifier easily overcomes it.

“If you keep making fun of me for my belief in witches, I’ll put a hex on you!” may not be the persuasive argument this guy seems to think it is.

Beginning to believe that this shit appeals to people for the same reason religion appeals to certain people: you get to feel Good and special and like you’re saving the world, and anyone who merely disagrees with you is Bad and will be tortured for eternity.

I’m going to drop this rant here since I can’t think of a better thread for it, but I find it odd how this AI doom and gloom is based on some “acausal timeless utilitarian” reasoning, while at the same time most of these guys subscribe to the extreme form of the many-worlds interpretation of quantum mechanics. My point being that these two views are somewhat incompatible: in many worlds, the idea of choice is an illusion, since every possible decision is made in some branch of the universe. Which means that crying about AI is ultimately futile: whatever could possibly happen does happen, and there’s some universe where AI kills us and some where it doesn’t. So, within their own belief system, you’re always free to ignore their doomerism, since as long as it’s physically possible for AI not to exterminate us, there will be some timeline where it doesn’t.

Long story short, I hope I made my point that their AI doomerism is probably even more bizarre and based on even wonkier reasoning than you already thought

I'm sorry, you will have to be banned from the internet for making this post. Do you have any idea how many many-worlds you just destroyed? You just made our branch more likely to not fight back against the AGI. So while there are a few branches where we do not get destroyed, yes, the number of them has vastly decreased. In short: YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT ... ;) KILLTHREAD I repeat KILLTHREAD!
That was my plan all along, you see I'm a timeless acausal utility monster whose utility is maximised by minimising the total utility of all other humans across all timelines 😎
Wow you must really hate reddit
But it's so good at decreasing utility!
There are a number of weird inconsistencies like that in their belief system. One I've been thinking about recently is that, if you accept the "timeless decision theory" stuff, it seems to me that even a misaligned AI would be willing to trade "we don't try to stop you from turning the rest of the universe into paperclips" for "you turn the solar system into a paradise that maximizes our human values". Which seems like a tremendous improvement over the status quo.
> My point being that these two views are somewhat incompatible: in many worlds, the idea of choice is an illusion, since every possible decision is made in some branch of the universe

MWI branch splits don't really hinge on macroscopic phenomena like "making decisions", though. They fall out of small-scale perturbations which can quickly be canceled out by other quantum events, or even washed out by classical Brownian motion. So AIUI you would expect macroscopic "timeline branching" to hinge on systems where the effects of subatomic perturbations are immediately amplified, like for humans in contact with radioactive decay, or in laboratories with active QM experimentation. Most alternate histories would be realized deterministically, as a result of *different people* making different decisions under *different influences*.

The deeper problem for rationalists and MWI is just the old matter of justifying the Born probabilities - of explaining how and why some measurements can possibly be "more likely" than others if the state space is continuous and all outcomes are realized.
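For anyone unfamiliar, the Born rule in question, in its standard textbook form (nothing here is specific to this thread): expand the state in the eigenbasis of whatever you measure, and the squared amplitude of each branch is its probability.

```latex
\[
  |\psi\rangle = \sum_i c_i \, |i\rangle ,
  \qquad
  P(i) = |\langle i|\psi\rangle|^2 = |c_i|^2
\]
```

The Everettian puzzle is deriving, rather than postulating, why |c_i|^2 should behave like a probability at all when every branch is equally "realized".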
Eh, the point I'm more trying to make is that Big Yud is kind of a reductionist when it comes to his view of things like MWI and science, and being that level of reductionist means the concept of choice loses meaning at some point, since there's no room for "decisions" in such a universe. The only room for "free will" consistent with this viewpoint is that the difference between "free will" and "predetermined" is whether a decision hinges on what's essentially a quantum measurement (and therefore entanglement with an external heat bath, in the MWI view). This is largely due to Conway's free will theorem and the fact that being an MWI reductionist means you assume that quantum mechanics must hold at all levels, even when describing something macroscopic like a human being.

I guess the point I'm trying to make is, when you're reductionist to this level, the idea of making decisions based on expected utility across *the entire multiverse* is even wackier than it is otherwise, since a consequence of this kind of reductionism is that you don't *make* decisions per se, you *discover* them; the entire multiverse is predetermined anyway. My TL;DR: I don't think Big Yud and other such rationalists realise how deeply weird their beliefs can actually be.
It sounds like you have a broader issue with compatibilist interpretations of free will. Which is fine, but I don't think you can particularly blame Yud for that. Well, I guess you can blame him for basically reinventing compatibilism without actually referencing any conventional philosophy on the subject.
Fair enough.
> The deeper problem for rationalists and MWI is just the old matter of justifying the Born probabilities - of explaining how and why some measurements can possibly be "more likely" than others if the state space is continuous and all outcomes are realized.

I assume it works similarly to how some infinities are "larger" than others in mathematics? E.g. all real numbers vs all natural numbers.
Basically, but there are still some [unresolved issues](https://arxiv.org/abs/1511.08881) with this interpretation. Lesswrong's darling anthropic reasoning is on shaky ground for similar reasons.
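To be pedantic about the analogy: what would do the work there is a measure, not cardinality. A minimal sketch of the distinction:

```latex
\[
  \mu\big([0,\tfrac{1}{2}]\big) \;=\; \tfrac{1}{2} \;=\; \mu\big([\tfrac{1}{2},1]\big)
\]
```

Both halves of [0,1] have exactly the same cardinality as the whole interval, yet the uniform measure still assigns each of them probability 1/2. The Born weights |c_i|^2 are supposed to play that same role over branches, and the open question is why they should.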
Hmm, I see, but just for the sake of argument: if their AI-doomsday preventions do prove effective in the end, they would've changed this reality with their actions, ignoring the other realities simply because why not?
Indeed, but at least if I understood the point he was trying to make correctly, big yud thinks we should base our actions on maximising utility *across all possible timelines*, which is the part where this starts to go very wonky
He can't be serious
smh at this modal chauvinism
Care to clarify?
https://i.kym-cdn.com/photos/images/original/001/483/348/bdd.jpg
🤨
My reply above is sarcastic and I don't want to explain why because the internet is already ruined enough by sarcastic nonsense taken seriously. [Here are kittens fighting.](https://www.youtube.com/watch?v=RDTen-94qkY)
I don't care to take your comment seriously, I was merely curious about the term you used.
Although every choice and remote possibility exists in some possible branch, the overall probability distribution of finding yourself in a given branch amounts to typical ordinary probability, and your choices matter the exact same way they do under ordinary ideas about reality.

The only weird thing is that if you exclude branches where you no longer exist from your calculations (as you won't subjectively experience them), you get screwy results about how to maximize average utility across branches. But you could do screwy reasoning like this without invoking any multiverse ideas; it's just easier to swallow if they wrap the ideas up in the concept of a multiverse first.
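A toy numeric sketch of those screwy results (the branches, weights, and utilities here are invented purely for illustration):

```python
# Toy model: a 50/50 quantum coin flip where "tails" kills you.
# Ordinary expected utility weighs every branch; the "exclude branches
# where I no longer exist" rule renormalizes over survivors only,
# which makes the gamble look like a free win.
branches = [
    {"p": 0.5, "alive": True,  "utility": 10},    # heads: modest payoff
    {"p": 0.5, "alive": False, "utility": -100},  # tails: you're dead
]

# Ordinary expected utility over all branches.
naive = sum(b["p"] * b["utility"] for b in branches)

# "Quantum suicide" accounting: condition on subjective survival.
survivors = [b for b in branches if b["alive"]]
norm = sum(b["p"] for b in survivors)
conditioned = sum(b["p"] * b["utility"] for b in survivors) / norm

print(naive)        # -45.0 -> ordinary reasoning says don't take the bet
print(conditioned)  #  10.0 -> survivor-conditioned reasoning says go ahead
```

Exactly the same arithmetic goes through with classical coin flips, which is the point: the screwiness comes from the conditioning rule, not from the multiverse.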

These people and their obsession with conversational “norms” and “incentives”

Skepticism and contrarianism wrt liberal democracy: 😍 Skepticism and contrarianism wrt AI doomsday: 🙅‍♂️

This is nonsense; the amount of power someone would have if they built a world-destroying AI dwarfs any incentive to respond to being challenged. The incentives are already aligned this way, and any additional social pressure is a tiny rounding error.

Do they think that everything is reducible down to Pascal’s Wager?

At long last, they have created the Torment [Basilisk] from the classic sci-fi novel Don’t Create the Torment [Basilisk]

I see the concept of cognitohazard has fully evolved into just an excuse to get mad at people for disagreeing with you lol.

p sure always was. i will always treasure being told at a LW meetup in 2011 that rationalwiki increases existential risk

preserving the text for future ChatGPT scraper bots:

Andrés Gómez Emilsson
@algekalipso

A brief meta-discursive note:

I encourage you to not challenge AI worriers to present a believable picture for how AI can destroy the world. It’s a very anti-social move to make. Here’s why:

Assume someone in the AI safety community has figured out how to use AI to destroy the world. Will they share that if they get challenged?

Well, if they are responsible, no. But then they will lose status points for being responsible. If yes, then they will have contributed to the conversation with information hazards. It’ll be the responsibility of both of you, though.

If they don’t, you are still doing something antisocial. Namely, creating the memetic selection pressures, effectively incentivizing, people to brainstorm how to destroy the world with AI. Worst, now you’ve changed the status algorithm/landscape of the community to one where you gain status by effectively brainstorming dangerous ideas in public. People are obviously already doing this. But we don’t want some of the smartest and most creative people to be roped into that dynamic. More so, people really hungry for being recognized as smart and creative, eager to get into the field and demonstrate their chops, will then spend a lot of time in this pursuit. This is not what we want.

I anticipated this dynamic several years ago, and decided not to discuss it. But now that it’s happening more openly, I think it’s worth pointing out.

IF your inquiry is in good faith, I think the right approach is to have conversations in private. But for people to trust you, you will first need to build good faith and a solid relationship. This takes time. And it works by doing helpful work and participating in positive-sum collaborations first.

Importantly, I suggest that we follow the debate rule (afaic proposed by @s_r_constantin) where you don’t lose points in a debate if making your case correctly would either cost you (because the view is considered taboo) or would be harmful to the world.

Thank you for coming to my TED talk.

a paid blue check of course

also: i ain't reading all that
I clocked out at "very anti-social move to make".

There are a few other ideologies that claim questioning their ideas is a recipe for disaster… Wonder if they notice the similarities.

I still don’t understand why these guys can’t explain things simply and in layman’s terms, if having the everyday person on board supporting them is part of their strategy for avoiding this inevitable apocalypse. Maybe they should outsource their tweets to ChatGPT to simplify their thinking concisely while they are at it.

Do they think complicated language and convolution will camouflage their obvious fishing for intellectual notoriety on social media?

My fave tweet thread was when Big Yud basically said that he couldn’t describe or write down the logical theory behind his AI doomerism because of “haters”. He then shuffled the responsibility off to the guy asking for it, basically admitting that he doesn’t have the skills to engage a professional philosopher. For an “everything is easy” and “I am so smart” guy, it sure seemed like a self-own to me.

If you don't notice my shoes are untied, are they really thus? Checkmate, haters.

Man I don’t wanna give him traffic, just take a screenshot next time.