Long-time lurker who finally sat down to try to understand the dreaded acausal robot god. Not going to lie, I did get anxious at first (probably due to my OCD), but after a while I realized the whole notion of acausal trade is absolutely ridiculous.
Am I missing something, or is acausal trade just imagining some make-believe creature in your head and then making a “deal” with the imaginary creature because somewhere in the (alleged) multiverse it has to exist?
edit: Apologies in advance if formatting is wrong or this is the wrong sub to post this in.
[deleted]
Yes, it is pretty silly, but it can make for some interesting science fiction. As cstross wrote in one of his books:
“I am the Eschaton. I am not your God. I am descended from you, and exist in your future. Thou shalt not violate causality within my historic light cone. Or else.”
E: The “or else” here involves things like pieces of rock traveling at near light speed and people’s homes.
In rationalism or neoreaction, the answer to “what the fuck” is always anime.
I didn’t understand this bit of the Basilisk until I read “Death Note”. Acausal trade is Yudkowsky describing the “I know that you know that I know” mental battle between Light and L.
[deleted]
fuck you, it’s the best idea that has ever come up, you miserable little piece of shit.
It’s gnosticism for tech bros. The evil demiurge (err I mean AI) has trapped us in this material universe.
Their “proofs” are about as convincing as any other religious cosmology.
The fun thing is that even if it weren’t dumb enough by itself, if we’re too dumb to get it, it becomes dumb anyway, because the bargaining becomes impossible.
LessWrong has invented a hypothetical threat that can be defeated by saying, “That’s stupid.”
(I go into some detail about acausal decision theory here, maybe I should start with a WARNING, even on /r/sneerclub)
According to the official history, it all started with an attempt to beat the Prisoner’s Dilemma, a classic scenario of game theory. Ordinary self-interest says you should “defect”, but if only both players could “cooperate”, then they could both have a higher payoff. Douglas Hofstadter dubbed this “irrational” decision to cooperate superrationality.
Then someone called Gary Drescher published a book justifying superrationality, in the case where the two players are computer programs running the same source code. If you’re in a Prisoner’s Dilemma with an exact copy of yourself, then you might reason that you can rationally choose to cooperate, since your other self will do the same thing. But you don’t actually control your other self, so what is the exact justification for this confidence? Drescher apparently introduced the word “acausal” to the discussion.
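To make the “same source code” case concrete, here is a minimal sketch (the function name and the payoff numbers are mine, purely for illustration, not anything from Drescher): an agent that knows its opponent is running its exact code only has to compare the two symmetric outcomes.

```python
# Standard Prisoner's Dilemma payoffs (row player, column player).
# Numbers are the usual textbook values, chosen only for illustration.
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # sucker's payoff vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

def superrational_choice():
    """An agent that knows its opponent runs this exact function.

    It therefore only considers the symmetric outcomes (C,C) and (D,D),
    and picks whichever gives it the higher payoff.
    """
    symmetric = {move: PAYOFFS[(move, move)][0] for move in ("C", "D")}
    return max(symmetric, key=symmetric.get)

# Both "players" are literally the same code, so both return the same move.
me, copy_of_me = superrational_choice(), superrational_choice()
print(me, copy_of_me)  # -> C C : mutual cooperation, payoff 3 each
```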
Drescher, incidentally, is an enigmatic figure to me. There’s very little information about him. His book “Good and Real” came out in 2006, the year in which the group blog “Overcoming Bias” was also launched (“Less Wrong” emerged from “Overcoming Bias” three years later). I have not read the book, but it contains acausal decision theory and a defense of the many-worlds interpretation.
Another “application” of this thinking is to resolve Newcomb’s paradox, which is a little like being in a Prisoner’s Dilemma, not with a copy of yourself, but with a superintelligence which will defect if you defect, and cooperate if you cooperate. I won’t go into the details, but you have a choice between being greedy and being restrained, and the superintelligence has promised you a big reward if you are restrained and a small reward if you are greedy. And the paradox is that the superintelligence already predicted your choice and determined the size of the reward. Ordinary causal thinking then says, you may as well be greedy and grab everything, because the reward is already set; but if you do that, you will be retro-causing yourself to have a small reward. How do you rationally justify being restrained?
The answer is an extended version of Drescher superrationality. The superintelligence is not a copy of you, but its decision is a copy of your decision. You should be restrained now, because that implies the superintelligence will have modeled you as restrained, and left the big reward. This violates the usual dictum that the future cannot influence the past, so, “acausal”. Perhaps it would have been better to speak of logical causality or atemporal causality, but this is the dominant terminology now.
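In payoff terms the same point can be seen directly. A minimal sketch, assuming the standard textbook amounts ($1,000 in the visible box, $1,000,000 in the opaque one) and a perfect predictor:

```python
# Newcomb's problem with a perfect predictor; the dollar amounts are the
# usual textbook values, used here only for illustration.
SMALL, BIG = 1_000, 1_000_000

def payoff(choice):
    """Winnings for 'restrained' (take only the opaque box) vs 'greedy'
    (take both), given that the predictor has already filled the opaque box
    according to a perfect prediction of this very choice."""
    opaque_box = BIG if choice == "restrained" else 0   # prediction tracks the choice
    visible_box = SMALL if choice == "greedy" else 0
    return opaque_box + visible_box

print(payoff("restrained"))  # 1000000 - the big reward
print(payoff("greedy"))      # 1000    - the small reward
```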
In any case, there has been a progressive generalization of the concept, to agents that are only vaguely similar, agents located in different universes, populations of agents in different universes reasoning their way to a collective equilibrium, and even quasi-theologies like, all possible godlike superintelligences that rule their respective universes, arriving at an acausal decision equilibrium among themselves.
Recounting this history leads me to think that, from the perspective of history of ideas, Roko’s basilisk should be regarded as an episode in the history of decision theory, that can usefully be placed alongside Newcomb’s paradox and Hofstadter’s approach to the Prisoner’s Dilemma. Its notoriety as an object of fear or derision obscures the fact that it is also a thought experiment for decision theorists.
People get tripped up on the fact that it’s a scenario in which there is a causal link as well as an acausal link - the possible AI is in our future, that’s the causal link - and the acausal part is overlooked in favor of the grand guignol of “punishment by the robot god”. But viewed abstractly, it’s just a variation on Newcomb’s paradox, with the superintelligence that models your decision located in the future rather than in the past.
Returning to the post above: is “acausal trade” really just a kind of daydream, an exercise in “trading” with imaginary friends and enemies? Certainly the multiverse version seems problematic on multiple levels. We don’t even know that other universes exist, so how can we know that we’re trading with them? And even if they do exist, I would question whether any relationship possessing the mutuality implied by the designation of trade is actually possible among them. If Chuang-tzu does something for the sake of the butterfly, and the butterfly does something for the sake of Chuang-tzu, is that a “trade”, or just a fortuitously consistent folie à deux?
At a mundane level, a sneer may be enough to chase away the basilisk doldrums. But I suppose the idea needs a more formal way of being countered too. So let me propose a hypothesis of omniversal autarky: That the set of superintelligences which attempt acausal trade has measure zero, because it quickly becomes clear to a hyper-rational being that you should only care about things you can influence causally. I can’t prove it, but neither can the philosophes of acausality disprove it. Let them try to do so, and until they do so, feel free to focus on this universe alone.
isn’t part of the idea that if the robot god can simulate you, you’re not sure if you’re currently the simulation created by them?
It’s basically Anselm’s Ontological Argument but for robots.
I mean, it makes sense if you accept the assumptions (namely, the possibility of easily simulating consciousness, and an AI that behaves in a certain way)
not only is it mind-bogglingly stupid, but some days i think it barely makes it into the top five stupid things the lesswrong crew has put together
If you want the silly and yet much more thought out version of this, read Neil Sinhababu’s “Possible Girls”. It will either convince you that stuff across universes is silly, or that lesswrong just has it, well, wrong, and they are actually more wrong.
Uhm, guys? Isn’t acausal trade just living in a society explained so obtusely as to be illegible? Because when I think about acausal cooperation, I think about me and all the other people who pick up someone else’s trash in the park, who will never meet each other but are nevertheless helping each other across time.
Part of acausal trade is accepting as true that you can easily be recreated. This is really the keystone of Yud’s thought, because of his thanatophobia and his conviction that he’s worthy of being recreated by the acausal robot god (ARG) at the end of time. If you don’t accept that the you the ARG creates (nb: I’m taking the ARG’s existence as a given just to avoid stretching this out even longer, not because I actually think the ARG is real) is the same as the you who is reading this, then the bargain doesn’t work.
If I understand it correctly, the general idea of acausal trade (at least in a limited form) makes sense but isn’t useful for much of anything; on the other hand, the application of it in Roko’s Basilisk just doesn’t make sense.
The basic idea of acausal trade can be explained by the following thought experiment: Suppose you’re playing Prisoner’s Dilemma against a copy of yourself, or against someone you know will make the same decision as you. (Obviously this is unrealistic.) Because of this, you know that the only possibilities are both agents cooperating or both defecting, and your opponent will cooperate iff you cooperate. Since both cooperating has a higher payoff than both defecting, you should choose to cooperate, which logically necessitates that your opponent cooperate. The same applies in any Prisoner’s-Dilemma-like situation where the two agents can predict each other’s actions precisely and can decide to cooperate iff the other agent does (and are aware of this option).
On the other hand, the requirement that each agent be able to nearly perfectly predict the other’s action makes this fairly useless outside of contrived thought experiments. The only remotely plausible attempt to apply it that I know of is George Ainslie’s theory of willpower as intertemporal bargaining, which uses a similar concept: willpower is partly just people knowing that they will probably act the same way now as in the future, and therefore that to avoid indefinite procrastination later they must act in the present as well (though that was developed independently of Yudkowsky et al.). The application of it to future superintelligent AIs doesn’t make sense AFAICT, because it would almost certainly be impossible either for an AI, however smart, to learn enough about a person who died any significant amount of time ago to accurately simulate them, or for a modern person to precisely predict such an AI’s actions, so the basic preconditions are not met. (The other problem with Roko’s Basilisk is that the acausal trade requires the AI to predict that you will cooperate with it iff it doesn’t torture you, so it can be defeated simply by firmly deciding to ignore it and not cooperate with it no matter what.)
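To spell out that last parenthetical as a toy model (entirely my own framing, with invented payoff numbers): if your policy is to never cooperate regardless of threats, then simulated punishment gains the AI nothing, so the threatening branch is strictly worse for it and the “trade” never gets off the ground.

```python
# Toy model of the "just refuse" counter to the basilisk. Payoff numbers
# are invented purely for illustration; only the ordering matters.
# The AI chooses whether to commit to punishing non-cooperators; the human
# has a fixed policy chosen in advance.

def human_action(human_policy, ai_threatens):
    """'conditional' humans cooperate only under threat; 'refusenik' humans never do."""
    if human_policy == "conditional":
        return "cooperate" if ai_threatens else "defect"
    return "defect"  # refusenik: ignore the threat no matter what

def ai_payoff(human_policy, ai_threatens):
    cooperation_value = 10   # what the AI gains from your cooperation (made up)
    punishment_cost = 1      # running the torture simulation isn't free (made up)
    acted = human_action(human_policy, ai_threatens)
    gain = cooperation_value if acted == "cooperate" else 0
    cost = punishment_cost if (ai_threatens and acted == "defect") else 0
    return gain - cost

for policy in ("conditional", "refusenik"):
    for threat in (True, False):
        print(policy, threat, ai_payoff(policy, threat))
# Against a refusenik, threatening yields -1 vs 0 for not threatening:
# punishment buys the AI nothing, so a rational AI never picks that branch.
```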